
Jul 9, 2013

What Big Data reveals about us as a civilisation

Before showing up in Star Trek¹, the Dyson sphere was a thought experiment.

Physicist and mathematician Freeman Dyson wondered what the logical consequences of millennia of technological progress on a civilisation would be. In particular, what tell-tale outward signs would such a civilisation emanate that could be observed from afar?

One defining feature of technological progress is an escalation of energy needs. As we move up the ladder, we typically extract orders of magnitude more energy from the environment. Dyson conjectured that this hunger for ever more energy would never go away.

At a sufficiently advanced stage, a civilisation will come to view its primary source of energy - the star that it orbits (its Sun) - in a new light. Rather than just remaining passive harvesters of the tiny fraction of stellar energy the planet does intercept, its people will use all their know-how to extend their cosmic grasp beyond themselves.

One way they could achieve this is by building a sphere around their star, designed to capture and redirect potentially all of its radiated energy. This is the Dyson sphere².

Work by later enthusiasts has built upon these initial ideas and refined them - not just to aid and abet the search for extraterrestrial intelligence³, but also as a regular prop in sci-fi story-telling⁴ and as an indulgence in a shared thought experiment.

There’s a risk that this post is beginning to resemble a Wikipedia page with the barest excuse of a plot, but bear with me - there’s more. In 1964, a Soviet astronomer named Nikolai Kardashev proposed a scale to measure the technological advancement of civilisations.

On this scale, a Type I Kardashev civilisation would be one that uses all of the energy impinging on its planet. A Type II, on the other hand, would girdle and harness all the energy of its star, down to the very last drop⁵. It will have splurged on a Dyson sphere.

* * *

Richard Dawkins begins his immortal book⁶ ‘The Selfish Gene’ by surmising: “If superior creatures from space ever visit earth, the first question they will ask, in order to assess the level of our civilisation, is: ‘Have they discovered evolution yet?’”⁷

It’s quite probable that this cosmic IQ test has more than one question, and in that case I would be surprised if “Have they discovered Big Data?” didn’t figure prominently as well.

Why Big Data? Agreed, it helps businesses to make better decisions, enables public institutions to frame more informed policies and practices, allows science to bypass theories and causation altogether, prods governments to cosplay as Agent Smith⁸ and enfranchises statistical analysts to hold forth in meetings. But what civilisational pole-vault does Big Data represent?

To put it simply, Big Data represents a quiet vault up the information equivalent of the Kardashev scale⁹.

As Big Data thinking seeps into every fold of business, science and governance, we are beginning to make the transition from asking what data impinges on us to seeking what data we can intercept now and store for later. From seeing data as an incidental and oftentimes helpful feature of our world, to treating it as a raw resource to be fracked out of every nook and search. From lamenting that we have more data than we can ever use, to turning our attention to all the data previously going uncaptured and building apps (and satellites¹⁰) to change that.

In other words, we have crossed the metaphorical watershed between contemplating the beauty of a sunset and contemplating the blueprint of a Dyson sphere.

Facebook represents one (but not the only) ambitious and centralised attempt to cloak the world’s population in an information Dyson sphere. Enclosed and interconnected by its very fabric, every quantum of information each one of us ever radiates is to be captured, stored and ultimately harnessed in the service of some decision, sometime and someplace else. None shall be allowed the luxury of impermanence. (Much like its sci-fi and energy counterpart, however, it’s already apparent that successful long-term attempts to do this will more likely be decentralised and loosely co-ordinated¹¹ - not Dyson spheres but Dyson swarms.)

And let’s not forget the non-trivial question of where all this information - the 2.5 quintillion bytes of data created and captured every single day - will reside. What else does this non-fictional Library of Babel¹², quietly purring and awaiting the deft touch of data-whisperers, have in store?

Running these massive data centers already consumes around 1.5% of the world’s energy resources¹³. It’s not inconceivable that sometime in the near future they may come to consume closer to 20% of our total energy - the very fraction of our caloric intake that’s diverted to our brains.

When that happens, things could get interesting. We may end up with an experiment in thought¹⁴.

References and Notes:
1. A full Dyson sphere is featured in the Relics episode of Star Trek: The Next Generation (Dyson spheres in popular culture)
2. You can read more about the Dyson sphere at its Wikipedia page.
3. As it so happens, Dyson spheres are actually difficult to distinguish from natural astronomical objects, like heavy dust clouds of birthing or dying stars, that may radiate similar signatures (Signs of Life). However, the search continues (Dyson sphere searches).
4. This Wikipedia page (Dyson spheres in popular culture) keeps track of all occurrences of Dyson spheres in fictional worlds.
5. A Type III Kardashev civilisation is also in the scale - this is one that harnesses all the energy of its galaxy (Kardashev Scale).
6. ‘The Selfish Gene’ is indeed one of my all-time favourite books but this is just a sly reference to the title Richard Dawkins now wishes he had given the book (The Immortal Gene), considering the misunderstandings the original title has led to (The Selfish Gene).
7. Chapter 1, The Selfish Gene by Richard Dawkins.
8. The antagonist of The Matrix trilogy.
9. I am unaware if such a scale has already been conjectured. But if not, I propose the name Ovshinsky scale, for Stanford Ovshinsky, who recognised that “information really is encoded energy” (Interview with Stanford Ovshinsky).
10. Wired 21.07 has a fascinating piece on a startup which is launching a fleet of cheap, small and ultra-efficient satellites that send back up-to-the-minute hi-res pictures of all of the earth, which could change the way we measure the economic health of countries, industries and even individual businesses. (The Watchers)
11. Jeff Stibel makes a compelling case for why Facebook is set on the path to implosion rather than unrestrained growth. (Is Facebook Too Big To Survive)
12. ‘La Biblioteca de Babel’ is a short story by Jorge Luis Borges in which there exists a library containing all possible books, covering every possible ordering of letters (The Library of Babel). Much of the content of the library is gibberish, leading its users to suicidal despair, but contained within its vastness are also untold and unimaginable treasures - like translations of all books in all languages.
13. Statistic from Google Throws Open Doors To Its Top Secret Data Center.
14. In Wired 13.08: We Are The Web, Kevin Kelly writes “This planet-sized computer is comparable in complexity to a human brain. Both the brain and the Web have hundreds of billions of neurons (or Web pages). Each biological neuron sprouts synaptic links to thousands of other neurons, while each Web page branches into dozens of hyperlinks. That adds up to a trillion "synapses" between the static pages on the Web. The human brain has about 100 times that number - but brains are not doubling in size every few years. The Machine is.”
My own conjecture here is that it might not be the total number of neurons, or connections, that could emerge as a threshold for the singularity moment, but the amount of energy the global mind draws from the world’s energy resources. It's one good reason why the global mind might convince us to build (it) a Dyson sphere.

Jun 25, 2013

The Retailification of Online Publishing

Less than a week to Doomsday¹, and two things continue to surprise me.

First, the number of Google Reader² devotees (including me) who are yet to find a replacement. With other dead products walking, finding a replacement is top priority. With GR, bedside vigil and mourning have taken precedence³.

The second is how everyone - even GR devotees - seems all too willing to perpetuate Google’s preferred script explaining away the summary execution of a much beloved product. RSS usage has been declining, only geeks use it - on the litany goes⁴.

It’s almost as if one is trapped between the covers of a detective novel in which, as expected, an unnatural death has occurred - but the detective du jour is conspicuously absent. Everyone you encounter has no alternative but to trust and repeat hearsay, even the perpetrators’ own mea-not-culpas.

So, what’s this alternate vision of the future in whose service GR was sacrificed? Google’s lullaby is a klutzy mashup of how much things have already changed and what the benevolent future has in store for us.

“As a culture we have moved into a realm where the consumption of news is a near-constant process. Users with smartphones and tablets are consuming news in bits and bites throughout the course of the day — replacing the old standard behaviors of news consumption over breakfast along with a leisurely read at the end of the day.”⁵
And of course, Google is working on “pervasive means to surface news across [Google’s] products to address each user’s interest with the right information at the right time via the most appropriate means.”⁵

To me that sounds an awful lot like being constantly, and involuntarily, drip-fed the information equivalent of burgers, fries and supersize colas every waking hour - with matching nutritional value and info-calories.

It turns out, quite a few of us still believe in the opposite⁶ - in the benefits of a sumptuous and healthy breakfast of quality, hand-picked and slow-published reading material, each morsel chewed and ruminated upon and, occasionally, unsubscribed if found wanting - and new discoveries cheerily subscribed, if found nourishing.

* * *

Anyone who has ever visited an Ikea store - or a shopping mall - will know this feeling: You walk in on an errand, knowing exactly what that errand is. But before long, you’ve lost track of where you are in the store and where you were supposed to be.

Not everyone realises what inevitably ensues: they end up buying much more than they set out to. They have become the victims of a ploy in shopping mall design called the Gruen Transfer⁷.

The Gruen Transfer refers to that critical moment when shoppers get overwhelmed and disorientated by the deliberately confusing layout and cues of the store (presumably while excoriating themselves for not being up to the task.) Controlled ambient factors and store displays wear out their focus and decision making faculties. Literally, their eyes glaze and their jaws slacken. And in a snap, they become impulse buyers, sacrificial offerings to the highest bidders for shelfspace.

In those futuristic visions of how we will consume content online, there’s ample room for every crafty trick discovered and perfected by retailers⁸. But, primarily, there will be no escape from the online equivalent of the Gruen Transfer. You head online to read the news and before you know it, you are clicking helplessly on “22 more reasons why Neo (eventually) regretted taking the red pill (Now in Slideshow Mode).”

Far from being a product with no future, GR, I suspect, was a cannibalising thorn in this vision. A thorn that, in traitorous alliance with the social web’s bees and pollinators⁹, would leak much of the transfer out of Gruen.

It represented a stand by a cohort of content pro-sumers - the supposed minority of power users who conscientiously wanted to tick off their list of to-reads every day. God, how 20th century is that annoying habit?

Putting the death wish on RSS¹⁰ is Google’s deliberate ploy to tip us collectively into a world of bluish reality, a world where we harbour no hopes of hanging on to our errand lists when we check in online. Instead, we submit to being passive and impulsive consumers. And those recurring pangs of anxiety? Just chew on some algorithmic manna and you’ll eventually be cured of them.¹¹

While it’s unclear if RSS will thrive in the future, all indications suggest it will live to fight another day¹². An outcome hardly to Google’s liking, whose concerted actions¹³ seem to have hinged on making RSS the first technology in history to plunge into permanent disuse¹⁴.

So, farewell sweet Reader, may angels sing thee to thy hard-earned rest. And as for you dear reader, stay safe and hope you can find your way back again.

References and Notes:
1. Bearing the news of Google Reader’s demise, The Economist’s Babbage blog put it thus: “Users, meanwhile, worry about impending newslessness.” (Have I Got News For You?) For a less restrained (and funny) response to the news, check out Hitler’s reaction. (Hitler Finds Out Google Reader Is Shutting Down)
2. If you do not know anything about Google Reader or its underlying RSS technology, David Pogue has a helpful introduction to both, along with his recommendation for a replacement. (Google’s Aggregator Gives Way To An Heir)
3. A typical sentiment is the one expressed by Tim Harford in this tweet. Robin Sloan expresses a more extreme, but not uncommon, commitment to use the product till its very last day. To be fair, there’s also the moved-on-and-loved-it brigade captured in this tweet by Chris Anderson.
4. The following paragraph from a recent WIRED piece (Why Google Reader Really Got The Axe) repeats Google’s position verbatim but in a faux objective tone: “Obviously Google had to have a good reason to shut Reader down. The company has reams of data on how we use its products, and would not shutter a product that was providing sufficient food to its info-hungry maw. While some users remained devoted, the usage numbers just didn’t add up. The announcement shouldn’t have been too unexpected. Google hadn’t iterated on the service for years. It even went down for a few days in February.”
5. The words of Richard Gingras, Senior Director, News & Social Products at Google as reported in a recent WIRED piece. (Why Google Reader Really Got The Axe)
6. Just 2 weeks after the announcement of Google Reader’s demise (A Second Spring of Cleaning), Feedly acquired over 3 million new subscribers (TechCrunch). There are many more where they came from.
7. The Gruen Transfer is named for Viktor Gruen (ABC TV), the inventor in the 1950s of the shopping mall. He actually disavowed the manipulative techniques that were given his name, but ironically the name stuck.
8. Entire books have been written about the shenanigans of retailers. This piece provides a basic introduction: The Psychology of Retailing Revealed.
9. MG Siegler at TechCrunch argues that Google Reader’s underappreciated power users actually constitute the bees who pollinate the web and through their unique leverage keep the social web blooming and aplenty: “The first is that Reader’s users, while again, relatively small in number, are hugely influential in the spread of news around the web. In a sense, Reader is the flower that allows the news bees to pollinate the social web. You know all those links you click on and re-share on Twitter and Facebook? They have to first be found somewhere, by someone. And I’d guess a lot of that discovery happens by news junkies using Reader.” (What If The Google Reader Readers Just Don’t Come Back)
10. This is also the view of Dave Winer, the developer and populariser of the original RSS format. (E-mail with Max Levchin & As July 1 Approaches)
11. Evgeny Morozov, and others, have argued persuasively for the greater harm Google might inflict on us with its blind deference to “algorithmic neutrality.” (Don’t Be Evil)
12. The attention and activity around many old challengers and upcoming RSS readers promises that this could turn out to be a blip that a revitalised and Google-free market will redress. (Have I Got News For You?)
13. Following the KO-ing of Google Reader, Google also shut down the RSS Add-On in its Chrome Browser. (It’s Not Just Reader: Google Kills Chrome RSS Add-On Too)
14. Kevin Kelly has claimed and demonstrated to critics that technologies never go away (not even ancient Roman bridge making techniques) and persist for a very long time, though popularly deemed to be extinct. (Technologies Don’t Go Extinct)

Aug 9, 2011

Fashion in the Future

In a post accompanied by a composite picture from Burning Man, Kevin Kelly recently wrote "Someday we'll all dress like this."

His contention is that the sterile, streamlined and 'uniform' clothing depicted by sci-fi movies is just plain wrong. The future - in his vision - will be one continuous and unabashed festival of self-expression; a point he reiterates in the comments: "... I look forward to a time when we do nothing but art. When getting dressed up is the whole point of life!"

On this occasion, I found myself disagreeing with Kevin. I have no doubt that in the future there will be many more people who'll dress for self-expression, just as there will still be people who'll prefer the utilitarian and simple in clothing. In many cases, both extremes may be preferred by the same people, but at different times - driven by fashion, season and personal situation.

My belief stemmed from what I know about myself (and others of my ilk) and what fashion choices I'd like to make in the future - accounting for the slender possibility that I may surprise myself with more metamorphosis than I'm used to.

But reading this Economist article about the futility of expecting to be anonymous in the future made me think twice.

The article in question reports not on fashion in the future but on the growing menace and sophistication of face recognition software. In one recent experiment, for example, an off-the-shelf face-recognition program correctly identified as many as a third of participants simply by comparing their images with publicly available profiles on Facebook.

It seems unavoidable that in the future you could be identified by anyone who takes the trouble, simply based on info widely available on the net.

So how will dressing up outlandishly like you're at Burning Man change that? By pushing the face-recognition algorithms to their limit, or even past it. If you can't outlaw them, you might as well do all you can to throw them off-track.

The point is, dressing simply and predictably at all times only trains the software to zero in on you (especially if you can't control who takes your pic when and tags it online) - and there are only so many ways you can dress functionally. On the other hand, there are an unlimited number of looks - and ways to dress - if you opt for 'self-expression' instead.

My guess is that a lot of people bothered about the growing spread and reach of CCTV cameras will dress artistically and give full vent to their self-expression. And not just on occasions where they would like to avoid scrutiny - at protest rallies, for example - but even for occasions that don't demand the veil of being unidentifiable. It'll be hard to justify, or practise, as 'self-expression' something you only indulge in at opportunistic moments.

The same holds for wearing just masks - which is really all you need for beating face recognition algorithms; but dressing 'self-expressively' overall gives you the freedom to cover your face creatively in public without seeming like a potential bank robber.

The goal will not be to beat face recognition one hundred per cent, but to raise its cost by adding the need for (expensive) human oversight or to introduce a reasonable cause for doubt.

There will, of course, continue to be people for whom self-expression for its own sake is worth the exorbitant cost. But for others, a little paranoia should offer a helping hand.

Feb 7, 2011

The real "Internet of Things"

Every time I encounter the term "The Internet of Things", I feel a tinge of disappointment, which arises from knowing what it means but concurrently hoping it meant something else - something that is inherently implied in its name, at least to the uninitiated.

But first, here's what the "Internet of Things" is currently defined as:
"... the physical world itself is becoming a type of information system. In what’s called the Internet of Things, sensors and actuators embedded in physical objects—from roadways to pacemakers—are linked through wired and wireless networks, often using the same Internet Protocol (IP) that connects the Internet. These networks churn out huge volumes of data that flow to computers for analysis. When objects can both sense the environment and communicate, they become tools for understanding complexity and responding to it swiftly. What’s revolutionary in all this is that these physical information systems are now beginning to be deployed, and some of them even work largely without human intervention.
Even being familiar with that prosaic truth doesn't stop me, every time, from thinking for a very brief second that the term could - or rightfully should - refer to a network where physical, atom-made things are moved around in an Internet-like fashion.

For those of you who have also responded with the same anticipation-disappointment to the phrase, this post doesn't bring tidings of a real "Internet of Things." But it brings the next best thing to that: a description of technologies from the present - and even the past - that could possibly underpin a future, less disappointing, Internet of Things.

The idea, and the technology to make it happen, goes back to Victorian times:
There was a time, in many places, when letters and parcels could be put in capsules and sent through pipelines direct to people’s houses. The capsules were propelled by air and whizzed along tubes from sender to receiver. Pneumatic delivery, as it was known, was commonplace from about 1850 to 1950. The largest system was in Paris, where more than 400km of tubes were laid. Berlin and London had extensive pneumatic systems, too. After 1950, however, the networks gradually closed down, and today only Prague still clings to this Victorian technology.
If that wasn't fantastic enough, the idea has recently received an infusion of modern technology:
Pipenet, a system Dr Cotana patented in 2003 and has been developing since then, is based on a network of metal pipes about 60cm (two feet) in diameter. Instead of air pressure, it uses magnetic fields. These fields, generated by devices called linear synchronous motors, both levitate the capsules and propel them forward. The capsules are routed through the network by radio transponders incorporated within them. At each bifurcation of the pipe, the transponder communicates the capsule’s destination and the magnets pull it to the left or the right, as appropriate. Air pumps are involved, but their role is limited to creating a partial vacuum in the pipes in order to reduce resistance to the capsules’ movement. This way, Dr Cotana calculates, capsules carrying up to 50kg of goods could travel at up to 1,500kph—so you could be wearing a pair of jeans or taking photographs with a new camera only a couple of hours after placing your order.
Crucially (in my opinion), replacing the centralised air compressors that produce the pneumatic push with localised magnetic levitation technology should enable the "pneumatic post network" idea of the past to transform from a "monologue of things" to a "dialogue of things" - adding an upload option to the standard download feature. What we have here now is a glimmer of the democracy of the Internet we know so well, albeit shackled firmly by the economy of real things.
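Purely as an illustration of the routing described in the excerpt above - a capsule's transponder announcing its destination at each bifurcation, with the magnets switching it left or right - here is a toy sketch in Python. The pipe network, junction names and routing tables below are all invented for the sake of the example.

```python
# Toy model of capsule routing in a Pipenet-style network: at every
# bifurcation the capsule announces its destination and the junction
# switches it onto the left or right branch. Everything here is invented.
routing_tables = {
    # junction: {destination: branch to take}
    "A": {"warehouse": "left", "home_1": "right", "home_2": "right"},
    "B": {"home_1": "left", "home_2": "right"},
}
branches = {
    ("A", "left"): "warehouse",   # terminal
    ("A", "right"): "B",          # another junction
    ("B", "left"): "home_1",
    ("B", "right"): "home_2",
}

def route(capsule_destination, start="A"):
    """Follow a capsule from junction to junction until it arrives."""
    node, path = start, [start]
    while node in routing_tables:                    # still at a junction
        branch = routing_tables[node][capsule_destination]
        node = branches[(node, branch)]              # magnets pull left or right
        path.append(node)
    return path

print(route("home_2"))  # ['A', 'B', 'home_2']
```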

Dr. Franco Cotana - referred to in the excerpt above - is an engineering physicist working at the University of Perugia in Italy. His team is exploring pilot projects in Italy and China. The emphasis, pragmatically, is on the transportation of freight - the new technology reduces the cost of transportation to under $5 a mile. True, that's a fraction of the cost of traditional freight transport, but the freight-first focus is a real bummer for dreamers of a widely available "Internet of Things."

But that shouldn't prevent an unhindered view of the possibilities:
In 1900 Charles Emory Smith, then postmaster-general of the United States, wrote that by the end of the decade he expected the “extension of the pneumatic tube system to every house, thus insuring the immediate delivery of mail as soon as it arrives in the city”.
I am hoping the above technologies, accompanied by an adoption of open network protocols and a modular (packeted) approach to making things, may yet make what is currently a pipe dream, a dream pipe.

Mar 16, 2010

How advertising can help improve machine translation of languages

Automatic translation of languages by machines has been a standard fixture of science fiction - and like quite a few other sci-fi standards, it has been painfully slow to cross over to reality.

A recent breakthrough called 'statistical machine translation' promises to pluck this fantasy out of its status as a permanent resident of the future. Here's a description of how the technique works (from an Economist article in June 2006):
"Statistical translation encompasses a range of techniques, but what they all have in common is the use of statistical analysis, rather than rigid rules, to convert text from one language into another. Most systems start with a large bilingual corpus of text. By analysing the frequency with which clusters of words appear in close proximity in the two languages, it is possible to work out which words correspond to each other in the two languages. This approach offers much greater flexibility than rule-based systems, since it translates languages based on how they are actually used, rather than relying on rigid grammatical rules which may not always be observed, and often have exceptions."
Not surprisingly, the company at the forefront of statistical machine translation is Google. Whenever you use Google Translate, this is what's happening behind the scenes (via the Economist, Feb 2010):
"For translation, the company was able to draw on its other services. Its search system had copies of European Commission documents, which are translated into around 20 languages. Its book-scanning project has thousands of titles that have been translated into many languages. All these translations are very good, done by experts to exacting standards. So instead of trying to teach its computers the rules of a language, Google turned them loose on the texts to make statistical inferences. Google Translate now covers more than 50 languages, according to Franz Och, one of the company’s engineers. The system identifies which word or phrase in one language is the most likely equivalent in a second language. If direct translations are not available (say, Hindi to Catalan), then English is used as a bridge."
But currently there are a few drawbacks with statistical machine translation - which primarily have to do with the kind of readymade translated texts it relies on. From yet another recent Economist article:
"It is getting better, but it still struggles with colloquialisms and idioms. As Ethan Zuckerman, co-founder of Global Voices and a researcher at Harvard University, puts it: “If you sound like an EU parliamentarian, we can translate you quite well.”
What's foxing these gargantuan statistical crunching machines is the linguistic equivalent of the last-mile problem: the everyday spoken language that is rarely captured and archived - let alone translated into a dozen languages before archiving.

For advertising and advertisers who believe in their existence serving a larger purpose and providing a public good, there might be an opportunity here.

Ads and commercials are routinely translated into different languages - especially when they come from global multinationals. Because they aim to communicate with end consumers, these also contain the kind of colloquialisms and idioms that EU and UN speeches lack.

What if advertisers could provide Google - or a non-profit third party - with transcripts of these ads and commercials, along with the translations that have been professionally created by human experts? If a sufficiently large number of advertisers commit their future and past archives, it may end up creating a formidable archive of spoken and everyday lingo for the statistical inference bots to bite their silicon teeth into.

The drawback - as some skeptics will point out - is that the language of advertising may not be any less stilted or far removed from everyday speech than an EU speech. On the other hand, for advertising that seeks to leave behind a cultural impact, this could provide a platform to really make that come true.

If such a thing can be worked upon, the advertising itself may have a limited run - but its value could live on by providing us with better machine translation for ages to come.

Feb 23, 2010

Atoms aren't the new bits but...

In WIRED 18.02, editor-in-chief Chris Anderson pens one more of his soon-to-be-stretched-into-a-book pieces - Atoms are the new bits. If it does make it to book form, like his last effort it is likely to be a thick book with a thin idea.

His basic premise is probably uncontestable: "If the past 10 years have been about discovering post-institutional social models on the Web, then the next 10 years will be about applying them to the real world."

It hardly seems the stuff of revolutions, though. Joel Johnson at Gizmodo picks apart many of the arguments made in the piece, including the prophetic promise of the headline. (It is true that the cost of copying and manipulating atoms is falling but, as Chris Anderson should know, free is a totally different species.)

My own feeling as I read and re-read the piece was that Chris had a tenable story within the stuff he's talking about - but instead of concentrating on it, he reached for the untenable and outrageous claim he couldn't back.

At several places, he hovers over what to me was the real idea behind the piece. Here, for instance:
Transformative change happens when industries democratize, when they’re ripped from the sole domain of companies, governments, and other institutions and handed over to regular folks. The Internet democratized publishing, broadcasting, and communications, and the consequence was a massive increase in the range of both participation and participants in everything digital — the long tail of bits. 
Now the same is happening to manufacturing — the long tail of things.
And here:
The academic way to put this is that global supply chains have become scale-free, able to serve the small as well as the large, the garage inventor and Sony.
Which brings me to the real plausible claim that I think Chris could have/should have made with the piece. It's not that atoms are the new bits, but that the 'global supply chain' is the new internet.

Jul 2, 2009

The news industry, disaggregation of audiences and 'failed truths'

Recently, much discussion online (and offline) has centred on the impending doom of journalism and the news industry. As an enthusiastic cheerleader of all the disruptive effects of the Internet, my opinion until recently has been that these changes couldn't come soon enough.
But I am slowly beginning to wonder if in this particular case the results may be catastrophic in addition to being liberating.

The seeds of doubt were sown in my mind not while reading the many raging debates about the issue itself - but through an unconnected sentence in a Scientific American piece that explored the possibility of food shortages (caused by global warming, water shortages and over-cultivation) increasing the risk of failed states and probably leading to the end of civilisation.

Contrasting the predominant geopolitical threat in the 20th century (superpower conflict) with what we are facing today (failing states like Somalia, Afghanistan, Iraq), the author of the piece states "It's not the concentration of power but its absence that puts us at risk."

That line in particular seemed to echo for me the travails - and the challenges - currently faced by the news industry. For much of the last century the criticism of mainstream media and the news industry has been that the huge audiences it aggregated by default created the risk of manipulation of the truth and news by special interest groups, advertisers, the government and the very leanings of the news outlet in question.

For those who feared that, the disruption wreaked by the Internet is welcome news - individuals are free to seek the news and the truth on their own and there are myriad places to find them. And most of it is free - a development that is simultaneously, and fortuitously, driving the established news industry out of business.

But there is a downside. In this context, the above line can be reframed as: "It is not the aggregation of audiences but its absence that puts us at risk."

If the predominant threat in the last century was that ever-larger audiences could be misinformed deliberately, then the threat we are facing now is the possibility of 'failed truths' - news and facts that don't have enough of an audience to become known or be championed.

That doesn't imply that all truth and news will suffer - in the very same way that not all countries face the risk of becoming failed states. As a recent Economist report states, "Yet the plight of the news business does not presage the end of news. As large branches of the industry wither, new shoots are rising. The result is a business that is smaller and less profitable, but also more efficient and innovative."

Much of this revolution is being driven by readers "seeking the kind of information they want, when they want" using search engines, aggregators and social sharing tools.

Some online newspapers are experimenting with a combination of free and premium (freemium) content - in the belief that the free fun (or commodity) stuff will bring in people who will be served targeted advertising and the audience looking for the dry, obscure specialist stuff will want it enough to pay for it.

But what of the remaining stuff that we neither actively seek nor are willing to pay for - the stuff traditional news outlets served to us sandwiched between the above two layers? Information about local politics or local crime trends, for example - news that we don't particularly seek but should ideally know.

By including it in the mix, newspapers and news bulletins ensured that we had a bare minimum of awareness of news and information that we didn't value ourselves - but which added a great deal of value to our individual and collective lives.

It's these spheres that face the risk of being dominated by 'failed truths' if the current changes continue unabated - as empowered audiences seek and are found by news and views that cater to their own short-sighted, limited and momentary interest.

As I discovered, it's not a trifling worry. Tim Harford (the author of The Undercover Economist) writes of a research study that uncovers just that.

It seems that, following the closure of The Cincinnati Post at the end of 2007, "local politics suffered. In the suburbs of Cincinnati where the Post had the strongest presence, fewer candidates ran for municipal office in the election after the paper folded, voter turnout fell, and incumbents grew more likely to win re-election."

As Tim Harford notes, the special circumstances of the Cincinnati Post - a closure date determined 5 years earlier and not by other factors like a local recession - point to the kind of void that the news industry will inevitably leave behind. And one that no amount of blogging and citizen journalism may make up for.

The larger arc of the Scientific American article about food shortages was that 'failed states' export disease, terrorism, illicit drugs, weapons and refugees; and without intervention, these could lead to a series of government collapses and the undermining of the world order.

I can't help but wonder what unseen repercussions our 'failed truths' may unleash upon us.

Apr 27, 2009

The future of: CAPTCHAs

This New Scientist article gives an overview of how programmers are being goaded by X-prize-like rewards from spammers to crack CAPTCHAs - and, in the process, are providing a push towards fundamental breakthroughs in Artificial Intelligence.

The obvious and tongue-in-cheek conclusion drawn by the report is "to start designing CAPTCHAs in a different way – pick problems that need solving and make them into targets to be solved by resourceful criminals."

The subtext of this cat-and-mouse game of advances in CAPTCHAs and the technology to crack them is actually a fairly simple task - to define something that uniquely identifies us as human. After all, CAPTCHA stands for 'Completely Automated Public Turing test to tell Computers and Humans Apart' - and if someone is able to automate the solution then it no longer qualifies as a CAPTCHA.

In this escalating arms race, the demise of the CAPTCHA-as-we-know-it is probably very near. But the solution to stopping spam may not lie in moving the bar a bit further each time by repeatedly creating problems that need technological breakthroughs - or (as framed in the above article) problems that need an infusion of funds to motivate programmers to find a solution.

My bet is that the solution to a lasting CAPTCHA is not to focus on exceptional human capabilities (reading distorted text, correcting the orientation of pictures, etc.) but instead to focus on human frailties and chinks in the human mind that allow it to be fooled.

Visual and cognitive illusions - and magic - mask the perception of physical reality by exploiting our sensory and cognitive weaknesses - and could actually be the basis of next-generation CAPTCHAs aiming to distinguish humans from non-humans.

Of course, that would set off a race to simulate the mind's shortcomings and to create an artificial intelligence mimicking the neural basis of human intelligence. Again, my bet is that it wouldn't be as easy - or as near in the future - as cracking the human mind's specific capabilities, one by one.

And if programmers - spurred by a bounty or otherwise - do manage to recreate the imperfections of the human senses and mind, we would have truly created natural artificial intelligence (as against artificial artificial intelligence).

And then, spam will be the least of our problems.

Oct 22, 2008

The Future of: Wikipedia (Redux)

Over at the Marketing & Strategy Innovation Blog a Wikipedian, Nihiltres, has left a detailed comment in response to my post 'The Future of: Wikipedia'.

While Nihiltres makes quite a few valid points, his comment skirts the big question the post raised: should Wikipedia be referred to as an encyclopaedia at all? I need to clarify that the question wasn't meant as a criticism of Wikipedia, but more as a clarification for people who regularly fall into the trap of arguing whether its content is reliable or not.

I am posting here Nihiltres' entire comment and my responses to the points raised.

Nihiltres: "Deletionpedia is not official in any way; it's merely the biggest endeavour of its kind. Someone with deleted-article access (i.e. an administrator) is running a bot to retrieve deleted articles. Many deleted articles do not go to Deletionpedia, especially those which are copyright violation, libel, et cetera. You can request copies of deleted articles from most Wikipedia administrators, who will do it happily so long as the article's content isn't problematic."

I am aware that Deletionpedia isn't an official site - but was totally unaware that one can request copies of deleted articles from administrators. Thanks for the info.

Nihiltres: "Wikipedia does have statistics, just they don't do it themselves, for the obvious reason that they can't spare the server resources. A third-party site tracks page-view statistics and has made them available and easily searchable. Wikipedians would love to have more statistics, and have even added a number of new statistics to the software lately: for example, Special:Statistics includes a new entry "active users" listing the number of unique users to have made at least one action in the past 30 days. I'm particularly familiar with the addition of this feature as I personally added the descriptive text about how it's determined to the interface. Regardless, statistics generally takes a back seat to the running of the rest of the site, and increasing need for servers generally results in some of the more "nice-to-have, but not essential" features getting disabled. If you want to help solve this, donate to the Wikimedia Foundation and encourage others to do the same. You can even specify that your donation go towards statistics-related improvements in hardware and software."

Again, I was unaware of the availability of third-party page-view statistics. They're better than nothing, of course, but they don't substitute for embedded statistics within every page, which is what I was arguing for.

To draw an analogy, anyone can look up an atlas to calculate the distance between two places. But that doesn't make road signs announcing distances (or directions) redundant. In the case of the latter, the data is embedded within the terrain itself - making it useful in a way that no atlas can be. It felt a bit like referring to the atlas when I checked the popularity stats of pages - it's when the data is embedded right within Wikipedia's individual pages that we'll see Wikipedia users explore it secure in the knowledge of what they can trust and what they cannot.

Perhaps the problem is also with the "statistics generally takes a back seat to the running of the rest of the site" view. Conspicuous statistics about page-views, edits, etc. are a way of teaching and informing users about Wikipedia itself. It's the failure to do that which has resulted in the widespread misunderstanding of what Wikipedia is and what it stands for.

Nihiltres: "The market and trade analogies are difficult to swallow: if anything, Wikipedia is a gift economy, where everyone donates to a common pool of a non-commoditizable resource."

By referring to Wikipedia as a marketplace for information, I wasn't implying that money was changing hands (though Microsoft attempted something along those lines, and I won't be surprised if others have too). While Wikipedia is indeed a gift economy (or more likely, a reputation economy), an important part of both these terms is the word 'economy' itself. There is a trade or a barter - not necessarily in the monetary understanding of it.

What do Wikipedia contributors get in return for their efforts? Joy, pride, respect, the ability to influence people's views and - not to be underestimated in any way - their very own Wikipedia User Page.

Nihiltres: "While the notability system is contentious and has never quite worked, it does solve many problems with original research, vanity articles, spam, and other issues. The parent policy from which it derives, Wikipedia:Verifiability, has surely greatly increased Wikipedia's evident reliability, and repeated discussions have upheld its usefulness. While it's surely annoying as hell, it's often the only way to justify that something that obviously doesn't belong be excluded. Other methods, such as visitor-number measurement, aren't effective in practice. For example, the visitor-number model fails to distinguish between unworthy topics and obscure topics: is "Geoffrey Chaucer" less important than "Penis" because more immature teenagers view the latter? It's easy to criticize the current system (and the critics are usually correct), but I have yet to see a viable alternative."

I am a strong believer in William McDonough's maxim: "Regulations are signals of design failure." So introducing new regulations to deal with complications will only give rise to more complications - as Nihiltres acknowledges is the problem with Wikipedia's notability system. Given an opportunity to solve a problem, I'd urge: tinker with the design.

Visitor-number measurement might fail under the condition specified by Nihiltres - but refine it a bit, compare articles within categories (or within tags), and you'll end up showing a comparison between Geoffrey Chaucer and, say, Thomas Malory.

And finally, I am not so much a critic of Wikipedia itself (if anything, I'm a Wikipedia believer). I am a critic of the direction it's currently taking - by striving to be seen as a 'reliable encyclopaedia' Wikipedia is being untrue to its roots.

Instead of glossing over its shortcomings, Wikipedia, I believe, should be transparent about them at all times - and especially at the very moment when a user is using it as a reference.

And if it means dropping the word 'encyclopaedia' from its description, so be it.

Oct 11, 2008

The future of: Wikipedia


A recent post by Doc Searls narrating the near-deletion experience of his Wikipedia entry set me thinking about the debate between Wikipedia inclusionists and deletionists.

To paraphrase the debate, the inclusionists believe that since "Wikipedia is not paper" and has no space constraints, it should contain as many articles as its contributors are willing to produce - no matter how trivial they are. Deletionists, on the other hand, believe that Wikipedia should follow a more stringent editorial policy and ban articles on trivial subjects - something they believe will make it a credible and trustworthy source of reference.

As the Economist article above goes on to explain, Wikipedians have created a complex quagmire of rules to judge what makes an article trivial or non-trivial. The fate of a Wikipedia article nominated for deletion rests on the application and re-application of these rules in draining deliberations and debates - and if an entry fails this torturous process, it finds itself walking the plank (as it happens, to an afterworld called Deletionpedia).

It is a sign of Wikipedia's growing importance that a crippling bureaucracy is developing around it - apparently, entries about governance and editorial policies comprise around a quarter of its content. To most observers, this regulation and law-mongering is good news - it will make Wikipedia a bona fide encyclopaedia, an illegitimate child finally given the legitimacy of the family name (ironically, just as Britannica itself crosses over).

But, as I have argued elsewhere, the problem lies in defining Wikipedia as an encyclopaedia - or at least in comparing it to one. In my opinion, Wikipedia was never an encyclopaedia - it is (and should remain) a marketplace for information, where buyers and sellers meet and trade.

Every contributor to Wikipedia brings along a bundle of information - information that either makes an entire entry of its own, or is a cog in a bigger entry. When the contributions of various contributors conflict, Wikipedia's negotiation dance kicks in - the discussion page becomes a hotbed for the deliberations and debates discussed above. Hard as it is to believe, over time this protracted negotiation does result in an unbiased and objective entry.

While this negotiation works well to resolve conflicting information, I am not convinced it works as well to decide upon the triviality or non-triviality of an entry.

Instead, a long-tail marketplace of the kind Wikipedia is should not decide triviality and non-triviality by itself. It should leave it to the buyers to make that decision for themselves - and strive instead to make the meta-data (the information about the information) transparent to its users.

What that would mean is appending to every page information about its use - the number of people visiting it, for example. An entry - no matter how detailed and complete - with no visitors is trivial. Correspondingly, a stub with lots of visitors is not. That decision shouldn't at all be an administrator's to make - no matter how stringent and bureaucratic the process aiding and abetting him.

Providing visitor numbers for each entry (in one or more ways - raw numbers, comparative colour-coded gradings, percentile figures, numbers benchmarked against the most popular entry in the category, etc.) will also enable users to figure out the probability of the accuracy of an entry. The more viewers a page has, the more likely it is to be accurate, thanks to Linus' law - "given enough eyeballs, all bugs are shallow". (I use IMDb extensively, always aware that its Bollywood information can be buggy while everything about mainstream Hollywood fare is near authoritative.)
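As a rough illustration of the kind of per-entry meta-data I have in mind, here is a minimal sketch in Python. The view counts are invented, and the percentile-plus-benchmark grading is only one of many possible ways such information could be surfaced on a page.

```python
# Toy per-entry metadata of the kind proposed above: a page-view percentile
# within the entry's category plus a benchmark against the category's
# most-viewed entry. All view counts below are invented.
monthly_views = {
    "Geoffrey Chaucer": 42_000,
    "Thomas Malory": 6_500,
    "John Gower": 1_200,
    "William Langland": 2_800,
}

def entry_metadata(entry, category_views):
    views = category_views[entry]
    all_views = list(category_views.values())
    percentile = 100 * sum(v <= views for v in all_views) / len(all_views)
    top = max(all_views)
    return {
        "raw_views": views,
        "percentile_in_category": percentile,
        "share_of_top_entry": round(views / top, 2),
        # crude colour-coded grading for display next to the article title
        "grade": "green" if percentile >= 75 else "amber" if percentile >= 25 else "red",
    }

print(entry_metadata("Thomas Malory", monthly_views))
# {'raw_views': 6500, 'percentile_in_category': 75.0,
#  'share_of_top_entry': 0.15, 'grade': 'green'}
```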

Finally, this transparency about visitor numbers for each entry sends a valuable signal to the sellers in the marketplace. Those contributors slaving over those biographies of Pokémon characters (more than 500 detailed character biographies at last count) will probably be persuaded to abandon those entries and contribute instead to biographies of the leaders of Poland's Solidarity movement (currently, only a handful of poorly edited entries.)

Or, more likely, the numbers will convince them of the saleability of their wares.

(The likelihood of such a shift towards transparency occurring in Wikipedia's near future seems bleak, though. The Economist reports that Wikipedia has not been gathering and disclosing figures about user-activity on the site for more than a year - probably because they reveal unpleasant truths.)

The real battle for Wikipedia's soul does not lie in the include/exclude skirmishes currently taking place at its frontiers. It will be fought - if ever - within the very entry that defines what Wikipedia is. And when the word encyclopaedia is dropped from its definition, Wikipedia will be free to become what it was meant to be.

Aug 4, 2008

Don't just make a prediction, set yourself up on a 'prediction eddy'

Paul Saffo writes that the fastest way to make an effective forecast is often through a sequence of lousy forecasts.

He continues: "Instead of withholding judgment until an exhaustive search for data is complete, I will force myself to make a tentative forecast based on the information available, and then systematically tear it apart, using the insights gained to guide my search for further indicators and information. Iterate the process a few times, and it is surprising how quickly one can get to a useful forecast."

The mantra of the process he reveals is "strong opinions, weakly held." Take a definite stance and prove yourself wrong. Repeat until you can no longer do the latter.

As I was reading the post, the picture that came to my mind was of an eddy - the prediction not as a single do-or-die point, but as a whirlpooling series of points - a line - seemingly taking a tangential and misguided route to its destination but continually course-corrected by its own momentum and willingness to change direction at every point.

Not surprising, then, that a single prediction may well end up being wrong.

A 'prediction eddy', on the other hand, is compelled towards success by its very own nature.

Jul 30, 2006

Coming soon: Number replay

I have two kinds of weekends. An F1 race weekend - where all my plans will revolve around watching the qualifying and the race - and other weekends where I am free to do my thing. This was a race weekend. And also a good time to ruminate on the nature of 'watching' a race.

Of late, I have been watching F1 on TV armed with a live data feed from the Official F1 Racing site. The feed is directly connected to the computers at the racing venue and is what's available to commentators and other professionals. It's pretty comprehensive, showing where each driver is placed along with his lap times, sector by sector. Colour coding indicates who's fastest in a sector or lap, and also who's setting a personal best.
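As a rough illustration of how such a colour-coded screen could be derived from nothing more than raw sector times, here is a toy sketch in Python. The drivers and times are invented stand-ins for the real feed, and the "session best" / "personal best" labels are a simplification of what the actual timing screen shows.

```python
# Toy rendering of a live-timing screen: flag session-best and
# personal-best sectors in each driver's latest lap.
# Driver abbreviations and sector times are invented.
sector_history = {
    "ALO": [[23.4, 31.2, 28.9], [23.1, 31.5, 28.7]],
    "HAM": [[23.3, 30.9, 29.1], [23.5, 31.0, 28.8]],
}

def colour_latest_lap(history):
    # best time in each of the three sectors across every lap by every driver
    session_best = [min(lap[i] for laps in history.values() for lap in laps)
                    for i in range(3)]
    screen = {}
    for driver, laps in history.items():
        latest, previous = laps[-1], laps[:-1]
        row = []
        for i, t in enumerate(latest):
            if t <= session_best[i]:
                row.append(f"{t:.1f} (session best)")
            elif previous and t < min(lap[i] for lap in previous):
                row.append(f"{t:.1f} (personal best)")
            else:
                row.append(f"{t:.1f}")
        screen[driver] = row
    return screen

for driver, row in colour_latest_lap(sector_history).items():
    print(driver, row)
# ALO ['23.1 (session best)', '31.5', '28.7 (session best)']
# HAM ['23.5', '31.0', '28.8 (personal best)']
```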

From day one I have been hooked on this feed, and sometimes I find myself transfixed by the F1 Live Timing screen more than by the TV images. I love taking in all the details at a glance and often find that this motley bunch of numbers says more about the current situation in the race than even the most stunning pictures the TV crew can dish out.

So evocative are the numbers, in fact, that whenever there has been no telecast of F1 qualifying, I have simply connected to F1.com and happily watched the qualifying drama unfold as a series of decimal-pointed numbers. (There was no telecast yesterday but there was no power either.)

Having found it a workable replacement for watching the race on TV, I now wonder if we can train ourselves to experience an event with a set of senses other than the ones with which we are normally used to perceiving it.

Culturally, we are very attuned to visual images and our senses work in perfect harmony to extract loads of information from pictures. But even the best TV picture still consists of a set number of frames. The mind, while watching it, is filling in the motion and connecting each frame into a smooth continuous video - a video that in reality doesn't exist. Not unlike the details my mind is filling in as it sees a series of rapidly changing numbers - a particularly bad sector time may mean an excursion off the track, for example.

Sometime in the future, maybe, with much more detail and appropriate data coding for easy assimilation, systems like F1 Live Timing will create an alternate way of watching a sport. Of course, F1 is better suited to such a scheme than, say, football. Or maybe I am wrong about that.

Unfortunately for me, the one thing F1 Live Timing hasn't been able to replace yet is the pleasing sight of a spectacular crash. And there were a few today. I hope the guys at F1 Racing are flooring the accelerator to come up with the next version of Live Timing soon.

May 30, 2006

Micro-explorations. Mega-encyclopedia.

I haven't had an opportunity to see the maps that Columbus consulted before setting off on his voyage of discovery. But it's a fair guess that the continent we now know as America didn't figure in any of them.

But we don't find anything alarming about that. Because the very process of map-making (especially in earlier times) consisted of putting down a representation of everything we know – reliable or not – and then allowing people (in this case, intrepid explorers) to either validate or correct it.

Which is why I find it surprising that people are willing to argue against the usefulness of Wikipedia – the free online encyclopedia that has been at the centre of a raging debate.

Don't mistake me. The debate is necessary – but the discussion itself is ignoring a very crucial lesson about the changing nature of our relationship with information.

Last year, Nature commissioned a study to assess the accuracy of a sample of articles drawn from Encyclopedia Britannica and Wikipedia. It found only 123 errors in the former but 162 in the latter. Though it initially claimed victory, Britannica has since traded rebuttals and counter-rebuttals with Nature. Wikipedia has stayed out of the war of words, but the spotlight has remained firmly on it.

The public debate has centered around the 'unreliability' of Wikipedia's information trove. Its detractors gleefully point to the obvious pitfalls in its open-source process. Its supporters, while acknowledging it is nowhere near perfectly accurate, have emphasized that the process “is more traditional than most people realize.” It seems that less than 1% of users actually make edits – adding up to a few hundred committed volunteers. There's also a method in the madness – editors with superior reputations get to override others.

However long the debate lasts, it is apparent that both encyclopedias will co-exist and continue to attract their respective sets of loyalists. Yet no one, it seems, has considered the different kinds of audiences these two products will draw – probably because it is a fair assumption that everyone consulting an encyclopedia only wants accurate information. But is it?

This is where maps and mapmaking can throw some light. And if you think the comparison between encyclopedias and maps is just skin deep, think again. What else is an encyclopedia but an inter-connected mapping of all the data in the world?

Inherent in a map is the idea of ‘I am here’ – a place which gives us our mooring and is the epicenter of our outward exploration of the map. The further we go from the epicenter the less our individual experience is able to corroborate the data on the map with reality.

Not everything we know about the world comes from firsthand experience – hence the birth of the concurrent idea: ‘we are here.’ This tag applies to the large swathe of the globe where a sizeable number of people live or have visited: there, the data on the map is as good as the data for wherever you yourself are.

Outside of 'we are here' lies the frontier – little-explored territory, where anything we know could turn out to be wrong. Maps, with their physical representation of a reality we know, make it easy – even intuitive – for us to understand this.

Encyclopedias too have their 'I am here' and 'we are here' territories – though not as immediately and intuitively accessible to our faculties. And they have their frontiers too, even though we don't expect them to have any.

Britannica's supporters want to have nothing to do with the frontiers. In effect, they want the certainty of a line that tells them that everything within the bounds of an encyclopedia is 'we are here' territory.

But reality is far from that. The line is relentlessly shifting back and forth, often creeping back within the bounds of even an established encyclopedia like Britannica. (The controversy over who really discovered America – Columbus or the Vikings – is a case in point.)

The users drawn to Wikipedia will, however, understand and accept that reality. They will know that a large part of their favorite encyclopedia is indeed 'we are here' territory. But they will also know that the frontiers are very real and just a click away. Instead of deeming the 'map' useless, they will approach it with the same enterprise with which Columbus approached his maps – as an opportunity to embark on their own expeditions of discovery.

In that sense, the real use that Wikipedia serves is not as a storehouse of information – that is merely a by-product. Instead, its greatest value is that it gives all of us the opportunity to be explorers and discoverers – each of us doing our bit to push the limits of our collective knowledge a little further.

This exploration is not always in the realm of finding something new. As Kevin Kelly points out (in his Long Now Foundation lecture), exploration is also the joining of an isolated, lesser-known fact to the larger fabric of common knowledge (as Columbus actually did).

These explorations are also different from the ones we know of in the investment and expertise they require. They are micro-explorations – unleashed by the same forces that have given us micro-markets and micro-trends. These are explorations that can be undertaken by any of us. In most cases they confirm what is already known – in some cases they indicate otherwise. But it is the legitimacy of these micro-explorations that Wikipedia recognizes by allowing any one of us to change its contents.

The Wikipedia revolution – changing us from passive consumers of explorations and discoveries to active creators of our own little expeditions – is just another manifestation of the sweeping changes the Internet has wrought. And it is here to stay.

And finally, if you use Wikipedia, please ignore the debate and continue using it. But do also remember what Thoreau had to say.

“The frontiers are not east or west, north or south, but wherever a man fronts a fact.”

References & Notes:
'The Wiki Principle', The Economist, April 20th 2006
'The Next 100 Years of Science: Long-term Trends in the Scientific Method' by Kevin Kelly (Long Now Foundation lecture)

Apr 28, 2006

The photographer without a camera and other curiosities from the future

‘You press the button, we do the rest.’ A slogan that enshrined Kodak’s promise to photographers in the late nineteenth century. More than a hundred years later, neither Kodak nor the inheritors of its legacy have been able to deliver on that promise. Photography remains a tightrope walk involving unfathomable technique, crafty art, lots of hard work and much heartache. And a disconcerting majority of camera buyers continue to languish in the quicksand of mediocrity.

Despite the many advances in camera technology, the promise of reducing photography to the simple press of a button has remained an unattainable holy grail. But I suspect not for long. Unbeknownst to the rest of the world (and probably even to itself), a humble site designed for sharing pictures is already delivering on Kodak’s promise. And in the process, it might also end up redefining what the very act of shooting a picture means.

But as usual, I am steaming ahead of myself. Let me rewind and start at the beginning.

Though I have been shooting pictures for years, most of my output has been very forgettable. Except for a few images where luck lent a helping hand. Worse still, thanks to life and laziness, I got around to shooting only a fraction of the pictures I wanted to. Therefore, my skills hardly got the constant honing they needed. I replaced three cameras – each with a more advanced one – but I remained the mediocre photographer I always was.

A few months ago, I stumbled upon a photo-sharing site called Flickr.com. The unique thing about this rather addictive site was that it made the entire process of sharing pictures an infectious and communal experience. And I mean sharing not just with people you know, but also with complete strangers from around the world.

But that’s just one side of the story. While Flickr enables you to share your pictures with the world, it simultaneously enables the world to share its pictures with you. Imagine being able to view over 2.5 million photostreams (Flickr's term for subscribers' photo collections) right from your desktop! The sheer expanse of pictures available for viewing is simply breathtaking. And in the true spirit of a gregarious community, Flickr encourages browsers to leave comments on other people’s pictures or even mark a photograph as a favourite simply by clicking a button.

Of course, not everything is picture perfect. Flickr has its share of mediocre and downright bad pictures – in fact, they make up the overwhelming majority of the pictures you will find. But embedded within these overflowing streams of trash are a great many photographic gems and masterpieces. By now a confirmed Flickr addict, I compulsively navigated from one photostream to another, frowning disapproval on pictures that fell short and rewarding a select few with the mouse salute – clicking a button to save them as favorites.

This is when things got really interesting. As my collection of favorites swelled, I found myself handling it with the very same attention and care that I directed towards my own photostream. It took very little physical effort to mark a photo as favourite, but to me the act carried as much investment as clicking a picture with my own camera. I started thinking about the broad themes that ran through this motley collection of pictures and how they reflected both the subjects and style of my choice.

In time, I found a shift in my photographic eye – for the better. I got more critical of the pictures I was viewing – and my confidence in my ability to differentiate the good from the ordinary began to soar. I even periodically reviewed my favourites folder, to weed out pictures that seemed good to begin with but now didn’t quite make the cut. Hey, wait a minute. People usually do that with their own portfolios. So, does that say anything about my favourites folder?

A common adage in photography goes, ‘Photographers don’t make photographs. Light makes photographs; photographers merely capture them.’ This recognizes the fundamental truth that the photographer as an artist is actually a mere recorder, patiently waiting as millions of images (in a high-speed continuous video sequence called sight) flit by him every day. Using the apparatus of a camera, he chooses to freeze a few of these images which, in the words of Henri Cartier-Bresson, put “in the same line of sight the head, the eye and the heart.”

And as I sit every day in front of my computer viewing millions of images flitting by and capturing a handful of them with the press of a button, am I doing anything different?

Like any photographer you and I know, I too am a mere recorder, patiently waiting as an ever-flowing multitude of images marches past me – only to freeze and rescue a select few. If anything had changed, it was the apparatus I was using. I had, in fact, found the ultimate camera upgrade – one that would gather and present to me a selected shortlist of pre-processed images on every conceivable subject. And when I was satisfied with what I saw in the viewfinder, all I had to do was press a button. And the picture was taken.

Flickr, to me, had delivered on Kodak’s promise. And it had also imparted a valuable lesson on the upheavals and contradictions the online world will unleash in the years to come – the photographer without a camera, for example. It has also convinced me that looking at the future through present-tinted glasses will only misguide us – and catch us off guard when new paradigms cast their disruptive spell on us.
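
A footnote for the technically curious: that camera-less portfolio isn't locked inside the site. Flickr exposes public favourites through its API, and a minimal sketch of pulling them down might look like the Python below. The method name flickr.favorites.getPublicList is Flickr's; the API key and user ID are placeholders for illustration.

# Sketch: list a member's public Flickr favourites as if they were a portfolio.
# Requires the 'requests' package, a valid Flickr API key and the member's NSID.
import requests

API_KEY = "YOUR_FLICKR_API_KEY"   # placeholder - you'd supply your own key
USER_ID = "12345678@N00"          # placeholder NSID of the Flickr member

resp = requests.get(
    "https://api.flickr.com/services/rest/",
    params={
        "method": "flickr.favorites.getPublicList",
        "api_key": API_KEY,
        "user_id": USER_ID,
        "format": "json",
        "nojsoncallback": 1,
    },
    timeout=10,
)

# Each favourite is someone else's photo, 'captured' with a single click.
for p in resp.json()["photos"]["photo"]:
    print(p["title"], "-", f"https://www.flickr.com/photos/{p['owner']}/{p['id']}")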

As far as my photography goes, I am only too pleased with the progress I am making. And as I fire up my browser every morning, I know that a century-old advertising slogan can now be resurrected, albeit with a minor change.

“We’ve done the rest, you just press the button. Flickr.”
About the author:
Iqbal Mohammed is Head of Innovation & Strategy at a digital innovation agency serving the DACH and wider European markets. He is the winner of the WPP Atticus Award for Best Original Published Writing in Marketing & Communication.
You can reach him via email or Twitter.



