It’s a curiosity of scientific progress that black holes were postulated decades before a theory of evolution was¹. Not only is evolution a latecomer to the party, it also happens to be a straggler when it comes to popular appeal.
Why did it take so long for someone to come up with the idea of evolution? And even after 150 querulous years, why is it that so many people struggle to accept it?
According to Ernst Mayr, architect of neo-Darwinism, the blame rests squarely with Plato².
Plato believed that the world we inhabit is but an imperfect projection - a shadow - of an ideal world; a flawed departure from its perfect essence³.
For example, there are rabbits in the world but there also is the ideal ‘Rabbit’, which is perfect and immutable for all time. Real-world rabbits do differ from this ideal ‘Rabbit’ - they may have longer ears or differently coloured eyes. But these, in essence, are undesirable deviations from the ideal ‘Rabbit’, which serves as a conceptual blueprint for how all rabbits should be⁴.
Similarly, according to Plato, for every living and non-living thing there exists a category type - its perfect version. Everything in the world ‘strives’ to embody its category type.
As inheritors of the Greek tradition of thinking, we continue to see the world through the lens of this idea - a cognitive bias now dubbed essentialism⁵.
Essentialism has us believe that the categories we assign to things (and people) have a deep underlying basis to them. Instead of viewing categories as a helpful way of understanding the world, we begin to see them as factual features that are actually present in the world.
Based on this, we expect sharp and pronounced divides between different categories - triangles and squares, but also rabbits and non-rabbits. Any intermediates are only temporary anomalies.
To really appreciate evolution, however, one needs to disavow essentialism and see the world through the prism of ‘population thinking’.
There are no ideal types in the world, only real instances of populations with variations in characteristics. There is no perfect Rabbit form. Moreover, today’s commonplace rabbit form is anything but immutable and could have very different characteristics a thousand years hence, depending on selection pressures.
Evolution, if nothing else, is a wholehearted embracing of variation.
* * *
If essentialism has a professional cheerleading squad, it has to be the advertising industry.
At the heart of every advertising or brand-building endeavour is an exercise in essentialism.
Step 1: Define an archetypal experience or consumer: for example, the tastiest burger, the sexiest man, the healthiest breakfast, the smartest housewife or the bounciest hair. Step 2: Present any deviations from these ‘ideals’ as sharply delineated and deeply flawed. Step 3: Suggest corrective action: buying the brand in question.
That’s not all. The process of creating advertising, from start to finish, rests on driving wedges of categorisation into a spectrum of contiguous variations: generations of consumers (Gen Xers OR Millennials), proficiency in using technology (digital natives OR digital immigrants), consumer affinity (favourability OR advocacy), brand personality (hero OR sage), and so on.
And it goes deeper still. Unlike magicians, we remain convinced of the magic of our own trick, even after the curtains have come down.
So when thinking about the future of our industry, we continue to resort to the categorical clarity of Platonic ideals.
We dichotomise between creative types and everyone else (We have a monopoly on creativity); between professionally produced ads and user generated content (Nobody would do this for free); between bold visionary ideas and ideas that arise out of research and testing (A consumer can have nothing constructive to add); between ads that look like ads and content marketing (Nobody wants to read, everyone wants to be entertained); between stories and data (Everybody loves to listen to stories); between the eternal kingdom of advertising and the end of all advertising as we know it (Nothing will change! Everything will change!)
Our instinctive response to all problems - including our own - is to seek ‘essences’, to leave no room for intermediates. Our reality remains firmly bounded by our ideas of it, rather than the other way around.
While Platonic certainty served us well in simpler times, the present and the future are decidedly messier⁶. To thrive in these interesting times, we need to master an ability to experiment with and vary our response to the world, not impose our mental order on it⁷.
But as long as we continue to see any deviation from the norm as an undesirable (and temporary) aberration and as long as we continue to mistake the map for the territory, we are destined to remain trapped in an essentialist maze of our own making.
References and Notes:
1. John Michell elaborated on the idea of black holes in a paper presented to the Royal Society in 1783 (John Michell and Black Holes). Charles Darwin’s and Alfred Russel Wallace’s ideas of evolution were first presented to the Linnean Society on 1 July 1858 (Wikipedia: On the tendency of species to form varieties; and on the perpetuation of varieties and species by natural means of selection).
2. Ernst Mayr presents this argument in his book ‘What Evolution Is’ (and elsewhere). Richard Dawkins writes in The Greatest Show On Earth, “According to Mayr, the reason Darwin was such an unconscionable time arriving on the scene was that we all - whether because of Greek influence or for some other reason - have essentialism burned into our mental DNA.”
3. This is the idea captured in Plato's Allegory of The Cave.
4. Richard Dawkins uses a rabbit to explain essentialism and population thinking in The Greatest Show on Earth. Since I discovered the distinction in his book, I saw no reason to change the example animal in my own narration.
5. “Essentialism is a cognitive bias that people across a wide range of cultures seem to have, and across a wide range of ages too. It’s a belief that the everyday categories in the world around us have some deep underlying basis to them.” (An Interview with Susan Gelman PDF link)
6. Nassim Nicholas Taleb writes in The Black Swan, “The Platonic fold is the explosive boundary where the Platonic mindset enters in contact with messy reality, where the gap between what you know and what you think you know becomes dangerously wide. It is here that the Black Swan is produced.”
7. Gareth Kay in his piece Advertising’s Kodak Moment I.M.H.O. calls on the advertising industry to take on a hacker’s approach to solving problems: “... hacking is about a predisposition and bias towards speed. It’s about solving a problem in a better, faster and easier way. It fights the tyranny of perfection that far too often slows us down. It lets us move and experiment at least as fast as culture.”
Adrian Ho in his Cannes presentation on innovation and agencies references Bruce Lee’s martial arts style, Jeet Kune Do, which is “a style without style.” This idea of forgoing hard-set techniques is also captured in Bruce Lee’s famous quote: “Don't get set into one form, adapt it and build your own, and let it grow, be like water. Empty your mind, be formless, shapeless — like water. Now you put water in a cup, it becomes the cup; You put water into a bottle it becomes the bottle; You put it in a teapot it becomes the teapot. Now water can flow or it can crash. Be water, my friend.”
Both Gareth Kay's and Adrian Ho's calls for advertising to adopt a looser, less-wedded-to-technique attitude to problem-solving are, in my opinion, calls for an end to essentialism in the industry.
Aug 6, 2013
Jul 30, 2013
You may or may not have heard of ‘native advertising’ yet but chances are you’ve already encountered it, and often¹.
In a recent survey by the OPA, just under 75% of US publishers self-reported that their sites featured native advertising, a number that could reach 90% by year end².
So, what is native advertising³? It is a paid-for placement that appears on a publisher’s site or content stream, its appearance being ‘native’ to its visual surroundings. Along with residing on the publisher’s - not the marketer’s - site, it is designed to be discovered and shared in the same ways as regular content. It is also promoted alongside the site’s editorial content, though identified as ‘sponsored content’ or as created by ‘marketing partners’⁴.
One of the promises of native advertising is that it’ll significantly contain the pestilence of banner ads on the Web. No longer will we have to suffer visually screechy intrusions to our attention, nor will publishers have to sell them for next to nothing.
Native advertising, it is also being touted, could be a saviour for online publishing and a win for almost everyone (except, of course, ad agencies, who may see their role as intermediaries between marketers and online publishers all but vaporise⁵).
It is not all smooth sailing, of course. A drumbeat of opposition comes from advocates of old-school journalism, who revere the rigid church-and-state line of separation between editorial and advertising. To them, this is a textbook example of a slippery slope⁶.
* * *
In Game Theory 101, the choice of driving on the left or the right side of the road is often presented as a coordination problem, one in which both players stand to gain (in road safety) by agreeing on a side, while remaining indifferent to which side it is⁷.
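For readers who like the game spelled out, here is a minimal Python sketch of that coordination problem. The payoff numbers are my own illustrative assumptions (1 if both drivers pick the same side and pass safely, 0 if they collide), not figures from any of the sources cited below; the sketch simply checks which pairings of choices are stable in the Nash sense.

```python
# A toy model of the left/right driving convention as a 2x2 coordination game.
# Payoffs are illustrative assumptions: 1 if both drivers pick the same side
# (they pass safely), 0 if they pick different sides (a collision).

SIDES = ["left", "right"]

def payoff(row_choice, col_choice):
    """Return (row_payoff, col_payoff) for a single encounter."""
    return (1, 1) if row_choice == col_choice else (0, 0)

def pure_nash_equilibria():
    """Strategy pairs where neither driver gains by switching unilaterally."""
    stable = []
    for r in SIDES:
        for c in SIDES:
            r_pay, c_pay = payoff(r, c)
            row_happy = all(payoff(alt, c)[0] <= r_pay for alt in SIDES)
            col_happy = all(payoff(r, alt)[1] <= c_pay for alt in SIDES)
            if row_happy and col_happy:
                stable.append((r, c))
    return stable

if __name__ == "__main__":
    # Prints [('left', 'left'), ('right', 'right')]: both conventions are
    # equally stable, and the game alone cannot say which one to adopt.
    print(pure_nash_equilibria())
```

Both ‘everyone keeps left’ and ‘everyone keeps right’ come out as equally stable conventions - exactly the indifference the textbook case describes, and, as the rest of this piece argues, a tie that history broke with trust rather than mathematics.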
But for most of history and up to the 18th century, the choice of which side of the road to ride on was more than just a problem of coordination; it was fundamentally a question of trust.
Which is why almost everyone rode on the left of the road⁸. In case they encountered highwaymen or aggressors on their way, this placed their right arms in the best position to defend with sword or knife. (Paradoxically, staying to the left also made it easier to be attacked, as one would be on the ‘right’ side of oncoming traffic.)
Multi-horse wagons would eventually change this dynamic⁹ (their drivers preferred to keep right), but for centuries a papal edict of 1300 kept riding on the left as the law of the land¹⁰.
The French Revolution, which sought to rebuild society on new egalitarian principles, brought an abrupt end to several artefacts and accepted norms of life. One of the casualties was the left-of-the-road custom¹¹.
Napoleon decreed by imperial order that this game of mistrust would no longer be enacted on the roads of the European continent. His army was instructed to march on the right side of the road as a sign of trust and goodwill¹². This “civilised” practice spread far and wide, though the British and their erstwhile colonies still choose to be left behind¹³.
The church-and-state divide between editorial and advertising in the media likewise imposed a Napoleonic bargain on the way we navigate the world of news and information.
It was a coordination game, but also much more. We could safely encounter an armada of oncoming information in our daily lives and rarely have reason to mistrust its sources or intent. We stayed to the right, doffed our hats and chose to engage with content or advertising as we desired, with no worry about mistaking one for the other¹⁴.
One consequence of native advertising is a return to a bottom-up pre-Napoleonic state of affairs.
But isn’t this a fatal blow to the idea of journalistic ethics? Won’t we end up swimming in endless torrents of corporate propaganda repackaged and disguised as editorial content?
Unlikely. As minor league Robin Hoods through the centuries realised, it takes two to play a game of coordination and trust. There are no unilateral decisions.
Our new tripartite pact with journalism and native advertising will definitely not have the crisp starched principles that some of us fondly remember. But there’s no reason to believe it will be any less real, or helpful.
Millions of micro-interactions and negotiations between readers and a daily torrent of news will determine where the equilibrium of this new bargain lies. Far from being a rigid, straight line running only through the organisation charts of media companies, the division between the useful and the promoted can and will be made in the marketplace, and by readers.
If you doubt that, welcome to the left side of the road.
References and Notes:
1. High-profile web publishers sporting native advertising include Gawker, Huffington Post, Business Insider, Forbes, BuzzFeed, Slate, Cheezburger, Techmeme and The Atlantic (Reuters: And now, a word against our sponsor).
2. eMarketer: How Native Ad Campaigns are shaping up
3. There is a wide variety of confusing and overlapping definitions currently in use for the term native advertising, ranging from content marketing and publisher tweets to contextually relevant, non-standard advertising units (eMarketer: How Native Ad Campaigns are shaping up). Felix Salmon of Reuters has a useful matrix that clarifies how to distinguish between the many competing terms and definitions (Reuters: The Native Matrix).
4. I have outlined this definition based on pieces written by Felix Salmon (The disruptive potential of native advertising) and Lewis Dvorkin (What’s next for native ads? Controversy gives rise to market realities).
5. Andrew Rice covers BuzzFeed, one of the vocal advocates of native advertising, in this piece for NY Mag (Does Buzzfeed know the secret?). He provides a panoramic view of the potential of the innovation along with what the critics make of it. Writing on his Reuters blog, Felix Salmon makes a positive case for its disruptive potential (The disruptive potential of native advertising).
6. Jack Shafer writes, “When Web publishers deliberately blur the visual and textual divide that separates editorial from advertising, as The Atlantic did, they force readers to judge whether a page is news/opinion or a commercial advertisement. But they’re not confused; it’s the publisher and the advertiser who are confused. The publishers and advertisers have polluted their own tradition by erasing the traditional line. Suddenly, it’s completely reasonable for readers to blame controversial news stories directly on advertisers and blame controversial advertisements directly on reporters and editors, because publishers and advertisers have essentially merged operations. Such calamities injure both publisher and advertiser, even already controversial advertisers like Scientology.” (Reuters: And now, a word against our sponsor)
7. “A typical case for a coordination game is choosing the sides of the road upon which to drive, a social standard which can save lives if it is widely adhered to.” (Wikipedia : Coordination Games)
8. “In the past, almost everybody travelled on the left side of the road because that was the most sensible option for feudal, violent societies. Since most people are right-handed, swordsmen preferred to keep to the left in order to have their right arm nearer to an opponent and their scabbard further from him. Moreover, it reduced the chance of the scabbard (worn on the left) hitting other people.” (World Standards: Why do some countries drive on the right and others on the left?)
9. “These wagons had no driver's seat; instead the driver sat on the left rear horse, so he could keep his right arm free to lash the team. Since he was sitting on the left, he naturally wanted everybody to pass on the left so he could look down and make sure he kept clear of the oncoming wagon’s wheels. Therefore he kept to the right side of the road.” (World Standards: Why do some countries drive on the right and others on the left?)
10. New Scientist: Left is right on the road
11. New Scientist: Left is right on the road
12. This move by Napoleon has been widely attributed either to the fact that he was left-handed or to his doing the opposite of what Britain, his enemy, did (Wikipedia: Right- and left-hand traffic). However, one does not have to attribute idealistic motives to Napoleon to see that ordering his armies to march on the right had a worldwide impact and did improve the trust capital of all involved.
13. A world map of countries that drive on the left shows how closely this practice follows British lines (World Standards: Why do some countries drive on the right and others on the left?).
14. Jack Shafer writes, “It’s equally important to advertising-supported journalism that the news not be confused with the ads that run nearby, a point Benjamin Franklin made in his advertising manifesto in his 1731 “Apology for Printers.” Franklin held — and most publishers continue to hold — that the controversy raised in news stories is 1) desirable, 2) should not be held against advertisers and 3) that the content of advertisement should not automatically be held against the newspaper publishing them.” (Reuters: And now, a word against our sponsor)
Jul 23, 2013
...In that Empire, the Art of the Story attained such perfection that the Decennial Census of citizens was replaced with an Annual Census of all stories found within the borders. Politician, businessman, janitor all knew that having a story that framed one’s life and experiences was the only way to be counted among the living. In time, having a nation of citizens preoccupied with story-building was not enough. The all-powerful Story-collectors Guild decreed that it was a treachery against the empire to trade stories with other peoples and other nations; it was the sworn duty of every citizen to contribute ever bigger epics to the Empire’s formidable stockpile. The Generations that followed continued this arms race, but neglected honing the allied art of telling the stories. Since all were busy every waking hour collecting and crafting stories, there was none left to listen to them. In the Deserts of the West, there can still be found abandoned piles of stories half-buried in the sand; in all the land there is none alive who can narrate them¹.
— Šahrzâd, ‘On Mutually Assured Narration’ (1706)²
* * *
Apollo Robbins³ is a pickpocket and a gentleman. A theatrical showman, he returns wallets and watches with the same practiced ease with which he pilfers them⁴. Often to applause, but always to astonishment.
He is also an uncommon pickpocket for one other reason. Not everyone who plies his trade aspires to - or has achieved - Apollo’s ability to unravel and articulate the principles that underlie the art of the pickpocket.
His insights have earned him a dedicated following of neuroscientists (and consulting gigs with the DoD⁵ and many corporations) and even become the basis for a scientific paper⁶.
But there’s another lot whom he can teach a trick or two - us aspiring storytellers, if only we can tear ourselves away from our quest for the ‘what’ of stories - their hidden structures, blinkered archetypes and sublimated plots.
Because storytelling is as much - or even much more - about the ‘how’, the telling, as it is about the stories themselves.
Storytelling is about the audience, and about knowing if they are willing to play.
“When I shake someone’s hand, I apply the lightest pressure on their wrist with my index and middle fingers and lead them across my body to my left. The cross-body lead is actually a move from salsa dancing. I’m finding out what kind of a partner they’re going to be, and I know that if they follow my lead I can do whatever I want with them.”⁷
Storytelling is about attention, and a narrator’s ability to shape it.
“Attention is like water. It flows. It’s liquid. You create channels to divert it, and you hope that it flows the right way.”⁸
Storytelling is about the sensitivity of the indirect over the direct.
“If I come at you head-on, like this, I’m going to run into that bubble of your personal space very quickly, and that’s going to make you uncomfortable. So, what I do is I give you a point of focus, say a coin. Then I break eye contact by looking down, and I pivot around the point of focus, stepping forward in an arc, or a semicircle, till I’m in your space.”⁹
And storytelling’s ultimate trick? To invoke a calculus of computational hurdles offered by arcs and curves over the predictability of straight lines.
Pickpockets may move their hands in distinct ways, depending on their present purpose. They may trace a fast, linear path if they want to reduce attention to the path and quickly shift the mark’s attention to the final position. They may sweep out a curved path if they want to attract the mark’s attention to the entire path of motion.¹⁰
Neuroscientists think that these two forms of motion engage different parts of the visual system. Short linear bursts trigger saccadic eye movements—rapid but discontinuous focusing of the eyes during which visual awareness is suppressed for intervals as brief as 20 milliseconds—while curved movements activate smooth-pursuit neurons, brain cells programmed to follow moving targets.¹¹
This adaptation makes sense given that a straight line is a relatively predictable path, so your eyes can safely jump ahead, while a curved trajectory is less predictable and must be tracked more closely.¹²
The simplest way to achieve a captivating narrative is to offer and resolve multiple tangential possibilities, every single step of the way¹³. Leaving the audience with no obvious straight line to DIY.
For millennia, storyteller and pickpocket have worked the same crowd (often at the same time). We’ll be the ones losing the plot - by forgoing the sweep of the arc for the misplaced gratification of the straight line - if we seek inspiration from only one of them.
References and Notes:
1. I have adapted this from Jorge Luis Borges’ famous short story ‘On Exactitude in Science’ ("Del rigor en la ciencia"). The original short story imagines the consequences of desiring absolute precision in place of the necessary abstraction of maps (Wikipedia : On Exactitude in Science). I have retained much of the underlying structure of the original (as seen in this translation by Andrew Hurley), repurposing its payload to instead imagine the consequences of desiring stories in place of storytelling.
2. The original Jorge Luis Borges short story is written as a literary forgery, fictionally attributed to one Suárez Miranda. I’ve attributed the story to Šahrzâd (or Scheherazade) - the raconteur’s raconteur who spunkily wagers her own life on her storytelling abilities in The Thousand and One Nights. Who better to chronicle the death of storytelling than one who owed her life to it?
The title ‘On Mutually Assured Narration’ refers to the Cold War game theory-inspired stratagem of Mutually Assured Destruction (MAD). I imagine (in the story and outside it) a time in the future when everyone is armed with an epic story to tell and is programmed to deploy their own the instant they encounter another’s.
The short story is dated 1706 to mark the year of the first English translation of The Thousand and One Nights (Wikipedia: The Thousand and One Nights).
3. Apollo is a stage name (New Yorker: A Pickpocket’s Tale).
4. You can watch Apollo in action in this YouTube clip.
5. Says a Special Operations Command official who recruited Apollo Robbins: “It’s no big secret that a lot of Army Special Forces guys have a very big interest in magic and deception and being able to manipulate attention. Apollo is the guy who actually gets into the nuts and bolts of how it works, why it works, and oftentimes can extrapolate that into the bigger principle.” (New Yorker: A Pickpocket’s Tale)
6. Stephen Macknik and Susana Martinez-Conde, a husband-and-wife team of neuroscientists, have collaborated with Apollo and tested the basis of his insights in the paper Stronger Misdirection in Curved Than in Straight Motion. Apollo is credited as a co-author.
7. Apollo’s words taken from New Yorker: A Pickpocket’s Tale.
8. Apollo’s words taken from New Yorker: A Pickpocket’s Tale.
9. Apollo’s words taken from New Yorker: A Pickpocket’s Tale.
10. These are not the words of Apollo Robbins. This and the next 2 paragraphs are a splicing of two different articles referring to and explaining Apollo’s insights about misdirection in curved motion. This paragraph is taken from Scientific American : Magic and the Brain by Stephen Macknik and Susana Martinez-Conde, Apollo’s neuroscience collaborators.
11. These are not the words of Apollo Robbins. This, the last and the next paragraph are a splicing of two different articles referring to and explaining Apollo’s insights about misdirection in curved motion. This paragraph is taken from WSJ: Stealing A Watch Made Easy by Alex Stone.
12. These are not the words of Apollo Robbins. This and the last two paragraphs are a splicing of two different articles referring to and explaining Apollo’s insights about misdirection in curved motion. This paragraph is taken from WSJ: Stealing A Watch Made Easy by Alex Stone.
13. In the words of Ira Glass of ‘This American Life’ : “Narrative is basically a machine that’s raising questions and answering them.”
Jul 16, 2013
In the myth-making that follows success, the beginnings of any creative endeavour or career always happen under the light of a guiding star.
Every creative breakthrough, this myth holds, is foreordained. It benefits and enriches us - we undeserving flock of consumers - through the work and preachings of a chosen one. A christos¹, whose genius is to know something we don’t: what we really want.
Our own destiny, it seems, is to conserve the inertia of ignorance and wait for the moment when what we want is magically handed down to us on a platter of premonition.
This birth of the chosen one - or, interchangeably, the chosen idea - is greatly honoured and blessed by gifts sent forth by the three kings.
The first king is Akio Morita, founder of Sony and inventor of a product that famously failed research but didn’t fail history - the Walkman. His gift to the chosen one is these words: “We don’t ask consumers what they want. They don’t know. Instead we apply our brain power to what they need, and will want, and make sure we’re there, ready.”²
The second king is Steve Jobs, a phoenix among mortals, the inventor of the spiritual successor to the Walkman and übermensch above all³. His biggest legacy is, of course, his own extraordinary and multi-blockbuster career.
His gift, therefore, carries more than its weight in meaning; his words express the philosophy of a man who seemingly⁴ demonstrated their truth, time and again: "You can't just ask customers what they want then try to give that to them. By the time you get it built, they'll want something new."⁵
The last king is Henry Ford, an American icon who did more to transform the world (by making the car affordable) than we can possibly realise. His gift is the Hattori Hanzo⁶ sword of every creative ninja: “If I had asked people what they wanted, they would have said faster horses.”
As long as we have horses and period dramas set in Victorian London, this is the closest we will get to a Euclidean ideal of marketing.⁷
* * *
If you lived in ancient Greece or Rome and wanted to be known as a genius, your only option was to hope for rebirth, preferably as a creature of fantasy.
Because until the 15th century, a genius was not a human being. Instead, some overachievers had ‘a genius’⁸ - an other-worldly spirit, divinely assigned to them, who was the true source of their godly prowess in their chosen art.
The gifts of the talented, therefore, were not their own. Their creations were the result of a coalition of humble human contributions subject to a tsunami of creative forces unleashed by a supernatural collaborator - their attending genius.
This construct - of a human amanuensis and a genius spirit - assuages much envy, but also captures a fundamental reality of the creative process.
The act of creation is inherently a dialogue. A dialogue between primal unknowable forces and an all too human interlocutor - while the former makes unbridled brilliance possible, the latter ensures it remains relatable to human experience.⁹ We cannot have a masterpiece without either.
The ancient duality of a genius-human tag team offers space, even a distinct identity, within the creative process for a proto-consumer.
The same is true for our own three kings. They indeed had genius, but this genius was not free-standing - it was in attendance to an inherent consumer.
Their gifts do us no service by treating the absence of a formalised dialogue between disjointed creators and consumers as proof of the absence of any dialogue at all.
And if there ever was a gift horse that needed its dentures examined, it is Henry Ford’s.
Even if the people of his time naively longed for “faster horses”, Henry Ford the astute listener and creator knew which one of those two words mattered more. This does not signify a breakdown in communication; it is the perfect example of exactly the opposite.
That creation is effected by dialogue does not mean all or any exchange is good and should go unchallenged. Much of the current apparatus of formalised dialogue in our context - market research - is based on outdated assumptions¹⁰. There has never been a greater need to inject genuine empathy, communication and understanding back into our creative processes.
But we do ourselves no favours by attempting a reformation of this process with fingers pointed squarely at the consumer¹¹ - by endlessly repeating assertions that he cannot possibly know anything valuable and therefore has nothing to contribute¹².
Just think of the almighty genie¹³ - the genius - of the lamp, blessed with the power to summon or create anything at will. He still needs to ask someone, “What do you want?”
References and Notes:
0. I have kept things stylistically simple by using the standard convention of referring to a reader or consumer as ‘he’, without intending to imply that I think of them exclusively as male.
That said, in the headline I have specifically departed from the female pronoun of the David Ogilvy original quote to avoid the suggestion, or misunderstanding, that it’s only the female half that can possess creative genius.
1. Greek for ‘anointed one’ - the second name added to Jesus by early Greek-speaking Christians. (Behind the Name: Christos)
2. Akio Morita notably ignored focus groups that hated the Walkman (Man and Superman).
3. Chris Dixon uses the great man theory of history to suggest that firms run by intuitive geniuses like Akio Morita and Steve Jobs end up defying the natural lifecycle of companies (Man and Superman).
4. Seemingly, because his failures are often forgotten or ignored. “When Steve Jobs has fancied himself the chief creator, disastrous failures often ensued. His instincts were often wrong. For example, his much ballyhooed Apple Cube, which was in fact a successor to the NeXT cube he'd developed during his Apple hiatus, was a $6,500 dud. He was also openly disdainful of the Internet in the late 1990s. And before his hiatus from Apple, in 1985, his meddling and micro-management had gotten out of control.” (Co.Design: What made Steve Jobs so great?)
5. A view he expressed to Inc. magazine as far back as 1989 (The Decade of the Entrepreneur). A couple of decades later when asked what market research went into the iPad, he famously said "None. It's not the consumers' job to know what they want." (Co.Design: What made Steve Jobs so great?)
6. A special katana sword - in the mythical tradition of fabled and enchanted weapons - prepared for the protagonist by a renowned swordsmith of the same name in the Quentin Tarantino movie Kill Bill Vol 1. The swordsmith and sword are named for a famous 16th century Samurai and Ninja master of the Sengoku era (Wikipedia: Hattori Hanzo).
7. Euclidean geometry consists in assuming a small set of intuitively appealing axioms and deriving other propositions from them, creating a comprehensive logical and deductive framework (Wikipedia: Euclidean Geometry). Plato - the idealist’s idealist - is supposed to have inscribed above the entrance to his famous school, "Let none ignorant of geometry enter here." (Wikipedia: History of Geometry)
8. This sense of genius also gives us the English word ‘genie’, both derived from the Latin gignere, meaning “to produce.” (Lexical Investigations: Genius)
9. In the Indian myth dealing with the creation of the epic poem Mahabharatha, the scribe is charged with the job of understanding every word that’s being dictated by the creator - in fact, he is commanded not to write down anything until he has understood it. An effective way to ensure the genius’ work is understandable to a commoner. I wrote more about this in the context of story-telling: Ganesha, the Mahabharatha and Complexity as Narrative Device.
10. Faris Yakob’s blog post All Market Research is Wrong and paper Uncovering Hidden Persuaders do a good job of rounding up the objections to market research as it is currently practiced and proposing approaches to tackle the shortcomings.
11. This marketoonist cartoon by Tom Fishburne and this parody video of a prehistoric focus group capture the popular disdain of seeking any consumer input in the creative process.
12. Consumers do know a lot more about their needs than they are usually given credit for. Proof of this is the increasing occurrence of user innovation, a phenomenon captured by MIT professor Eric von Hippel in his book ‘Democratizing Innovation’.
In the book he writes, “Product developers need two types of information in order to succeed at their work: need and context-of-use information (generated by users) and generic solution information (often initially generated by manufacturers specializing in a particular type of solution). Bringing these two types of information together is not easy. Both need information and solution information are often very "sticky"-that is, costly to move from the site where the information was generated to other sites. As a result, users generally have a more accurate and more detailed model of their needs than manufacturers have, while manufacturers have a better model of the solution approach in which they specialize than the user has.”
Therefore, the innovation that companies pride themselves on turns out to be a particular and limited kind of innovation. “One consequence of the information asymmetry between users and manufacturers is that users tend to develop innovations that are functionally novel, requiring a great deal of user-need information and use-context information for their development. In contrast, manufacturers tend to develop innovations that are improvements on well-known needs and that require a rich understanding of solution information for their development.”
13. The Arabic term ‘djinn’ is rooted in terms meaning ‘conceal’ or ‘cover of darkness’ and is unrelated to the Latin-derived word ‘genie.’ It was translated as ‘genie’ by the original French translators of The Thousand and One Nights because of its proximity to the French ‘génie’ in sound and meaning (What’s the difference between genies, jinn, and djinn?).
The djinns and spirits of the Arab world are mostly mischief makers, but among them were also the functional equivalents of the Latin genius. “Another manifestation, called Qareen, were devil companions appointed to every human being by Allah from among the jinn who, like Iblis, whispered evil things into their hearts and led them astray. Such was their influence that they were believed responsible for every inspirational work in pre-Islamic Arabia. Every poet was alleged to have a qareen devil of whom he was only a mouthpiece. A fantastic vision of the qareen given in one report shows him having a translucent body in the shape of a frog perched on the left shoulder bone of a man. It had a stinger like a mosquito’s, with which it actively probed the depths of the man’s heart and injected its message.” (Lapham’s Quarterly: Mischief Makers)
Every creative breakthrough, this myth holds, is foreordained. It benefits and enriches us - we undeserving flock of consumers - through the work and preachings of a chosen one. A christos¹, whose genius is to know something we don’t: what we really want.
Our own destiny, it seems, is to conserve the inertia of ignorance and wait for the moment when what we want is magically handed down to us on a platter of premonition.
This birth of the chosen one - or, interchangeably, the chosen idea - is greatly honoured and blessed by gifts sent forth by the three kings.
The first king is Akio Morita, founder of Sony and inventor of a product that famously failed research but didn’t fail history - the Walkman. His gift to the chosen one are these words: “We don’t ask consumers what they want. They don’t know. Instead we apply our brain power to what they need, and will want, and make sure we’re there, ready.”²
The second king is Steve Jobs, a phoenix among mortals, the inventor of the spiritual successor to the Walkman and übermensch above all³. His biggest legacy is, of course, his own extraordinary and multi-blockbuster career.
His gift, therefore, carries more than its weight in meaning; his words express the philosophy of a man who seemingly⁴ demonstrated their truth, time and again: "You can't just ask customers what they want then try to give that to them. By the time you get it built, they'll want something new."⁵
The last king is Henry Ford, an American icon who did more to transform the world (by making the car affordable) than we can possibly realise. His gift is the Hattori Hanzou⁶ sword of every creative ninja: “If I had asked people what they wanted, they would have said faster horses.“
As along as we have horses and period dramas set in Victorian London, this is the closest we will get to a Euclidean ideal of marketing.⁷
* * *
If you lived in ancient Greece or Rome and wanted to be known as a genius, your only option was to hope for rebirth, preferably as a creature of fantasy.
Because until the 15th century, a genius was not a human being. Instead, some overachievers had ‘a genius’⁸ - an other-worldly spirit, divinely assigned to them, who was the true source of their godly prowess in their chosen art.
The gifts of the talented, therefore, were not their own. Their creations were the result of a coalition of humble human contributions subject to a tsunami of creative forces unleashed by a supernatural collaborator - their attending genius.
This construct - of a human amanuensis and a genius spirit - assuages much envy, but also captures a fundamental reality of the creative process.
The act of creation is inherently a dialogue. A dialogue between primal unknowable forces and an all too human interlocutor - while the former makes unbridled brilliance possible, the latter ensures it remains relatable to human experience.⁹ We cannot have a masterpiece without either.
The ancient duality of a genius-human tag team offers space, even a distinct identity, within the creative process for a proto-consumer.
The same is true for our own three kings. They indeed had genius, but this genius was not free-standing - it was in attendance to an inherent consumer.
Their gifts do us no service by equating an absence of a formalised dialogue between disjointed creators and consumers as proof of the absence of any dialogue at all.
And if there ever was a gift horse that needed its dentures examined, it is Henry Ford’s.
Even if the people of his time naively longed for “faster horses”, Henry Ford the astute listener and creator knew which one of those two words mattered more. This does not signify a breakdown in communication; it is the perfect example of exactly the opposite.
That creation is effected by dialogue does not mean all or any exchange is good and should go unchallenged. Much of the current apparatus of formalised dialogue in our context - market research - is based on outdated assumptions¹⁰. There has never been a greater need to inject genuine empathy, communication and understanding back into our creative processes.
But we do ourselves no favours by attempting a reformation of this process with fingers pointed squarely at the consumer¹¹ - by endlessly repeating assertions that he cannot possibly know anything valuable and therefore has nothing to contribute¹².
Just think of the almighty genie¹³ - the genius - of the lamp, blessed with the power to summon or create anything at will. He still needs to ask someone, “What do you want?”
References and Notes:
0. I have kept things stylistically simple by using the standard convention of referring to a reader or consumer as ‘he’, without intending to imply that I think of them exclusively as male.
That said, in the headline I have specifically departed from the female pronoun of the David Ogilvy original quote to avoid the suggestion, or misunderstanding, that it’s only the female half that can possess creative genius.
1. Greek for ‘anointed one’ - the second name added to Jesus by early Greek-speaking christians. (Behind the name: Christos)
2. Akio Morita notably ignored focus groups that hated the Walkman (Man and Superman).
3. Chris Dixon uses the great man theory of history to suggest that firms run by intuitive geniuses like Akio Morita and Steve Jobs end up defying the natural lifecycle of companies (Man and Superman).
4. Seemingly, because his failures are often forgotten or ignored. “When Steve Jobs has fancied himself the chief creator, disastrous failures often ensued. His instincts were often wrong. For example, his much ballyhooed Apple Cube, which was in fact a successor to the NeXT cube he'd developed during his Apple hiatus, was a $6,500 dud. He was also openly disdainful of the Internet in the late 1990s. And before his hiatus from Apple, in 1985, his meddling and micro-management had gotten out of control.” (Co.Design: What made Steve Jobs so great?)
5. A view he expressed to Inc. magazine as far back as 1989 (The Decade of the Entrepreneur). A couple of decades later when asked what market research went into the iPad, he famously said "None. It's not the consumers' job to know what they want." (Co.Design: What made Steve Jobs so great?)
6. A special katana sword - in the mythical tradition of fabled and enchanted weapons - prepared for the protagonist by a renowned swordsmith of the same name in the Quentin Tarantino movie Kill Bill Vol 1. The swordsmith and sword are named for a famous 16th century Samurai and Ninja master of the Sengoku era (Wikipedia: Hattori Hanzo).
7. Euclidean geometry consists in assuming a small set of intuitively appealing axioms and deriving other propositions from them, creating a comprehensive logical and deductive framework (Wikipedia: Euclidean Geometry). Plato - the idealist’s idealist - is supposed to have inscribed above the entrance to his famous school, "Let none ignorant of geometry enter here." (Wikipedia: History of Geometry)
8. This sense of genius also gives us the English word ‘genie’, both derived from the Latin gignere, meaning “to produce.” (Lexical Investigations: Genius)
9. In the Indian myth dealing with the creation of the epic poem Mahabharatha, the scribe is charged with the job of understanding every word that’s being dictated by the creator - in fact, he is commanded not to write down anything until he has understood it. An effective way to ensure the genius’ work is understandable to a commoner. I wrote more about this in the context of story-telling: Ganesha, the Mahabharatha and Complexity as Narrative Device.
10. Faris Yakob’s blog post All Market Research is Wrong and paper Uncovering Hidden Persuaders do a good job of rounding up the objections to market research as it is currently practiced and proposing approaches to tackle the shortcomings.
11. This marketoonist cartoon by Tom Fishburne and this parody video of a prehistoric focus group capture the popular disdain of seeking any consumer input in the creative process.
12. Consumers do know a lot more about their needs than they are usually given credit for. Proof of this is the increasing occurrence of user innovation, a phenomenon captured by MIT professor Eric von Hippel in his book ‘Democratizing Innovation’.
In the book he writes, “Product developers need two types of information in order to succeed at their work: need and context-of-use information (generated by users) and generic solution information (often initially generated by manufacturers specializing in a particular type of solution). Bringing these two types of information together is not easy. Both need information and solution information are often very "sticky"-that is, costly to move from the site where the information was generated to other sites. As a result, users generally have a more accurate and more detailed model of their needs than manufacturers have, while manufacturers have a better model of the solution approach in which they specialize than the user has.”
Therefore, the innovation that companies pride themselves on turns out to be a particular and limited kind of innovation. “One consequence of the information asymmetry between users and manufacturers is that users tend to develop innovations that are functionally novel, requiring a great deal of user-need information and use-context information for their development. In contrast, manufacturers tend to develop innovations that are improvements on well-known needs and that require a rich understanding of solution information for their development.”
13. The Arabic term ‘djinn’ is rooted in terms meaning ‘conceal’ or ‘cover of darkness’ and is etymologically unrelated to ‘genie’, which comes from the Latin genius. It was rendered as ‘genie’ by the original French translators of The Thousand and One Nights because of its proximity to the Latin-derived word in sound and meaning (What’s the difference between genies, jinn, and djinn?).
The djinns and spirits of the Arab world are mostly mischief makers, but among them were also the functional equivalents of the Latin genius. “Another manifestation, called Qareen, were devil companions appointed to every human being by Allah from among the jinn who, like Iblis, whispered evil things into their hearts and led them astray. Such was their influence that they were believed responsible for every inspirational work in pre-Islamic Arabia. Every poet was alleged to have a qareen devil of whom he was only a mouthpiece. A fantastic vision of the qareen given in one report shows him having a translucent body in the shape of a frog perched on the left shoulder bone of a man. It had a stinger like a mosquito’s, with which it actively probed the depths of the man’s heart and injected its message.” (Lapham’s Quarterly: Mischief Makers)
Jul 9, 2013
Before showing up in Star Trek¹, the Dyson sphere was a thought experiment.
Physicist and mathematician Freeman Dyson wondered what would be the logical consequences of millennia of technological progress on a civilisation. In particular, what tell-tale outward signs would such a civilisation emanate that could be observed from afar?
One defining feature of technological progress is an escalation of energy needs. As we move up the ladder, we typically extract orders of magnitude more energy from the environment. Dyson conjectured that this hunger for ever more energy will never go away.
At a sufficiently advanced stage, a civilisation will come to view its primary source of energy - the star that it orbits (its Sun) - in a new light. Rather than just remaining passive harvesters of the tiny fraction of stellar energy the planet does intercept, its people will use all their know-how to extend their cosmic grasp beyond themselves.
One way they could achieve this is by building a sphere around their star, designed to capture and redirect potentially all of its radiated energy. This is the Dyson sphere².
Work by later enthusiasts has built upon these initial ideas and refined them - not just to aid and abet the search for extraterrestrial intelligence³, but as a regular prop in sci-fi story-telling⁴ and also as an indulgence in a shared thought experiment.
There’s the risk that this post is beginning to resemble a Wikipedia page with the barest excuse of a plot, but there’s also more. In 1964, a Soviet astronomer named Nikolai Kardashev proposed a scale to measure the technological advancement of civilisations.
On this scale, a Type I Kardashev civilisation would be one that uses all of the energy impinging on its planet. A Type II, on the other hand, would girdle and harness all the energy of its star, down to the very last drop⁵. It would have splurged on a Dyson sphere.
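(For the numerically curious: Carl Sagan later proposed a continuous version of the scale, interpolating between Kardashev’s types from raw power consumption. The short sketch below uses Sagan’s formula and some rough, assumed wattages - none of which appear in this post or in Kardashev’s original proposal - purely to give a feel for the gulf between us and a Dyson-sphere civilisation.)

# A minimal sketch of the Kardashev scale, using Carl Sagan's continuous
# interpolation K = (log10(P) - 6) / 10, where P is power use in watts.
# The example wattages below are rough assumptions for illustration only.
import math

def kardashev_type(power_watts: float) -> float:
    """Return the continuous Kardashev type for a given power draw."""
    return (math.log10(power_watts) - 6) / 10

examples = [
    ("Humanity today (very roughly)", 2e13),          # ~20 terawatts, assumed
    ("Type I: a whole planet's sunlight", 1e16),
    ("Type II: a whole star, i.e. a Dyson sphere", 4e26),
]
for label, watts in examples:
    print(f"{label}: K = {kardashev_type(watts):.2f}")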
* * *
Richard Dawkins begins his immortal book⁶ ‘The Selfish Gene’ by surmising, “If superior creatures from space ever visit earth, the first question they will ask, in order to assess the level of our civilisation, is: ‘Have they discovered evolution yet?’”⁷
It’s quite probable that this cosmic IQ test has more than one question and in that case, I would be surprised if “Have they discovered Big Data?” doesn’t figure prominently as well.
Why Big Data? Agreed, it helps businesses to make better decisions, enables public institutions to frame more informed policies and practices, allows science to bypass theories and causation altogether, prods governments to cosplay as Agent Smith⁸ and enfranchises statistical analysts to hold forth in meetings. But what civilisational pole-vault does Big Data represent?
To put it simply, Big Data represents a quiet backflip on the information equivalent of the Kardashev scale⁹.
As Big Data thinking seeps into every fold of business, science and governance, we are beginning to make the transition from asking what data impinges on us, to seeking what data we can intercept now and store for later. From seeing data as an incidental and oftentimes helpful feature of our world, to treating it as a raw resource to be fracked out of every nook and search. From lamenting that we have more data than we can ever use, to turning our attention to all the data previously going uncaptured and building apps (and satellites¹⁰) to change that.
In other words, we have crossed the metaphorical watershed between contemplating the beauty of a sunset and contemplating the blueprint of a Dyson sphere.
Facebook represents one (but not the only) ambitious and centralised attempt to cloak the world’s population in an information Dyson sphere. Enclosed and interconnected by its very fabric, every quantum of information each one of us ever radiates is to be captured, stored and ultimately harnessed in the service of some decision, sometime and someplace else. None shall be allowed the luxury of impermanence. (Much like its sci-fi and energy counterpart, however, it’s already apparent that successful long-term attempts to do this will more likely be decentralised and loosely co-ordinated¹¹ - not Dyson spheres but Dyson swarms.)
Not to forget, there’s also the non-trivial question of where all this information - these 2.5 quintillion bytes of data created and captured every single day - will reside. What else does this non-fictional Library of Babel¹², quietly purring and awaiting the deft touch of data-whisperers, have in store?
Running these massive data centers already consumes around 1.5 percent of the world’s energy resources¹³. It’s not inconceivable that some time in the near future they may come to consume closer to 20 percent of our total energy - the very fraction of our caloric intake that’s diverted to our brains.
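(A quick bit of back-of-the-envelope arithmetic on that jump - the world-energy figure below is my own assumption, not a number from this post or its sources; only the 1.5 percent and 20 percent shares come from the paragraph above.)

# Rough arithmetic for the data-centre scenario sketched above.
# The ~18 TW figure for average global energy use is an assumed illustration.
WORLD_POWER_TW = 18.0       # assumed average global energy use, in terawatts
current_share = 0.015       # data centres today, ~1.5 percent (per the post)
brain_like_share = 0.20     # the brain's ~20 percent share of our caloric intake

print(f"Data centres now:     ~{WORLD_POWER_TW * current_share:.2f} TW")
print(f"At a brain-like 20%:  ~{WORLD_POWER_TW * brain_like_share:.1f} TW, "
      f"about {brain_like_share / current_share:.0f}x today's draw")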
When that happens, things could get interesting. We may end up with an experiment in thought¹⁴.
References and Notes:
1. A full Dyson sphere is featured in the Relics episode of Star Trek: The Next Generation (Dyson spheres in popular culture)
2. You can read more about the Dyson sphere at its Wikipedia page.
3. As it so happens, Dyson spheres are actually difficult to distinguish from natural astronomical objects, like heavy dust clouds of birthing or dying stars, that may radiate similar signatures (Signs of Life). However, the search continues (Dyson sphere searches).
4. This Wikipedia page (Dyson spheres in popular culture) keeps track of all occurrences of Dyson spheres in fictional worlds.
5. A Type III Kardashev civilisation is also on the scale - this is one that harnesses all the energy of its galaxy (Kardashev Scale).
6. ‘The Selfish Gene’ is indeed one of my all-time favourite books but this is just a sly reference to the title Richard Dawkins now wishes he had given the book (The Immortal Gene), considering the misunderstandings the original title has led to (The Selfish Gene).
7. Chapter 1, The Selfish Gene by Richard Dawkins.
8. The antagonist of The Matrix trilogy.
9. I am unaware if such a scale has already been conjectured. But if not, I propose the name Ovshinsky scale, for Stanford Ovshinsky, who recognised that “information really is encoded energy” (Interview with Stanford Ovshinsky).
10. Wired 21.07 has a fascinating piece on a startup which is launching a fleet of cheap, small and ultra-efficient satellites that send back up-to-the-minute hi-res pictures of all of the earth, which could change the way we measure the economic health of countries, industries and even individual businesses. (The Watchers)
11. Jeff Stibel makes a compelling case for why Facebook is set on the path to implosion rather than unrestrained growth. (Is Facebook Too Big To Survive)
12. ‘La Biblioteca de Babel’ is a short story by Jorge Luis Borges in which there exists a library containing all possible books, covering every possible ordering of letters (The Library of Babel). Much of the content of the library is gibberish, leading its users to suicidal despair, but contained within its vastness are also untold and unimaginable treasures - like translations of all books in all languages.
13. Statistic from Google Throws Open Doors To Its Top Secret Data Center.
14. In Wired 13.08: We Are The Web, Kevin Kelly writes “This planet-sized computer is comparable in complexity to a human brain. Both the brain and the Web have hundreds of billions of neurons (or Web pages). Each biological neuron sprouts synaptic links to thousands of other neurons, while each Web page branches into dozens of hyperlinks. That adds up to a trillion "synapses" between the static pages on the Web. The human brain has about 100 times that number - but brains are not doubling in size every few years. The Machine is.”
My own conjecture here is that it might not be the total number of neurons, or connections, that could emerge as a threshold for the singularity moment, but the amount of energy the global mind draws from the world’s energy resources. It's one good reason why the global mind might convince us to build (it) a Dyson sphere.
Jul 2, 2013
“Stop fucking with the words man. i maybe dead, but i am watching ~ Rumi”¹
These fake words by a real poet² exemplify a persistent occupational hazard of the Internet: the fake quotation. This particular one exists, of course, to poke fun at the genre by exposing the ease with which “genuine” quotes of wisdom can be freshly minted and set loose³.
Is it any wonder then that we are living through an epidemic of fake quotations?⁴ If you spend any amount of time online, you’ve been exposed to the most virulent strains and have also helped spread some of them. Contagion is a feature, the retweet is merely an unmuffled cough.
But where the quote-mongers are amok, can the crack investigation squad be far behind?
Historians and attribution hounds⁵ harness the beneficial side of the Force, straddling both social networks and history books to ferret out the most influential fakes, splicing and dicing them and trussing them up in shaming sites⁶ and the occasional tome⁷. It appears that Rumi and his ilk have the geeks and professionals watching over their legacy - which doesn’t quite explain the surly tone above.
Investigative interrogation of fake quotations usually approaches its task through the label⁸ of, and in defence of, the speaker: who said that? Did he or she really say it? Those exact words, in that order? Were there any emendations or elaborations? Was anything lost in the smithery?
Much of this is a valid, worthy and commendable pursuit; and there’s also the cause of historical research, preservation and literacy. The generation that doesn’t pay attention to its history may not only be condemned to repeat the mistakes of the past, but will also certainly not get to enjoy screen epics with Brad Pitt in leather skirts.
* * *
For us who live through the constant and ever-present breeze of social and technological change, life before the Renaissance and Scientific Revolution would be hard to imagine. Innovation was often an accusation⁹, and there was little output, cultural or intellectual, to speak of. Also, someone seemed to have forgotten to switch the lights on¹⁰.
There are many reasons why the Renaissance finally got going, but what kept the dark ages dark? Why, in Europe, for over a whole millennium, were there no new instances of a scientist, or even a layperson, running out of his bath screaming ‘Eureka’?
Historians blame the ancient Greeks and Romans¹¹. Western Europeans of the Middle Ages lived in the colossal shadows and ruins of their intellectual forefathers. Everywhere they looked were glorious accomplishments of literature and art, accompanied by impressive feats of building and conquest. Surrounded by these persistent reminders of a triumphant past, people in the Middle Ages came to credit the ancients with God-like prowess. How could anyone ever improve upon anything these great civilisations devised - in technology, in science or in art? Why should anyone even bother trying?
The misfortune of the Middle Ages is that they found themselves in no position to ignore what came before - it was usually colossal, in composition and in reputation.
Our misfortune, on the other hand, is that we insist on making the glory of our predecessors a constant presence within our horizons, on rebuilding their ruins using brick upon brick of their preserved words and intended wisdom.
When dealing with fake quotations, we have no doubt that everything the greats of yore said is by definition right. Every mutation and straying from it, in words or intent, is not only undesirable but also dispensable. Our only collective historical and cultural task, it seems, is to safeguard our inherited legacy¹².
But the question isn’t whether Gandhi actually said “Be the change you wish to see in the world” (he didn’t) and whether his philosophy of social change is reflected in that particular emendation of his words (it isn’t)¹³.
The question is, having established the dubious provenance of that pairing of words and attribution, should we just toss the words into the trash heap? Should we condemn every instance of a fake quotation as a documented illegal? Or should we be choosing to cherish our fake quotations by putting them back in circulation to survive (or perish) their own cultural auditions under the attribution “Anonymous”?
If we can reverse our unquestioned reverence for the words of the ancients and engineer our own Enlightenment¹⁴, maybe the culture hacks who (knowingly) fake quotations would come to realise that we value the wisdom inherent in certain quotations, not the wisdom of only certain men and women. That instead of faking attributions for ephemeral and dubious glory, their words can stand a chance on their own.
And just maybe, someday, our future generations will come to say, “For most of history, Anonymous was an ordinary person with extraordinary wisdom.”¹⁵
References and Notes:
1. I first came across this quote in this tweet. Of course, it doesn’t mean this is its first recorded instance or its last.
2. This is the same Jalal-ud-din Rumi, a 13th century Persian mystic and poet, who is documented to have said: “Out beyond the world of ideas of wrong doing and right doing, there is a field. I will meet you there.”
3. The most well-known parody of a fake quotation on the internet remains, of course, the Abraham Lincoln line: “Don’t believe everything you read on the Internet.”
4. This piece by Louis Menand in The New Yorker (Notable Quotables) provides abundant examples.
5. To see an attribution hound at work, see this piece by Megan McArdle on how she picked apart a compelling fake (Anatomy of a Fake Quotation).
6. I am joking, of course. Stuff like Megan McArdle’s detective work (Anatomy of a Fake Quotation) and Snopes’ compendium of questionable quotes do us great service by ensuring we can be sure of what and whom we choose to quote.
7. They Never Said It: A Book of Fake Quotes, Misquotes, and Misleading Attributions is but one example.
8. This piece by Brian Morton in the NYT (Falser Words Were Never Spoken) provides an example of what it all entails.
9. Emma Green provides an interesting historical tour (Innovation: The History of a Buzzword) of what else an unabashedly positive word like innovation could have meant or implied in the past.
10. This is, of course, a Euro-centric view. The centuries of the European Dark Ages sit alongside an age of glorious accomplishments and intellectual enlightenment in the Islamic and Chinese worlds. For a quick and even-handed encapsulation of this period in history, watch John Green's Crash Course World History #14.
11. The right place to read about this is of course in histories of the Middle Ages and the Renaissance. I, instead, read about it in John Gribbin’s “Science: A History”.
12. Probably an exaggeration, but only a slight one. The same forces are at work in our misplaced zeal to value and preserve our inherited conventions of language. (What is ‘Correct’ Language?)
13. This piece by Brian Morton in the NYT (Falser Words Were Never Spoken) provides the closest thing Gandhi actually said to the fake quote, and what he would have meant - not just with his words but also with his deeds.
14. In his 1784 essay 'What is Enlightenment?', Immanuel Kant writes, “Enlightenment is man's emergence from his self-imposed immaturity. Immaturity is the inability to use one's understanding without guidance from another. This immaturity is self-imposed when its cause lies not in lack of understanding, but in lack of resolve and courage to use it without guidance from another. Sapere Aude! [dare to know] "Have courage to use your own understanding!"--that is the motto of enlightenment.”
15. A deliberate misquote of a well-known Virginia Woolf line, “For most of history, Anonymous was a woman,” which itself is a misquote of her actual words, “I would venture to guess that Anon, who wrote so many poems without signing them, was often a woman.”
Jun 25, 2013
Less than a week for Doomsday¹, and two things continue to surprise me.
First, the number of Google Reader² devotees (including me) who are yet to find a replacement. With other dead products walking, finding a replacement is top priority. With GR, bedside vigil and mourning have taken precedence³.
The second is how everyone - even GR devotees - seems all too willing to perpetuate Google’s preferred script explaining away the summary execution of a much beloved product. RSS usage has been declining, only geeks use it, on the litany goes⁴.
It’s almost as if one is trapped between the covers of a detective novel in which, as expected, an unnatural death has occurred - but the detective du jour is conspicuously absent. Everyone you encounter has no alternative but to trust and repeat hearsay, even the perpetrators’ own mea-not-culpas.
So, what’s this alternate vision of the future in whose service GR was sacrificed? Google’s lullaby is a klutzy mashup of how much things have already changed and what the benevolent future has in store for us.
“As a culture we have moved into a realm where the consumption of news is a near-constant process. Users with smartphones and tablets are consuming news in bits and bites throughout the course of the day — replacing the old standard behaviors of news consumption over breakfast along with a leisurely read at the end of the day.”⁵
And of course, Google is working on “pervasive means to surface news across [Google's] products to address each user’s interest with the right information at the right time via the most appropriate means.”⁵
To me that sounds an awful lot like being constantly, and involuntarily, drip-fed the information equivalent of burgers, fries and supersize colas every waking hour - with matching nutritional value and info-calories.
It turns out, quite a few of us still believe in the opposite⁶ - in the benefits of a sumptuous and healthy breakfast of quality, hand-picked and slow-published reading material, each morsel chewed and ruminated upon and, occasionally, unsubscribed if found wanting - and new discoveries cheerily subscribed, if found nourishing.
* * *
Anyone who has ever visited an Ikea store - or a shopping mall - will know this feeling: You walk in on an errand, knowing exactly what that errand is. But before long, you’ve lost track of where you are in the store and where you were supposed to be.
Not everyone realises what inevitably ensues: they end up buying much more than they set out to. They have become the victims of a ploy in shopping mall design called the Gruen Transfer⁷.
The Gruen Transfer refers to that critical moment when shoppers get overwhelmed and disorientated by the deliberately confusing layout and cues of the store (presumably while excoriating themselves for not being up to the task.) Controlled ambient factors and store displays wear out their focus and decision making faculties. Literally, their eyes glaze and their jaws slacken. And in a snap, they become impulse buyers, sacrificial offerings to the highest bidders for shelfspace.
In those futuristic visions of how we will consume content online, there’s ample room for every crafty trick discovered and perfected by retailers⁸. But, primarily, there will be no escape from the online equivalent of the Gruen Transfer. You head online to read the news and before you know it, you are clicking palpably on “22 more reasons why Neo (eventually) regretted taking the red pill (Now in Slideshow Mode).”
Far from being a product with no future, GR I suspect was a cannibalising thorn in this vision. A thorn that in traitorous alliance with the social web’s bees and pollinators⁹ would leak much of the transfer out of Gruen.
It represented a stand by a cohort of content pro-sumers - the supposed minority of supposed power users who conscientiously wanted to tick their list of to-reads every day. God, how 20th century is that annoying habit?
Putting the deathwish on RSS¹⁰ is Google’s deliberate ploy to tip us collectively into a world of bluish reality, a world where we harbour no hopes of hanging on to our errand lists when we check in online. Instead, we submit to being passive and impulsive consumers. And those recurring pangs of anxiety? Just chew on some algorithmic manna and you’ll be eventually cured of them.¹¹
While it’s unclear if RSS will thrive in the future, all indications suggest it will fight another day¹². An outcome hardly to the liking of Google, whose concerted actions¹³ seem to have hinged on making RSS the first technology in history to plunge into permanent disuse¹⁴.
So, farewell sweet Reader, may angels sing thee to thy hard-earned rest. And as for you dear reader, stay safe and hope you can find your way back again.
References and Notes:
1. Bearing the news of Google Reader’s demise, The Economist’s Babbage blog put it thus: “Users, meanwhile, worry about impending newslessness.” (Have I Got News For You?) For a less restrained (and funny) response to the news, check out Hitler’s reaction. (Hitler Finds Out Google Reader Is Shutting Down)
2. If you do not know anything about Google Reader or its underlying RSS technology, David Pogue has a helpful introduction to both, along with his recommendation for a replacement. (Google’s Aggregator Gives Way To An Heir)
3. A typical sentiment is the one expressed by Tim Harford in this tweet. Robin Sloan expresses a more extreme, but not uncommon, commitment to use the product till its very last day. To be fair, there’s also the moved-on-and-loved-it brigade captured in this tweet by Chris Anderson.
4. The following paragraph from a recent WIRED piece (Why Google Reader Really Got The Axe) repeats Google’s position verbatim but in a faux objective tone: “Obviously Google had to have a good reason to shut Reader down. The company has reams of data on how we use its products, and would not shutter a product that was providing sufficient food to its info-hungry maw. While some users remained devoted, the usage numbers just didn’t add up. The announcement shouldn’t have been too unexpected. Google hadn’t iterated on the service for years. It even went down for a few days in February.”
5. The words of Richard Gingras, Senior Director, News & Social Products at Google as reported in a recent WIRED piece. (Why Google Reader Really Got The Axe)
6. Just 2 weeks after the announcement of Google Reader’s demise (A Second Spring of Cleaning), Feedly acquired over 3 million new subscribers (TechCrunch). There are many more where they came from.
7. The Gruen Transfer is named for Viktor Gruen (ABC TV), the inventor in the 1950s of the shopping mall. He actually disavowed the manipulative techniques that were given his name, but ironically the name stuck.
8. Entire books have been written about the shenanigans of retailers. This piece provides a basic introduction: The Psychology of Retailing Revealed.
9. MG Siegler at TechCrunch argues that Google Reader’s underappreciated power users actually constitute the bees who pollinate the web and through their unique leverage keep the social web blooming and aplenty: “The first is that Reader’s users, while again, relatively small in number, are hugely influential in the spread of news around the web. In a sense, Reader is the flower that allows the news bees to pollinate the social web. You know all those links you click on and re-share on Twitter and Facebook? They have to first be found somewhere, by someone. And I’d guess a lot of that discovery happens by news junkies using Reader.” (What If The Google Reader Readers Just Don’t Come Back)
10. This is also the view of Dave Winer, the developer and populariser of the original RSS format. (E-mail with Max Levchin & As July 1 Approaches)
11. Evgeny Morozov, and others, have argued persuasively for the greater harm Google might inflict on us with its blind deference to “algorithmic neutrality.” (Don’t Be Evil)
12. The attention and activity around many old challengers and upcoming RSS readers promises that this could turn out to be a blip that a revitalised and Google-free market will redress. (Have I Got News For You?)
13. Following the KO-ing of Google Reader, Google also shut down the RSS Add-On in its Chrome Browser. (It’s Not Just Reader: Google Kills Chrome RSS Add-On Too)
14. Kevin Kelly has claimed and demonstrated to critics that technologies never go away (not even ancient Roman bridge making techniques) and persist for a very long time, though popularly deemed to be extinct. (Technologies Don’t Go Extinct)
Nov 7, 2011
A couple of days ago, I riffed a bit on research findings about how walking through doors seems to have an adverse effect on memory.
Folks at Lifehacker read about the research too - and had their own angle on it, dictated in part by the site's raison d'etre. Their take was the tip 'to write down what you want to remember before you move into another room' - the emphasis being on ideas that you come upon and may subsequently forget, as happens all too often.
That conclusion is only partly true. Walking through doors affects only one kind of memory - episodic memory which relates to specific events, places and times. The other kind of declarative memory - semantic memory - is responsible for meanings, understandings and concept-based knowledge unrelated to specific experiences, and is immune to doorways.
If the idea or thought had anything to do with the latter, you're not going to forget it even after repeated mishaps of walking through doors. But ideas are also often related to sequential events or thinking around events - this thought leads to that and that leads to an idea.
It is here that the risk of forgetting something as a result of a new memory episode (related to the simple act of leaving a room) is high. But even here, from my own experience, one tends to forget not the idea in particular but the whole chain of thought leading up to it. All it takes is remembering a link in the chain and, sure enough, the idea comes back with the eagerness of a lost puppy.
To be fair, the original BPS Research Digest post doesn't actually clarify these distinctions of memory - but there's something more that it does clarify that Lifehacker gets positively and absolutely wrong.
Here's what the Lifehacker post goes on to say:
Another interpretation, BPS Research Digest says, is that the increased forgetting wasn't about the "boundary effect of a doorway" but that the context had changed. In other words, participants had better memory about objects in the room where they created those objects.

The original post mentions the possibility of context playing a role, but only to report that further research doesn't support that hypothesis:
Radvansky and his team tested this possibility with a virtual reality study in which memory was probed after passing through a doorway into a second room, passing through two doorways into a third unfamiliar room, or through two doorways back to the original room - the one where they'd first encountered the relevant objects. Performance was no better when back in the original room compared with being tested in the second room, thus undermining the idea that this is all about context effects on memory. Performance was worst of all when in the third, unfamiliar room, supporting the account based on new memory episodes being created on entering each new area.

In other words, if it's an idea you have misplaced and forgotten, don't bother going back into the original room and looking for it there.
Nov 4, 2011
According to Wikipedia, the stock phrase "Once upon a time..." has been in use in some form since at least the 14th century. And its prevalence is not limited to the English language - the Wikipedia page lists variants in dozens of languages from around the world, as well as the modern variants "A long time ago..." and even "Not so long ago..."
While all of us have heard at least one story that began "Once upon a time...", we are also acutely aware that some lines make for great story openers - if only because they unambiguously announce the intention to narrate a story.
(The line that did the trick for me during my childhood was "For those who came in late..." enshrined in the opening panel of every Phantom comic book.)
But is there something else at work here? Do great (or stock) opening lines do more than just build the expectation of a narrative?
I would like to think so. At least after reading up on research findings that the mere act of passing through a doorway clears your memory and begins a new memory episode - making it less likely that you'll remember something that happened in the room you just left. The unlikely result of simply walking through a doorway to another room is akin to wiping your memory slate clean.
The effect itself is not dependent on the distance walked but only on the act of walking through a doorway. As the researchers put it: "Walking through doorways serves as an event boundary, thereby initiating the updating of one's event model [i.e. the creation of a new episode in memory]."
The findings also complement so-called episode markers in story-telling - phrases like "a while later" seem to create a temporal boundary within the narrative. As a result, test subjects found it difficult to remember the sequence of sentences in an episode prior to the narrative divide vis-a-vis sentences in the current episode.
So, here's what I think is happening with the stock opening phrases or with great opening lines.
An effective opening line creates a temporal boundary between what you were doing or thinking before the story began and what comes after. In other words, the opening line transports you across an imaginary doorway - if not to another world, then at least to the adjacent room.
Of course, most people know or would readily believe that great stories transport them to another world or another time and place. But what this research, coupled with my conjecture, could suggest is that it is a cognitive trick related to memory - and in particular episodic memory - that creates that illusion.
It also suggests that certain key phrases or lines serve as subliminal commands - in effect, setting up, or even resetting, a new memory episode in your mind. Often, without your knowledge.
From here, it's not hard to imagine a programming language of story-telling - yielding the machine-readable (or is it mind-readable) source code of every story. Don't you think so?
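(If you'll indulge the conceit, here's a rough sketch - in Python, and purely for illustration - of what such 'source code' might look like, with opening lines and episode markers doing the work of doorways. Every name and phrase below is made up for the example; nothing in the research prescribes them.)

```python
# A purely hypothetical sketch: a story modelled as a sequence of memory
# episodes, where stock phrases act as 'commands' that open a new episode.
from dataclasses import dataclass, field

@dataclass
class Story:
    episodes: list = field(default_factory=list)

    def open(self, phrase="Once upon a time..."):
        # An opening line acts like a doorway: it starts a fresh episode.
        self.episodes.append({"marker": phrase, "events": []})
        return self

    def tell(self, event):
        # Events accumulate inside the current episode.
        self.episodes[-1]["events"].append(event)
        return self

    def later(self, phrase="A while later..."):
        # An episode marker creates a temporal boundary: a new episode begins,
        # which (per the research) makes earlier sequences harder to recall.
        return self.open(phrase)


story = (
    Story()
    .open("For those who came in late...")
    .tell("A hero is introduced and an old promise recalled.")
    .later()
    .tell("A new threat appears, and the tale proper begins.")
)

for episode in story.episodes:
    print(episode["marker"], "->", episode["events"])
```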
Nov 3, 2011
Adliterate recently penned a rant about targeting in online advertising. The experience that triggered it is something all of us have encountered (or eventually will) - search for something online and be bombarded with ads for it for eternity.
His conclusion about targeting - it's a great idea in theory, but in practice it's better to waste ad spend reaching a broader audience. Hard to disagree with that, especially if one has been at the receiving end of this pesky tactic.
Malcolm Gladwell addressed this issue as well in a recent TED talk - not with regard to advertising, of course. He recounts the story of Carl Norden, a Swiss engineer who created the Norden Mark II Bombsight - a complex device designed to vastly improve the accuracy of dropping bombs from aircraft. Adopted by the US in WWII, it took a lot of money to develop and was great in ideal conditions. But in battle, it hardly achieved its promise. Another case of targeting failure.
But what these two examples reveal is that targeting is a ruse not limited to advertising or to dropping bombs - it's a widely adopted stratagem in the natural and unnatural world around us. (Think of a bird's keen eyesight and its graceful swoop over its prey, or the over-representation of sharpshooters in war lore.) That partly explains its enduring allure. A yearning for more efficiency through targeted means is deeply inherent in all of us - especially when we're expending our own resources. (Which can hardly be said to be the case for planners and adfolk recommending that clients loosen their purse strings willy-nilly.)
So, while not targeting is an option, targeting is a legitimate strategy too - and clients are right to pursue it. Some may decide it's not for them, some may experiment with it and then decide it's not for them - and some may persevere in demanding better targeting technologies and techniques. (The US Army didn't stop pursuing accuracy in bombing even after the dismal failure of the Norden bombsight. Which is why we have cruise missiles and Predators now - not everyone's idea of progress, but progress nonetheless.)
And what is it about receiving targeted ads that drives us nuts? When targeting in advertising doesn't work or draws attention to itself, its failure is starkly obvious to its quarry. We understand the ruse and are repelled by its naked artifice - by its profiteering motive and its seeming use of 'pilfered' data. Like a victim aware of a conman's trick, we then either mock its failure or are repulsed by its success.
That may very well be what the non-advertising professional will continue to do. But for those of us in the business, it's a chance to observe how and when our (or someone else's) best-laid plans don't work. To contemplate ways to improve or change that; and, while we wait for things to get better, to explain to our clients the risks of targeting as it is practiced now. (And for this reason alone, every person working in advertising should suffer from the failure of targeted ads.)
And it is here that the final act in the story of Carl Norden and his bombsight has something more to impart to us. Though its track record in WWII was dismal, the Norden bombsight was ironically used to drop the atom bomb - a device of destruction built to make the very idea of accuracy redundant.
And that's the lesson we could also be sharing with our clients and with our comrades-in-advertising. To either pursue better bombsights or more destructive bombs - but not both together*. That targeting has its place, but not alongside rampant repetition and the racking up of exposures.
(*To be fair to the US and Allied forces, they spent a lot of money on both the bombsight and the atom bomb, but probably did not expect to use the atom bomb in the first place.)
Iqbal Mohammed is Head of Innovation & Strategy at a digital innovation agency serving the DACH and wider European markets. He is the winner of the WPP Atticus Award for Best Original Published Writing in Marketing & Communication.
You can reach him via email or Twitter.
// Subscribe to Iqbal's weekly data newsletter. //