
Angering Ourselves To Death – Postman’s Brave New World Re-Re-Visited – Chapter 1

Chapter I: Postman’s Portent – The Brave New 1984

 Neil Postman.

 “We were keeping our eye on 1984. When the year came and the prophecy didn't, thoughtful Americans sang softly in praise of themselves. The roots of liberal democracy had held. Wherever else the terror had happened, we, at least, had not been visited by Orwellian nightmares.

“But we had forgotten that alongside Orwell's dark vision, there was another - slightly older, slightly less well known, equally chilling: Aldous Huxley's Brave New World. Contrary to common belief even among the educated, Huxley and Orwell did not prophesy the same thing. Orwell warns that we will be overcome by an externally imposed oppression. But in Huxley's vision, no Big Brother is required to deprive people of their autonomy, maturity and history. As he saw it, people will come to love their oppression, to adore the technologies that undo their capacities to think.” – Neil Postman, Amusing Ourselves To Death (1985)

In the 2018 documentary Behind the Curve, a look at the worldwide community of people who believe the Earth is flat, filmmaker Daniel J. Clark asked prominent YouTuber Patricia Steere what sources of information she trusted. “Myself,” she said, laughing. “I jokingly said if there’s an event like…I’ll just use Boston Bombing again,” referring to the 2013 Boston Marathon bombing, “I won’t believe any of those events are real unless I myself get my leg blown off.”

Her wilful ignorance of the curvature of the Earth would seem to be an apotheosis of the media environment as culture – the magic of YouTube and the internet has “undone her capacity to think,” as author, media ecologist, and father of modern media-as-environment scholarship Neil Postman put it in his landmark 1985 book, Amusing Ourselves to Death.

What’s more telling is that her philosophical solipsism – the position that the self is all we can truly know of reality – is not an isolated phenomenon. To be fair, it’s hard not to slide into it these days.

In his seminal book on the evolution of consciousness, The Origin of Consciousness in the Breakdown of the Bicameral Mind, Julian Jaynes argues that our pre-antiquity consciousness was not defined by recognising our thoughts as our own, but by one side of the brain “speaking” or “hallucinating” to another part that listened and obeyed its commands. These commands were interpreted as gods. As bicameralism broke down, we externalised these voices into oracles, churches and prayers, and eventually into scepticism that any such voices were derived from on high. A vestige of bicameralism survives, Jaynes suggests, in the verb to understand – to perceive the intended meaning of something – which literally means to stand under a god giving instructions to a human receiver; or, in this case, an unconscious hemisphere of the brain commanding and a conscious hemisphere obeying. Though we have moved past this bicameral state, we have not moved towards a state where we can authenticate information as “true” or “factual” just by looking at it.

As humans, we are limited. We use language and media to transmit our ideas, desires, knowledge, and so on to other people. As the semanticist S. I. Hayakawa put it, we use the “nervous systems of others” to help us achieve our goals. His most famous example is a soldier calling out to an observer for information on what is going on, and the observer reporting back: the soldier has “borrowed” the observer’s eyes and ears and gained a report thanks to a “loan” of his sensory systems. However, if the observer reports back false information, the soldier has gained no knowledge at all. To use a well-worn analogy from the great General Semanticist Count Alfred Korzybski, the “map” the observer has provided for the “territory” – the reality of what is going on – is not merely inaccurate but false. The observer may have relayed zero enemy activity when in fact he has seen multiple targets. The soldier is now imperilled because his internal “map” consists of this false image.

And the images we create each day are staggering. We, as humanity, produce 2.5 quintillion (2.5 × 10¹⁸) bytes of new data each day, and the rate is accelerating. It would be impossible for any one human to observe and analyse a single day’s output in a lifetime. We are not oppressed by an external imposition; we are oppressed by how gigantic our media environment has become. If Patricia and her Flat Earth friends observed only one one-hundred-thousandth of the data generated per day, that would still yield 25 terabytes – some 250 million images, 35,714 hour-long videos, or 416 hours of virtual reality content. With humans being this limited and navigating information systems so vast, can you blame Patricia for this ignorance? Ms. Steere could, if she wanted, live out her entire life without ever encountering an opposing viewpoint. She could call out only to observers who confirm her bias for the rest of her life and never run out of data to comb through.
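As a sanity check, those figures fall out of simple arithmetic. A minimal sketch in Python, where the per-item sizes (roughly 100 KB per image, 700 MB per hour-long video, 60 GB per hour of VR content) are my own rough assumptions rather than figures from the text:

```python
# Back-of-envelope check of the data-volume figures above.
# Per-item sizes are illustrative assumptions, not measured values.

DAILY_BYTES = 2.5e18      # 2.5 quintillion bytes of new data per day
FRACTION = 1e-5           # one one-hundred-thousandth of a day's output

sample = DAILY_BYTES * FRACTION   # 2.5e13 bytes = 25 terabytes

print(f"sample: {sample / 1e12:.0f} TB")
print(f"as images (~100 KB each):       {sample / 100e3:,.0f}")
print(f"as hour-long videos (~700 MB):  {sample / 700e6:,.0f}")
print(f"as VR hours (~60 GB per hour):  {sample / 60e9:,.0f}")
```

Run it and the sample comes out at 25 TB, around 250 million images or roughly 35,714 hour-long videos – more than anyone could comb through, even at a hundred-thousandth of the firehose.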

From this perspective, Postman was right.

A Familiar, Not Brave, New World

However, we now have another layer of oppression to contend with: the technology we adore is used simultaneously for our surveillance and our gratification. One cannot exist without the other. In the 80s and 90s, civil libertarians called for the dismantling of the “surveillance state”: CCTV cameras on every corner, police keeping a watchful eye on the populace. In authoritarian regimes such as China, these cameras and listening devices serve that very function, a generational echo of the German Democratic Republic’s Stasi invading the private lives of citizens. China’s internet is censored around the clock by a “Great Firewall” that blocks foreign websites carrying pro-democracy or anti-government content, supplemented by government moderators on social media platforms such as WeChat and Sina Weibo.

In 2013, former NSA contractor Edward Snowden revealed, with the help of Washington Post and Guardian journalists, that our electronic transmissions – including those carried by Facebook, Google and other platforms – were being systematically harvested. The data we freely gave to these media fed the NSA-developed XKeyscore and Boundless Informant collection and visualisation tools, used for covert surveillance without due process.

Over one billion people use Instagram, for example. Apart from its uses as a data-harvesting tool for advertisers or as a platform for marketers, it arguably has no functional purpose. It does not provide a solution for transmitting photos to other people – it could be perceived as another pleasurable toy, like those found in the vain and self-absorbed culture of Brave New World. Psychologists and others have linked social media to addiction: other users’ “likes” and ego-strokes can trigger release of the neurotransmitter dopamine, popularly known as the “feel-good” chemical. Dopamine is our “reward.” And like many addictions, dopamine “rewards” lose intensity with repetition; bigger and better “rewards” are required to feel the same “high.” On a cynical view, Instagram and other social media are dopamine dispensers, much like the mechanical food dispensers rats learn to work in conditioning experiments.
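To make the “lose intensity with repetition” point concrete, here is a toy habituation model – entirely my own illustration, with an invented decay rate, not a result from any cited study:

```python
# Toy habituation model: each repeat of the same stimulus yields a
# geometrically smaller "reward". The decay rate is an illustrative guess.

def reward(base: float, repeats: int, decay: float = 0.8) -> float:
    """Perceived reward of the nth identical 'like', shrinking as we habituate."""
    return base * decay ** repeats

for n in range(5):
    print(f"like #{n + 1}: reward = {reward(10.0, n):.2f}")
# like #1 pays out 10.00; by like #5 the same stimulus pays out only 4.10,
# so ever-bigger stimuli are needed to feel the same "high".
```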

Postman said we would come to love our oppression through the adoration of technology. He was, to an extent, saying that feelings would become more sought after than facts. Though we live in Huxley’s Brave New World, a sinister apparatus underlies it – the world of 1984. Since we are unable to trust our media environments – the nervous systems of others – or even make proper sense of them given the sheer volume of data we interact with, the maps we carry around in our heads will be of ever lower accuracy and quality. The amount of information we know we do not possess, or will never possess, is near incalculable.

And this, for many reasons this series will explain, has made us very, very angry.

To be continued in Chapter II: The Media Malware Machine

Cos You Don't Wanna Miss A Thing: Twitter, music and predicting the present

If it’s good for celebrities, it’s good for you too. Endowed with mystical properties making their eyes gleam and their teeth porcelain, they’re just better than us in every conceivable way. If you can convince them that Twitter’s new Music app is useful, the unwashed masses will stream it on to their tablets as if it were manna from heaven.

Maybe not.

It does raise a question in this new age of music you rent in perpetuity: what use does this new Twitter app actually have?

In his new book Present Shock, media theorist Douglas Rushkoff posits that media-as-culture is no longer preoccupied with “futurism” but centred on “presentism.” We’re interested more in what’s happening now than in contextualising our experiences as distinct from past and future. Twitter, for example, is useful only in the now (not to borrow too heavily from Eckhart Tolle), losing worth as time elapses. Furthermore, the now is such a diffuse, high-level abstraction that chasing it is like attempting to catch a mosquito with a pin and a thimble.

Consider the mathematical equation. An equation is an expression relating variables, one of which is unknown. The unknown variable is found by applying mathematical principles, flowing forward in linear time from A to B. The solution is clear-cut.
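A trivial worked example of that A-to-B flow (my own, purely illustrative):

$$ 3x + 5 = 20 \;\Rightarrow\; 3x = 15 \;\Rightarrow\; x = 5 $$

Each step follows from the last; the terrain never shifts under you mid-solution.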

In computing and information technology, programs and hardware are thought of as panaceas for “problems” – it’s not uncommon to term them “solutions.” These “problems” are not structural; that is, the problem is not an inability to arrive at an unknown variable. The majority of problems lie in not getting the answer fast, cheaply or efficiently enough to stay relevant in the “now.”

Simply: what problem does Twitter’s music app actually solve?

It doesn’t solve anything – for the consumer. In the age of the present, app developers aren’t savvy problem solvers; they’re problem finders. They convince the market that a problem exists, claim to have solved it, and profit handsomely.

Apps such as Pocket or Evernote, as useful as they are, “solved” the problem of keeping track of links or notes that were inaccessible on one device because they were stored on another. There was nothing structurally wrong or grossly inefficient about, say, writing notes on pads of paper. Solutions already existed.

Apps exist on your phone to solve problems that weren’t problems until you “realised” they plagued you. Not knowing the name of the Journey song playing at the pub was never a life-threatening predicament, yet Shazam solves that problem for you. Easy.

But it cannily purports to have discovered a problem. Twitter is in the process of convincing us that emerging and popular trends in music are so complex and so amorphous that you need an app to navigate this ever-changing terrain. The problem is that you’re lagging behind what’s cool and what’s about to be cool. The solution is this app. Get it now; bask in the electronic water of fleeting musical omniscience.

Except this app wasn’t designed with you in mind. It’s another column in a vast data set powering predictive analytics. It tracks, in real time, the influence of users and who is being influenced – what the influence channels users towards, and so on. Spotify’s and Rdio’s blind spot in building accurate big data sets is that they don’t know who influenced what music is being played at any given time, nor to what degree. No one gives a shit about your shitty indie band unless someone gives them a reason. Sometimes that reason is none other than “who” rather than “why.”

The dimensions of the data set for playing Belinda Carlisle 40 times in a row are discrete and limited. Spotify will know I love Belinda Carlisle. But if an external force influenced me, it has no real way of gleaning that information unless I clicked through to the track directly from a certain page or Twitter feed.
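A hypothetical sketch of the two kinds of play-event record makes the blind spot concrete – the schema and field names here are my own invention, not any real Spotify or Twitter API:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PlayEvent:
    """One logged track play. A hypothetical schema, purely for illustration."""
    user_id: str
    track: str
    referrer: Optional[str] = None  # who or what channelled the user here

# What a streaming service sees on its own: taste, but no provenance.
organic = PlayEvent("u42", "Belinda Carlisle - Heaven Is a Place on Earth")

# What a social referrer would add: the locus of influence.
influenced = PlayEvent(
    "u42",
    "Belinda Carlisle - Heaven Is a Place on Earth",
    referrer="twitter.com/some_influencer/status/123",  # hypothetical link
)

def locus_of_influence(events: list[PlayEvent]) -> dict[str, int]:
    """Count plays attributable to each referrer - the column labels would pay for."""
    counts: dict[str, int] = {}
    for event in events:
        if event.referrer:
            counts[event.referrer] = counts.get(event.referrer, 0) + 1
    return counts

print(locus_of_influence([organic, influenced]))
# {'twitter.com/some_influencer/status/123': 1}
```

Without the referrer column, both plays look identical; with it, the “who” behind the “what” becomes a queryable fact.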

By using Twitter’s new music app, Spotify et al. can track the locus of that influence. Music companies can make safer bets on pushing artists ahead of time. The guesswork in releasing a hit isn’t eliminated, but it is significantly reduced, again. Why sign ten acts to yield one hit when signing two or three definite winners is possible?

It does solve a very real problem, and that problem lies in the A&R departments of the major labels. The recent jump in music sales – the first in over a decade – is partially due to this new “taxi fare,” pay-as-the-meter-runs model. How does an exec take sales from a simmer to a roaring boil? By gleaning better data from more sources and tailoring strategies to the analytics.

So can this Twitter app really tell us what is hot right now? Without the mind of Nate Silver and the processing power of CERN at my disposal, I don’t know. And neither do you.

 

---

Read more: My post on the Spotify (counter-)revolution.

Are we Goebbels' stepchildren? (and other journalistic conjectures)

When the ethical standards of the media slip, we expose ourselves to ruin. So we’re told. In Melbourne on February 12, inventor of the World Wide Web Sir Tim Berners-Lee alerted us to the fact that a tweet can travel faster than an earthquake: someone at the epicentre of a seismic shift can warn others faster than the quake itself can travel. If you have five followers, an eggy avatar, a handle like @ahzzzopll001 and nothing to offer but FREE BEATS BY DRE, your tweets aren’t going to have much impact. But if you have thousands or millions of followers and can broadcast your message through airwaves, optic fibre and print to countless more, your noblesse oblige on integrity increases exponentially. Have we learned anything from Goebbels’ media manipulation in the electronic media’s infancy, or have we all become his stepchildren? (Oooh, how deliciously evil.)
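The physics behind Berners-Lee’s quip is simple arithmetic. A minimal sketch, with speeds that are my own rough assumptions rather than figures from the talk:

```python
# Rough speeds (illustrative assumptions): crustal S-waves travel at about
# 4 km/s; light in optical fibre at about 200,000 km/s; and we allow a
# generous 1 second of end-to-end service latency for the tweet.

S_WAVE_KM_S = 4.0
FIBRE_KM_S = 200_000.0
SERVICE_LATENCY_S = 1.0

def quake_arrival_s(distance_km: float) -> float:
    """Seconds for damaging S-waves to reach a point distance_km away."""
    return distance_km / S_WAVE_KM_S

def tweet_arrival_s(distance_km: float) -> float:
    """Seconds for a tweet to reach the same point, latency included."""
    return distance_km / FIBRE_KM_S + SERVICE_LATENCY_S

for km in (50, 200, 1000):
    print(f"{km:>5} km: shaking in {quake_arrival_s(km):6.1f} s, "
          f"tweet in {tweet_arrival_s(km):4.2f} s")
```

At 200 km from the epicentre the shaking arrives in about 50 seconds; the warning tweet, in about one.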

Have Lies, Will Travel

Goebbels once wrote that “[t]he English follow the principle that when one lies, one should lie big, and stick to it. They keep up their lies, even at the risk of looking ridiculous.” The big lie today is that the internet is so vast and interconnected that big lies would be caught, debunked and refuted before their viral payloads had a chance to inflict any real damage. We’re thinking in what McLuhan termed the rear-view mirror, with little inclination to look forward. But are reporters really lying?

In 2010, American Apparel marketing director and media strategist Ryan Holiday fell victim to this new craze: divesting oneself of accusations of unethical conduct by reporting in a time-honoured yet disingenuous way.

Feminist website Jezebel, a masthead of Gawker Media, posted a claim by staff blogger Irin Carmon that American Apparel’s new nail polish contained hazardous material. Holiday was asked for comment only after the post was live; his company’s official refutation was published as an addendum once “dozens of other blogs were already parroting her claims.” Despite Gawker Media’s shoulders aching from the ideological barrow they push, their conduct, insofar as it pertains to ethics, finds itself in a strange loop.

The email contained in the report – that the nail polish ought to be removed from shelves and that someone (in management? operations? it’s unclear) mentioned on a conference call that the product could be considered “hazardous material” – is the report. Ms. Carmon could argue that the public was unaware of said email and that she was bringing it to light. At a higher level of abstraction, not reporting the leaked email may have caused more harm than running it without attempting to confirm the presence of hazardous material (the contents of the email are self-validating, provided it was not doctored). Fact-checking may have unreasonably delayed disposal of the product, leading users into harm. So which approach was ethical? One, the other, both and neither. It’s like Schrödinger meets William Randolph Hearst.

We can take rightful umbrage if the story was incomplete – that is, if they were reporting one level up on Hayakawa’s abstraction ladder, i.e., that the nail polish indeed contained hazardous material. Jezebel and Gawker Media could have conducted a chemical analysis, consulted experts, interviewed manufacturers or actually waited for a response from American Apparel before running the piece. But none of this was ethically necessary insofar as the scope of the report is concerned. In terms of reporting this story – the wider publication of a “damning” email and what may have been said on a conference call – their ethical obligations were mind-bogglingly internally consistent. However, the entire head-scratching episode superficially resembles a variant of investigative reporting rather than “blogging” (which I will expound upon later). The former relies on external sources to confirm or refute claims. This so-old-it’s-new style is akin to what I term publicity-driven journalism, as opposed to “traditional” news journalism.

The ethical functions in publicity-driven journalism

Any form of journalism that does not rely on independent verification from more than one source to make a substantive claim can reliably be dubbed publicity-driven journalism. Publicity-driven journalism is usually publisher-backed, industry-recognised and profit-driven. Broad categories include, but are not limited to, entertainment, sports, technology, lifestyle, Gonzo, opinion and criticism. Opinion and criticism do not ethically require sources to make claims. Entertainment journalism, such as music journalism, may blur the distinction between opinion and fact; however, pieces such as interviews require only one reliable source (i.e., the interview subject), whose own conjecture is reported as the fact. (“It’s the most accessible yet heavy record we’ve ever done”; “We’re going to take it one day at a time, but we’ll definitely trounce our rivals.”)

Its ethical obligation is not to misquote or misrepresent the conjecture-bearer as a matter of public record. This is constrained by the tripartite model described above – publishers will not court disrepute by disseminating copy riddled with falsity, the industry will delegitimise any publication that does, and profit margins will decline as advertisers and the subjects of the copy (artists, products, etc.) withdraw their business. We now live in an age where conjecture-as-fact, not event-true-to-fact, is the standard for what is reasonably assumed to be “credible” journalism online. (See what I did there?)

Gone Bloggin’

Blogs, short for weblogs, are part of an amateur journalist or diarist tradition. Even the first blogs or “webdiaries” had no ethical constraints placed on them; conjecture-as-fact informs their process and output. For example, the Drudge Report could reasonably print the headline “Is Obama a Maniac?” over a story in which one of his opponents described him as “a maniac.” Moreover, tabloid magazines print stories which might appear “patently untrue,” such as “Is Prince Harry of Wales a Nazi?” – the story itself being a “source” overhearing a conversation in which Prince Harry is alleged to have uttered Nazi sentiments. “Is Kate Middleton an alien from outer space?”, and so on. The fact itself is derived from the initial conjecture. (Even though the headline sure as shit isn’t.)

When mastheads such as The Times or the Daily Mail manoeuvre themselves to drive up pageviews – trading on their reputations as event-true-to-fact tellers while using this new online conjecture-as-fact model – the entire ethical framework for truth in reporting, be it amateur or professional, ought to be called into question. But if we’re bombarded by tweets and blogs generating 2.5 quintillion bytes of new information each day, who has the time to say “Hang on a minute”? It’s precisely what we must ask ourselves now when we read almost anything online. The unadorned truth does not go viral, not any more.

Facts aren’t being discounted; they’re just being reframed, and most of what passes for reporting isn’t actually reporting in the traditional sense. Is it ethical? Technically, yes. Does it make us prone to manipulation, as if we were sired by the Ministry of Public Enlightenment and Propaganda? If we look backwards to look forwards, we may as well be.

Updated: Go Australia! Here’s an example from national broadsheet The Australian, half consumer panic and half free publicity, regarding one (one!) software developer’s claim that Google Play might be passing on user details to vendors after app purchases.