media ecology

Angering Ourselves To Death – Postman’s Brave New World Re-Re-Visited - Chapter 4

Chapter IV: Analog People, Binary Computers

Composite image of Abraham Lincoln and Stephen Douglas.

According to polling and analytics firm Nielsen in its 2018 Total Audience Report, adults over the age of eighteen in the United States use electronic devices for over ten and a half hours a day – more time than they spend sleeping. Use of information communications technology (ICT) – computers, smartphones, TVs, tablets, and so on – is an inescapable part of modern life. The widespread adoption of the Internet in the late 1990s was as revolutionary to civilisation as the mechanical clock or the printing press. There is no going back to a state prior to ICT – it was not an additive invention; it was a transformative one.

In the view of Postman and the French sociologist Jacques Ellul, the bargain we make with new technology is Faustian: whatever problems a new technology solves, it takes something away at the same time. In the late 1990s and early 2000s, some magazines and TV programs pined for a pre-ICT era of face-to-face interaction and community; the counter-argument was found in the widespread adoption of Usenet groups, BBSes, forums, and nascent forms of social media such as LiveJournal and Xanga, and, in the latter part of the 2000s, MySpace, Twitter, and Facebook.

As discussed in the previous three chapters, our uncritical flocking to smartphones and mass communication media has created a climate of paranoia, distrust, and mass surveillance. Each day, hour, and minute, we are inundated with terabytes of information, standing at a crossroads every time we scroll a screen – do you like this, or don't you?

Wiener's conception of man using machines to improve man has been turned on its head; we now use machines to gather feedback from man in order to improve the machines. The almighty algorithm processes data in binary terms – PageRank for Google, News Feed for Facebook – and in effect it has transformed how we interpret and respond to information. The new mass media has purposely excluded the middle because its entire architecture erases it. There is nothing between 0 and 1 in binary – it either is, or it is not.
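The "almighty algorithm" named above is, at its core, surprisingly small. The following is a minimal, illustrative power-iteration version of PageRank – the link graph, damping factor, and iteration count here are invented for the example, not Google's production values:

```python
# Minimal PageRank sketch: rank pages by repeatedly redistributing
# "rank" along a toy link graph. Illustrative values only.

def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:  # dangling page: spread its rank evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
        rank = new_rank
    return rank

links = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
ranks = pagerank(links)
print(max(ranks, key=ranks.get))  # "c" – the most linked-to page wins
```

The point of the sketch is the binary substrate the chapter describes: a link either exists or it does not, and everything the ranking "knows" is built from those yes/no edges.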

Amid the heightened emotion and shock of the September 11, 2001 terrorist attacks on the United States, President George W. Bush addressed the nation, saying: "Our grief has turned to anger, and anger to resolution. Whether we bring our enemies to justice, or bring justice to our enemies, justice will be done… Every nation, in every region, now has a decision to make. Either you are with us, or you are with the terrorists."

The post-Cold War era of a “multipolar” or “unipolar” world order had come to an end. At the dawn of the binary computer came binary politics and binary culture. There is no nuance. There will be no negotiation, no terms. You’re either with us, or you’re against us.

The Aristotelian law of the excluded middle is nothing new; it was used to great effect by totalitarian dictatorships across time, Nazi Germany and Soviet Russia being the prime examples. All things deemed acceptable were "Aryan" or "Bolshevik," while whatever fell outside that purview was "degenerate," "Jewish," or "bourgeois." As in Orwell's 1984, this required doublethink to pull off with any degree of success; Germany's military pact with Japan, for example, required Nazi lawmakers to accede that the Japanese people were "honorary Aryans" to preserve the binary worldview.

Many of today's "thinkpieces" by the identitarian Left and Right subscribe to a two-valued orientation – either you are on our side, or you are a "Nazi," a "white supremacist," or a "Socialist." These tags are used with wild abandon – they were applied to moderate Republican Mitt Romney in the 2012 presidential election and resurrected in 2016 during the ascendancy of Donald J. Trump's candidacy. The Trump candidacy may have been the first to use the two-valued orientation to its advantage – either you wanted to Make America Great Again, or you wanted America to fail. The same identitarian trope backfired on the Hillary Clinton campaign when she called "half" of Trump's supporters "a basket of deplorables" on September 9, 2016 – almost fifteen years to the day after President Bush made his rallying speech:

“They're racist, sexist, homophobic, xenophobic – Islamophobic – you name it. And unfortunately, there are people like that. And he has lifted them up. He has given voice to their websites that used to only have 11,000 people – now have 11 million. He tweets and retweets their offensive hateful mean-spirited rhetoric. Now, some of those folks – they are irredeemable, but thankfully, they are not America.”

Clinton’s two-valued orientation is clear – you’re either With Hillary as an American, or an irredeemable “deplorable.”

The ensuing tribalism is a feature (not a bug) of our artificial-intelligence-driven "siloing" of people along racial, gender, political, and other lines. Whereas mass-market advertising broadcast messages such as buying jeans to "look sexy," only a sub-section of a sub-section of the population may have been receptive to the message; to the rest, it passed undetected. By giving up our preferences and dislikes to an algorithm, we have been sorted into self-selecting groups that are binary in nature: you are either for "spicy memes," or you are a "soyboy cuck" or a "white supremacist patriarch," to use two opposing invectives.

Rushkoff calls this AI-driven, agenda-adopted polarisation of civil society an "exploit" of our human emotions: these memes and narratives are designed to trigger the amygdala into fight or flight, kill or be killed – bypassing the rationality and logic seated in the neocortex. Postman, in Amusing Ourselves To Death, remarked that we were, in 1985, living in a soundbite culture, where a complex idea or nuanced debate was chopped into bite-size, three-to-four-second clips. This has been distilled even further into repeating GIFs, image macros, screenshots of tweets, and other internet errata. The algorithm serves to reinforce long-held viewpoints, shut out debate, and exclude the middle. Worse still, it is learning new methods of exploiting us; in 2017, Facebook chatbots reportedly developed a shorthand language of their own.

Even search engines are engineered to cater to our biases. If one searches for the "benefits of stevia," one may never see results for the "drawbacks of stevia" unless one explicitly searches for terms critical of the sweetener. There is money to be made spruiking both sides of the argument – harmless in debates of taste, but not of fact. This can lead to disaster, such as the January 2019 measles outbreak in the continental United States, partly blamed on pockets of unvaccinated people.

The binary orientation does not require context, nor does it require extraordinary evidence to back up its often extraordinary claims. To the identitarian left, the Mueller investigation into alleged collusion between the Trump campaign and Russian agents is a "smoking gun" or "did not go far enough." To the identitarian right, the President was the subject of a "witch hunt" and "dirty politics" by his opponents. The excluded middle may not be the factual truth, but it may lead us to further questions and increased (though not total) accuracy in reporting and in coming to a conclusion.

In April 2019, podcaster and entertainer Joe Rogan of the Joe Rogan Experience invited Twitter CEO Jack Dorsey and Twitter Legal, Policy and Trust & Safety Lead Vijaya Gadde to debate journalist and YouTuber Tim Pool as to whether Twitter’s policy on free speech is biased against conservative and libertarian voices. Prominent right-leaning personalities such as Alex Jones and Milo Yiannopoulos were banned from the platform, for example.

The entire episode runs for three hours and twenty-five minutes – longer than any single round of the 1858 "great debates" between Republican Abraham Lincoln and his Democratic opponent Stephen Douglas. Of course, the worlds podcast and oratory inhabit are alien to one another. Commentators chopped Rogan's podcast into soundbites, often decontextualised to fit identitarian, binary narratives. Setting aside three hours of screen time to focus on one debate seems folly, considering the sheer amount of content on offer at any given moment.

The computers are using us to profit from us; with the soil fertile for commodifying dissent, how does one cash in using the modern mass surveillance, binary-oriented media ecology?

Next Chapter: Angry Reacts Only – Harvesting Cash from the Media Ecology

Angering Ourselves To Death – Postman’s Brave New World Re-Re-Visited - Chapter 3

Chapter III: The Medium Is The Mass Surveillance

An Amazon Alexa-enabled device.

In March 2018, a whistleblower told the Observer that UK-based political consulting firm Cambridge Analytica had harvested over 50 million Facebook profiles in a breach of data and privacy. Christopher Wylie, who worked with an academic at Cambridge University to gather the data, said: "We exploited Facebook to harvest millions of people's profiles. [We] built models to exploit what we knew about them and target their inner demons. That was the basis the entire company was built on."

The data was collected through an app called thisisyourdigitallife, which posed as an online personality test. Exploiting weaknesses in Facebook's application programming interface (API), it collected profile information not only from those who authorised the app, but from their friends and their friends' friends.
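The friends-of-friends reach described above is essentially a two-hop graph traversal. The sketch below is purely illustrative – a toy friend graph with invented names, not Facebook's actual API – but it shows why a single opt-in can sweep up many more profiles than the one that consented:

```python
# Illustrative only: breadth-first traversal of a toy friend graph,
# collecting every profile within two hops (friends, and friends of
# friends) of one authorising user. Names and graph are invented.
from collections import deque

def reachable_profiles(graph, user, max_hops=2):
    seen = {user}
    queue = deque([(user, 0)])
    while queue:
        person, hops = queue.popleft()
        if hops == max_hops:
            continue
        for friend in graph.get(person, []):
            if friend not in seen:
                seen.add(friend)
                queue.append((friend, hops + 1))
    return seen

friends = {
    "alice": ["bob", "carol"],
    "bob": ["alice", "dan"],
    "carol": ["alice", "erin"],
    "dan": ["bob", "frank"],
}
# One user ("alice") opts in, yet five profiles are swept up.
print(sorted(reachable_profiles(friends, "alice")))
```

On a real social graph with hundreds of friends per person, two hops multiplies one consent into tens of thousands of harvested profiles – which is how a few hundred thousand app users became 50 million.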

The information was used to target users during the 2016 United States presidential election and the United Kingdom's 2016 referendum on whether to remain in or leave the European Union.

At least Cambridge Analytica had the courtesy to let users opt in. The invasive XKeyscore and Boundless Informant programs, used by the NSA to collect signals intelligence and conduct mass surveillance on US and foreign citizens, afforded no such luxury.

As mentioned earlier, Facebook and other social media do not sell products or services directly; they provide a platform for marketers and advertisers to do so. The well-worn aphorism "the product they are selling is you" is a misnomer. If we take the semanticist Korzybski's maxim to heart – the word is not the thing – they are not selling you specifically, but a one-to-one simulacrum that extends beyond your own consciousness. Human consciousness is tempered by human unconsciousness: we forget, misplace information, and have moments of complete unawareness of our own behaviour.

Computers don’t.

A computer has perfect memory, perfect algorithms, perfect recall. It can know you better than you know yourself. Thus we, as humans, employ computers to learn more about our own habits, wishes, frustrations, and desires. This is neither good nor ill in and of itself, but it can be turned by humans to either end.

If we are being programmed, we are also being labelled, sorted, objectified, and tabulated. Facebook and its ilk have shifted human consciousness into accepting computers as a wholesale extension of our senses. It has become acceptable, for instance, for activists to comb through large data sets such as Twitter feeds for politically incorrect comments. In December 2018, US entertainer and comedian Kevin Hart was ousted as host of the 2019 Academy Awards for anti-gay slurs he had posted on Twitter between 2009 and 2011.

Whereas a human being may struggle to remember specific comments uttered by anyone almost a decade prior, computers enhance our collective memories by providing instant storage and retrieval of anything and everything we have said or posted online. This leads to another aphorism: this is a feature, not a bug.

It is possible that the original founders of Facebook, Mark Zuckerberg and Eduardo Saverin, had no intention of creating a mass surveillance medium the likes of which the world had never seen. According to after-the-fact reports, Zuckerberg created "FaceMash" in his Harvard University dorm room in 2003 as an application to rate the relative attractiveness of women on campus. Its successor launched as "TheFacebook" in 2004 and was renamed simply "Facebook" in 2005. The original service was limited to colleges in Boston, then expanded to all university-level institutions and, eventually, in September 2006, to everyone over the age of thirteen with a valid email address.

Facebook exploited our desire for convenience and our want of human interaction. People could add "friends" and share opinions, photos, videos, and other content with one another. They could join in games. They could express their desires by an "opt-in" – the Facebook "like" button. "Liking" topics or webpages built up a profile of your preferences and interests, albeit manually; as of 2019, this is achieved via machine learning and artificial intelligence.

Facebook acquired the photo-sharing app Instagram in 2012 and the instant-messaging service WhatsApp in 2014, and spun its own proprietary messaging platform, Messenger, out as a standalone app in 2011. According to the end-user licence agreements, Facebook could use these applications on your phone to harvest data about your habits, including your location. In 2016, Facebook strenuously denied eavesdropping on conversations or using a smartphone's camera or microphone to pick up vision or audio. It has spent millions of dollars on PR to counteract these claims, saying the advertising that pops up in feeds is a result of "frequency bias" or plain coincidence.

It was confirmed in April 2019 by Bloomberg that human technicians employed by Amazon listen to voice searches and other audio picked up by Alexa-enabled devices. This mix of contractors and employees, based around the world, is tasked with refining the voice-search algorithm to produce better results. The nature of the medium, however, is to have an "ear" out for keywords and phrases at all times. According to the article,

“Sometimes they [employees] hear recordings they find upsetting, or possibly criminal. Two of the workers said they picked up what they believe was a sexual assault. When something like that happens, they may share the experience in the internal chat room as a way of relieving stress. Amazon says it has procedures in place for workers to follow when they hear something distressing, but two Romania-based employees said that, after requesting guidance for such cases, they were told it wasn’t Amazon’s job to interfere.”

The media we consume and produce for is the mass surveillance; the Faustian bargain we have made with technology is coming back to haunt us in myriad ways. Mass surveillance by private entities is chilling enough, but the panopticon effect of moral busybodies and invective-slinging do-gooders has also cost people their livelihoods. Public shaming by internet mob was made most famous in 2013, when corporate communications director Justine Sacco tweeted, just as she departed for Cape Town: "Going to Africa. Hope I don't get AIDS. Just kidding. I'm white!" A joke in poor taste, it nevertheless became the number-one worldwide trending topic on Twitter, creating a storm of controversy before Ms. Sacco had even stepped off the plane.

Because of these interconnections, both public and private, the mass surveillance nature of media is inescapable. It is having a profound effect on the way we parse language and its meaning, breaking down the tacit distinction between language as action and language as thought in action. Nuance becomes impossible; tribal and identitarian sentiments rise. We are analogue people being programmed to think in binary ways.

Angering Ourselves To Death – Postman’s Brave New World Re-Re-Visited - Chapter 2

Chapter II: The Media Malware Machine

Norbert Wiener.

In 1994, media theorist and documentarian Douglas Rushkoff said the media is ‘the extension of a living organism; [media and communication technology] is a circulatory system for today’s information, ideas and images.’

The media as an environment in 1994, a decade on from 1984, seems quaint by comparison with today's.

In the average Western household, you could likely find a television set, a radio, a telephone, and of course, paper, pens, and envelopes to send letters. Many households likely had a subscription to print media; newspapers, periodicals, magazines.

If you considered yourself "tech savvy," you may have had a computer of some kind: an IBM PC compatible, a Commodore Amiga, an Atari ST, a Macintosh, or even an Acorn RISC PC. If you worked in the C-suite, you may have had a laptop or a cellular portable telephone. If you were on the geeky side, it's possible you used a dial-up modem to call into Bulletin Board Systems (BBSes) to trade files, or even had a connection to "Internet" (as it was referred to back then). The web had only a scarce handful of pages; the World Wide Web Consortium (W3C) was only formed that year. Most users connected through portals such as CompuServe, Prodigy, or AOL (in the US) to interact with Usenet newsgroups or read their email.

In today's world we have essentially one standard processor architecture (the x86-64), found in Windows, Linux, and Mac PCs and laptops. Nearly all web traffic depends on the Domain Name System, standardised by the Internet Corporation for Assigned Names and Numbers (ICANN), which operates the Internet Assigned Numbers Authority (IANA); IANA was overseen by the United States Department of Commerce as recently as 2016. In 1994, the circulatory system had many hearts.

Today we have one.

Content is the blood pumped around by that heart, transmitted and duplicated at frightening speed. In 1994, there existed 2,738 websites. If every tweet counts as a webpage (which it very well could, considering each tweet is given its own URL), over 6,000 webpages are created each second.

Each of those 1994 websites likely had its own web server – a beige box sitting under a desk, usually in a university physics or computer science department. In the time it took you to read that sentence, about 24,000 more webpages came into existence, perhaps with images, audio, or video attached. The transmission of all this content rests on the internet's "backbone" of servers.
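The per-second figures above are easy to sanity-check. Assuming a commonly cited 2019-era estimate of roughly 500 million tweets per day (an assumption, not a figure from this text), the arithmetic works out like this:

```python
# Back-of-the-envelope check of the pages-per-second figures.
# The 500-million-tweets-per-day input is an assumed, commonly cited
# 2019-era estimate, used only to show the arithmetic.
TWEETS_PER_DAY = 500_000_000
SECONDS_PER_DAY = 24 * 60 * 60  # 86,400

per_second = TWEETS_PER_DAY / SECONDS_PER_DAY
print(round(per_second))  # roughly 5,800 tweets (pages) per second

# A four-second sentence at ~6,000 pages per second:
print(6_000 * 4)  # 24,000 new pages
```

Tweets alone get close to the 6,000-per-second figure; every other kind of post, upload, and page pushes the real total past it.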

According to the Synergy Research Group, Amazon Web Services controls one-third of the world's public cloud computing capacity – more than Microsoft, IBM, and Google combined. Launched in 1995 as an online book retailer, Amazon.com has exploded into a global conglomerate that not only sells goods online but facilitates the sale of those goods by providing the infrastructure they are sold on. In 2016, Amazon.com captured $1 out of every $2 spent on retail goods online. Amazon has a vested interest in facilitating content to create more opportunities to sell its products. In 2013, Jeff Bezos, Amazon's founder, bought the Washington Post for $250 million.

As mentioned earlier, the volume of information we can encounter is immense and growing exponentially. If we treat content as distinct from information – that is, from data we can interpret and use in some meaningful way – the web-enabled media environment has not been "hijacked" by fake news or disinformation, as many thinkpieces and hot-takes claim.

The current media ecology simply enables larger and larger quantities of content to be created, shared, and consumed. In the words of the Swiss Renaissance physician Paracelsus, "sola dosis facit venenum" – the dose makes the poison.

His thesis was that all substances, at high enough dosages, are poisons. One can drown by consuming too much water; one can be harmed by breathing molecular oxygen at increased partial pressure. With no map to guide us on which content is "useful" and which is not, we succumb to content overload. The well itself is not poisoned, but drinking too deeply from it will inevitably make one sick.

We now have untold media power, both as consumers and as potential producers. Anyone can connect with virtually anyone else across the globe in real-time. But for everything we gain from new media technologies, we also lose something.

It would seem the side-effect of this is polarisation: a binary, two-valued orientation. This is not the cybernetician Norbert Wiener's sincere wish of technology enhancing human beings for human use; it is technology shaping our conception of reality, and perhaps even our consciousness, to better fit machine algorithms. Rushkoff touched on this nearly a decade ago in his "contract" with technopoly – we either program, or we get programmed.

Now, in the electronic media world, linear narratives are unimportant; we can tune into a tweet, watch three different YouTube clips at a time, and use RSS readers to aggregate thousands of articles, picking and choosing the few worthy of our dwindling attention. Fifteen per cent of the world's internet traffic is dedicated to streaming content from Netflix.

In any cybernetic system, as Wiener defined it, information control is integral to the health of that system. In his treatise The Human Use of Human Beings, he describes a power station in which information flows both ways between man and machine – status reports in one direction, commands fed back in response. However, a communication system requires a filter against erroneous or non-useful information. The filter was obvious in Postman's time – TV news editors, newspaper editors, and so on. The prevailing critique was that those editors might impose biases based on political or corporate diktats, restricting the "authenticity" of what was presented.
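Wiener's power-station loop can be sketched in a few lines: status reports flow one way, a filter discards erroneous readings, and corrective commands flow back. Every number below (the setpoint, the valid range, the gain, the faulty reading) is invented for illustration:

```python
# A minimal sketch of the two-way cybernetic loop Wiener describes:
# the machine reports its status, a filter drops erroneous readings,
# and the controller feeds a corrective command back.

SETPOINT = 100.0             # desired output level (illustrative)
VALID_RANGE = (0.0, 200.0)   # readings outside this are treated as noise

def filter_reading(reading):
    """Exclude information that is erroneous or non-useful."""
    low, high = VALID_RANGE
    return reading if low <= reading <= high else None

def command(reading, gain=0.5):
    """Feedback: a correction proportional to the error."""
    return gain * (SETPOINT - reading)

status_reports = [90.0, 105.0, -999.0, 98.0]  # -999.0: a faulty sensor
for report in status_reports:
    reading = filter_reading(report)
    if reading is None:
        continue  # the filter has excluded the bad information
    print(f"status {reading:6.1f} -> command {command(reading):+.1f}")
```

The filter is what keeps the loop healthy; the chapter's argument is that today's media system runs the feedback loop at full speed while leaving this filtering step out at the consumption end.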

The inherent problem with the overabundance of content is that there are filters at the source (the publisher) yet none at the point of consumption (the reader). The messages themselves are not filtered, though the information contained within them is. The media malware persists because there is no filter for bogus or "fake" news at the consumption level. Is there a non-invasive filter for such information – one that does not rely on Orwellian tactics? As Postman once said, information for the sake of information may not be useful; there is no value in knowing that Princess Adelaide has the whooping cough.

Publishers can publish skewed perspectives, lie by omission, or flat-out create fictions which consumers may or may not have the inclination to root out. The ultimate filter would be to deny these publishers ad revenue and let them go out of business. In January and February 2019, sites such as BuzzFeed, VICE, and Huffington Post (HuffPost) laid off approximately 2,200 journalists in the US, with BuzzFeed Australia signalling similar cutbacks to its editorial division. Perhaps the poisoned adrenaline pumping through the media circulatory system will wane – the market, it seems, has decided.

Content filtering is draconian and ought to be given the disdain it deserves. What our media environment lacks is a transparent, decentralised, and self-correcting information filter – one unbiased, impartial, and robust enough to counteract the terror of disinformation. Aggregation, or positioning oneself at a higher level of abstraction, as apps like NewsVoice attempt, may be one fix: it presents a filter for the disinformation crisis. It does not, however, solve the content crisis, and at present there may be no logical solution. The cybernetic feedback goes both ways but seems to benefit the system, not the user.

It would seem the information system on which we have come to rely is programming us, and we're being injected with malicious binary code.