Seeing the forest fire for the trees


I am still trying to catch up with the classics. The other day, I gave The Wire a go for the first time. While I appreciated the detective-story set-up of intrigue and bluster, what really struck me about The Wire was how the police officers on the upper floors were willingly using typewriters to do their work. The first series of The Wire was released in 2002; computers were commonplace but it wasn’t yet kitsch to use a typewriter with added irony, or absurd to use one without; the typewriter was on the turn. Now, a standard office not populated with ubiquitous screens – be they desktops or the laptops of the employees themselves – sounds truly ridiculous. Personal computers and the World Wide Web are integral parts of our waking days: they’re how we store information, how we access it, and how we process it. Even books, the standout stalwart in the face of a digital onslaught, are today drafted on computers. This is the sort of truth that we all know but rarely consider outright. The reason I raise it now is to make plain just how accidentally insidious this transition has been.

First, a series of disclaimers. Obviously, computers and servers and all their interlocking parts are more efficient than ledgers and notebooks and paper forms. The efficiency brought about by the technological revolution is, without a shadow of a doubt, a net benefit to humankind, provided it is used responsibly. The spread of information across digital media is immense and incredibly positive. Even ten years ago, it would have been difficult for me to find much detail on the ageless and apparently never-aging American actor Paul Rudd. Now, a cursory Google search throws up thousands of results, precisely because all the information is integrated. Libraries and archives are relegated to the sidelines; a thousand encyclopedias abound in the new hub of knowledge.

But this ubiquity comes at a cost. As highlighted in Tim Dowling’s recent column, most of the things we do in our daily lives nowadays, we do via the internet. Alternative avenues have become peppered with roadblocks and burdensome costs. Why buy an A-Z if you have Google Maps? Why plan your journey if you have Citymapper? Why stop at this one restaurant out of curiosity when you can visit the best one in town according to TripAdvisor? Of course, all of these changes have pitfalls that I needn’t outline here; melancholia can only get us so far. Regardless of whether new, integrated online services are doing a good job, they are now the dominant service in a myriad of sectors. Our computer or phone or tablet is our go-to for a wide range of tasks, where previously the personal computer was merely an object for coding or curiosity. It puts conspiracy theorists in mind of some HAL 9000-alike or, god forbid, Skynet: once the machine is omnipresent, it can mount some violent takeover. Unfortunately for us, there is a takeover, but it is totally, inevitably, passive.

“Screen time” is a taboo phrase. You need only consult Apple’s latest range of updates, or the childhoods of those of us with particularly antsy parents: we recognise that we should be placing limits on the time we spend on screens, not just because of their supposed “burn” but because we know it can become unproductive and harmful. What we’re only just starting to countenance is that screens are things we can become addicted to. This isn’t just down to the pull of the eternal blue glow, but to the myriad ways that programmers can design applications to draw us in: the somatic horror of the push notification, the red “1” demanding our attention, or even the promise of one. This is a self-reinforcing vicious circle, and it isn’t just a social thing. Desk jockeys are slaves to the hegemonic power of email, the taunting jingle of Microsoft Outlook. In Amazon warehouses, workers receive a “ping” when they have a new order that needs fulfilment. Anyone working freelance is beholden to social media, not least because, in media at least, it can often make or break a career. The notification complex, starving and saturating us at once, coddles us at work and follows us home, where it grows ever stronger once we are freed from the social customs that would frown on us for mindlessly staring at our phones. This addiction is harmful because it creates a disconnect between our honest interests and how we end up spending our time. We stop being the ones in control of our own behaviour and become like Super Hans in that episode where he runs to Windsor.

How did we get here? For one thing, we didn’t see the forest for the trees until the forest itself was very much on fire. We let our mental health be wracked by the unending advance of the World Wide Web once it had already integrated itself into our culture. Before you cry, “social media is not the same as the internet proper!”, I could point you to Slack, to Trello, to a whole gamut of applications that attempt to “gamify” work the same way that social media has gamified leisure. I could argue that even without all these notifications ramming our pleasure centres, the urge remains to blithely skim Wikipedia or scroll Facebook. Gratification takes many forms, as does what we associate with relaxation. As Adorno put it most plainly, our everyday lives have become so taxing that we are less likely to use our leisure time for something that would tax us in the same way; we become less prone to intellectual inquiry and more prone to mindless scrolling.

Another implicit cause has been the confusion of intentions and outcomes regarding the World Wide Web. The architects of cyberspace, like John Perry Barlow and John Gilmore, foresaw the creation of a “civilisation of the mind” over the internet that would overcome the growing tragedies of the late 20th century and the end of a popular, pervasive counterculture. The libertarianism proclaimed by these cyberactivists did not come to pass, and now we live with an internet that limits freedom of speech for the sake of corporatism and threatens what spectre of privacy remains. Moreover, the encyclopaedic marketplace of ideas that some idealists might see in the internet has been superseded by a total disorder of irreverent content that, more often than not, seeks short-term amusement – because that is what collects the most advertising revenue. The marketisation of internet content, unparalleled in the real world, has enabled the prevalence of the easily accessible over the difficult, touchy topic and prioritised the shallow swim over the deep dive; the very necessity of crowdfunding services like Patreon is the exception that proves the rule, such services barely sustaining content that tends towards the thoughtful in the face of a steady flow of Baby Shark-esque bile. This change is violently at odds with the aspirations of the social engineers of the 60s and the philosophers who inspired them: that we could use education to bring intellectual thought to the masses. The laissez-faire attitude to both public education and internet policy is alarming.

This is a vile Catch-22. The mutually reinforcing nature of systems on the World Wide Web is bound to turn more and more passive viewers over to content that is easy rather than content that is necessary, while a placid addiction to the screen is compounded by the changing nature of everyday life. The only way that this borderline dystopian social malaise can be avoided is by radically altering the internet itself. We need supranational intervention on a massive scale to change the way social media, internet communications and internet entertainment all work. Otherwise, we risk sowing the quietest revolution in history: the slow, serene disengagement of the world from itself.