Since the rise of Barack Obama as the first major presidential candidate of the social media era, the ability to become an amorphous vessel for a vast and often contradictory set of voter desires has become a defining feature of success. Election postmortems are now just iterations of this political psychology: the outcome merely affirms the writer’s already-existing view of politics. But what’s most striking to me about post-election analysis is that even when it threatens to land on a salient point, there’s a creeping sense I get of the author’s naive assumptions about most aspects of contemporary politics. Namely, the author betrays the belief that politics is about the collective action of individuals with agency. These individuals, they seem to believe, participate in conversations about ideas, develop views based on those conversations, and then act upon them via some measure of involvement in the political process. This involvement may take a number of forms, from street activism to the ballot box, from fundraising to organizing to the formation of paramilitary organizations. This organization of politics has not existed in quite some time. A new one began coming into being years ago and now has become dominant.
1.) INDIVIDUALS WITHOUT AGENCY
Amidst the current craze for demanding accountability in online space, it will surely annoy (infuriate?) many when I suggest that much of what takes place there is not quite the result of conscious contemplation, much less serious personal expression. Our experience of the internet is fragmented but also subsumed into a seamless flow. Anyone but the most unfamiliar with how being online is experienced knows that the flurry of clicks and keystrokes begins to take on a semi-conscious, somewhat automatic character. Part of this has to do with the basic fact of how brains learn: what a guitar player once had to think about consciously in early lessons becomes automatic later. This intersects with a key element of contemporary political discourse: artificial intelligence and social media algorithms. Because, as we know, this is where most of our conversation about politics now takes place.
Before going much further on the question of algorithmic intelligence I want to stake out a position on the question of just how smart AI on social platforms actually is. A common supposed rebuttal to fears of AI dominance comes in the form of mocking the algorithm’s surmises: “Facebook thinks I want to see the group Farmers For Trump LOL.” The analysis that the algorithm is relatively dumb is correct. The conclusion that it is therefore nothing to worry about is incorrect.
It is precisely AI’s stupidity that we must be concerned with here. An AI-administered society or political space where the AI was genuinely intelligent would certainly carry its own risks, but they would be different from the ones we face. The problem with which we must contend is that these systems, in their current incarnations, are blunt instruments designed to do something useful and profitable with the massive scale and speed of information made possible by digital technology. What are they designed to do?
It’s by now a truism to note that in the beginning the internet was a wild anarchic array of unrestrained activity. Many recall it as an exciting and fun time, when all kinds of bizarre and interesting possibilities were unfolding. But that internet wasn’t profitable. Social media and “Web 2.0” (shorthand for the collection of feature innovations such as the comment section, the like, the share, and the friend/follow) changed that. An increasing amount of human attention was being devoted to online space, but it couldn’t be measured. Social media invented what appeared to be metrics for attention, and once that attention could be measured it could be sold. This is relatively uncontroversial. But the speed and scale of information bring all kinds of structural problems. For one, when information moves fast, so does people’s attention. How do you hold on to it?
It would be tedious to rehearse the history of how social platforms discovered that they could “hack” cognitive biases and keep users engaged by favoring emotionally charged content. The industry has all kinds of euphemisms and buzzwords for this, like “high emotional valence” and “arousal.” The key, no matter where one derives one’s understanding of this, is to always listen to the developers and true believers in these kinds of technologies, because they speak about it with the most honesty. They talk about how to create habit-forming technology, how to get people hooked, how to “kindle an emotional fire.” And they speak unambiguously about how to target the oldest, least conscious sections of the human brain.
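The engagement logic the developers describe can be reduced to a crude ranking function. The sketch below is a toy illustration, not any platform’s real model: every field name and weight here is an assumption invented for the example. The point it demonstrates is structural, that a ranker rewarding predicted emotional arousal will surface inflammatory content over calm content regardless of accuracy.

```python
# Hypothetical sketch of engagement-based feed ranking.
# All field names and weights are illustrative assumptions, not a real platform's model.

def engagement_score(post):
    """Score a post by predicted interaction signals, weighting high-arousal emotion heaviest."""
    return (
        2.0 * post["predicted_outrage"]    # high-arousal emotion weighted most heavily
        + 1.5 * post["predicted_shares"]
        + 1.0 * post["predicted_comments"]
        + 0.5 * post["predicted_likes"]    # mild approval is worth the least
    )

def rank_feed(posts):
    """Order a feed so the most emotionally charged content surfaces first."""
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    {"id": "calm_explainer", "predicted_outrage": 0.1, "predicted_shares": 0.3,
     "predicted_comments": 0.2, "predicted_likes": 0.8},
    {"id": "inflammatory_take", "predicted_outrage": 0.9, "predicted_shares": 0.7,
     "predicted_comments": 0.8, "predicted_likes": 0.4},
])
print([p["id"] for p in feed])  # → ['inflammatory_take', 'calm_explainer']
```

Note that nothing in the scoring function knows or cares what a post says; it optimizes only for the strength of the reaction it predicts, which is the “stupidity” at issue here.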
If we just stop here and summarize the points until now, an alarming picture comes into focus. In an effort to create profit from the scale and speed of information, tech corporations have targeted our nonconscious brains in order to keep our attention focused long enough to monetize it. Now, if an apparatus like this required hoses and wires to be attached from a main computer to each human brain so that individual agency could be undermined and our behaviors directed towards pure profitability, we’d recognize that as the kind of campy mind control machinery portrayed in various 1950s mad scientist films. It is the mere lack of need for those items that creates the illusion that we are not each hooked up to such a mass mind control machine. And it is the imagery of these old films that we tacitly understand is supposed to be silly, therefore we avoid the proposition and reject the vocabulary before it’s even proposed. What if it’s accurate?
There’s a tension between the kind of self-appointed tech industry conscience figures that appear in “The Social Dilemma” and their critics, who rightly note that these figures are engaged in a sort of rebranding; that their remedies don’t really sound that different from the problem. These critics tend to dismiss or shift focus away from the critique of someone like Tristan Harris, who is saying outright: we created a mass mind control computer. I think that’s accurate, but it calls for something far more drastic than he recommends. Someone like Harris claims that we could make our devices work for us rather than us work for them, but I think a far more urgent task is convincing people that they work for their devices.
But part of what makes it difficult is that even the notion of work implies self-directed activity. The way in which we work for our devices can’t be viewed outside of the means by which our devices exploit our nonconscious brain. We are being used to do work that we have not chosen to do. We do this work not fully aware that we are doing it, or more to the point, convinced that we are doing other work. Work that we want to do, work that we believe in. Work that is urgent, necessary and if we do not do this work, the world will be incinerated in the very near future. It’s worth noting that nearly everyone from nearly every political persuasion believes this, and that since this notion has gained such widespread purchase, almost nobody believes their lives are improving. And yet the basic premise of the Great Work Which Must Be Done is rarely questioned.
2.) CONVERSATIONS AS NONCONVERSATIONS
The Great Work Which Must Be Done is not limited to but always contains a key component of writing things on the internet. In a pantomime of the 19th and 20th centuries, nearly all political identities embrace the notion that we must make our voices heard. What can this possibly mean after the mass worldwide protests during the lead-up to the Iraq War? By some counts this was the largest collective effort at making voices heard in all of human history, and it resulted in…nothing. Or more to the point, it resulted in the invasion and occupation of Iraq. As someone who has been on a computer since the Apple IIc and PC Jr., has had the internet since Prodigy and AOL, and had my political awakening somewhere in between, I can say without much reservation that the Iraq War was the first political issue I consumed almost all news about online. This tool that made so much connectivity possible didn’t seem to translate that connection into power. Not in the way we’d hoped and imagined. (This may have been the moment when a certain cadre of people noticed that 21st century western capitalist democracies could actually absorb and manage an incredible amount of tension between the will of the people and the actions of the state.)
And yet somehow making one’s voice heard remains a cornerstone of every political idea that can draw something on a flag and many that can’t. One might be forgiven for thinking this is bizarre, but even more strange is my sense that some deeply ingrained value or ideology guards against the asking of this question and instead redirects it back into more opinion-voicing and stand-taking in the name of The Great Work Which Must Be Done.
What if what we call our conversations are not conversations at all? What if the appearances and habits and contents of communication have been taken hostage, broken up into pieces that can be easily sorted and administered by a dumb, profit-based AI using semiconscious human subjects as vessels? One way to avoid the sci-fi-ness of this assertion is to think instead about structure and logistics. The internet allows for a radical diversity of ideas. If ever we might have encountered a communication technology that allows for complex identities and parallel reasonings, for nuance and nonbinary thinking and being, it is the internet. So why do we experience social platforms as orgies of A vs. B? Of simplified identities and zero-sum reasoning? It is because social media AI is basically stupid. It hasn’t learned how to monetize nuance and solution-based communication, but it is very good at monetizing severe good-vs-evil binaries.
The problem with stupid AI has always been that the categories of its understanding will become the categories according to which society is administered. This is the underlying theme of every robot-averse bit of pop culture from the 20th century. In Robocop the problem with OCP was that human behavior is too complex to be codified into something an ED-209 can grasp. But the moment you give it guns and authority to kill, you force society to bend itself to the stupidity of ED-209’s understanding of the law. Here we can finally return to election postmortems and the ongoing Twitter “conversation” (i.e. The Great Work Which Must Be Done). One may unambiguously state that racism, for instance, is evil. But those who spend their time online doing so, sorting various behaviors into good-vs-evil boxes, still contend that this kind of stand-taking matters. They say this even though none would claim that the world has gotten better the more stand-taking goes on. This is because stand-taking is an encouraged activity for the mass mind control machinery. The more of it one supposedly does, the more profit the machine can deliver. Stand-taking is gamified, subsumed into a mostly ethics-free monetized informational flow.
Imagine how much harder the game would be to play if The Great Work Which Must Be Done consisted in sorting persons and ideas into multiple, sometimes non-competing categories. Fewer people would play, and fewer would play as effectively as they currently do. Imagine if the game had an endpoint or was oriented towards solutions. When such a point was reached, the profits being generated by game play would decline sharply. For as much popularity as post-Enlightenment and posthuman concepts have gained in recent years, it’s interesting that anyone could seriously entertain such a humanistic Enlightenment assessment of social media usage and related political debate and action. The moment voters who have been inundated with (what appears to be) information on their newsfeeds go to vote, we practically treat them as classroom ideals, subjects with agency who take ideas to the democratic process in order to make their voices heard. There is no good reason to do this. Assertions are just tools for the game. Twitter disagreements, topic threads, downthread discussions, subreddits: these make far more sense as blunt instruments of dumb, profit-oriented AI than as the technological extensions of the subject and the political process in a democracy.
3.) YOUR VIEWS AREN’T YOUR OWN
If what I have sketched above feels more true than untrue, then it shouldn’t be terribly controversial to suggest that the naive classroom view of a person with an identity and a set of ethical values, sorting through information to arrive at a view which they then express via the levers of democracy, is no longer a common or likely thing. It shouldn’t even be shocking to suggest that perhaps the dumb yet extremely powerful AI can exploit people in a nonconscious state of game play flow to instill, or at least strongly suggest, certain political positions.
We are happy to admit that social platforms are association makers. Such associations come to us in the form of suggestion, which implies a psychology of choice: “It’s just a suggestion, you can choose otherwise if you want.” But everyone who understands the basic history and methodology of advertising knows that this is a lie. If suggestion had no effect on choice, nobody would advertise anything ever again. The power of suggestion over a fully conscious individual is impressive enough. The rhetoric of digital empowerment leads many to believe that whatever amount of suggestion they encounter online, they are less susceptible to manipulation than those who existed in previous media landscapes. In fact, the opposite is true. The constant attack on our lizard brains makes contemporary digital citizens less empowered than ever.
This is why it’s reassuring for onlookers to see the rise of global neofascism and claim that we’re only seeing the hate that already existed in people. To accept that perhaps this hate is being inculcated, sometimes intentionally, but also as a natural outgrowth of a supposedly empowering technology, strikes at the sense of ourselves as free agents operating for the Forces of Good on the Frontiers of Tech-liberation and Progress. It also horrifies those who rightly insist on mechanisms of accountability for those leading new antidemocratic and authoritarian movements. They are scared that if people aren’t responsible for their beliefs, they’re not responsible for their actions. I would focus instead on an inversion of Eugene Debs’ famous quote, which I think can animate a weird kind of hope. Debs said, “I would not lead you into the promised land if I could, because if I led you in, some one else would lead you out.” But following Debs’ point: if people can be this easily led into hell, they can be as easily led back out.
4.) ACTION & INVOLVEMENT
When we look back on the recent presidential election in 25 years, I strongly suspect that one of the smaller stories now will be one of the biggest then. Trump and his supporters understand it, if not on a theoretical and conceptual level, then at the gut level. There was a last minute Comey letter style intervention in this election. It affected outcomes at the most basic level of the dynamics I have outlined above: it fundamentally altered game play in the home stretch. Crucial rules, which draw the boundaries of the game and determine how it is played and what kinds of suggestions are being made to millions of people at a time while they sit transfixed in a flow of semiconscious game play, suddenly changed. Twitter blocked the NY Post’s story on Hunter Biden’s laptop, then started hiding Trump’s tweets about election fraud behind disclaimers. Facebook removed multiple QAnon-style groups with hundreds of thousands of followers who were hammering users with inflammatory falsehoods.
We used to say “Sunlight is the best disinfectant” during the time of the political subject with agency who has conversations about ideas and then enacts power to realize their convictions via democracy. But today, ideas are a torrential flood, and the rushing water is either blocked or it runs free. What changed in 2020 was that the floodwaters ran down new pathways. The people were ever-so-slightly (see: the thin margins of victory) led out of hell by the forces that had led them there.
When people conceptualize action as stand-taking and take-making (or take-approving, take-disapproving, or take-sharing), they’re conceptualizing their involvement as participants in a conversation that only appears to be happening. They’re embracing an outmoded notion of the free flow of information, where truth triumphs over lies in a world of empowered individuals. But our current dilemma is that a vast mind control machinery wields the most actual power, and all of those previous notions of self and action have been largely consigned to history’s dustbin. The machinery draws individuals into this process, but its rules, not the laws of rational debate, determine political outcomes. This means in some way that all politics is now cyberwar, and much cyberwar involves mastery of the video game at its structural level. Meanwhile the take makers and those doing the Great Work Which Must Be Done talk about politics at the level of how the video game’s outcomes are experienced and perceived. These perceptions rarely describe what’s really happening.
Many, whether partisans or resistors of global neofascism and anti-democracy movements, seem to now agree that history doesn’t bend towards progress. I think it can be shown or at least persuasively suggested that the mind control machinery bends towards fascism, or whatever word best describes the new postdemocratic authoritarianism. In the next post, I hope to map this tendency not of a neutral tool which acts however its programmers tell it to act, but of a biased infoweapon which, like a gun, tends to be used for certain things and not others…