Critical Topics: AI Images, Class FIVE

Images and Surveillance:
Nobody is Always Watching You

George Orwell's novel 1984 imagined a future where language was severely constrained and all technology was oriented to state surveillance. The first time I read 1984 was 1986. It was the first novel I'd ever read. My grandfather had it lying around his house and I picked it up, swept into a world of politics, torture and propaganda that has stuck with me for most of my life. I credit that to the sheer force of Orwell's writing, but also to the fact that this particular book would be blunt-force trauma to the head of any first-grade kid who'd never read a real book before.

This year, 1984 is old enough to be in the public domain, so I encourage all of you to go out and publish your own editions. I think you’d find it a challenge to modernize. I’ve seen adaptations over the years, where new technologies play a larger role in Big Brother’s omniscience, but the thrust of Orwell’s argument is not about what the technology is, but how we use it. The telescreens people watch (and that watch people) are not that different from the hidden microphones used elsewhere in the book. There’s nothing fancy here: no robot super soldiers, no talking cars, not even hovering police drones. 

The technology of surveillance in 1984 isn't the result of smaller circuit boards or smarter artificial intelligence systems. Instead, the future is wrapped up in ideas about what the limits on surveillance could be, and how omniscient it could become. Orwell's world is a dystopian, totalitarian state where surveillance is not only imposed from outside, but inspires citizens to surveil themselves. Orwell imagined telescreens in every home, broadcasting propaganda and entertainment, but also serving as a two-way mirror to monitor those who watched. These were windows into the prison cell, but the cells were our homes.

Jeremy Bentham's layout of a "humane" prison, with a church steeple at its center. Within each pentagon, a control tower looks out over rows of prison cells. No prisoner knows where the guard is looking, and so each prisoner acts as if they are the one being observed.

Orwell’s vision was a nation-wide upgrade of a system designed for prisons by the philosopher and social scientist Jeremy Bentham in the 18th century, called the Panopticon. The panopticon is simple, and it’s still used today: a central tower with opaque windows, surrounded by a circle, or pentagonal arrangement, of cells. This design meant that the guard could look out from any window of the tower and see every prisoner. The prisoners, on the other hand, never knew when the guard was watching. Because they didn’t know when the guard — the eye of the panopticon — was looking at them, they behaved as though the guard was always watching.

This self-imposed imprisonment meant one guard could keep tabs on an entire building or wing full of prisoners. The floor plans of these prisons look like a child's drawing of flowers or suns: here you see six different panopticons with a church steeple at the center, suggesting the eye of God. For Bentham, this was a humane improvement, showing that power could be exerted not through torture or beatings, but through psychological control.

Orwell replaced the prison tower with screens. The screens came on and blared propaganda and, it seems, allowed for some 1948 version of video conferencing. But in Ingsoc's Oceania, the television also watches you. Orwell wrote:

“…there was of course no way of knowing whether you were being watched at any given moment ... you had to live ... in the assumption that every sound you made was overheard, and, except in darkness, every movement scrutinised…"

Because the citizens of Oceania had no idea when Big Brother was watching them, they behaved as though Big Brother was always watching them. 

If A Tree Falls

This does not seem to horrify us the way it horrified Orwell back in 1948. I started to realize this sometime in 2010, when I started seeing updates on Facebook about the locations my friends were visiting: "So-and-so is the mayor of McDonald's." That was an app called Foursquare. It let you tell your friends where you were at all times, and how often you had been there. Soon after came Instagram, and its endless stream of location data and food photos told me not just where my friends were but what they were eating. Along with these technologies emerged a surveillance economy, all tied together into the hottest commodity of the 2010s: Big Data.

Big Data is the byproduct of your digital transactions. If you swipe a credit card or place an order on Amazon or any other digital retailer, the retailer keeps records of what you've purchased, linked to your name and email address. This information is cross-referenced with, say, grocery store purchases or gas fill-ups that you make in person. The idea is to anticipate your purchases and make personal suggestions and recommendations: Netflix can tell you what to watch, Amazon can tell you what to read, Spotify can tell you what to listen to, and Tinder can tell you who to date. YouTube can recommend what conspiracy video to watch next, Google can tell you which website to look at. Facebook can tell you what to buy, read, eat, watch, listen to, and vote for.

Somehow, being observed and directed all the time in this way doesn’t seem to horrify any of us the way it terrified people back in Orwell’s day. Instead, we are thrilled with the convenience. Or, to be less judgy about it: I am thrilled by the convenience. I love finding new music. I met my partner through a dating website. I wouldn’t be the first person to suggest that the sheer volume of data has lulled us all into a sense of security: we thought we had control over who saw our data and when we shared it, forgetting that the apps are tracking us either way. We feel that with so much data out there on so many people, nobody is watching all of it, and certainly nobody is watching little old me. 

If you're reading this on your home telescreen, websites and telecoms are gathering your data. They'll know how long you read, how much you skipped, maybe even what you typed to your friends in the chat window, but you're probably comforted by the fact that nobody will ever look at that information, because there's just so damn much of it. Or, if they do, you will be an anonymous blip in a spreadsheet. Nonetheless, this is surveillance, and I'd ask where you personally draw the line.

Last year, Microsoft acknowledged that contract workers in China had access to your Skype calls, and could listen to them in their homes in order to work on improving voice recognition. They didn’t know your name, per se, but they heard your voice, knew how you spoke and what you spoke about. Is that comforting? Siri, Alexa and Google all have similar practices. Facebook has access to everything you’ve ever written, including DMs, and has, for whatever reason, been purchasing data on women’s menstruation cycles and sexual frequency from your health apps. 

The age of Big Brother is now the age of Big Data, and we have moved from an interest in monitoring the behavior of citizens to monetizing the behavior of citizens. And it's easy to see that the majority of people don't actually care about this stuff, or at least don't know about it. Perhaps it's because we have all made an unconscious bargain: we trade our privacy for information about ourselves and the conveniences that gets us. But I want to think through some of the connections here, to think about what Orwell might make of our hands-off attitude to the stuff of our lives. Orwell was explicit about the role of Big Brother: its citizens behaved as if someone was always watching them. But our thinking about Big Data is the inverse: if everyone is always being watched by machines, if everyone's data is sitting in a massive repository of hard drives somewhere in California, then nobody is being watched at all. And I want to frame that for what it is, which, dare I say, is actually "Orwellian" — War is Peace, Freedom is Slavery, and now: Surveillance is Anonymity.

1948

A photograph of computers as they were in 1948: women doing difficult math problems.

When Orwell was writing the book in 1948, the word computer had a different meaning. Computers were people — typically women — who did arduous math with slide rules for practical purposes, such as weather prediction or engineering. The first American weather computer was created in the 1870s: an office full of women who took in weather reports and other information and crunched numbers in intense, two-hour bursts, sending information across a “signal service” network using telegraphs.

When Orwell describes windowless buildings full of vast bureaucracies sorting through and revising data, I don't think it's too bold to say that some of the institutions in Orwell's book were literally computers as they were defined in his time, powered by human calculations and judgements. We can trace an imaginary timeline from those structures to what they would look like built with today's technology and infrastructure.

The year that Orwell was writing 1984 is the year that Norbert Wiener wrote a book describing a framework that showed how engineers might begin transforming human computers into mechanical ones. That book was called Cybernetics: Or Control and Communication in the Animal and the Machine. Control and communication have scary connotations — especially as we're reading Orwell — but while Wiener was interested in the idea of automation, his writing is marked by deep skepticism, even tremendous pessimism, about its consequences. In a later book he wrote to popularize cybernetics, he added this warning:

“When human atoms are knit into an organization in which they are used, not in their full right as responsible human beings, but as cogs and levers and rods, it matters little that their raw material is flesh and blood. What is used as an element in a machine, is in fact an element in the machine.” [185]

Wiener and Orwell were both concerned with communication and control, and both were pessimistic. Wiener foresaw the development of automated industrial technologies, much of which came to pass in his own lifetime in the robotic assembly line. I don't know if Orwell ever read Cybernetics.

This fear of computer surveillance made its way into culture in other ways. In 1965, the French filmmaker Jean-Luc Godard made a film called Alphaville, in which Big Brother is reimagined as a computer system called Alpha 60. 

In that film, Godard imagines a future where all human bureaucracy is handed over to a computer system, and humans place all their faith in its decisions and pronouncements, regardless of whether they understand them, answering questions with a shrug and the expression, "Never ask why, only say because."

That line sums up the history of building artificial intelligence systems. In the early days, automation was simpler: if a human worker had a series of steps to complete to produce an object on an assembly line, you simply created a program that could imitate all of those repetitive motions. That's what a computer program is: a series of steps that opens onto other series of steps, to achieve some goal. In the late 1970s and 1980s, the goal shifted to automating human decision making itself, to accomplish more complex tasks. Sorting out how humans made these decisions was left to computer programmers, rather than social scientists, psychologists, or anthropologists. These programmers would only have to figure out what decisions people made, rather than why they made them: "Never ask why, only say because." In effect, you would send a computer programmer into an existing bureaucracy to look at the decisions being made by its workers, and then build a model to replicate those decisions the way you would build a robot to repeat the construction of car parts.
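
To make that concrete, here is a minimal sketch of what such a replicated decision process might look like, assuming a hypothetical benefits clerk whose observed choices are encoded as rules. The application fields and thresholds are invented for illustration; the point is that the program captures what the clerk decided, not why.

```python
# A sketch of the "never ask why" approach: encode only the *decisions* a
# hypothetical clerk was observed to make, with no record of the reasons.
# The fields, thresholds, and categories here are invented for illustration.

def clerk_decision(application: dict) -> str:
    """Replicate what the clerk was observed to decide, not why."""
    if application["years_at_address"] < 1:
        return "deny"        # the clerk usually denied recent movers
    if application["prior_denials"] > 0:
        return "deny"        # ...and anyone who had been denied before
    if application["income"] >= 30_000:
        return "approve"
    return "refer"           # everything else went to a supervisor

print(clerk_decision({"years_at_address": 0.5, "prior_denials": 0, "income": 45_000}))
# -> "deny". The rule fires identically every time, with no trace of whatever
#    circumstances the human clerk might once have weighed in person.
```

The program preserves the what of the clerk's decisions with perfect consistency, and loses the why entirely.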

The aim was to remove the human element of human knowledge and make it consistent, repeatable, and replicable. Machines would strip away the messier, emotional and relational approach that human beings take when making decisions, and make the same decision in every comparable case. Gone would be the imbalances of emotion, and soon mankind would enter the world of objective neutrality: cold and hard, like facts.

If you believe that someone stealing a piece of chocolate for a starving child deserves less severe punishment than someone stealing a piece of chocolate from a hungry child, you can begin to imagine how this approach might start to fail us. There are endless exceptions to situations, and exceptions to the exceptions. Yes, human decisions are irrational, swayed by emotion, and lacking objectivity. That does not mean that computer-driven systems are inherently rational, objective, or fair (and even if they were, that doesn’t mean they are desirable). Nonetheless, much of the industry still believes that the opposite of an emotional decision is a fair decision. And that is a value that gets encoded into the systems they build.

Newspeak

Artificial Intelligence systems don't make decisions on their own, and they never will. Humans tell them what to do. Humans give them their priorities. Humans program them to serve our will, in a world that we explain to them. The problem is that the language we can use to explain our world to them is hopelessly constrained by the logic that computers can understand. When we have to code the world into the constraints of computer memory or programs, the complexity of the world is inherently reduced.

When I read Orwell's passages about language, this is where my mind goes. 1984 comes with an appendix on Newspeak, the language used by the Party to limit and reduce thought, which has some parallels to the way we have to describe the world to a computer. Orwell writes about a series of vocabularies, and names the most basic set of words the "A vocabulary":

“It was composed almost entirely of words we already possess, words like hit, run, dog, tree, sugar, house, field, but in comparison with the present-day English vocabulary their number was extremely small, while their meanings were far more rigidly defined. All ambiguities and shades of meaning had been purged out of them — Newspeak words [were] simply a staccato sound expressing one clearly understood concept.”

Newspeak was also binary: the word for cold existed, but the word "warm" did not; warmth was simply "uncold," the 0 to cold's 1. If you spend time in computing, you can see the parallels: computers need concrete, explicit categories, on or off. They will not make a guess, because they can't. It will drive you crazy how dumb a program is while you're writing it. A lot of people assume that this demand for explicit instructions means computers cannot be influenced by society or culture and are therefore cold, rational observers and logicians. The opposite is true. For a computer to act or decide or predict, a human has to sort the world into small boxes of categorization. That process is entirely the product of human decision making, so it reflects human bias, assumptions, and ideas.
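
As a toy illustration of that sorting, here is a sketch of a Newspeak-style category in code. The 15-degree cutoff and the category names are invented; the point is only that some human had to choose them.

```python
# A toy illustration of the Newspeak-style reduction described above:
# a program that only knows "cold" and its negation.

from enum import Enum

class Feel(Enum):
    COLD = 0
    UNCOLD = 1   # no "warm", "mild", "balmy", "stifling" -- only not-cold

def categorize(temperature_c: float) -> Feel:
    # every shade of meaning collapses into one human-chosen cutoff
    return Feel.COLD if temperature_c < 15 else Feel.UNCOLD

for t in (-5, 14.9, 15.0, 32):
    print(t, categorize(t).name)
# The 15-degree line is not discovered by the machine; someone decided it,
# and that decision is repeated identically every time the program runs.
```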

It requires the reduction of ecological, social, and emotional complexity into as few categories and lines of code as possible. This is practical for the sorting of books or for an assembly line designed to manufacture car parts. It is less helpful, and borders on folly, when we attempt to impose strict categories on human behavior and society. Because when someone begins to carve complexity out of the world, they are making decisions about what they want to keep in. Those decisions end up inscribed into a mechanical process and repeated until the machine stops being used.

Hannah Arendt, in 1958, warned that in the translation of our world into the symbolic logic of machines, speech loses all power, and cannot be translated back into something comprehensible to human tongues or books. If we don't acknowledge the ideological positions and personal experiences that shape the way computers understand these categories, we hand the task of defining rules and making sense of the world over to machines: machines that follow laws only computer programmers get to write, and that make decisions we can't explain but trust anyway.

One of the more chilling elements of 1984 is that Winston is part of the apparatus he so despises. Orwell tells us that Winston's "greatest pleasure in life was his work" (38) and that he looked forward to "jobs so difficult and intricate that you could lose yourself in them as in the depths of a mechanical problem." In fact, the work he loves so much is erasing people from history so that they can be executed without leaving clear traces. He is literally vaporizing them, eradicating any proof of their existence. But he is able to abstract his work by focusing exclusively on the technical details, which he can carry on with joy even as he despises the state and its tactics. He hates the lies, but loves the craftsmanship of the lies he tells. Winston never pulls a trigger; he only shows up to work intending to solve problems. He doesn't care about, or cannot bring himself to see, the actual shape, or consequences, of the problems he is solving. Winston does not ask why, only because. Why are we even building these systems?

What “Orwellian” Actually Means

There is a system deployed throughout the United States: automated face recognition software. Today, cities around America are using artificial intelligence to identify people accused of crimes. To create an artificial intelligence technology that can identify human faces, here’s what you have to do. 

Abe Lincoln Image Classification Series, CC-BY-SA, Tomas Smits and Melvin Wevers. The image is first broken down into shades or colors. Those colors map to an index of numerical values. When certain sets of numerical values are consistently clustered, the system may associate them with facial features: for example, a patch of very dark cells surrounded by gray cells would be recognized as an "eye"; if a string of dark pixels is clustered near that patch, it may be determined to be an "eyebrow."

First, you have to tell a program what a generic human face looks like. You could sit and program the coordinates of eyes and noses. There's Abe Lincoln, his face broken down into clusters of pixels. Dark and light here are assigned numbers, and those numbers represent colors in the image. We used to have to manually tell the computer that the cluster in the center is a nose, just above that and to the sides are the ears, and so on. Today, thanks to machine learning, we can sort through thousands of photos and design a program that finds the patterns between faces. For our discussion today, the gathering and analysis of data for an artificial intelligence system is called "training." The set of mathematical rules used to identify patterns in that data is typically referred to as an "algorithm." Algorithms are used to make predictions: a weather algorithm might tell you when it will rain, a music algorithm might suggest a song you'd like.
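
To give a sense of that first, hand-coded step, here is a minimal sketch that turns an image into a grid of numbers and flags dark clusters that a hand-written rule might treat as candidate "eyes." The filename, the block size, and the darkness threshold are all invented for illustration, and real systems learn these patterns rather than hard-coding them.

```python
# A sketch of the pixel-grid step described above: convert a portrait to
# grayscale numbers, then flag unusually dark blocks that a hand-written rule
# might treat as candidate facial features.

import numpy as np
from PIL import Image

# "lincoln.jpg" is a placeholder filename for a portrait like the one on the slide
img = np.array(Image.open("lincoln.jpg").convert("L"))  # grayscale: 0 = dark, 255 = light

window = 8            # examine 8x8 blocks of pixels
dark_threshold = 60   # a human-chosen cutoff for "very dark"

for row in range(0, img.shape[0] - window, window):
    for col in range(0, img.shape[1] - window, window):
        block = img[row:row + window, col:col + window]
        if block.mean() < dark_threshold:
            # a dark block surrounded by lighter ones *might* be an eye or eyebrow
            print(f"dark cluster at ({row}, {col}), mean intensity {block.mean():.0f}")
```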

Facial recognition systems are largely built from images that programmers pull from the World Wide Web, with one of the largest and most used models coming from portraits people posted to the photo website Flickr. (That's the dataset.) Instead of writing code that describes every possible face, neural nets can study thousands of images, find patterns between them, and make predictions. For example, one model was trained to produce a random face whenever we want one.

Here are two photographs of people who don't exist. These are portraits made by a machine learning model that was trained on 70,000 images and asked to create something that follows all of the rules it learned by looking at those images. These are just two images: in ten seconds, you could create literally hundreds more. The model began by identifying patterns common to all of those images, and quickly sorted out that if an eyeball is here and a mouth is here, then a nose is probably in the center of the face.

When a surveillance camera sees a person in a parking lot that it decides doesn't belong there, an artificial intelligence tool can use this same learning to match aspects of a real face against an archive of images in a criminal database, or license database, or passport database. For example, the FBI has a photo archive of 640 million faces drawn from state driver's license data that it can compare with images from grainy footage of parking lots. Now we start the process of sorting and categorizing until we have a match. The system can quickly see that the center of your face has a nose and compare that nose to the pixel arrangements of noses in every photograph in those databases. Then it might look through those and see which ones have matching eyes, then ears, and so on, until it narrows an entire database of millions of license photographs down to a few pictures. Then it can identify you, compare you against a criminal database (or not), and pass that information on to agencies interested in knowing who is hanging out in a parking lot.
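
Here is a minimal sketch of that matching step, assuming the faces have already been converted into numerical feature vectors ("embeddings") by some model. The database, the vectors, and the record names are random stand-ins rather than real data.

```python
# A sketch of the matching step: rank every record in a face database by
# similarity to a "probe" face from surveillance footage.

import numpy as np

rng = np.random.default_rng(0)

# 10,000 fake "license photos," each reduced to a 128-number embedding
database = {f"license_{i:07d}": rng.normal(size=128) for i in range(10_000)}
probe = rng.normal(size=128)   # the face cropped from the parking-lot footage

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# rank every record by similarity to the probe and keep the closest few
scores = {name: cosine(vec, probe) for name, vec in database.items()}
top = sorted(scores, key=scores.get, reverse=True)[:3]
for name in top:
    print(name, round(scores[name], 3))

# Note: the search always returns *someone* -- the highest-scoring record --
# whether or not the person in the footage is actually in the database.
```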

Only after these systems were deployed in various cities around the world did anyone stop to look at the data that these systems were trained on — those 70,000 pictures downloaded from Flickr. And they only started looking because these systems were constantly getting Black, Asian, and Native American faces wrong. Even technology built for innocent purposes, like Face ID for unlocking your iPhone, wasn't working for Black folks. The same systems were already deployed in less than innocent contexts, including police surveillance systems. In those surveillance systems, Black men in particular were 100 times more likely to be mistakenly identified than white men. I want to be really clear here: this doesn't mean that Black folks hanging out in parking lots were getting away with crimes. It meant that when the machine saw a Black face, it was more likely to link it to someone else's name, and that person would get accused.

So, how did this happen? Well, when I looked at the dataset being used to train these programs, it quickly became clear that the images researchers downloaded from Flickr weren't representative of every racial demographic in America. Instead, these systems were being trained to recognize one category of human being: humans who use Flickr to share photographs of themselves or other people. I went in and looked at a sampling of the 70,000 images of faces that were used to train that model. The faces in the archive were mostly white: 3% of the faces were Black women, for example, whereas nearly 30% were white women.
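
The kind of audit described here can be sketched in a few lines, assuming an auditor has attached demographic labels to a sample of the images. The CSV file and its column names below are hypothetical.

```python
# A sketch of a dataset audit: count how the faces in a labeled sample of the
# training set break down by demographic group.

import csv
from collections import Counter

counts = Counter()
with open("face_sample_labels.csv", newline="") as f:
    for row in csv.DictReader(f):          # expected columns: image_id, race, gender
        counts[(row["race"], row["gender"])] += 1

total = sum(counts.values())
for group, n in counts.most_common():
    print(f"{group}: {n} images ({100 * n / total:.1f}% of sample)")

# If one group makes up 30% of the sample and another 3%, the model simply
# gets ten times more practice on the first group than on the second.
```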

The program became very good at recognizing white faces and very bad at identifying Black ones. This claim has been verified repeatedly — most notably by Joy Buolamwini at MIT and Deb Raji at Mozilla, whose research (alongside the work of many activist groups and grassroots leaders) encouraged IBM and Google to walk away from, or at least pause, the development of face recognition surveillance systems. Yet many engineers at smaller security firms and startups continue designing face recognition software on flawed datasets without ever looking to see whether there are broad racial disparities in them, despite knowing the false arrest and imprisonment rates of Black men in the US. In 2019 the Government Accountability Office found that the FBI had absolutely no steps in place to test the accuracy of its facial recognition software, and there are few laws on the books mandating audits of these systems or restricting how they are used.

Another case in point is the Chinese government's use of facial recognition systems to recognize members of the minority Uighur population and send alerts when a Uighur face is detected: essentially a program designed specifically for the surveillance of a minority group that has been regularly detained, with the UN reporting that more than 1 million Uighurs have been detained in China under vague and spurious legal charges.

It doesn't stop with face recognition. Another example of this logic is crime prediction technology, which, shockingly, is already a thing. Every once in a while, a state or a country gets the idea that we can predict whether you will commit a crime. Consider a system that looks at arrest rates in certain neighborhoods and then sends more police cars to those neighborhoods. Well, when you send most of your police to one neighborhood, most of your arrests are going to take place in that neighborhood. So the system keeps assigning police to patrol one neighborhood, which drives up arrests, which leads to more police being assigned to that neighborhood, which drives up arrests. This is a reinforcing feedback loop.
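
Here is a toy simulation of that loop, under invented assumptions: two neighborhoods with identical underlying offense rates, where recorded arrests scale with the patrols present and next week's patrols are allocated in proportion to the arrests recorded so far.

```python
# A toy model of the feedback loop described above. Both neighborhoods have
# the same true offense rate; only the starting arrest counts differ slightly.

recorded_arrests = {"A": 12, "B": 10}   # week zero: a small, random difference
TOTAL_PATROLS = 20
TRUE_OFFENSE_RATE = 1.0                 # identical in both neighborhoods

for week in range(1, 9):
    total = sum(recorded_arrests.values())
    for hood in recorded_arrests:
        patrols = TOTAL_PATROLS * recorded_arrests[hood] / total
        # what gets recorded depends on where the patrols are,
        # not on where offenses actually happen
        recorded_arrests[hood] += TRUE_OFFENSE_RATE * patrols
    print(week, {h: round(v, 1) for h, v in recorded_arrests.items()})

# The week-zero fluke never washes out: the neighborhood that happened to log
# a couple of extra arrests gets more patrols every week afterward, and those
# extra patrols keep generating the extra arrests that justify them.
```

The disparity the system "discovers" is a record of where it looked, not of where crime happened.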

Consider how that algorithm feeds into another algorithmic system: communities using software to predict repeat offenses, and using those predictions to determine bail. The software was shown to consistently suggest higher bail and risk scores for Black defendants, because it was trained on data showing that Black men were more often re-arrested.
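
A minimal sketch of how that label bias gets baked in, using synthetic data: both groups are constructed with the same true reoffense rate, but one is policed more heavily, so it is re-arrested more often, and that is the only signal the model ever sees. The numbers and the use of scikit-learn here are illustrative assumptions, not a description of any deployed system.

```python
# Synthetic demonstration: a "risk model" trained on re-arrest records learns
# the policing pattern, not the underlying behavior.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)                       # two groups, 0 and 1
reoffend = rng.random(n) < 0.30                     # identical true rate for both
p_caught = np.where(group == 1, 0.9, 0.3)           # unequal chance of being re-arrested
rearrested = reoffend & (rng.random(n) < p_caught)  # the only label we actually have

model = LogisticRegression().fit(group.reshape(-1, 1), rearrested)
for g in (0, 1):
    print(f"group {g}: predicted risk {model.predict_proba([[g]])[0, 1]:.2f}")

# The model assigns group 1 roughly three times the "risk" of group 0, even
# though the underlying reoffense rates were identical by construction.
```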

Last week the Washington Post ran a story on Nijeer Parks, a Black man whose face was matched to the scene of a crime even though witnesses placed him 30 miles away. Nijeer had a 10-year-old arrest record, but was still in the database. When police ran a photo of the real perpetrator, the system did what these systems are trained to do: it looked at his nose, eyes, ears, and hair, and found the closest match in the database. It ended up falsely pegging Nijeer as the suspect, and police arrested him. While he was in the system, another algorithm flagged him as a flight risk because he had a criminal record, and in the end, Nijeer almost pleaded guilty to the crime — which, let's be clear, he knew he didn't commit — because the computers had dug up so much evidence saying that he had. If he pleaded not guilty and a judge sided with the computer, he'd have faced a mandatory 10-year sentence. Luckily, lawyers intervened, and Nijeer avoided incarceration. But he is not the only example of this phenomenon; there are countless others around the country and the world.

A scene from Alphaville.

While Orwell talks about "thoughtcrime," of being arrested by the state based on its prediction of what your thoughts are, I can think of no better real-world example than the baffling tie between flawed surveillance tools and flawed sentencing tools that we continue to develop in this country. In Orwell's brutal Part Three, Winston is repeatedly forced to deny the evidence of his eyes and ears: to say that whatever the Party says is true, regardless of his own logic and knowledge. In essence, a computer decided that Nijeer Parks was the criminal, the police agreed with the computer, and with no better options, Nijeer was on the verge of confessing that there were five fingers when he knew there were only four.

People have this tendency to trust the outcome of a machine as if it were more objective than that of a human being, and to feel less personally responsible for the decisions a computer makes — what NYU New Media professor Clay Shirky has called “algorithmic authority.”

The result is a system that lets everyone off the hook for sustaining the ugliest of human biases: when we begin to rely on algorithms and computers, we tend to believe that the machine must be making good decisions, and that we don’t have the authority to question those decisions. We also tend to think that a computer cannot be racist.

The true consequence is that we surrender even the impulse to think critically about the messages the machines give us, and the contexts in which they are deployed. This, despite the fact that they were built by engineers miles away from the communities where they are used, sometimes using decades-old data, and usually using data that nobody even looks at. We ask the machine to make a guess, and trust that the guess applies to every situation we meet. We assume that the machine will be applied to everyone equally, regardless of their gender, race, or disabilities. Companies have designed police patrol scheduling software without ever considering that assigning more patrols to certain neighborhoods would influence the arrest rates in those neighborhoods. They have built sentencing software on data without looking at whether existing prejudice might have shaped those statistics. Never ask why, only say because.

When these systems fail, and we ask who is responsible, we hear from police departments that it was the engineers; the engineers say it was the data scientists; the data scientists say it was whoever produced the data; and ultimately, time and time again, you see errors of this type simply chalked up to a "flawed algorithm." But algorithms do not spontaneously emerge: they are designed by humans, and too often, those humans do not have the proper oversight to develop technologies that have vast social consequences. The mindset that we are safe in a vast sea of data, tiny raindrops likely to go unobserved, obscures the reality that for some, when the eye turns to look at them, nobody is there to help, to clarify, or to intervene.

This is what I refer to in the title of the talk: Nobody is Always Watching You. In our effort to reduce difficult problems with no easy answers into step-by-step instructions for computers to follow, we're bound to get things wrong, lose context, and prioritize people who share a background with the people designing these systems.

What’s Next?

We can do a lot to mitigate this: we can hire more diverse perspectives in engineering, so that people from the communities that are directly affected can contribute to the design and implementation of these systems. But some folks, like Timnit Gebru, formerly responsible for AI ethics at Google, have long asked whether we need to build these systems at all. If we know that a process has greater negative effects on one group of people than on others, do we need to start automating it at all? Do we need technologies to assist in the massively disproportionate use of the prison system against Black men or Chinese Uighurs? Might we invest in making those systems more justice-centered first? Technology is not the only way to build better systems. We can shift priorities. We can look at things from different positions. This is another strength of diversity: it amplifies the breadth of imagination in a room.

This isn't to lean into the glorification of pure emotion, or to candy-coat the hard work of design and engineering at the expense of reason. We need logic, we need reason. But a complex world also requires imagination, emotion, and subjective human judgement. The danger of emotion has never been that we might use it to guide us toward compassion and care for others. The danger of emotion is that we might convince ourselves that reality conforms to our rawest, emotion-driven intuition. When we look through the lens of compassion, we know that there are deep human challenges of judgment and morality and justice that are irreducible to algebraic equations or consistent, repeatable rules. We need to abandon the idea that we can simplify hard choices into easy ones.

Because that is the world of Big Brother, where we look at four fingers and say there are five. We know that predicting the future is a joke. We know we don't like every movie Netflix recommends. And yet, here we are, 20 years into the 21st century, believing that handing computers decisions about human freedom and dignity is a good idea.

Never assume that the incredible achievements you'll see happening in the field of artificial intelligence are a social good just because they are spectacular. I'm reminded of that horrible passage, early in the book, where Winston goes to see a propaganda film, watches the image of a child being blown apart by an explosion, and calls it "good." Winston is seeing only the technical capability of that film; he asks only whether it does what the filmmakers set out to do. Because it does, it is deemed good.

"Winston's greatest pleasure in life was his work." Well, I believe we can start to look at our algorithms and machine learning systems with more than the detached admiration of Winston in the cinema. Rather than admiring what technology has shown it can do, let's ask if this is a scene we want to see. Instead of praising the skill of engineers in squeezing even more capabilities out of the machines that surround us, we might ask if the machines are doing the things we need. I fear that the great risk of artificial intelligence is not that we build a machine that rises up and enslaves us, but that we build a machine that we surrender to out of boredom. That our exhaustion at the idea of chiseling out ethical, moral perspectives for ourselves means we will hand life and death decisions to a Roomba.

1984 is about the malleability of truth. I leave it to you to figure out where truth falls into the maths of machines, trained on general facts but applied to individual cases, leading to the imprisonment of innocent people. When we encode the past into an algorithm, and ask it to decide on the future for us, we aren’t creating an objective, neutral system: we are creating a digital zealot that persecutes everyone according to a set of limited principles it mistakes as universally true. The past will endlessly control the present.

Parting Words

Orwell gave us a tragedy, so I can’t go to his ending for an upbeat note to finish this talk. But I find a useful reminder where Orwell writes:

“The terrible thing that the party had done was to persuade you that mere impulses, mere feelings, were of no account, while at the same time robbing you of all power over the material world. Only two generations ago, what mattered were relationships — not loyalty to a party or country or an idea.”

Let’s put relationships at the center of our conversations around technology: if Facebook is making us hate each other, let’s ask whether we need Facebook in our lives. If automated surveillance systems are leading to false accusations against our neighbors, let’s ask if we need them in our neighborhoods. 

I can’t really turn to Norbert Wiener for optimism, either, though he did leave us with a powerful push to ask hard questions:

“It is relatively easy to promote good, and to fight evil, when good and evil are arranged against each other in two clear lines, and when those on the other side are our unquestioned enemies and those on our side [are] our trusted allies. What, however, if we ask, each time and in every situation, where is the friend and where is the enemy? When we have to put the decision in the hands of an inexorable magic, or an inexorable machine, [how might] we ask the right questions in advance, without fully understanding the operations of the process by which they will be answered?”

Instead I’ll end with Alphaville, Godard’s B-movie reimagining of 1984, set on a distant planet that looks suspiciously like Paris. Godard’s Alpha 60, the central computer that runs the city of Alphaville, is defeated with a few lines of poetry. Godard hits on something here: the human mind can be unpredictable, and that unpredictability can be what drives us to produce beauty, and art, and imagination. It suggests that human behavior can change, that people can grow, learn, and transform. When we build computers to predict who we will be based on who we have been, we give up on human potential. But technology does not always have to be this way. We decide the ends to which our systems are aimed. So let’s prioritize our relationships and our communities. Let’s ask harder questions about what technology promises. Let’s give space to humanity’s capacity to become better: to recognize the limits of what we know and how we know it, and build a future open to all of the mysteries we haven’t answered yet. 

Works Cited

  • Diana Forsythe (1993) Engineering knowledge: the construction of knowledge in artificial intelligence. Social Studies of Science 23: 445. 

  • George Orwell (1949) 1984. Signet Classics.

  • Jennifer Light (1999) When computers were women. Technology and Culture 40(3): 455-483.

  • Norbert Wiener (1948) Cybernetics: Or Control and Communication in the Animal and the Machine. MIT Press.

  • Norbert Wiener (1950) The Human Use of Human Beings. MIT Press.