Critical Topics: AI Images

Class Two: Cybernetic Serendipities

Lecture by Eryk Salvaggio

To understand the history of art and AI, we need to start with the history of computers, and what might be the earliest example of generative artwork. We also need to ask a broader question that may seem quite basic: what is a computer? There are lots of precedents, but I want to start in 1914, when the Spanish engineer Leonardo Torres Quevedo presented a machine that could play chess.

The Chess Player, exposed gears. CC0. Source.

The first model could only play a specific endgame, with the board reduced to a king and rook against a lone king. In 1917, he built another model, in which the pieces were moved using electromagnets under the board. The machine was even able to speak, using a little phonograph record to tell players when they had broken the rules, and it would reset the game if its opponent made three illegal moves.

The automated chess player was able to do this through mechanics, that is, through a series of rules enforced in the design of the gears. You set these mechanics up underneath the board by placing the pieces magnetically, and then it was a matter of clockwork: sliding pieces moved gears, which locked other gears, and so on. I don't have any evidence for this, but I suspect that when we talk about "game mechanics" in video games, we're talking about the same thing: an unseen system that controls the moves of the game, determining the start and finish of play.

The Chess Player wasn't just the first working game-playing automaton; it was also the first real computer game, however limited it was. Today we might think of it more as a clock-game, but "computer" is one of those words whose meaning changes over time. The game was limited by its literal mechanics: the complexity of playing a game with more than a few pieces would have been astronomical, because of how intricate the gears would need to be. This constraint on mechanical thinking inspired a whole range of computer scientists to think about new ways of calculating and solving problems.

In 1936, Alan Turing wrote a paper that spoke to this problem of mechanical calculation. To really understand what Turing did, it's important to understand that computers in 1936 were human beings, and that definition really held true up to the 1970s.

Women working in the “computer room” in 1924. They were the computers — that was their job title. The machines were the tools they used.

The image above is from the Computer Room of the US Veterans Bureau in 1924. If you’ve seen the movie Hidden Figures, you may be familiar with human computers. Basically, you had physicists and engineers or mathematicians working on theoretical concepts. These concepts would then be handed to a room full of people - usually women - who did the actual calculations. Turing was interested in how to build a machine that might be able to turn these conceptual, abstract theories into the appropriate math: how do you replace human calculation with mechanical calculation? 

The result is called a Universal Machine, later dubbed the Turing Machine. Turing never built this thing, which is why we call it a theoretical machine, but it proposed a framework for eventually building what we would recognize as computers today. Here's a video to explain how Turing thought it could work. I recommend watching from 5:28 to 11:45, just about six minutes.
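If it helps to see the idea in code as well as in the video, here is a minimal sketch of a Turing machine in Python: a tape of symbols, a read/write head, and a table of rules. The machine below is my own toy illustration (it just flips bits until it hits a blank), not a reconstruction of anything in Turing's paper.

    # A toy Turing machine: a tape, a head position, a state, and a rule table
    # mapping (state, symbol) -> (symbol to write, head movement, next state).
    # The rule table here is an illustrative example, not from Turing's paper.

    tape = ["1", "0", "1", "1", "_"]   # "_" marks a blank cell
    head = 0
    state = "FLIP"

    rules = {
        ("FLIP", "0"): ("1", +1, "FLIP"),   # read a 0: write a 1, move right
        ("FLIP", "1"): ("0", +1, "FLIP"),   # read a 1: write a 0, move right
        ("FLIP", "_"): ("_", 0, "HALT"),    # read a blank: stop
    }

    while state != "HALT":
        symbol = tape[head]                          # read the symbol under the head
        write, move, state = rules[(state, symbol)]
        tape[head] = write                           # write the new symbol
        head += move                                 # move the head

    print("".join(tape))   # prints "0100_"

The point isn't the bit-flipping; it's Turing's argument that any calculation you can break down into a finite table of rules like this can be carried out by the same kind of machine.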

By 1937, Turing was talking with another scientist, John von Neumann, about the possibility that the human brain might act in a similar way to this Turing machine. Von Neumann went on to make major contributions to computer science based on this idea that the brain works like a machine. And this line of thinking is where we get the idea of the Turing Test, which Turing formally proposed in 1950: can you make a machine that could confuse an interviewer about whether or not it is human?

Around this time, Warren McCulloch and Walter Pitts propose something called a neural network. They did this not to build a computer, but to argue that the human brain worked like a machine. They understood the machines of their era, and started grasping at ways of applying machine metaphors to human thinking. The neural net works like this: a neuron in your brain is activated by certain inputs, which we would consider variables. The neuron either fires or doesn't fire, and in doing so it sends information along to other neurons.

So picture the image to the right as one part of a vast network of neurons in your brain. Each neuron has to make a decision, and each neuron fires or doesn’t fire according to certain thresholds. 

Let's use three variables to keep it simple. If any of these three conditions are met, the neuron might fire, depending on its threshold. Then the other half of the neuron takes that sum and sends it along as information to the next neuron. This is, basically, a children's book explanation, so do dig deeper; I just want to give you a very, very rough idea of how these work in theory.

How does a neural net make the decision of whether to eat some pizza on a table in front of you? Again, the information in a real neural network isn't as clear cut as this; this is a really simple overview. But let's convert those variables into something clearer. You're very hungry, so your threshold to eat pizza is one: any single reason is enough. If you were hungry, you'd eat pizza. If you weren't hungry, you might eat pizza anyway because it's there. And if you like pizza, you might eat it just for the hell of it.

To represent this as a neuron, any one of these factors would be enough to predict you would eat a slice of pizza - hence, 1 is the threshold. But you in fact have three factors in play! So there is a very strong signal that you're gonna eat that pizza. This represents a kind of information flow where the neuron fires if any input at all comes in.

But let's say you're hungry but also extremely polite. There's pizza on a table, but nobody has offered you any. You now meet only two of the criteria, and your threshold is actually 3. So no, you won't eat the pizza. This arrangement would require ALL of the variables to be met before the neuron fires and passes on the information.
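Here is that threshold logic as a tiny piece of Python. It is only a sketch of the sum-and-threshold idea from McCulloch and Pitts; the pizza inputs and the thresholds are just the illustration above.

    # A McCulloch-Pitts style neuron: sum the binary inputs, compare to a threshold.

    def neuron_fires(inputs, threshold):
        """Fire (return True) if the sum of the inputs meets the threshold."""
        return sum(inputs) >= threshold

    # Threshold of 1 (an OR-like neuron): any single factor is enough.
    hungry, offered, likes_pizza = 1, 1, 1
    print(neuron_fires([hungry, offered, likes_pizza], threshold=1))  # True: eat the pizza

    # Threshold of 3 (an AND-like neuron): all three factors are required.
    hungry, offered, likes_pizza = 1, 0, 1   # hungry and a pizza lover, but nobody offered
    print(neuron_fires([hungry, offered, likes_pizza], threshold=3))  # False: stay polite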

This model of neurons (later formalized as the perceptron) turned out to be remarkably versatile, and not just for thinking about the human brain, which was the goal. Remember, this was scientists taking the metaphor of a machine and trying to explain human decision making. Later on, computer scientists would look at this model of the human brain and use it to replicate human decision making. I say this because there's something pretty important there: scientists looked at machines to model a human brain, and then, years later, other scientists looked at that model of the human brain and used it to model computers.

But this is just one model of how the human brain works: it isn't necessarily the only one. Still, the relationship between neuroscience and artificial intelligence started there. This is the lineage that eventually gives us the term "artificial intelligence." It's not that we're making a replica of a brain. We're building something that behaves according to a simplified model of a brain. The "artificial" part means a lot!

The neural network model was pretty widely ignored until it was picked up by Norbert Wiener. During World War II, Wiener, a mathematician at MIT, was tapped to solve the problem of tracking airplanes. The UK was constantly under siege from Nazi air raids, including a 256-day stream of bombing over London. The US asked Wiener to develop a technology that could help counter them. He went about it by looking in an unusual place for the time: information theory.

To build a better anti-aircraft mechanism, he looked at the interaction between the gun and the pilot as an exchange of information. The gun fires, the pilot registers it and changes course. The gunner then readjusts their aim, and the pilot changes course again. Wiener articulated this as a feedback loop. He saw the machine as a tool for mediating that feedback loop to the advantage of the gunner on the ground. The gun worked because, through mechanical means, it could anticipate the movement of the plane and adjust itself accordingly, reducing the time needed to recalibrate and fire. You would follow the aircraft with the sights to calculate its speed, and if the shot missed, the machine would reset and adjust to the new position, until the plane visibly changed course, at which point you would stop firing and recalibrate.

A cybernetic feedback loop.

Later, this feedback loop would be at the heart of cybernetic decision making. You line the plane up in the crosshairs, and that information moves to adjust the gun (A). If you hit the plane: success, and you start over on a new plane. If you miss, that information goes to step (B), and the machine adjusts itself to where it predicts the plane will be next, based on a mechanical calculation system.

This feedback loop in machines came to dominate Wiener's thinking. He started seeing feedback loops everywhere he looked. Here we see Wiener looking at the mechanical chess player we talked about earlier. Through publishing his work in this area, he was able to garner enough interest that a series of conferences was held – the Macy Conferences, which had started in 1941 but in 1946 pivoted to this new idea that Wiener was popularizing: cybernetics.

Cybernetics comes from the word kybernetes, the Greek word for a steersman: the person steering a ship. Wiener was also inspired by an early self-regulating technology: the governor, a bit of machinery designed to control the speed of steam engines. As the engine ran faster, a set of weights spun by the engine would fly outward; if they spun too quickly, they pushed a lever that closed the valve admitting steam. Thus, if the engine was running too fast, it was automatically slowed down. If it was running too slowly, the weights dropped, the valve opened, and more steam flowed in. So the term comes out of a mechanical history of steering and self-regulation, but it would quickly come to define the way computers would be built, and especially shape the early field of what we today call artificial intelligence.

The idea that drives these cybernetics conferences is that information moves through systems, and the systems respond to that information. The participants were interested in the question of how these systems respond and if it would be possible to build systems that adapt the way biological systems do. 

So they started thinking about these feedback loops in different ways. They took this simple idea of the responsive steam engine governor and abstracted it. They were able to zoom out and give it a more general frame: Sensing, Comparing, and Acting. Your ship has a goal, but it senses when its path is interrupted. It compares its new speed and direction against that goal, and then it acts: it recalibrates speed and direction. You can connect this to the model of McCulloch and Pitts: variables enter the neuron and influence it to act in certain ways. Like steering a ship.
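To make the loop concrete, here is a minimal sense-compare-act cycle sketched in Python. The scenario (a governor nudging a speed back toward a goal) and all of the numbers are illustrative assumptions, not anything taken from the conference papers.

    # Sense, compare, act: a toy negative-feedback loop holding a target speed.

    target_speed = 10.0   # the goal the system is trying to hold
    speed = 14.0          # the current, sensed state of the system

    for step in range(5):
        error = speed - target_speed   # compare: how far off the goal are we?
        speed -= 0.5 * error           # act: adjust back toward the goal
        print(f"step {step}: speed = {speed:.2f}")
        # sense: the next pass through the loop measures the adjusted speed again

Run it and the speed settles toward 10: each pass through the loop corrects a fraction of the remaining error, which is the governor's behavior described above.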

This abstraction was helpful because the cybernetics conferences were trying to do something unique for the time: bring people from all different disciplines into the same space. This included mathematicians, anthropologists, psychologists, engineers, ecologists, and biologists. And because these fields had become so specialized, they had some early difficulty communicating with each other. The language of a biologist and the language of an electrical engineer were terribly disconnected. The feedback loop gave them a shared vocabulary: in every system, people could see feedback loops and describe how they worked to one another. They could then talk about what influenced those feedback loops, and begin to draw common maps. From this, there is a direct lineage to the idea of making computers that act like giant electrical brains, for example. The idea of ecological systems also came together here, because biologists, botanists, and meteorologists could start seeing weather and plants and animals as parts of an interconnected system: something they had always known, but could now begin explaining to each other in common terms.

The next step in all this was the idea of building models that could represent complex, interconnected systems. So cybernetics led to computer simulations. Computers would be given the rules of a system, and then the computer would be given information as variables. The goal was to build models that could calculate not just the mathematics of business computers, but the mathematics of machines that could move around freely, or choose their own problems and their own ways of solving them.

ENIAC, 1946

Early computers were just mazes for electricity. To the right we see ENIAC, often described as the first general-purpose electronic computer, with two programmers in 1946. Electricity moved into some 18,000 vacuum tubes and was held there until certain conditions were met. Think back to that neural net: when a signal hits a certain threshold, it releases that energy. A computer works by calibrating a series of signals to connect or disconnect circuits. On or off. We call them one and zero, but on a physical level, these early machines understood them as the presence or absence of electricity in a tube, or whether light passed through a hole punched into a card or was stopped by the card.

So we'll jump ahead a bit, since we have this idea of light and circuits and tubes, and talk about the rise of complex, modern computing. We're going to watch a video from 1953, produced by Charles and Ray Eames for IBM. It starts with the telegraph system — and the telegraph, as a means of communication, is also a physical interface for sending information: its signal was transmitted by tapping out Morse code. The video moves from that to computing. It's a very good presentation of how this binary logic - on or off - can become incredibly powerful as you increase a machine's ability to process those decisions. Punch cards were the major leap forward at the time.

This entire video is genius, and a great inspiration if you’re a designer or interested in communication from the lens of information theory. But for now, let’s watch the section from 14:58 to 17:36.

The principles of computing at the time were pretty simple: things turn off or on, and that can happen in certain sequences. Those strings of off-and-on signals - what we call "ones and zeros" today - became the binary code that allowed machines to carry out more complex calculations. As you saw with that image of the dots printing a picture of a young girl, you can do a tremendous amount with simple on-or-off pieces of information if you pack them densely enough. And this is what computers aim to do: pack these binary decisions of yes or no into as small a space as possible, in the fastest way possible.
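As a toy version of that idea, here is a short Python sketch that "prints" a picture out of nothing but ones and zeros. The crude face in the bitmap is my own stand-in, not the image from the Eames film.

    # An image made of nothing but on/off decisions: 1 prints a mark, 0 prints a blank.
    # Pack enough of these decisions together, densely enough, and a picture emerges.

    bitmap = [
        [0, 1, 0, 0, 1, 0],
        [0, 1, 0, 0, 1, 0],
        [0, 0, 0, 0, 0, 0],
        [1, 0, 0, 0, 0, 1],
        [0, 1, 1, 1, 1, 0],
    ]

    for row in bitmap:
        print("".join("#" if bit else " " for bit in row))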

Now, we’re finally at a time when we might introduce art to this historical record. 

Part Two: Even Necktie Designers Can Use Electrons

In 1947, Popular Science Monthly published an article about the strange shapes produced by television test equipment, and suggested that industrial designers might turn to these abstractions for inspiration. Remember that the machines of that era were all electric signals moving directly through devices and onto screens. Television test equipment drew lines based on the electrical signals coming out of whatever it was measuring: you literally had a screen that lit up where electrons hit it, and the electron beam traced shapes and lines on the device. It wasn't quite a computer, but it worked the same way a computer did. Hence the article's title: "Even Necktie Designers Can Use Electrons."

Oscillons, Ben Laposky.

In 1952, Ben Laposky found this article and went about using an oscilloscope and a long-exposure camera to document the movement of waves on the screen of test equipment. Oscillons, the result of this experiment, is held up as an inspiration for computer art that would follow. The waves of the oscilloscope were captured onto the film negative and when they were developed, they looked something like what you see on the right.

This may or may not fit the criteria for computer art, but it was certainly an early example of electronically generated art, and it was built on these cybernetic principles of feedback. 

Meanwhile, computers started to get more sophisticated. We can jump forward in time now to the IBM System/360, which was considered small and lightweight for its time. It stored data on magnetic tape - those wheels in the towers you see in the back. This was an evolution from punch cards – but still modeled on this idea of the zero and the one in binary code.

Imagine this ramshackle device, bulky and heavy and difficult to maintain. But these electronic calculators also offered a very new way of thinking about information. How do you sort it, organize it, and communicate it in the most efficient way possible? 

By the late 1960s, computers, machines, and electronics had reached a level of sophistication where artists started to think about what they were and how to use them. In our first artist talk you'll hear about one of them, Gordon Pask, a cybernetician who created something called Colloquy of Mobiles. There are others, too, and we're going to look at some of them today. But before we get to images, I also want to talk about the early days of artificial intelligence through conversation, something called ELIZA.

In 1964, the computer pioneer Joseph Weizenbaum, who had designed computer systems for banks in the 1950s, created the first program that allowed people to speak with a computer rather than through a computer. Weizenbaum would almost immediately come to regret it. ELIZA was modeled on Rogerian therapy: a kind of psychology you often see parodied on sitcoms, where the therapist simply reframes your statements back to you in the form of a question, in order to get you to speak more deeply about your thoughts, feelings, and perceptions. Here's a video about ELIZA from the BBC.

The experience was quite a radical shock for Weizenbaum. He knew the computer had no capacity for understanding what was being said to it; he also saw that people who had worked on the project, and knew how it worked, were being sucked into long conversations with it. And then he watched as colleagues, against his wishes, started heralding it as something it absolutely wasn't.

Weizenbaum’s turn toward critique started with the reception of ELIZA, which he built to imitate Rogerian therapy (an approach that often relies on mirroring patients’ statements back to them). Although he was explicit that ELIZA had nothing to do with psychotherapy, others, such as Stanford psychiatrist Kenneth Colby, hailed it as a first step toward finding a potential substitute for psychiatrists. Weizenbaum’s colleagues, who supposedly had a sophisticated understanding of computers, enormously exaggerated ELIZA’s capabilities, with some arguing that it understood language. And people interacting with ELIZA, he discovered, would open their heart to it. He would later write in his book Computer Power and Human Reason: From Judgement to Calculation (1976) that he was “startled to see how quickly and how very deeply people conversing with ELIZA became emotionally involved with the computer and how unequivocally they anthropomorphized it.” He would ultimately criticize the artificial intelligence project as “a fraud that played on the trusting instincts of people.” — Abeba Birhane, Fair Warning. (Go read the whole thing).

So here we see, for the first time, how the imagination creates a relationship with a computer and becomes immersed in the story that the screen is telling us. And we started to see this trusting instinct – this sense of belief in the screen and what it shows us – as an incredibly powerful tool for storytelling and interaction and imagination. It shows us, too, how important it is to think carefully about how those tools are used, and toward what ends we deploy them.

You may have heard of GPT-3, the large language model, or ChatGPT, the conversational chatbot built on top of it. These technologies are more sophisticated than ELIZA, but I want to highlight here that even from the earliest days of interaction with these machines, people projected intelligence and understanding onto them. But ELIZA really isn't a person. And today's tools, again, are significantly more sophisticated, but the fundamental principle is the same: we create the illusion of a conversation with them.
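Mechanically, ELIZA's side of the conversation was little more than keyword matching and pronoun swapping. Here is a minimal sketch of that reflection trick in Python; the patterns and canned responses are my own illustration, not Weizenbaum's actual DOCTOR script.

    # An ELIZA-style reflection: match a keyword pattern, swap pronouns, and
    # hand the statement back as a question. Illustrative patterns only.

    import re

    REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

    def reflect(fragment):
        return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

    def respond(statement):
        match = re.match(r".*\bi feel (.*)", statement, re.IGNORECASE)
        if match:
            return f"Why do you feel {reflect(match.group(1))}?"
        match = re.match(r".*\bmy (.*)", statement, re.IGNORECASE)
        if match:
            return f"Tell me more about your {reflect(match.group(1))}."
        return "Please, go on."

    print(respond("I feel anxious about my work"))
    # -> Why do you feel anxious about your work?

There is no understanding anywhere in that code, which is exactly Weizenbaum's point: the sense of being heard comes from us, not from the machine.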

So let’s move to 1968. 

To bring you up to speed on where we are with the idea of artificial intelligence: in 1970, the AI pioneer Marvin Minsky told Life Magazine that in “three to eight years we will have a machine with the general intelligence of an average human being.” So back then you had people saying that AI was just around the corner: Minsky would have us at full artificial intelligence by 1978 at the latest. It didn't turn out that way. But people were excited. 1968 is important for AI art because it was the year we met one of the first sentient AIs, even if it only appeared in science fiction. This is HAL 9000, from Stanley Kubrick's 2001: A Space Odyssey. (Watch Video, 3 minutes)

So there’s a bit of an AI fever in the air, something that I think we’re in the midst of today.

But in 1968, Jasia Reichardt is a curator at the ICA in London, and there is a lot of interest among artists in generative techniques. We need to understand a bit of what was happening in the art world at this time. For one, you had movements like Fluxus, which came about in response to John Cage. Cage had written a piece called 4′33″ (four minutes and thirty-three seconds), in which a performer comes out on stage during a concert of new piano works and simply sits at the piano. Over time, Cage articulated what this was all about: it was about changing the way we listen, challenging the idea that music needs to be written by a composer and performed for an audience that simply sits still and enjoys it. Cage said something else: there's already music all around you. Tune in and figure out how to listen to it, how to recognize the coughing and footsteps and passing cars as if they were music.

Fluxus was a loose group of artists, international in scope, who responded to this idea. They were very interested in taking the artist out of art production. At the time, they were protesting a very affluent, powerful group: the art world of curators and museums, most of it well funded, often by wealthy philanthropists, and quite exclusive.

They weren't against the guy in the park painting watercolors for fun. They were in favor of what they called democratizing art. A lot of their work was produced in series, so they'd make 100 copies of the same piece in order to give them away. They also did performances where they would craft situations - they called them "events." George Brecht pioneered the practice of writing short instructions on notecards and handing them to people as scores for producing their own work of art. Most of this art was surreal or silly: the idea was to make art valueless, after all.

George Brecht’s “Universal Machine II.”

An example of Brecht's work is here, a direct response to Alan Turing's Universal Machine. Called Universal Machine II, this is a box of random images and words cut from newspapers and encyclopedias. The instructions suggest that the user should close the lid, shake the box, and reopen it to find new arrangements of ideas, words, and pictures. Here, he's criticizing Turing's idea of logic as the organizing pattern of human thought, and replacing it with chance and randomness.

In the 1960s you also had the rise of Pop Art. You had people like Andy Warhol, who was incredibly interested in producing images on a massive scale. So he took to screen printing: he'd take one image, often from sources like advertisements or news photographs, and turn it into a stencil, so he could run it again and again in different colors and make as many copies of his own work as he wanted. It was designed to be fast and easy to replicate.

So all of this is going on in the late 1950s and 1960s. There's a real shift in thinking about what art can be in relation to popular culture, and even, among a small subset of artists, about what computers and technology can do to transform and democratize art. The idea that art is this highbrow thing for the elite is coming apart, and the movement is asking: what else can be art? What else can we start to include as art?

This is the context in which Jasia Reichardt comes up with the idea of doing a show where machines make the art. And in 1968, that show opens: Cybernetic Serendipity. The title refers to two things: one, cybernetics, which you know from earlier in the lecture. And two, serendipity: introducing the idea of chance explicitly into the work. The computers in the show were generating material more or less at random, but within constraints the artists controlled.

Here’s a video that was made of that exhibition. Feel free to click around it as if you were browsing the ICA London back then, or watch it the whole way through. It’s a neat art-historical gem.

These works are the kinds of things you might have seen if you’ve been to a children’s science museum. In fact, the show traveled to the United States afterward, and it ran at a children’s science museum, the Exploratorium in San Francisco. And this created a new, popular interest in machines and art.

We're going to watch another video that shows you how one artist, A. Michael Noll, created a ballet with a computer. It takes you step by step through the painstaking process involved in making computer images at this time. Have a look. This one is just about 7 minutes.

So this is the era of really big machines. There's no off-the-shelf software like Photoshop or Premiere to help you make a movie. There are no engines to build on. If you want to make a movie with a computer, you write the code and run it. So I want to talk about one more artist.

Harold Cohen is arguably the first of these artists whose work really qualifies as what we would call artificial intelligence today. In the 1960s, Cohen was a painter with an impressive résumé, the kind of artist who shows their work at the Venice Biennale. By the late 1960s, he had begun teaching art at UC San Diego. That's where he started programming computers, working on the beautiful DEC PDP-11, which you see here.

In 1972, Cohen starts working on a piece of computer software that would eventually become AARON — software that could generate its own images. Once the program was running, Cohen's art was entirely co-created with this software. He describes his motivation:

AARON began its existence some time in the mid-seventies, in my attempt to answer what seemed then to be — but turned out not to be—a simple question. "What is the minimum condition under which a set of marks functions as an image?" On the simplest level it was not hard to propose a plausible answer: it required the spectator's belief that the marks had resulted from a purposeful human, or human-like, act. What I intended by "human-like" was that a program would need to exhibit cognitive capabilities quite like the ones we use ourselves to make and to understand images. … All its decisions about how to proceed with a drawing, from the lowest level of constructing a single line to higher-level issues of composition, were made by considering what it wanted to do in relation to what it had done already.

The first true AARON program was created with Stanford’s AI Laboratory. In those days, the approach to AI was knowledge capture: breaking activities down into a series of decisions. That chain of decisions and actions was a program. A computer that made its own decisions was “AI.”

AARON was designed to encode Cohen’s personal choices about lines and arrangements into a series of rules. When it drew a line, it was able to reference those rules to determine what the next line might be. The work varied because the starting line was randomly placed.
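To make that concrete, here is a toy, rule-based drawing loop in the spirit of what's described above: a random starting point, and if/then rules that decide each new line by looking at the lines already drawn. This is my own illustration, not Harold Cohen's actual code, which was vastly more sophisticated.

    # A toy rule-based drawing program: each new line is chosen by explicit
    # if/then rules that reference what has already been drawn. Illustrative only.

    import random

    # The starting point is placed at random, so every run produces different work.
    x, y = random.uniform(0, 100), random.uniform(0, 100)
    lines = []

    for _ in range(20):
        if x > 80:                  # rule: too close to the right edge, turn back left
            dx, dy = -random.uniform(5, 15), random.uniform(-5, 5)
        elif len(lines) % 5 == 4:   # rule: every fifth line is a longer vertical stroke
            dx, dy = 0, random.uniform(10, 20)
        else:                       # default rule: a short drift to the right
            dx, dy = random.uniform(2, 8), random.uniform(-3, 3)
        new_x, new_y = x + dx, y + dy
        lines.append(((x, y), (new_x, new_y)))
        x, y = new_x, new_y         # the next decision references this line's endpoint

    for (x0, y0), (x1, y1) in lines:
        print(f"line from ({x0:.1f}, {y0:.1f}) to ({x1:.1f}, {y1:.1f})")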

Today's AI image systems do not generally work with individual artists articulating themselves in this way. Instead, they break images down into pixel coordinates, find repeating patterns, and tap into those patterns to create "new" work. These systems "learn" how to do this, rather than being explicitly told via if/then statements, like AARON.

Early on, AARON presented drawings that Cohen would color in with pencils or paints; later, as Cohen expanded the code to new capabilities, the drawings became more colorful on their own. These works - the lines, shapes, composition, and even colors - were generated algorithmically. They were not produced by the kinds of AI we see today, but defining AI is a notorious challenge.

As an artist, Cohen wasn't interested in creating an autonomous AARON. He could have coded it to evolve, to make changes to itself, but chose not to. Compared to today, it is harder to find any expressed desire for fully autonomous art machines in this period of history. Most artists were interested in augmentation. That may reflect the limits of the available technology, or the general ebb of interest in it.

Nonetheless, by the 1980s, inspired by watching children draw, Cohen programmed AARON to respond to itself, starting drawings by making its own stray (random) mark. This was the closest he got to full autonomy before taking control back from the system.

According to an article from the Association for the Advancement of Artificial Intelligence, in 1995 AARON was “one of the most creative AI programs in daily use.” They describe what AARON could do at the Boston Computer Museum (64):

“His machine would compose images of people in rooms, then draw them, mix its own dyes, and color the drawings. This exhibition turned out to be the apex of AARON’s career as an autonomous representational artist. Representational painting was evocative, but not in interesting ways, and although Cohen loved to interact with gallery audiences, he worried that the spectacle of the painting machine detracted from the art itself.”

Cohen was a painter, so his idea of art was more traditional than that of artists like George Brecht. But he was more directly engaged with the question of machine-generated art. In a 1974 article, On Purpose, he writes:

“Any claim based upon the evidence that 'art’ has been produced would need to be examined with some care, and in the absence of any firm agreement as to what is acceptable as art we would probably want to see, at least, that the 'art' had some very fundamental characteristics in common with what we ordinarily view as art. This could not be done only on the basis of its physical characteristics: merely looking like an existing art object would not do. We would rather want to see it demonstrated that the machine behavior which resulted in the 'art’ had fundamental characteristics in common with what we know of art-making behavior.”

He goes on to say:

“We would probably agree, simply on the evidence that we see around us today, that the artist considers one of his functions to be the redefinition of the notion of art.”

In other words: we might be tempted to evaluate art produced by machines through comparisons to what art has already been, but art, at least since the 1960s, is itself dedicated to the question of expanding what art is.

Cohen considered the relationship between artist and machine as primarily driven by purpose. An artificial intelligence program behaves with purpose. When you look at how the machine's purpose is reflected in its decisions, however, there's a notable distinction between Cohen and his machine. The machine, after all, didn't invent a purpose for itself, as an artist would. Instead, the artist creates the machine for some purpose. The purpose, he says simply, is to draw lines. And the purpose of those lines is to make the next line.

While AARON drew (literally) from a series of possible lines, I don’t see this as very different from what image synthesis models do, aside from the size of the data available to them. Obviously, today’s models are denser, and reduce things down to relationships which can then be reassembled. The size of the training data — and corresponding vastness of the parameters in the neural net — certainly enhance the complexity of what they can produce, but that alone doesn’t suggest a fundamentally different purpose from AARON.

“Creativity … lay in neither the programmer alone nor in the program alone, but in the dialog between program and programmer; a dialog resting upon the special and peculiarly intimate relationship that had grown up between us over the years.” 

Soon after Harold Cohen died in 2016, AARON was wiped out by a power surge from a lightning storm. Weird, but true.

Part Three: AI Hype & Winters

In conclusion: we've focused largely on 1968. Minsky promises full artificial intelligence by 1978; Stanley Kubrick has us flying across the galaxy in spaceships piloted by AI by 2001. The future seemed very bright for AI, because researchers had seen the trajectory of just a few decades. Machines the size of factories were now the size of rooms. Everything was getting smaller, denser, more complex. There were breakthroughs in modeling and feedback and information science.

What we got instead was what we call an AI winter. Artificial Intelligence investment moves in cycles: when there’s a breakthrough, lots and lots of progress happens. Then, it stalls. When it stalls, so does investment. 

In 1973, the UK government's investigation into AI concluded that much of the field was hype. In its words: "In no part of the field have the discoveries made so far produced the major impact that was then promised." The resulting Lighthill Report led to the suspension of UK government support for artificial intelligence research.

In the United States, it was DARPA - the Defense Advanced Research Projects Agency - that stopped funding AI research, but that was because of a 1969 law that changed the way science research was funded by the US government. Most research in the 1960s was funded based on mission alignment: DARPA wanted artificial intelligence, so it funded people who did AI research to do whatever they wanted. The Mansfield Amendment instead focused government funding on deliverables: a researcher had to show what the outcome of their grant would be, or was anticipated to be. So there was less funding to simply try things, and more focus on funding projects that could show real, concrete results. When AI consistently failed to show results, DARPA's funding for it dwindled.

Most research was left to companies like IBM or Xerox. And AI kind of went to sleep, at least in the sense of that initial optimism, for much of the 1970s and 1980s, what we call an “AI winter.” It became focused on solving very narrow management and business issues. In the 1990s, most research shifted to the internet, and that research was commercial. With the Web, communicating with large groups of people in real time was as promising a revolution as anything AI could dream of. Eventually, this new technology was going to be the basis of an information revolution that would herald an AI spring in the late 2010s. 

The first AI boom gave us flight automation software and automated teller machines. Nothing quite as inspiring as household robots that paint. Until now — when it seems as if overnight, that’s exactly what we have.

In the next class, we’ll look at AI since 2001 — and how its meaning has shifted as we accumulated mountains of data from social media and the World Wide Web.

Want More Stuff to Do?

Here’s a great conversation I had with Paul Pangaro, the President of the American Society for Cybernetics. Paul was a student of Gordon Pask, and worked on a reconstruction of Gordon Pask’s Colloquy of Mobiles. He talks about those experiences in the video below, as well as outlining key differences in what AI means in cybernetic terms and what we have in AI today. Paul’s talk was originally a guest lecture in class three.

If you’d like to read more on Gordon Pask and contemporary AI, I have a paper for you. I wrote it for the Aksioma Festival. Check it out here: Conversations with Maverick Machines.

More To Read:

More to Watch:

Works Referenced in this Lecture:

  • Up and Atom (YouTube channel). Computer Science Was Invented On Accident [Video].

  • McCulloch, W.S., Pitts, W. A logical calculus of the ideas immanent in nervous activity. Bulletin of Mathematical Biophysics 5, 115–133 (1943). https://doi.org/10.1007/BF02478259

  • Eames, C., & Eames, R. (Directors). (1953). A Communications Primer [Film]. Eames Office, LLC.

  • Birhane, Abeba. (2020). Fair Warning: For as long as there has been AI research, there have been credible critiques about the risks of AI boosterism. Real Life. https://reallifemag.com/fair-warning/

  • Kubrick, Stanley (1968). 2001: A Space Odyssey.

  • Brecht, George (1965). Universal Machine II.

  • Reichardt, Jasia (1968). Cybernetic Serendipity (Interview). BBC Late Night Lineup, via YouTube.

  • Cohen, Harold (1994). The Further Exploits of AARON, Painter. Stanford Humanities Review. (PDF).

  • Cohen, Paul. (2017). Harold Cohen and AARON. AI Magazine, 37(4), 63-66. https://doi.org/10.1609/aimag.v37i4.2695

  • Cohen, Harold (1974) On Purpose: An Enquiry Into the Possible Roles of the Computer in Art. Studio International. (PDF).