New Macy Session 3
Building on the questions refined in New Macy #2, New Macy #3 aimed to move closer to concrete prototypes — or at least to reframe abstract questions in more concrete, actionable terms.
This is a reassembled transcript, clustering connections under my own subjective rubric, and should not be considered an objective record. If you are a participant who would like to have your comments removed (or just your association with the comments removed) please let me know — I’m eryk.salvaggio at gmail.com.
Introduction
Paul Pangaro introduced the session by revisiting the origin and intent of the New Macy series: to create alternatives to “the pandemic of today’s AI.” Moving beyond the transdisciplinary gatherings of the original Macy conferences of the 1940s and early 1950s, Paul called for a series that is “transglobal, diverse and inclusive, transgenerational and open to all worldviews.”
Paul defined the goal of New Macy 3 as “Conversations that lead to agreements which lead to coordination which lead to change,” fanning out beyond the Macy gatherings “toward concrete actions.”
Four themes, refined in New Macy #2, were presented to groups in self-assigned breakout rooms. New breakout rooms could be created to diverge from those questions, pose new questions, or pursue unique frames on the same question.
Themes
The initial themes, each with its own facilitator, were:
Disciplines. The dissection of education into disciplines encourages modes of interactivity that valorize digital binary frameworks (science) and dismiss analogic ones (arts). This needs to be challenged. Could cybernetics assist us in developing methodologies and/or technologies to address the issue? How so? (Facilitated by Eryk Salvaggio)
Sustainability. Automated and interactive systems should be designed for environmental sustainability and the long-term viability of all we cherish as human beings. Are there any cybernetic precedents or ideas that could inform the development of such technologies? What would it take to support their development? (Facilitated by Michael Munton).
Conversation Machines. Technologies should be actualized in the form of conversation machines. Could Paskian machines, Pangaro’s update to Thoughtsticker, Beer’s VSM, and Bernard Scott’s educational technology be considered models or precedents? Why? Are there other examples? (Facilitated by Larry Richards)
Integrative Approach. Increasingly intrusive AI needs to be countered by an approach to systems theory and practice that integrates design and the arts. Cybernetics is such an approach, encouraging improvisation and creativity, drawing on multiple levels of consciousness and connection and involving both humans and the world of which we are a part. What are the precedents or models one could build on? How could one develop this further? (Facilitated by Claudia Westermann)
Format
Here’s how the conversation was structured.
Participants chose a topic of personal interest and joined a breakout room.
Participants took three minutes to write down initial notes (to avoid “groupthink” at the outset and to help focus the idea through writing).
Each participant shared what they wrote in less than one minute.
Discussion followed to find common and distinct points, with an eye toward consensus on what could move forward into something concrete.
Participants left the breakout room for a wider discussion and feedback with everyone.
New breakout groups were selected for a longer discussion of the revised topics.
Breakout groups discussed their topics for 20 minutes.
Break.
Participants returned to the larger session, where everyone reported back and gave feedback.
Outcomes
The following sections are sorted by theme. Though the themes were developed over multiple sessions, the transcript has been edited so that the discussion for each topic area appears in one place. Verbal conversation has been combined with the text chat from the Zoom call.
Disciplines
The dissection of education into disciplines encourages modes of interactivity that valorize digital binary frameworks (science) and dismiss analogic ones (arts). This needs to be challenged. Could cybernetics assist us in developing methodologies and/or technologies to address the issue? How so?
(Facilitated by Eryk Salvaggio)
Eryk Salvaggio: There was a lot of discussion of whether themes one and four needed to be understood in relation to one another. We moved forward anyway, to explore how we might reframe ideas of disciplines around cybernetic models. We wanted to understand systems that worked according to at least some of the following words, which we need to unpack and explore.
First, questioning the idea of the subjective and objective position. Thinking about decolonizing arts and sciences — and cybernetics, more broadly. We came to the idea of working toward bottom-up, iterative, democratic, inclusive approaches with curiosity at their core.
Michael Hohl: I accept the validity of the question. Education is fragmented, how can cybernetics change it? But I’m interested in what would make such a cybernetic approach distinctly “cybernetic.” For me, that is bottom-up, iterative, and involves a long-term perspective. There’s feedback involved, democratic decision-making taking everyone into account. Curiosity, humility, listening and understanding, and making room for learning, error, and course corrections. This would make an approach distinctly cybernetic. Then we could discuss how this might apply to education, for example.
Kate Doyle: The separation of themes one from four is fundamentally a problem, because this idea of embracing error, experimentation, and curiosity plays out in what theme four addresses. So we’re missing some of the pragmatics or following-through in the idea of education if we aren’t addressing it within the context of theme four.
Paul Pangaro: So we can convene another group that connects discipline groups (1) and integrative approaches group (4).
Phase Two
Michael Hohl: Our conversation started with ‘education is fragmented, our world view is fragmented as a result — how can cybernetics help approach this problem and empower us?’
Eve Pinsker: I have a problem with the shorthand when people say ... 'can cybernetics contribute?’ or 'can cybernetics empower'? I'd rather say … the cybernetics community (then you ask which people) or 'cybernetic ideas' (then you specify which ideas...)
Robert Johannson: Empowerment is giving someone the authority to make decisions for the group.
Eve Pinsker: Well, other people use empowerment to mean giving a wider group of people the platform or opportunity to meaningfully contribute to decisions made that concern them. ... We have to be wary of reifying cybernetics as a “thing,” we’re talking about the cybernetics community or cybernetic concepts. Then we can specify which concepts.
Michael Hohl: So, what would make an approach distinctly cybernetic? This was the last session, bringing in a systems view (because you only see what you know), it’s bottom-up, iterative, long-term perspective, feedback, democratic, curiosity, humility, making room for learning and course correction.
So we looked at examples of people already trying to do this in practice — maybe not through a cybernetic lens, but with a systems thinking approach of integrating knowledge and connecting it to heritage or traditions. This contributed to the more systemic, open, connected, humble mindset of a lifelong learner. For example, Fritjof Capra and Alice Waters integrated teaching and learning in the SF Bay Area. There are school gardens where the maths teacher, the arts teacher, and the biology teacher devise a lesson together — about Mendel’s law of inheritance, say — then go out into the school garden, where the children draw and paint the beans, learn something about Mendel’s law, and apply it to maths.
This is very much connected to what Heinz von Foerster was thinking and doing in higher education in the 1970s, or Neil Postman’s “Teaching as a Subversive Activity,” or Paulo Freire. All these educators, even Dewey, emphasized the importance of feedback and reflection. In Hawaii, we have the Kohala Center and the Kamehameha Schools; Schumacher College in the UK serves older students.
So we ended up with the question, how can cybernetics contribute to changing education in general, to lead to this integrated way of knowing, and getting away from this disassociation and fragmented worldview to get to a more integrated worldview? To integrate disciplines but also to integrate knowing.
Fred Steier: One of the things that stands out for me about the Kohala Center’s work is that, when thinking about how to build it, they decided the entire island was the science center. That invited other people in. And the origins of that were expressly cybernetic in terms of rethinking boundaries and the role they played.
Ben Sweeting: There was a discussion of art and problem solving, and it seemed, in the context of AI or automation and optimization, these are things that come out of a “problem solving” mindset. A lot of the problems we have come out of solutions. And one of the things art does is the problematization of solutions. So maybe we have to have ways in which how we solve problems can become problematized.
A second group emerged to combine the themes of disciplines and integrative approaches.
Disciplines + Integrative Approaches
Eve Pinsker: Here are the key points of the second discussion. We wanted to emphasize arts as facilitating learning, and to support the flexibility of organizing systems. Out of that came the need to integrate art into very early education — pre-school, kindergarten. Then there’s the question: what is a creative AI? How do we embody that? Then we came back to the dissection of education into analogic and digital frameworks, which is a problem. How would a more integrative education play into the way we design, and support the design of, more flexible, human, and creative systems? We talked about reimagining what has come before, which is lost in the imagining of AI today.
There are opportunities. We also talked about building on the arts to cut through problems and build a global response — Live Aid was an example. It showed that communication technology can support endeavors that benefit the global community. That kind of technology can go beyond a structured format into powerful alternative forms of sharing, like music.
We’ve reinvented the environment of human life, and we’re seeing generational change. We need new metaphors. Superman changed his clothes in a phone booth, well, what’s a phone booth? So we need new ways of communicating these ideas.
Kate Doyle: Part of the reasoning for bringing together the disciplines and integration themes is that we can’t just address dissections within an educational setting; we have to consider the dissection in the professional workforce. If we consider adjustments to the structures of education, we have to think about how that affects things beyond that particular domain. We talked about the idea of intervention and reciprocity, in addition to what Eve summarized, but key was the idea of exchange between education and what lies beyond it.
Damian Chapman: We were talking about balance, and the opportunity of interaction. How do you engage that not just on a local level but across borders, so the interactions spread beyond a particular group? That was the world-changing communication of Live Aid. Education is a wonderful space that isn’t just about our youth but about all of us. It’s creative and develops new ways of thinking that none of us have yet imagined; the potential is phenomenal.
Fred Steier: One thing that came into focus in merging the two groups is that the way we see the world — the way we frame disciplines in education — also shows up in everything else that we do. And AI is reinforcing that, in terms of how we think in the logic of menus, for example. How do we bring creativity and music into that example? Or think through different metaphors through music?
Michael Hohl: In my view, art is perhaps not the best way of looking at problems or solving problems. Maybe it’s a way of looking and identifying problems. If you look at traditional arts education, it’s about serendipity, inquiry, always personal and often self-centered. So what makes an artistic approach distinctly artistic? The open-endedness? The playfulness? There are many ways of doing art. Is it a mastery of something? Sometimes I supervise art students. I’ve come to realize they work really really hard, at least as hard as designers, but art is a way of asking questions, whereas designing is a way of solving problems and providing answers, simply put and really coarsely.
Eryk Salvaggio: It's my axe to grind that art has a role in illuminating and revealing problems to be solved, and in creating broader views of the "solutions...”
Ben Sweeting: In the context of automation/optimisation algorithms, perhaps we need less solving problems and more problematising solutions.
Gopal Tadepalli: Just wondering if a reimagining of what has gone before is lost through AI or repackaged through AI? We may not be losing much. We are just repackaging.
Robert Johannson: The nature of art is a communal nature. When you’re writing a play, plays are re-written. As a playwright you have to talk to the directors. The actors talk to the audience. The audience has a reaction. As a company, you have to look at the audience and what the audience is doing. That changes what you do as a company. Every audience is different, but there are some things that work and some things that don’t work. It’s a communal project. A play is play. One of the most fundamental biological characteristics is play. Without play, you don’t have a human.
Integrative Approaches
Increasingly intrusive AI needs to be countered by an approach to systems theory and practice that integrates design and the arts. Cybernetics is such an approach, encouraging improvisation and creativity, drawing on multiple levels of consciousness and connection and involving both humans and the world of which we are a part. What are the precedents or models one could build on? How could one develop this further?
(Facilitated by Claudia Westermann)
Claudia Westermann: There was a sense that “increasingly intrusive” was not strong enough, that we are already at a stage that is a true emergency. Additionally, the idea emerged that “AI” should probably be accompanied by some commentary that it is soul-less, and that the call for it to be balanced should acknowledge that the monster cannot be “balanced.” The question may need to be rewritten, possibly to account for the difference between design and art.
We need examples of how to make AI more transparent. Regarding counter-balancing, we could consider displacement as something intrinsic to the ‘human’ social and political realms that is not considered part of today’s AI. We should also consider ethics, creativity, intuition, the unexpected, and expecting the unexpected. Institutions could take biological systems into account. Art and design, then, provide a unique opportunity to access this “other” which is NOT AI.
Damian Chapman: We talked about how the potential “Other” might be enabled through intervention, and how that intervention may be sited in the creative realm, potentially through the arts, which provide a very constructive access point.
Claudia Westermann: So we should emphasize intervention.
Peter Jones: There are algorithms and bots that change data, system conditions, media content, etc. in all the social media we use. Algorithms change interactions completely without our awareness. This insidious mission-creep of the uses of machine learning is maybe what's meant by "increasingly intrusive." A real systemic design issue for me is the inability to access or change the institutions and firms that develop or contract AI for "strategic" purposes, or to use tech to exploit human agency and data for corporate and policy goals that remain hidden and inaccessible to change by balance or feedback.
Jamie Rose: AI systems were developed initially for the benefit of humanity. At some point, it became an issue where it became its own priority — the preservation of AI systems regardless of whether it was for the benefit of humanity or not. And that’s an issue we’re facing every single day.
Peter Jones: Jamie, AI is now embedded in the "tech stack" for many cloud services - so then the question is "what technology can be trusted and why should we?"
Phase Two
Claudia Westermann: So the question as we started was about increasingly intrusive AI. In our discussions, there were numerous questions. Do we define AI more specifically, this soul-less golem? Do we talk about balance, or can you consider balance with a monster?
We need examples of how to make AI more transparent. And responding to balancing, we might consider displacement. Displacement is a principle pointing to what is intrinsic to the human, social, and political. Ethics, creativity, intuition, and expecting the unexpected must be considered when we design systems, including institutions.
Art and design allow for unique opportunities to provide access, through intervention and its capacity to reveal and disclose what is hidden, to us humans who are not AI or autonomous systems. We will need to think more about what this is, the Other, the soul-less, the Golem.
Ben Sweeting: I enjoy this connection between transparency, reveal/disclose, and Golem. What if the concern with transparency in typical AI ethics was a concern with disclosure?
Jeff Glassman: I want to add a different sense of what that Other refers to. It had to do with, what is other than AI? What do we share widely, publicly, globally, that we have in common that is not within the expanse of AI. So the Other is not exactly related to the Golem or the Soul-less. The Other might be the opposite of that.
Claudia Westermann: That could use more clarity. For me, the question is also what allows cybernetic concepts to discuss activities of revealing/disclosing. These are words that would appear in the discourse, whether by Martin Heidegger or not, collective/center, this kind of discourse. Is that similar to what cybernetics does? Should it be different and distinct, because Heidegger didn’t take steps into practice?
Sustainability
Automated and interactive systems should be designed for environmental sustainability and the long-term viability of all we cherish as human beings. Are there any cybernetic precedents or ideas that could inform the development of such technologies? What would it take to support their development?
(Facilitated by Michael Munton).
Michael Munton: We’re interested in cybernetic technologies or lenses to build or foster inter-species feedback, communication and reciprocity.
Igor Perko: We went into the reciprocity processes — basically learning — and into interactions to develop relationships that are not with humans but with, basically, “all we cherish as human beings.” The first interactions were with animal and plant life, protecting and evoking them. Another was interesting because it can build a bridge to theme four (Integrative Approach): the interaction with artificial intelligence technology. Can it be, at some point in the future, as cherished as human beings? That evokes learning and evolutionary development.
Fred Steier: We were talking about the importance of holding on to variety. Reciprocity means holding on to the idea of variety in changing circumstances.
Carlos Castellanos: Taking into consideration non-human organisms — can we make conversational machines, let’s say, for sustainability that take into account organisms we may treat as resources? Phytoplankton, for example — 60% of the oxygen generated in the ocean comes from them. Could they have a say in how they are used and not used? Can we have interspecies communication that leads to some kind of agreement between very different life-worlds?
Igor Perko: We would need to redefine language.
Carlos Castellanos: And where do we put system boundaries? Is an organism the boundary, or a super-organism — the way a human is a collection of cells? Where do we put those boundaries?
Phase Two
Michael Munton: We raised the question “what is sustainability” and who defines it — and what’s being sustained, and is sustainability static or dynamic? So we’ve expanded our theme to say this:
“Automated and interactive systems should be designed for environmental sustainability and the long-term viability of all we cherish as human beings. Here we draw upon precedents in cybernetics to build a model for conversations around ‘sustainability,’ moving towards meta-structures which facilitate the emergence of ‘sustainability’ concepts and imaginaries.”
So, we don’t want to define sustainability ourselves but rather want to introduce meta-structures that allow many voices to define it in an ongoing and adaptive conversation. So there are questions that serve as facets of a conversation, or prompts to get to what we mean by sustainability. For example:
What is sustainability? Is it static or dynamic?
What is the relation to viable, learning systems?
How do we define a framework for conversations?
What cybernetic models can we build so that others can communicate, participate and collaborate?
How can we build something or create experiences for collective, coordinated action?
What are meta-structures?
Who are the stakeholders, who are the customers, and what are the borders?
How can non-human views contribute to the conversation and to the design of future systems?
What is being sustained?
How do we build, learn, and grow?
How do we self organize and promote organic growth and emergence?
What are the sustainable design principles?
How can AI be designed to promote sustainable objectives?
Is AI inside or outside? Do these distinctions need to be reframed for AI?
Can a model represent what is being sustained, a concept that changes, and considers the different definitions?
Deborah Forster: Add the notion of ‘resilience’ to qualify and bring more nuance to sustainability. Maybe because we can then bring into question which pieces of what we had, or do now, we actually want to sustain?
Paul Pangaro: Is there a way to get a grip on this and move it to something more concrete and actionable?
Igor Perko: Actually it can, because if you’re enlarging the understanding of ‘sustainability,’ there are several questions you need to open. Those are standard questions for stakeholders — (broken up) — and that’s the next question, because we’re talking not only about human but non-human views of sustainability. That’s a really fine understanding of what is sustainable. ... In particular, what cybernetics could do is open the discussion of this particular view.
Larry Richards: There’s a body of knowledge called doughnut economics. It is an attempt to find a balance worldwide between the resources needed for basic human needs and the long-term sustainability of those resources. If we were to take on doughnut economics, it would require some form of AI device or programming to continually monitor those aspects of world sustainability. My reading is that what they’ve done represents current best knowledge, which can change over time, and such a system could be there as the need arose.
Conversation Machines
Technologies should be actualized in the form of conversation machines. Could Paskian machines, Pangaro’s update to Thoughtsticker, Beer’s VSM, and Bernard Scott’s educational technology be considered models or precedents? Why? Are there other examples?
(Facilitated by Larry Richards)
Adler Looks Jorge: It was a really rich conversation, especially looking at a different approach to the role of technology. We probed into the definition of a conversation machine, and what we would do with whatever it is. So we looked at a few approaches, for example, absorbing distractions, augmenting our conversations, or being a container for conversations, such as the VSM. And the last one, we talked about how technology could enable the movement from conversation to participation.
Larry Richards: Is there a role for hierarchical, sequential structures in our machines as we move toward more conversationally friendly AI? And we talked about therapy, going back to Eliza, is this a conversation machine or not? Why not?
Paul Pangaro: Reminds me of Pickering, a conversation after the last session, thinking about the nature of a stimulus from something like Eliza. It stimulates a conversation with myself even if it’s not participating. So there is a list of dimensions to think about what occurs in the course of conversations with or without machines, to varying degrees, depending on the instrumentality of the machine and its sophistication.
Deborah Forster: Vygotsky’s approach to development views conversations and interactions between individuals as just the first step to having conversations with yourself. What we do between is a model for how we converse inside. That happens in development, but it could also be a model for something we orchestrate more and more deliberately.
Iannis Bardakos: And of course [we have] the observable or emergent absurdity, or not, between the conversation between two Eliza-based machines.
Eve Pinsker: It occurred to me as we were discussing this and related points in previous meetings — I don't think anyone mentioned Roger Schank's work? His work (Tell Me a Story in 1995 and his subsequent work in education and learning), and the work he and Abelson did earlier on scripts-as-stories-as-AI, seems relevant to this question on conversation machines.
Phase Two
Larry Richards: This is more my reaction to the group discussion. If we want machines to facilitate human well-being — which I assume includes the interactions and conversations that we have with each other in language — and we know machines themselves cannot generate conversations, yet we want machines that are “conversation friendly,” how might we think about and proceed to imagine such machines? What would be their function in the world? What would be the minimal requirements for design? How do we as social beings have to change in order to realize the value of such machines? Where, if at all, does biology enter this enterprise?
I add that because it came up from Deborah. But also — remember that one of the originating cybernetics laboratories was the Biological Computer Laboratory — is there still relevance today?
Stuart Umpleby: I’ll add a comment since I spent some time at the Biological Computer Laboratory. I often puzzled over that title; I didn’t know there were biological computers around. But human beings are biological computers if you think of the nervous system as a computational device. Heinz von Foerster was interested in cognition, doing neurophysiological experiments on cognition — the Maturana article, “What the Frog’s Eye Tells the Frog’s Brain,” being an example from BCL. It defines computation broadly, treating all communication as computational activity: what am I doing, what am I trying to achieve, what is an appropriate strategy? That can all be defined as computing. And if you think of all cognitive human activity as computing, and that’s what you want to understand as cognition, then you have the Biological Computer Laboratory.
Paul Pangaro: It was cybernetic in the broadest possible way. His proposals were always difficult for those in the digital realm to comprehend... is that too harsh?
Stuart Umpleby: No. They saw computers as electronics and digital, and this was a different idea.
Paul Pangaro: And that’s what New Macy is moving toward: to find analogical frameworks — analogic interactive frameworks — as opposed to the digital interactive frameworks that the tech industry is obsessed with, the cheaper, faster, smaller von Neumann machines that have been imposed on all of us.
Jeff Glassman: Adding to references to the BCL work, there was the proposal for a Socially Beneficial Information Processor (SBIP) that came from Herbert Brun and Marianne Brun. They input that into BCL and ASC conversations. It wasn’t taken up sufficiently. This was before the Internet and the web of course. They also brought students from “the arts” into the BCL seminars, and they themselves were part of it all for sure.
Stuart Umpleby: Some of the early work done at BCL was the creation of analog computers. The younger people among us may not know what an analog computer was or looked like, but before digital computers there were analog computers: electronic wired devices that people used to solve certain kinds of problems. Then the world moved on to digital computers. But Heinz von Foerster was also involved in computer music and dance, and many other things. He was interested in understanding cognition, and viewed neurophysiological research as a way of understanding how the brain functioned. Keep in mind that one of his approaches was to study neurophysiology, and that he held appointments in physics and electrical engineering.
Adler Looks Jorge: So there was a gap between analog and digital computers?
Stuart Umpleby: Not a gap, a natural evolution. But Heinz was very interested in the arts. The last project held at BCL was this big thick book, The Cybernetics of Cybernetics, which was organized like a website — in 1974. They put holes in the book and you could put in a needle to jump to different sections, long before the existence of the World Wide Web. People who were associated with BCL always had pleasant memories of it, because Heinz ran it like a Master of Ceremonies. You can imagine him orchestrating it like a three-ring circus. But it was hard for other engineering faculty members to understand what he was doing, and some considered him a charlatan or whatever...
Paul Pangaro: A magician.
Stuart Umpleby: Yes, a magician, which he was as a child. And he had this magician’s way of taking you on one long loop only to take you back to somewhere from where you started.
***
Gerard de Zeeuw: I would like to continue with what Stuart was saying, and answer the questions. If we are talking about machines as supporting individuals, then apparently there is something in individuals that goes beyond the machine. So the answer to many of the questions proposed here is actually: every combination of the human and the machine such that the human being is in control, is what we want to actually achieve. So that’s all, that’s my proposed answer to those questions.
Paul Pangaro: You use the word ‘control.’ The humans were in control. How would you then limit the agency of the machine?
Gerard de Zeeuw: The word ‘control’ is obviously laden. What we are talking about, suppose I use a bicycle. A bicycle is a machine. If I control the bicycle, the bicycle will function in any way that it is possible for it to function — as a machine. At the same time, the combination of the bicycle and me is able to do a lot of things, like going somewhere, but also, to create a flying machine, for example, or creating an electric generator. That’s what I mean by control.
Paul Pangaro: And Steve Jobs famously called a computer a “bicycle for the mind.”
Deborah Forster: Today in robotics and in AI and machine learning, they talk a lot about shared control. In biological systems, maybe the best example of shared control is walking. The lower nervous system generates the pattern of gait, but the intention of where to go is more cortical. When you say you are going to get a cup of coffee, you’re activating shared control between parts of the nervous system. So that generalizes well.
The first Macy was a moment of humility, a brave new world — we had just gone through two world wars. It was time to be open to reconsidering boundaries between biology and machines, and that’s the spirit we have to conjure up today. And it’s already being performed. Today, there are autonomous vehicles navigating city intersections. If that’s not conversation, I don’t know what is. A driver in China fell asleep while in the car, largely because the autonomous vehicle was reading bicycles and people and cars and making decisions about how to get across an intersection. Look at a Roomba. We’re performing machine conversations already in our world. The boundaries are already porous. So how do we get our conceptual frameworks and meta-conversations to match what we’re already acting out in the world?
Paul Pangaro: Conversation, as you just described it, is more contained than the conversations in which there is an exploration of collaboration, intention, and possibility rather than execution. I would add: what’s the boundary of the conversation we’re talking about? What’s its scope? But I appreciate so much of what you’re saying — the nature of today’s interactions between humans and machines. It’s a rich interaction of communication, such that we’re pushing the boundaries of the emergence of new possibilities for co-design. That was what Negroponte had intended but never reached: to have a conversation about designing, rather than just a conversation.
***
Ben Sweeting: “If we want machines that are ‘conversation friendly’…” This makes me think — what is the conversation about, and what is it like? It’s not that ‘conversation friendly’ will necessarily equal it being ‘good’. It’s like ‘self-organisation’: it sounds like a good idea, but it’s also neoliberal economics and Facebook echo chambers and so on.
Richard Bye: If you fly a kite, are you in control or are you just holding the string?
Jamie Rose: The kite is a conversation with the wind.
Ron Villalon: Just a thought: if you want a good conversation, it can’t be limited in any sense — it needs to “eat variety” to find better future possible conversations and goals.
Eve Pinsker: Roger Schank's point, related to building AI systems that are 'conversation friendly,' is that conversations are built on shared stories, and sharing stories. And stories circulate and grow in communities. Lack of a shared understanding of stories provides boundaries between communities. What we don't want AI systems to do is to impose stories.
***
Sebastian Benthall: Arguably, Zoom, which we are using now, is a conversation machine. Or, more widely, the moderated New Macy process that involves Zoom, Google Slides, etc., is the effective use of machines for conversation. It would be interesting to think about how these tools/processes can be expanded for New Macy's own conversations.
Adler Looks Jorge: Conversations vs. Interactions… with (vs. within) machines
Paul Pangaro: Sebastian, I prefer a distinction between a conversation channel (such as Zoom) and a conversational machine — a non-biological agent that contributes to the goals and means of intentions. But that being said, I want what you want in tools and processes for our New Macy conversations.
Larry Richards: I would argue that Zoom is not a conversation machine, nor is what we are doing here a conversation.
Eve Pinsker: Then what is it?
Larry Richards: What we are doing here is interacting through a machine, which may be stimulating conversations I have with myself, but the intensity of dynamics essential to conversation is not present or possible.
Paul Pangaro: What dynamics?
Larry Richards: Gestures, movements, visuals, three dimensions, environmental effects, mutterings, etc.
Paul Pangaro: Why do we need those things? Why are ‘these words’ so limiting? What is missed?
Larry Richards: It's in the dynamics. Without the dynamics, it is not conversation.
Sebastian Benthall: Unsettlingly perhaps, that might imply that a more soulful AI would need to be more intrusive, in order to have the bandwidth to engage in conversation.
Paul Pangaro: But Larry, you’ve not sufficiently defined ‘dynamics’, which you seem to hold as a specific, special idea.
Andrei Cretu: The term "dynamics" is sometimes used as shorthand for "dynamic range" which essentially corresponds to the notion of variety.
Claudia Westermann: Larry, can we measure these dynamics?
Sebastian Benthall: I agree with the direction of Paul's question -- I can't count the number of significant conversations I've had that have been through text correspondence. Even with paper/snail mail. On the other hand, I think one of the more interesting issues raised so far is the biological aspect. It reminds me that natural language is embodied -- consider Lakoff's linguistics.
Larry Richards: Dynamics — a separate domain from relations; where change is fundamental. A pattern of changes not of content or relations.
Paul Pangaro: Larry I love the care you bring to these ‘conversations,’ but your definitions are ‘reserved’ in a way that is difficult to make clear in my mental model.
Larry Richards: Understood. I enjoy the conversations I have with myself, and the interactions may contribute to further conversations when we get a chance to meet again.
Paul Pangaro: Do you incur a disadvantage by reserving ‘conversation’ for something so restricted? Have you become more restrictive than Pask? Would you consider an adjective to modify ‘conversation’?
Larry Richards: Gordon [Pask] told me that the interactions in synchronous video systems were not conversations. The channel capacity was not adequate to come even close to capturing the dynamics of what was going on. However, he placed high value in the conversations that participants could have with themselves, or their local friends and neighbors.
Paul Pangaro: Should we even be having these conversations?
Larry Richards: I find these interactions to be useful!
Paul Pangaro: Ah good. I wasn’t sure :)
Larry Richards: Conversation is an end in itself, even if that doesn't sell well.
Conclusions: Moving to the Concrete
Highlights from the open discussion.
Deborah Forster: Now they have chips that use lab-grown DNA molecules to do computation. Robert Rosen was a mathematical biologist who wrote about relational machines and biological ways of looking at machines that broke through this binary of biological and human-made machines. That’s useful because the notion of mechanism was so central to how biology developed explanations of how things work. The cybernetic moment was one moment in the continuum of seeing our best understanding of ourselves in the machines that we built; it was the moment when the machines could actually do feedback. What we desperately need is to revisit the relationship with what ‘human machines’ are today. It’s a situation the Macy Conferences could only have dreamed of: to say, ‘OK, I want a car to get through an intersection. How do we make that conversation happen?’ The safety issues are a matter of making that conversation more fluid. We have examples of it now, cars on the road, but is it a good conversation? Like biological mechanisms, they’re limited in the work that they can do and in how they can be affected. It’s heterogeneity: enzymes in the digestive system don’t go outside the stomach. We have ways of exploring the boundaries of mechanisms that play out in ways that help whatever ‘whole’ we choose. Those are advantages for those who want to engage in a New Macy. A lot is changing. And we need artists to play with it. We need these things to be in play.
Fred Steier: We’re in Zoom, a designed space with certain affordances. There’s a wonderful conversation about conversation happening in the chat while we talk about BCL. Zoom is a designed space, but is it affording the conversations we want to participate in, and in the way we want to participate in them? Conversation, con + verse, “turning with.” This is not a criticism. What we’re doing here, passing notes behind the speaker’s back, we could not do and save face in a shared physical space. Is it a richer conversation, or two separate conversations? It’s relevant to the larger conversations around AI spaces, designed spaces, and living with each other and being present to each other.
Serena Wang: One of the main issues I see with AI, perpetuated by its makers, is that AI is pretty much built by white men for white men, which is causing a lot of issues with racism and sexism. And ironically, at this moment, this group of people trying to change AI is still predominantly a group of white men. I’m not sure what kind of diversity initiatives have been taken already, but I would like to challenge the ASC to include more diversity of worldviews and backgrounds. Otherwise, if we reflect on it in a second-order cybernetics framework, perhaps we will see that the same biases and assumptions and worldviews that created the problem are also trying to fix the problem. That’s probably not the best approach to be taking. We’ve established cybernetics to be transdisciplinary, but we would benefit from being transcultural as well.
***
Andrei Cretu: As scientists, I think the most concrete thing we can do is bring clarity to these issues. As for anything practical — could we create specifications for a new internet protocol, for example, or a new type of computing platform? I thought about it, and my conclusion is that we don’t have the resources. We’re restricted to bringing awareness and clarity to these issues. So I wonder what people think about this, with regard to moving to the concrete.
Paul Pangaro: I think we can do more, and have been trying to do more than bring clarity and awareness. My goal is to say, we understand what’s going on in the digital realm, is there an analog realm where we can bring best practices, design patterns, case studies, prototypes from the present and from the past? Can we bring something to show — which is maybe awareness and clarity! — but for me it’s more active than that. We are not only scientists. There are also designers, artists, critical theory people and many others. The diversity of that — although not in the dimensions that Serena and many of us are concerned about — is still a different dimensionality. We want to increase that.
Adler Looks Jorge: There are people from different regions, cultures, generations, who have been at past meetings. But I wonder why they are not here. Perhaps different meetings appeal to different people. The ways we interact could be rethought to include them.
Michael Hohl: In my view — Andre — this sounds deterministic, that technology happens to us. But we want a choice. I go with Harari, that it’s something we have to discuss and control. This doesn’t just happen to us. We have to decide, together.
Andrei Cretu: But control has been taken away from us. Technology in the 70s empowered the user. Technology today empowers corporations. These issues are systemic. They won’t be solved without major political changes. The only actors that can hope to solve these issues are state actors. This explains my pessimism about anything beyond clarity and awareness. The goals are worth pursuing, but what do we come up with? A new app must function within the Google or Apple ecosystem, function on their hardware, etc.
Paul Pangaro: I respect that. We can’t take on Facebook, capitalism, the platform companies, but we can make alternatives. Beyond talking, we can make and promote things as alternatives to the digital. I’m trying to stick to the knitting where I feel my competence cannot be challenged.
Ben Sweeting: It might be good to track back to some of the amazing things Claudia said, or what I said about problematizing solutions. If you think “how do we solve this?” it seems impossible. But there are also ways in which, as Deborah said, we understand ourselves through the technologies that we have. And it is reasonable to make things that let us re-understand, reveal, and disclose things. That could be impactful: a different way to think about the things you make, where the purpose is not necessarily a practical one but one of social transformation.
Kate Doyle: Starting explorations of prototypes and experiments, something more concrete — something that appeals to me about cybernetics are the experimental forms that have come out of it. I’m looking at experimental art practices, and I believe in the power of metaphor in making change. So maybe this is a conversation for the future: what forms, what metaphors are we creating that might be productive? Not just as discourse, not just showing something, but making and doing something. What forms are possibilities that might emerge?
Related Links
#NewMacy events: https://asc-cybernetics.org/newmacymeetings/
ASC events: https://asc-cybernetics.org/events/
Feedback and questions: paul.pangaro@asc-cybernetics.org
Request to be added to the mailing list: https://forms.gle/UHLVt3bSuiK1pZ9c8