August in San Diego 5. Learning and Creativity
Neuroscience For Architecture, Urbanism & Design
Michael A. Arbib
This is the fifth of a series of nine posts on the A|N blog reporting on the “Neuroscience For Architecture, Urbanism & Design” Intersession held at NewSchool of Architecture & Design in San Diego on August 12-15, 2019. The individual posts range in length from 1300 to 3000 words. The first post provides an overview of the series, along with a Table of Contents with links to each of the posts. A PDF of the whole series may be found here.
Clearly, creativity (this post) and design (the previous post) are overlapping topics, but where the previous post centered on the functions of the hippocampus and its environs, the emphasis here is on synaptic plasticity and the way artificial neural networks (ANNs) have led to the current power of Artificial Intelligence (AI).
Neuroscientists have charted many brain mechanisms that support learning, but much emphasis has been placed on synaptic plasticity rules whereby the “weights” of the synapses (a synapse is a structure where one neuron acts upon another) change with experience. There are three main rules:
In Hebbian learning, if both the presynaptic and postsynaptic neuron fire at the same time, the weight of the synapse between them will be increased. This was postulated by the neuropsychologist Donald Hebb (1949) and later shown to exist in the brain, first being identified in the hippocampus (Bliss & Gardner-Medwin, 1973).
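Hebb's rule can be sketched in a few lines. This is a minimal illustration with made-up numbers (learning rate, activity pattern), not a model from the post: the weight change is simply proportional to the product of pre- and postsynaptic activity, so only co-active synapses strengthen.

```python
import numpy as np

# A minimal sketch of Hebbian plasticity (all values hypothetical):
# a synapse is strengthened only when its presynaptic input and the
# postsynaptic neuron are active together.
eta = 0.1                          # learning rate
pre = np.array([1.0, 0.0, 1.0])    # presynaptic activity at three synapses
post = 1.0                         # postsynaptic neuron fired
w = np.zeros(3)                    # initial synaptic weights

# Hebb's rule: delta_w = eta * pre * post
w += eta * pre * post
# synapses 0 and 2 (co-active with the cell) strengthen; the silent one does not
```

Note that this rule has no notion of error: it strengthens correlations whether or not they are useful, which is why the supervised and reinforcement rules below were needed.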
Supervised Learning was first developed in the Perceptron model by the psychologist Frank Rosenblatt (1958), who postulated that a neuron’s input synapses should be weakened if the neuron fired when a teacher said it should not have fired, and strengthened if it failed to fire when it should have fired. David Marr (1969) and Jim Albus (1971) treated the Purkinje cells (the output cells of cerebellar cortex) as Perceptrons, though Albus “got the sign right” – reversing the Perceptron rule because Purkinje cells are inhibitory. A breakthrough for machine learning (but a departure from neurobiology) came with the invention of backpropagation, an algorithm for “structural credit (or blame) assignment” that could guide the adjustment of synapses for neurons in “hidden layers” (i.e., other than output neurons) of a feedforward network (Rumelhart, Hinton, & Williams, 1986).
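Rosenblatt's rule is easy to state in code. The sketch below trains a single unit on a toy task (logical OR); the task, learning rate, and number of passes are illustrative choices, not from the post. Backpropagation generalizes this idea to hidden layers, where no teacher directly specifies each neuron's target.

```python
import numpy as np

# A minimal sketch of Rosenblatt's perceptron rule on a toy task:
# weaken weights if the unit fired when it should not have,
# strengthen them if it failed to fire when it should have.
def perceptron_step(w, x, target, eta=0.1):
    fired = 1 if np.dot(w, x) > 0 else 0
    # both cases are captured by the signed error (target - fired)
    return w + eta * (target - fired) * x

# illustrative training data: logical OR of two inputs
data = [([0.0, 0.0], 0), ([0.0, 1.0], 1), ([1.0, 0.0], 1), ([1.0, 1.0], 1)]
w = np.zeros(2)
for _ in range(20):                  # a few passes over the data
    for x, t in data:
        w = perceptron_step(w, np.array(x), t)

predictions = [1 if np.dot(w, np.array(x)) > 0 else 0 for x, _ in data]
# after training, predictions match the targets [0, 1, 1, 1]
```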
Reinforcement Learning does not require a teacher to specify correct firing of each cell in response to the current input pattern. Rather, a “critic” rates the overall behavior controlled by the network, and provides a positive or negative reinforcement signal to all neurons. A crucial extension was temporal difference learning that addressed temporal credit assignment in cases where reinforcement was intermittent (Sutton, 1988; Sutton & Barto, 1998). Although AI turned its back on learning techniques for several decades, one of the earliest great papers in AI anticipated temporal difference learning by developing a technique for machine learning while playing the game of checkers (draughts) where reinforcement comes only when one wins or loses the current game (Samuel, 1959).
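The temporal credit assignment idea can be shown with a toy episode, in the spirit of Samuel's checkers player: reward arrives only at the end, yet the TD(0) update lets earlier states acquire value by bootstrapping from their successors. The three-state "game" and all numbers below are invented for illustration.

```python
# A toy sketch of temporal-difference learning, TD(0):
# V(s) <- V(s) + alpha * (r + gamma * V(s') - V(s))
alpha, gamma = 0.5, 1.0
V = {"start": 0.0, "mid": 0.0, "end": 0.0}

# one repeated episode: start -> mid -> end, reward 1 only on the final step
episode = [("start", "mid", 0.0), ("mid", "end", 1.0)]
for _ in range(20):
    for s, s_next, r in episode:
        td_error = r + gamma * V[s_next] - V[s]   # the "surprise" signal
        V[s] += alpha * td_error
# V["mid"], and then V["start"], approach 1: the terminal reward
# propagates backward to states that were never directly rewarded
```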
ANNs are simple abstractions of biological neural networks, augmented by various “tricks” that enhance the computations supporting learning. Intriguingly, all these techniques were in place by the mid-1990s (Arbib, 1995), but their impact on AI remained limited. They can now do amazing things and drive AI thanks to their ability to work vastly faster than our biological neurons, the availability of immensely faster computers, and the ability to tap vast and reliable databases.
We humans work with imperfection, but like other animals we have neurons of a diversity and complexity unmatched by ANNs, and we have a distinctive anatomy of brain regions, with distinctive neurons and connectivity related to that of other creatures (Kaas, 2017) – but with innovations that support unique abilities such as language. However, these neural and neurological subtleties have yet to factor into AI.
Neil Leach brought the role of ANNs in AI – deep learning – into play with an anecdote: recently, when boarding a plane in China, he did not need his boarding pass; instead, an AI system recognized his face. The movie Blade Runner was released in 1982 but set in 2019. What did Ridley Scott get right and wrong? Replicants, no. Flying cars, no – but lots of drones. Dynamic facades, yes. And lots of AI. Meanwhile, AI had successes that gained widespread attention: Deep Blue beat Kasparov at chess; IBM Watson beat human champions at Jeopardy!; and, with deep learning, AlphaGo won at Go.
Can AI be creative? Can computers dream? A feedforward net can be trained with backpropagation to, for example, output BIRD with high accuracy when the input image contains a bird, but not otherwise. The Deep Dream study sought to infer a typical picture from the trained network by “running the weights backward” from the BIRD output. In this case, the picture contained lots of scrambled bird-like images. In a TED talk, Anil Seth built on this to discuss an “inside-out” view of perception, while noting that too much top-down prediction yields hallucination. [This is similar to my “theory” of dreams, but with interesting videos thrown in. Long ago, Richard Gregory (1967) argued that seeing machines will have illusions because economy of computation will require shortcuts that usually work in the machine’s environment.]
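“Running the weights backward” amounts to gradient ascent on the input rather than the weights. The sketch below illustrates only that inversion of roles: a tiny frozen linear “BIRD detector” stands in for the real trained network, and all numbers are hypothetical.

```python
import numpy as np

# Deep-Dream-style inversion, in miniature: freeze a "trained" network
# and do gradient ascent on the INPUT to maximize the chosen output unit.
rng = np.random.default_rng(0)
W = rng.normal(size=16)       # frozen weights of a one-unit "BIRD" detector
image = np.zeros(16)          # start from a blank "image"

initial_score = W @ image     # activation of the BIRD unit (zero at first)
for _ in range(100):
    grad = W                  # d(score)/d(image) for a linear unit is just W
    image += 0.1 * grad       # adjust the image, not the weights

final_score = W @ image       # the image now strongly excites the BIRD unit
```

In a deep network the same procedure hallucinates bird-like texture everywhere, because many input patterns partially excite the BIRD pathway.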
Another type of ANN is the generative adversarial network (GAN) in which two neural networks contest with each other, like an “artist” and a “critic.” Given a training set, a GAN can learn to generate new data with the same statistics as the training set. For example, after training one GAN could create images that looked like human faces to human observers even though they were not photos of actual people. All this requires massive computing. This method has produced “This person does not exist” and “This Airbnb does not exist” sets of images. Shockingly, Portrait of Edmond Belamy, created by a GAN, sold for $432,500 on 25 October 2018 at Christie’s in New York.
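The artist-and-critic loop can be shown at toy scale. This is a one-dimensional caricature with invented numbers, not a practical GAN: the “artist” learns only a shift applied to noise, the “critic” is a logistic classifier, and the two take alternating gradient steps until the fakes statistically resemble the reals.

```python
import numpy as np

# A toy 1-D GAN sketch (all parameters hypothetical):
# real data ~ Normal(4, 1); the generator produces noise + b and must
# learn b so the critic cannot tell fake from real.
rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

b = 0.0          # generator ("artist") parameter: fake = noise + b
w, c = 0.1, 0.0  # discriminator ("critic"): D(x) = sigmoid(w * x + c)
lr = 0.05

for _ in range(2000):
    real = rng.normal(4.0, 1.0)
    fake = rng.normal(0.0, 1.0) + b

    # critic step: push D(real) up and D(fake) down (log-loss gradients)
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    w += lr * ((1 - d_real) * real - d_fake * fake)
    c += lr * ((1 - d_real) - d_fake)

    # artist step: change b so the critic scores the fake as real
    d_fake = sigmoid(w * fake + c)
    b += lr * (1 - d_fake) * w

# b should drift toward 4, matching the real data's mean
```

The face-generating GANs mentioned above do the same thing with convolutional networks and millions of parameters, which is why they require massive computing.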
Wanyu He, CEO of Xkool (“Ex-Koolhaas” and “super kool”), employs massive cloud computing and new algorithms to develop a database of buildings as a basis for generating new buildings. Currently she works only with 2D images. A database of Zaha Hadid’s buildings can generate diverse novel Hadid-like forms. What happens as resolution increases, and the database goes to 3D? Is this the death of architecture? Well, rather than a team of architects, Leach imagines a lone architect with an app rapidly generating alternatives. Humans will increasingly interact with “packets” of AI as intelligent assistants [human-machine symbiosis], but will generally interact with apps rather than humanoid robots. In some sense, ANNs offer a Parametric Architecture with millions of parameters. In designing computer chips, engineers design for efficiency. Architects also consider beauty. [For some related notes, see (Arbib, 2019), “Poetics and More in Performative Architecture: Towards a Neuroscience of Dynamic Experience and Design.”]
With this background, Leach then turned to a more general discussion of creativity. Can AI be creative? Are architects creative? Going back to AlphaGo, he noted that in the second game against world champion Lee Sedol, Move 37 was a move that humans had not made before. At first it seemed mistaken, but it proved successful. Is this AI creativity? “AlphaGo showed anomalies and moves from a broader perspective which professional Go players described as looking like mistakes at the first sight but an intentional strategy in hindsight.”
The University of Sussex cognitive scientist and philosopher Margaret Boden defines creativity as the ability to generate novel and useful ideas, distinguishing between historical creativity – it hasn’t been done before in history – and psychological creativity – you haven’t done it before. It may be combinatorial, exploratory, or transformational. Leach speculated that maybe architecture is not that creative: to what extent does it follow a set of rules? Compare jazz as variation on a pre-existent theme. Is most architecture just variations on prior patterns? Compare Gehry’s Bilbao and LA Phil: there’s a signature there. Is deep learning so different from architectural training – viewing a huge range of images, with design as a variant of search on a vast latent space? [Recalling my riff on Zumthor in the fourth post, I would suggest there is a distinction between minor variations and major innovations, even though both must build on prior experience. To my ear at least, some jazz players are more creative than others – though there is a fine line here between innovation and losing the audience.] To refute “the myth of originality,” Leach says Utzon just looked at the sails on Sydney harbor to come up with the idea for the Sydney Opera House. [But this is wrong, as I explain in an extended case study in When Brains Meet Buildings.]
When will we stop calling AI artificial, Leach asks, and apply an adjective to us, labeling human intelligence as a special case? Compare our switch in terminology from “horseless carriages” to “cars.” We have here another Copernican revolution, moving beyond intelligence as human-centered. In many ways, Blade Runner was not about replicants so much as about what it means to be human.
Marcos Novak commented that “if I take a robot to the gym it could lift the weights for me, but that would miss the whole point.” [Back to human-machine symbiosis. Getting the computer to aid our humanity. Cars let us travel faster, but they don’t destroy our interest in human movement and healthy activity. Asked in the 1960s whether AIs would take over the world from humans, Marvin Minsky replied “If we are lucky, they will keep us as pets.”] He also asked whether we would be more original if we did not use rules. Part of our creativity is developing new rules that can then allow new possibilities. Returning to Boden’s definition of creativity, to make something different is trivial. Even to make something useful is a limited form of creativity – a novel two bedroom apartment may not be well designed.
Q: “What is consciousness? Do we attribute only human qualities to this? Can the machine have consciousness?” Gepshtein suggested that an enactive view (embodied cognition) removes consciousness from the brain alone and, with that, there is no hard problem. Leach cited David Chalmers’s view of consciousness as a movie playing in our heads – but much of what we do is unconscious. But does this matter for machines? If a car driven by a human collides with a self-driving car, we ask whether the former’s driver was conscious of the traffic; for the latter, the question is irrelevant.
Albus, J. S. (1971). A theory of cerebellar function. Math. Biosci., 10, 25-61.
Arbib, M. A. (2017). How Language Evolution Reshaped Human Consciousness. In R. R. Poznanski, J. A. Tuszynski, & T. E. Feinberg (Eds.), Biophysics of Consciousness: A Foundational Approach (pp. 87-128). Singapore: World Scientific.
Arbib, M. A. (2019). Poetics and More in Performative Architecture: Towards a Neuroscience of Dynamic Experience and Design. In K. Mitra (Ed.), The Routledge Companion to Paradigms of Performativity in Design and Architecture: Using Time to Craft an Enduring, Resilient and Relevant Architecture: Routledge.
Arbib, M. A. (Ed.) (1995). The Handbook of Brain Theory and Neural Networks. Cambridge, MA: A Bradford Book/The MIT Press.
Arbib, M. A., & Hesse, M. B. (1986). The Construction of Reality. Cambridge: Cambridge University Press.
Bliss, T. V., & Gardner-Medwin, A. R. (1973). Long-lasting potentiation of synaptic transmission in the dentate area of the unanaesthetized rabbit following stimulation of the perforant path. J Physiol, 232(2), 357-374.
Boden, M. A. (2006). Mind as Machine: A History of Cognitive Science. New York: Oxford University Press.
Gregory, R. L. (1967). Will Seeing Machines have Illusions? In N. L. Collins & D. Michie (Eds.), Machine Intelligence 1. Oliver & Boyd.
Hebb, D. O. (1949). The Organization of Behavior. New York: John Wiley & Sons.
Kaas, J. (Ed.) (2017). Evolution of Nervous Systems (Second Edition; in 4 volumes): Elsevier.
Marr, D. (1969). A theory of cerebellar cortex. J Physiol, 202(2), 437-470.
Rosenblatt, F. (1958). The perceptron: A probabilistic model for information storage and organization in the brain. Psychol. Rev., 65, 386-408.
Rumelhart, D. E., Hinton, G. E., & Williams, R. J. (1986). Learning Internal Representations by Error Propagation. In D. Rumelhart & J. McClelland (Eds.), Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Vol. 1 (pp. 318-362). Cambridge, MA: The MIT Press.
Samuel, A. L. (1959). Some Studies in Machine Learning Using the Game of Checkers. IBM. J. Res. and Dev., 3, 210-229.
Sutton, R. S. (1988). Learning to predict by the methods of temporal differences. Machine Learning, 3, 9-44.
Sutton, R. S., & Barto, A. G. (1998). Reinforcement Learning: An Introduction. Cambridge, MA: The MIT Press.
About Michael A. Arbib
Michael Arbib is a pioneer in the study of computational models of brain mechanisms, especially those linking vision and action, and their application to artificial intelligence and robotics. Currently his two main projects are “how the brain got language” through biological and cultural evolution as inferred from data from comparative (neuro)primatology, and the conversation between neuroscience and architecture. He serves as Coordinator of ANFA’s Advisory Council and is currently Adjunct Professor of Psychology at the University of California at San Diego and a Contributing Faculty Member in Architecture at NewSchool of Architecture and Design. The author or editor of more than 40 books, Arbib is currently at work on When Brains Meet Buildings, integrating exposition of relevant neuroscience with discussions of the experience of architecture, the design of architecture, and neuromorphic architecture.