A↔N #15: August in San Diego 9. NEUROMORPHIC AND DYNAMIC ARCHITECTURE


Neuroscience For Architecture, Urbanism & Design

Michael A. Arbib

This is the ninth of a series of nine posts on the A↔N blog reporting on the “Neuroscience For Architecture, Urbanism & Design” Intersession held at NewSchool of Architecture & Design in San Diego on August 12-15, 2019. The individual posts range in length from 1300 to 3000 words. The first post provides an overview of the series, along with a Table of Contents with links to each of the posts. A PDF of the whole series may be found here.

Neuromorphic Architecture

Michael Arbib introduced his theme of neuromorphic architecture in the sense of buildings (not just rooms) that in some sense have “brains.” A building then integrates a “body” (a possibly dynamic physical space) with a “neural” space that controls its interactions. We may view such a building as an inside-out robot or an embrained body. Form and space of the building are interactive, intertwined with actions and events in the life of its users. A building becomes a web of systems and components planned and constructed to address certain functionalities, and yet these may combine to convey impressions, feelings, and aesthetic qualities. When we study animals and their behavior, we see that brain and body (including sensors and effectors) evolve together. This led him to the proposal for neuromorphic architecture that “careful attention to ‘neural space’ prior to commitment to the final form of the physical space may in future yield innovative designs that can then enrich the design of the building through constraint satisfaction between its physical and neural dimensions.”

Arbib revisited Corbusier’s dictum “A house is a machine for living in” and asked what this becomes in an age of cybernetic machines, rather than the ocean liners, automobiles and airplanes that inspired Corbusier. He noted that in Vers une architecture, Corbusier has two voices: one the admirer of the engineering aesthetic, the other of the purely non-functional architectural aesthetic of the Parthenon. We seek a cybernetic architecture that captures the architect’s aesthetic as well as the cybernetician’s: “machines” for efficiently supporting X, Y and Z while offering aesthetic pleasures. Will they/should they preserve Baukultur or radically transform it?

He cited two challenges for neuromorphic architecture: homeostasis (regulating physiological parameters), as in Jean Nouvel’s Institut du Monde Arabe in Paris, and supporting social interaction between a building and its users. Homeostasis is not “socially” interactive: the body adjusts to the ambient conditions or moves elsewhere (the latter option being less relevant for buildings). For social interaction we turn for inspiration to social cognitive neuroscience. Of particular relevance are mirror neurons, which are involved both in our own actions and in recognizing people acting in similar ways. By recognizing how the actions of others relate to our own repertoire, we can learn more about the world and prepare for the way we will interact in different social situations. However, the actions of a building will usually differ from the actions of humans (save when the building has part of a humanoid robot as a subsystem), and so “action recognition” rather than the specific mechanism of mirror neurons seems most relevant here: not merely recognizing the actions of others but using that recognition to help decide what one should do next. The quest, then, is for brain operating principles that support a neuromorphic architecture for the “social interaction” of rooms or buildings with people in or near them, adapting buildings to the needs of their inhabitants. A classic example is provided by the Interactive Space Ada, described below.

Arbib summarized the design of a reactive and adaptive intelligent kitchen put together by students in a course at USC.[1] The kitchen was designed to help a person cook food based on a recipe. All computations were conditional on which recipe was selected, with the room keeping track of progress and helping as needed. The neural space included speech recognition, emotion recognition, event localization, and the processing of facial expressions. It also included recipe management. Simulated “mirror neurons” came into the mix to track the manual actions of the cook. These had to be related to systems “beyond the mirror” to keep track of the state of preparation according to a particular recipe.
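
To make this concrete, here is a minimal sketch (emphatically not the students’ actual code) of how such a neural space might route a recognized manual action to a recipe tracker that sits “beyond the mirror.” All class and function names are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class RecipeState:
    steps: list[str]                          # ordered steps of the selected recipe
    completed: set[int] = field(default_factory=set)

    def next_step(self) -> str | None:
        for i, step in enumerate(self.steps):
            if i not in self.completed:
                return step
        return None                           # all steps done

class KitchenBrain:
    """Routes sensory events to the recipe tracker and decides how to help."""
    def __init__(self, recipe: RecipeState):
        self.recipe = recipe

    def on_action_recognized(self, step_index: int) -> None:
        # A simulated "mirror system" has matched the cook's manual action
        # to a recipe step; the system "beyond the mirror" updates the
        # state of preparation accordingly.
        self.recipe.completed.add(step_index)

    def on_request_for_help(self) -> str:
        step = self.recipe.next_step()
        return f"Next: {step}" if step else "The dish is done."

brain = KitchenBrain(RecipeState(steps=["dice onions", "heat oil", "saute onions"]))
brain.on_action_recognized(0)
print(brain.on_request_for_help())            # -> "Next: heat oil"
```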

A skillful chef would not want a kitchen like this, and even a novice cook could become frustrated by a kitchen that kept offering suggestions for cooking routines that the cook had by now mastered. This motivates bringing in emotion and learning as crucial themes for neuromorphic architecture. Someone who cooks rarely may need a lot of assistance, and even an experienced cook may find it helpful to receive timely reminders when working with the multiple components of a complex recipe. The room would use facial and vocal cues to recognize the emotional state of the user. Do they indicate frustration with excessive instruction, or a measure of desperation in preparing the current dish? The room should be able to react accordingly. This leads into learning: both about what the user can handle without assistance and where help is needed (and this changes over time), and about how the user chooses to modify the recipe, so that the room can keep track of the user’s preferences.
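
Again as a hedged illustration rather than a reconstruction of any real system, the room’s decision of when to prompt might combine a learned per-step mastery estimate with the output of the emotion recognizer. The thresholds and update rule below are invented.

```python
class AssistancePolicy:
    def __init__(self):
        self.mastery: dict[str, float] = {}     # per-step estimate in [0, 1]

    def observe_success(self, step: str, unaided: bool) -> None:
        m = self.mastery.get(step, 0.0)
        # Simple running update: unaided success raises the mastery estimate
        # more than success achieved with the room's help.
        self.mastery[step] = m + 0.2 * ((1.0 if unaided else 0.5) - m)

    def should_prompt(self, step: str, frustration: float) -> bool:
        if self.mastery.get(step, 0.0) < 0.5:
            return True                          # novice: offer guidance by default
        return frustration > 0.7                 # expert: intervene only if they seem stuck
```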

How will future buildings learn? In Stewart Brand’s (1995) book How buildings learn: What happens after they’re built, the focus is on how humans can reconfigure a building when the users have new needs or the neighborhood changes. Here, we ask how an embrained building with suitable effectors to reconfigure walls and other structures might itself change the building. Building Dynamics: Exploring Architecture of Change (Kolarevic & Parlac, 2015) offers ideas on buildings that can change shape within a given range of functionality, in many (but not all) cases addressed more to entertainment and spectacle. To go beyond this, the memory of the building might extend building information modeling (BIM) not only to track changes in the building but also to plan and direct them. This perspective might fit with Macagno’s notion of lifespan architecture, where observing the changing capabilities of the inhabitants could factor into the building’s decisions.

Ellie Al-Chaer introduced his work with architect Soulaf Abouras, extending the notion of responsive architecture (a term coined by Nicholas Negroponte) through neuroscience-based material programming. He recalled Peter Cook’s fanciful Walking City, constantly evolving, with diverse plug-in modules, perhaps an anticipation of the ideas of the previous paragraph. Abouras developed a design for an architecture school where rooms can move to create different spaces.

In the Al Bahr Towers, Abu Dhabi, the exterior (as in Nouvel’s Institut du Monde Arabe) was inspired by the mashrabiya. Its curtain wall has a lattice of units that open and close, controlled by sensorimotor mechanisms. Additionally, this adaptive curtain wall can rotate around the building.
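
As an illustration of the homeostatic side of such a facade, here is a minimal proportional-control sketch in which each lattice unit opens or closes to hold interior light near a set point. The sensor reading, set point, and gain are invented for the example and are not the towers’ actual control scheme.

```python
SETPOINT_LUX = 500.0   # desired interior illuminance (assumed value)
GAIN = 0.001           # fraction of aperture change per lux of error (assumed)

class LatticeUnit:
    def __init__(self):
        self.aperture = 0.5                    # 0.0 = fully closed, 1.0 = fully open

    def regulate(self, interior_lux: float) -> None:
        error = SETPOINT_LUX - interior_lux
        # Proportional control: too dark -> open further; too bright -> close.
        self.aperture = min(1.0, max(0.0, self.aperture + GAIN * error))
```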

HygroSkin, a weather-sensitive pavilion of 2013, has a changing surface that can sense and react to humidity over the range of 30 to 90%. Here the material itself is responsive rather than controlled by electric-powered motors. Diverse studies explore the use of innovative materials, inspired for example by the way Lycra can be stretched from a saggy piece of cloth into a shaped surface. The work even involves osteorobotics: robot arms manipulate and stretch polycaprolactone (PCL), which is biodegradable, to build a structure. Design evolves from the material itself, making it self-emergent.

Core Neuroscience: Neuroscience of Emotion

Charles Darwin’s (1872) concern with relating facial expressions of certain animals to human emotional expression placed the external aspect of emotion within a comparative, and thus implicitly evolutionary, framework, in turn setting the stage for the notion that the internal role of emotions in affecting behavior has an evolutionary base with a strong social component. It is much debated whether there are basic emotions from which all others are constructed by some admixture that also involves cognition (Schadenfreude is certainly not one of the basic ones). In any case, an influential claim has been made by Paul Ekman (1999) that the basic emotions are anger, disgust, fear, happiness, sadness, and surprise.

Fellous and Arbib (2005) edited a book that offered an integrative approach both to the neurobiology of emotion and to attempts to endow robots and AI with at least simulacra of emotion that could aid human-machine interaction. In their chapter, Ortony, Norman and Revelle analyzed the interplay of affect (value), motivation (action tendencies), cognition (meaning), and behavior at three levels of information processing (a code sketch follows the list):

  • Reactive: a hard-wired releaser of fixed action patterns and an interrupt generator. This level has only proto-affect.
  • Routine: the locus of unconscious well-learned automatized activity and primitive and unconscious emotions.
  • Reflective: the home of higher-order cognitive functions, including metacognition, consciousness, self-reflection, and full-fledged emotions.
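
To make the three-level scheme concrete, here is a minimal sketch of the levels as a pre-emptive control loop. The level names follow Ortony, Norman and Revelle, but the code structure and every name in it are our own invention, not theirs.

```python
def reactive(stimulus: dict) -> str | None:
    # Hard-wired releaser and interrupt generator: a looming threat
    # pre-empts everything else. Proto-affect only.
    if stimulus.get("looming"):
        return "startle"
    return None

def routine(stimulus: dict, habits: dict) -> str | None:
    # Unconscious, well-learned, automatized activity: look up a
    # practiced response for the current context.
    return habits.get(stimulus.get("context"))

def reflective(stimulus: dict, goals: list[str]) -> str:
    # Higher-order appraisal: deliberate about goals; the home of
    # full-fledged emotions in the authors' scheme.
    return f"plan toward {goals[0]}" if goals else "idle"

def act(stimulus: dict, habits: dict, goals: list[str]) -> str:
    # Lower levels pre-empt higher ones, mirroring the interrupt
    # role of the reactive level.
    return reactive(stimulus) or routine(stimulus, habits) or reflective(stimulus, goals)
```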

They addressed the design of emotions in computational agents (these include “softbots” as well as embodied robots) that must perform unanticipated tasks in unpredictable environments. They argued that such agents, if they are to function effectively, must be endowed with curiosity and expectations and a sense of self that reflects parameter settings that govern the agent’s functioning. We will see examples of “artificial” emotion below. Here is a very brief glimpse of some neurobiology:

We cannot understand what brains contribute to our humanity unless we understand how emotion interacts with basic behaviors all the way up to higher cognition. Moreover, each part of the brain is a context for other brain regions, and they evolve in conversation with each other as they help contribute to survival. As we saw in the third post, on measuring the brain, those who want to probe more deeply into people’s emotional reactions (and other factors) to the built environment must include a detailed study of the autonomic nervous system. However, here we focus on the amygdala, a crucial part of the emotion system of the brain. It has been linked to fear behavior in mammals (LeDoux, 2000) and has been shown in humans to be related to the recognition of facial expressions that tell one that someone is to be feared or distrusted (Adolphs, Tranel, & Damasio, 1998). People with lesions to the amygdala can still recognize other people on seeing their faces but can no longer make sound judgments as to the likelihood that a stranger should be mistrusted. Thus, our social interactions with others involve gaining a reliable impression of their general character (a reliable view of their “self”) as well as getting a feel for their current emotions and intentions.

The amygdala is embedded in larger circuits, in particular those involving medial prefrontal cortex, which link its immediate evaluations to the longer-term planning of behavior. The amygdala can influence cortical areas by way of feedback from proprioceptive or visceral signals or hormones, and via projections to various “arousal” networks. It also has widespread influences on cognition and behavior through its cortical connections, while cognitive functions organized in prefrontal regions can in turn regulate the amygdala and its fear reactions.

Emotions in a Neuromorphic Architecture

Michael Arbib described the “emotions” of the Interactive Space Ada, developed for the Swiss Expo of 2002 by a team led by the computational neuroscientists Rodney Douglas and Paul Verschure (Eng et al., 2003). Ada had a “brain” based (in part) on artificial neural networks. She had “emotions” and “wanted” to play with her visitors. She continually evaluated the results of her actions, expressed emotional states accordingly, and tried to regulate the distribution and flow of visitors. When people participated, Ada was happy; when they did not, she was frustrated. Her level of overall happiness was translated into the soundscape and the visual environment in which the visitor was immersed, establishing a closed loop between environment and visitor. J. G. Ballard’s 1962 short story The Thousand Dreams of Stellavista (recommended to me by Branko Kolarevic) explored the perils of “psychotropic” buildings that learn the emotional patterns of their inhabitants and express them through dynamic changes of color and form whose persistence may prove catastrophic for the next inhabitant.
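
A greatly simplified sketch of such a closed loop (our reconstruction for illustration, not the Expo software) might update a single valence variable from visitor participation and map it into the ambient media. All parameters here are placeholders.

```python
class AdaMood:
    def __init__(self):
        self.happiness = 0.5                   # valence in [0, 1]

    def update(self, visitors_playing: int, visitors_present: int) -> None:
        engagement = visitors_playing / max(1, visitors_present)
        # Move valence toward current engagement: play -> happy,
        # neglect -> frustrated.
        self.happiness += 0.1 * (engagement - self.happiness)

    def express(self) -> dict:
        # Map valence into soundscape and lighting (invented parameters),
        # closing the loop between environment and visitor.
        return {"tempo": 60 + 80 * self.happiness,
                "light_hue": "warm" if self.happiness > 0.5 else "cool"}
```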

Behnaz Farahi described her cutting-edge research showing how computation can be embedded in materials, affecting tactility, texture and appearance to yield emotive matter. Nature is a source of amazing forms and patterns; what can design learn from these? Farahi stresses human-material interaction over human-machine interaction. Inspiration can come in diverse forms; one can even learn from how trees respond to their environment to define a new symbiosis. She is influenced by the work of Rosalind Picard[2] at MIT on affective computing and ways of following people’s emotions, but her focus is the design of soft, reconfigurable materials that can establish empathic relations with humans.

Dramatic videos showed devices around the human body that move in relation to EEG, opening and closing a helmet and varying the boundary between the body and the environment – something Farahi relates to the notion of extended mind as studied by Andy Clark and David Chalmers.

Fish scales, hard elements on a semi-flexible mesh system, provide inspiration for new materials.

One system maps the sensed direction of an onlooker’s gaze into movements of the wearer’s garment, providing a haptic and visual response to the other’s gaze. How might this affect one’s understanding of social relations?

Farahi builds on Ekman’s six basic emotions to incorporate a system that can recognize emotions from facial expressions. She then shows how these can be mapped onto clothing. What if one’s garments provided a second skin extending our range of emotional expression? Perhaps such garments would help those with autism better understand the emotions of others and project their own.
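
As a hedged illustration of such a mapping, one might pair each recognized basic emotion with an actuation pattern for the garment. The pattern names and the classifier interface here are invented, not Farahi’s design.

```python
# Hypothetical emotion-to-actuation table keyed by Ekman's six basic emotions.
ACTUATION = {
    "anger":     {"pattern": "spike",    "speed": 1.0},
    "disgust":   {"pattern": "recoil",   "speed": 0.6},
    "fear":      {"pattern": "contract", "speed": 0.9},
    "happiness": {"pattern": "ripple",   "speed": 0.5},
    "sadness":   {"pattern": "droop",    "speed": 0.2},
    "surprise":  {"pattern": "flare",    "speed": 1.0},
}

def drive_garment(recognized_emotion: str) -> dict:
    """Translate a recognized facial emotion into an actuation command."""
    return ACTUATION.get(recognized_emotion, {"pattern": "rest", "speed": 0.0})
```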

Exploiting the power of affective computing, Farahi designed an interactive display case for Adidas, creating positive emotions for a new product. The shoe in the box becomes “alive” and tracks the viewer — this gets interesting when there are multiple viewers.

Another piece was inspired by hummingbird iridescence, which changes color to attract a mate. This required selecting the materials and determining how they are moved. The extra challenge was that the system had to be robust enough to last for a one-year display in Chicago.

How might sensory experience affect social interaction? The overall challenge is to establish an empathic relation between the body and the environment. Asked how this might relate to interior design, Farahi answered that she started as an architect but has come to focus on the body. However, her thesis presents a bio-inspired methodology, linked to emotion, that can be applied at many different scales.

A question from the audience envisioned a future of increased communication with the environment: not just voice assistants, but buildings that embrace us and help us to heal, embracing nature in a new way; a building that responds if I am scared or feel alone, providing a supportive atmosphere.

Buildings as Robots

Guvenc Ozel works with spatial intelligence, cyberphysical architecture, and brain-computer interfaces, integrating data with architectural context. In his lab at UCLA, he doesn’t construct buildings but looks at buildings as robots. Robots can construct in physical and virtual worlds, maybe yielding integrated forms. He sees the key to the 4th industrial revolution not as automation but as a freeing of cybernetic systems from human control: intelligence for non-organic actors in a social context, active participants in human social interaction. The result will be an architecture that relies on media and virtual depictions as much as physical constructions, integrating diverse cyberphysical technologies. One example is a space that tracks your actions and will in some cases react in real time, a form of spatial intelligence. He noted the importance of predicting future possibilities so as to shape interactions with humans accordingly.

A video showed a machine learning system tracking and identifying objects and agents in a street scene. Meta-parametricism (recall Leach on GANs) yields novel human faces, a case of borderline creativity. Augmented reality relying on machine learning and diverse sensors adds to the external world.

Ozel surveyed a sequence of projects:

  • Seven years ago: an agent changing form based on EEG, rather than responding to physical actions of the human body.
  • Venice Architecture Biennale, five years ago: an augmented reality project, expanding architecture. What can be embedded in physical environments?
  • A Coachella music festival proposal: robotically actuated carbon fibers to change the shape of a structure, with a visually projected overlay.
  • A future project? A robot construction system that could gather materials on Mars and make decisions based on the gathered materials.

Ozel also looks to machine learning for interdisciplinary design in a social setting, as in generating individually tailored dynamic facades for buildings. [Recall the design for this in Minority Report – an example of world building by Alex McDowell with whom Sergei Gepshtein collaborates in USC’s 5D|World Building Institute.]

One can create inherently virtual environments that are fictitious rather than mimicking natural environments. These become architectural expressions in their own right. And one can study turning virtual systems into architectural representations.

Think of architectural objects as robots. Make them generative: a design tool to help humans work collaboratively. Ozel looks to machines as creative collaborators. There are spectra of understanding other than the human, and there are other examples from nature, but we can go beyond models from nature.

References

Adolphs, R., Tranel, D., & Damasio, A. R. (1998). The human amygdala in social judgment. Nature, 393(6684), 470-474.

Arbib, M. A. (2012). Brains, machines and buildings: Towards a neuromorphic architecture. Intelligent Buildings International, 4(3), 147-168. doi:10.1080/17508975.2012.702863

Brand, S. (1995). How buildings learn: What happens after they’re built. New York: Penguin.

Darwin, C. (1872). The expression of the emotions in man and animals (republished in 1965). Chicago: University of Chicago Press.

Ekman, P. (1999). Basic emotions. In T. Dalgleish & M. Power (Eds.), Handbook of Cognition and Emotion (pp. 45-60). New York: John Wiley & Sons.

Eng, K., Klein, D., Bäbler, A., Bernardet, U., Blanchard, M., Costa, M., . . . Manzolli, J. (2003). Design for a brain revisited: The neuromorphic design and functionality of the interactive space ‘Ada’. Reviews in the Neurosciences, 14(1-2), 145-180.

Fellous, J.-M., & Arbib, M. A. (Eds.). (2005). Who Needs Emotions? The Brain Meets the Robot. Oxford and New York: Oxford University Press.

Kolarevic, B., & Parlac, V. (Eds.). (2015). Building Dynamics: Exploring Architecture of Change. London and New York: Routledge.

LeDoux, J. E. (2000). Emotion circuits in the brain. Annual Review of Neuroscience, 23, 155-184.

Endnotes

[1] For more on this and two other conceptual designs, see “Brains, machines and buildings: towards a neuromorphic architecture” (Arbib, 2012).

[2] http://web.media.mit.edu/~picard/

About Michael A. Arbib

Michael Arbib is a pioneer in the study of computational models of brain mechanisms, especially those linking vision and action, and their application to artificial intelligence and robotics. Currently his two main projects are “how the brain got language” through biological and cultural evolution as inferred from data from comparative (neuro)primatology, and the conversation between neuroscience and architecture. He serves as Coordinator of ANFA’s Advisory Council and is currently Adjunct Professor of Psychology at the University of California at San Diego and a Contributing Faculty Member in Architecture at NewSchool of Architecture and Design. The author or editor of more than 40 books, Arbib is currently at work on When Brains Meet Buildings, integrating exposition of relevant neuroscience with discussions of the experience of architecture, the design of architecture, and neuromorphic architecture.
