Catching up with the Past: A Small Contribution to a Long History of Interactive Environments

This paper documents the evolution of my thinking in the area of interactive architecture over the past 15 years with students and my office. The work is framed within an overview of a long history of work in the area by others. My personal development has taken a number of clear steps in a relatively logical progression.
In summary, the work began with kinetics as a means to facilitate adaptation. Work in this area led to integrating computation as a means of controlling the kinetics. The combination of these two areas led to the use of discrete mechanical assemblies as a systems approach to interaction design, which led to thinking of control as bottom-up and emergent. Consequently, I became fascinated with modular autonomous robotics and the notion that actual architectural space could be made of such systems. This in turn led to the exploration of biomimetics in terms of its processes, which eventually led to the idea that the parts in a system should get smaller to the point that they make up the matter itself.
The paper concludes with an explanation of how technical advancements in manufacturing, fabrication and computational control will continue to expand the parameters of what is possible in robotics, and consequently influence the scale at which we understand and construct our environments. The future of interactive environments will most certainly involve re-examining the scale at which things operate, to the extent that much of the operation happens within the materials themselves. This scaling down is beginning to force a reinterpretation of the mechanical paradigm of adaptation.

Designing such environments is not inventing, after all, but appreciating and marshalling the technology that exists, and extrapolating it to suit an architectural vision. Only recently do we see courses in interaction design and robotics being taught in schools of architecture all over the world, whereas twenty years ago there were fewer than a handful. The illusion is that the field is fresh with new ideas illuminated by a wealth of built prototypes and real projects. While there are some genuinely new developments in terms of technology transfer in the areas of Interface Design, Autonomous Robotics, Biomimetics, etc. that will foster advanced thinking in the field, it is important to understand that the foundations have been around for quite some time.
In writing this article, I have attempted to humbly step back and look at my own development in the area within a much larger historical context. In retrospect, after nearly 15 years in the area, I did find the development to take a number of clear steps in a relatively logical progression.
In summary, the journey began with kinetics as a means to facilitate adaptation. Work in this area led to integrating computation as a means of controlling the kinetics. The combination of these two areas led to the use of discrete mechanical assemblies as a systems approach to interaction design, which led to thinking of control as bottom-up and emergent.
Consequently, I became fascinated with modular autonomous robotics and the notion that actual architectural space could be made of such systems. This in turn led to the exploration of biomimetics in terms of its processes, which eventually led to the idea that the parts in a system should get smaller to the point that they make up the matter itself. This brings us to where I am today, and to how my thinking in interaction design has evolved over the years with students and my office. I am not sure where it goes from here, but at least it is interesting to explore.

Intelligent environments develop in parallel
While the architects were developing the ideas above based on cybernetics, it is important to also understand that another area was being developed almost in parallel in digital computation and human interaction. In the late 1980s and 1990s, an explosion of development began to take place within the field of computer science. Out of this, fields such as 'intelligent environments' (IE) were formed to study spaces with embedded computation and communication technologies, creating spaces that bring computation into the physical world. Intelligent environments are defined as spaces in which computation is seamlessly used to enhance ordinary activity. Many technologies dealing with sensing and human behaviours were developed in this area, but the architecture was always secondary, developed under the mantra of 'seamlessly embedded computation'. 5 In other words, there was very little architectural involvement in a very exciting area that was developing computationally-enhanced environments. These developments were essentially fuelled by the concept of 'ubiquitous computing', which was originally defined as a general concept for computation that is thoroughly integrated into everyday objects and activities, and sits at the intersection of computer science, behavioural sciences, and design. 6

Some early architects take interest
These early ideas rooted in cybernetics were picked up at the time by a few architects who solidly translated them into the arena of architecture. The main problem at this time, however, was that the computational means had not evolved to the extent that the proliferation of concepts in cybernetics could take a strong foothold. In general it remained in the realm of 'paper architecture'. Cedric Price was perhaps the most influential of the early architects to adopt the early theoretical work in cybernetics and extend it to an architectural concept of 'anticipatory architecture' [fig. 3].
… in a roundabout way because today, when we have these 'smart' environments everywhere, the obvious route would be to say that we have this space that is really smart; that understands the environment …

Corporate interests also develop in parallel
Corporate interests also developed market-driven roles, beginning in the late 1950s; these were extremely important as they directly involved users out in the real world, although they were not integrated with the earlier theoretical architectural concepts of interactivity. These cultural and corporate interests played major roles in influencing computationally-enhanced environments through the development of numerous market-driven products and systems. Computationally-driven environmental control systems were developed within buildings as a direct derivative of the introduction of sensors with remote signalling, allowing for a central control room. 7 The invention of the 'remote control' also came along at this time, enabling the user to assume a larger role as an operator of objects in space. In the 1970s, energy management systems were introduced, as were microprocessors, but for the most part the architecture world had yet to embrace the promises of such technologies from an interactive standpoint. In the 1980s, the PC became the interface that replaced the central console control, distributed direct digital control replaced conventional control systems, and communication could be programmed to take place on local area networks.

Eventually architects usurp enough to make something
In the 1990s, everywhere you turned there was a 'smart home' or 'smart workplace' project being initiated that relished the newly available technological advancements. It was a time when wireless networks, embedded computation, and sensor effectors became both technologically and economically feasible to implement. This feasibility fuelled …

These new standards are extremely exciting in light of the role of autocatalytic processes, defined here as a reaction product itself being the catalyst for its own reaction. In the context of modular reconfigurable robotics, such processes describe how the pace of technological change is accelerating. In other words, the process is 'autocatalytic' in that smart, articulate machines are helping to build even smarter, more articulate ones.
The potential is that in the near future, modular reconfigurable space could hugely impact the way people live in space, and the relationships between users and the space itself. If it is possible to build space out of parts that have the ability to reconfigure themselves, then it is really up to architects and designers to design how these pieces will come together and how these configurations will respond to the constant flow of information between inhabitant and space. In light of the potential of autocatalytic processes, robotics in architecture is not at the beginning, nor is it by any means at an end; but it is, in a sense, at the end of the beginning.
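As an illustrative aside of my own (not drawn from the sources cited here), the accelerating character of an autocatalytic process can be sketched numerically: the amount of 'product' x also acts as the catalyst for producing more of itself, so early growth compounds on itself before resources saturate. The starting value, rate constant, and capacity below are arbitrary illustrative choices.

```python
# Minimal autocatalytic (logistic) growth sketch: the product x catalyses
# its own production, so each step's gain is proportional to x itself.
def autocatalytic_series(x0=0.01, k=0.5, steps=20):
    xs = [x0]
    for _ in range(steps):
        x = xs[-1]
        xs.append(x + k * x * (1.0 - x))  # gain grows with the product x
    return xs

series = autocatalytic_series()
gains = [b - a for a, b in zip(series, series[1:])]
# Early on, each step produces more than the last: growth accelerates
# precisely because the product feeds back into its own production.
```

In the same loose sense, 'smart, articulate machines helping to build smarter ones' makes the machines themselves the x in the equation: the more capable the stock of machines, the faster the next generation arrives.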
… 12], were large enough to understand real human interactions, and were up long enough to understand emergent behaviours. Most architectural applications are neither self-organising, nor do they have the higher-level intelligence functions of heuristic and symbolic decision-making abilities.
Most applications do, however, exhibit behaviour based on low-level intelligence functions of automatic response and communication. When a large architectural element is responding to a single factor, a centralised system can be effective in executing a command to a single agent; but when there are many unknown stimuli, or many small autonomous parts, decentralised intelligence is the most effective way to handle the sensing and response. The more decentralised a system is, the more it relies on lateral relationships, and the less it can rely on overall commands.
In many cases traditional mechanical applications seem to be approaching the beginning of the end. Ironically, I came to these conclusions with a foundation in strictly mechanical typologies. While I believe that there is a great aesthetic honesty and dynamic appeal to mechanised kinetics in architecture, the potential benefits of a biological paradigm seem to outweigh those of the traditional mechanical paradigm. It is also important to remember that I am not advocating the end of mechanics, but simply a reinterpretation of the scale of the mechanics.
Mechanics, then, are interpreted as biological rather than mechanical in the sense of a machine.
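The centralised-versus-decentralised contrast described above can be made concrete with a small sketch of my own (the shading-panel scenario and all names are hypothetical, not from any project discussed here): a central controller issues one command to every panel from an averaged reading, while decentralised panels each respond only to their own local sensor.

```python
# Hypothetical facade of shading panels. Each sensor reads local light
# intensity in [0, 1]; a panel value of 1.0 means fully closed.

def centralised_control(local_light, threshold=0.5):
    """One controller averages all sensors and issues a single command."""
    average = sum(local_light) / len(local_light)
    command = 1.0 if average > threshold else 0.0
    return [command] * len(local_light)

def decentralised_control(local_light, threshold=0.5):
    """Each panel decides for itself from its own sensor alone."""
    return [1.0 if light > threshold else 0.0 for light in local_light]

# Uneven stimulus: one end of the facade in full sun, the other in shade.
readings = [0.9, 0.8, 0.7, 0.2, 0.1, 0.0]
```

With these uneven readings, the central controller sees an average of 0.45 and leaves every panel open, so the sunlit end goes unshaded; the decentralised panels shade exactly the sunlit half. Only local sensing matches the local stimulus, which is the point made above about many unknown stimuli and many small autonomous parts.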

I cannot really begin to describe my own development without a brief description of the historical context within which it lies. Essentially, the theoretical work of a number of people working in cybernetics in the early 1960s laid most of the foundations of interactive architecture. At this time, Gordon Pask and other cyberneticians, including Norbert Wiener, made advancements toward understanding and identifying the field of interactive architecture by formulating their theories on the topic [fig. 1, 2]. Pask's 'Conversation Theory' served as the basis of much of the architectural development in interactive architecture at the time. 1 Essentially, a model was developed in which architects interpreted spaces and users as complete feedback systems. Although Pask has recently been 'rediscovered' by the architectural community, he did fade away for quite some time. Pask's trouble was, for the most part, a lack of marketing potential in his physical proof-of-concept models. In general, it was also difficult for him and others at the time to get funding for anything that was not directly related to the development of the digital computer, including research in AI and cybernetics such as neural nets, evolutionary programming, biological computation, bionics, and so forth. Most research in these areas had to adapt to what could be implemented digitally in order to be funded. 2 Hence the work in these areas was not generally well funded, and therefore not prototyped, published, and disseminated. It did develop theoretically, however, in the late 1960s and early 1970s through the likes of William Brody, Nicholas Negroponte, Charles Eastman, Andrew Rabeneck and others, who expanded upon the earlier ideas explored in cybernetics by Pask and Wiener. Without going into any detail here, most of this theoretical work concerned interactive feedback systems related to adaptability.
Many of Price's unbuilt projects influenced an architecture of process that was indeterminate, flexible, and responsive to the changing needs of users and their times. 3 John Frazer extended Price's ideas in positing that architecture should be a 'living, evolving thing' [fig. 4]. It is important to note that Price and Frazer both worked directly with Pask in developing their work over many years. Frazer continued his work in the field for nearly thirty years with students at the Architectural Association in London 4 and other collaborators, and summarised it in the book An Evolutionary Architecture, with an introduction by Pask himself. His work focused heavily on biological and scientific analogies and the sciences of cybernetics, complexity, and chaos. Although not in the same league as the others mentioned here, I worked for Frazer, who subsequently became a strong influence in developing my own ideas.

Fig. 7: iSpa - Interactive Environment Developed in Architectural Robotics Course at Art Center College of Design
Fig. 8: iZoo - Interactive Environment Developed in Architectural Robotics Course at SCI-Arc
Fig. 9: Ex-Com Cubes - Interactive Exhibit Developed in Architectural Robotics Course at Hong Kong Poly U.

Fig. 10: FlockWall - Interactive Environment Developed in Architectural Robotics Course at Cal Poly Pomona

In a decentralised system there is normally no centralised control structure dictating how individual parts of a system should behave; local interactions between discrete systems therefore often lead to the emergence of global behaviour. The idea of behaviour that emerges became very interesting to me, and I began to explore it in very simple ways through a number of projects. An emergent behaviour can occur when a number of simple systems operating in an environment form more complex behaviours as a collective. The rules of response can be very simple, and the rules for interaction between each system can be equally simple, but the combination can produce interactions that become emergent and very difficult to predict.

…which led to the idea that architectural space itself could be made of robotic systems

I began moving away from developing traditional uses of automated mechanical devices in architecture to looking at the potential of transformable systems that are made up of a number of small robots. I taught numerous design studios in which students developed modular autonomous robotic modules [fig. 13, 14] that served as the base building …
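The simple-rules-to-collective-behaviour dynamic described above can be sketched in a few lines (my own illustrative example, not one of the student projects): each agent repeatedly adopts the average heading of itself and its two immediate neighbours, a purely local rule, yet the whole group converges on a shared direction that no individual was ever given.

```python
import random

def step(headings):
    """Local rule: each agent averages its own heading with its two
    immediate neighbours on a ring. No agent sees the global state."""
    n = len(headings)
    return [
        (headings[(i - 1) % n] + headings[i] + headings[(i + 1) % n]) / 3.0
        for i in range(n)
    ]

random.seed(1)
agents = [random.uniform(0.0, 360.0) for _ in range(30)]  # random headings
initial_spread = max(agents) - min(agents)

for _ in range(500):  # nothing but repeated local interactions
    agents = step(agents)

final_spread = max(agents) - min(agents)
# The group aligns: a global consensus emerges from purely local rules.
```

The interesting property, as noted above, is that neither the response rule nor the interaction rule contains the outcome; the alignment is a product of the collective and is hard to predict from any single agent's behaviour.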

Fig. 11: Bubbles, Interactive Environment - Project of Foxlin
Fig. 12: Neural Sky - Interactive Environment Developed in Architectural Robotics Course at Cal Poly Pomona
Fig. 13: Modular Autonomous Robotic Module Components - Student project at Cal Poly Pomona
Fig. 14: Modular Autonomous Robotic Module Components - Student project at SCI-Arc

I am very excited to witness the explosion of interest in interactive architectural environments, but caution that it should be pursued with an understanding of the inclusive historical context which laid the foundations in this area quite some time ago. Designing such environments is not inventing, after all, but appreciating and marshalling the technology that exists at any given time, and extrapolating it to suit an architectural vision. As we continue to expand what is possible today with the accessibility of new tools, we can begin to catch up with the past.