current thoughts on design


Remember the telephone game you used to play as a child – where a starting phrase like “Mary had a little lamb” can quickly turn into “Annie ate a ham”? The same thing happens in many design/construction projects. We’re all familiar with conventional construction methodologies such as design-bid-build and design-build. And for the most part, we can also relate to the inefficiencies of translating between design, drawing, and what’s built. Like the telephone game, the process is vulnerable to errors in translation. Because of this, extra time is invested in projects simply to correct them.

As I sit here poring over my notes for my next ARE on construction drawings, I wonder how much of what I’m reading will still be applicable in the next ten, or even five, years. We’re living in a time of the “now,” and the construction industry is changing rapidly and intelligently.

If translation is the issue, can we remove this component (i.e., drawings) altogether and instead build intelligence into the production and assembly?

4th Dimension in Construction

During the ACSA Conference (CCA, San Francisco) in 2013, I listened to keynote speaker Gregg Pasquarelli lecture about a few of SHoP’s recent projects. It was only months after the completion of the Barclays Center – a project that pushed fabrication not only into the digital realm, but even into the fourth dimension (time and schedule)! First, for the exterior cladding panels, the designers created an algorithm that translated the model directly to the CNC routers without the need for shop-drawing interpretation. Second, each panel had a special chip installed on it (similar to a QR code system) that allowed the panel to be identified in space and in time over the course of the construction assembly. Through the ease of an app, the designers, contractors, and fabricators could easily identify where and when each custom cladding panel fit into the overall structure and sequencing.[1]
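The chip-per-panel idea boils down to a lookup: scan a tag, and get back where and when that panel belongs. Here is a minimal sketch of such a registry – the panel IDs, coordinates, and schedule weeks are all invented for illustration, not taken from the actual Barclays system:

```python
from dataclasses import dataclass

@dataclass
class Panel:
    panel_id: str                  # hypothetical ID, standing in for the chip/QR tag
    position: tuple                # (x, y, z) location in the digital model
    install_week: int              # scheduled week in the construction sequence
    installed: bool = False        # updated as assembly progresses

# Toy registry: scanning a panel's tag looks up where and when it belongs.
registry = {
    "P-001": Panel("P-001", (12.0, 4.5, 30.2), install_week=3),
    "P-002": Panel("P-002", (12.0, 6.0, 30.2), install_week=4),
}

def scan(panel_id: str) -> str:
    """What the app would show a contractor after scanning a panel's chip."""
    p = registry[panel_id]
    return f"{p.panel_id}: install at {p.position} in week {p.install_week}"
```

The point is that location and sequence live in the model itself, so no shop-drawing translation step is needed between designer and installer.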

The Barclays Center was a great test bed for this technology. Because of its scale, it allowed the investors to see the full impact this approach could have on a project – in terms of time savings and reductions in construction errors. We no longer have the luxury of time in construction, especially not with the incoming 10 billion people in the next century!

Robotic Construction

As early pioneers of digital and automated fabrication, the designers at Gramazio Kohler Architects have long experimented with the use of robots in construction. Starting with their earlier work – Structural Oscillations (2007-08 Venice Biennale) – they used a robot to construct an undulating wall of bricks, each piece rotated to exact precision per the digital model. By 2011-12, they were able to replicate a similar experiment with flying robots. Since then it has grown into a body of research at ETH Zurich – Flight Assembled Architecture – where the group has similarly constructed a tensile-structured bridge.[2]

Robots are becoming the tools of the future. Researchers such as Madeline Gannon, whom I met at the makerspace symposium at CCA (2015), are already ahead of the game and are looking into more creative ways in which we, humans, can interact and potentially collaborate with these seemingly “rigid” and “instructive” machines. It’s the telephone game all over again: can we cut out the language barrier of a series of construction drawings and instead interact directly with the tools through human gestures? Can robots be an extension of our arms and fingers?[3]
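Gesture-driven control is, at its simplest, a mapping from a recognized gesture to a machine motion – the drawing set drops out of the loop entirely. A toy sketch, where the gesture names and motions are purely illustrative assumptions (no particular robot or vision API is implied):

```python
# Toy mapping from a recognized hand gesture (e.g. from a depth camera)
# straight to a robot motion. All entries here are hypothetical.
GESTURE_TO_MOTION = {
    "point":  ("move_to", "target under fingertip"),
    "pinch":  ("grip", "close end-effector"),
    "spread": ("release", "open end-effector"),
    "wave":   ("pause", "hold current position"),
}

def interpret(gesture: str):
    """Translate a gesture label into a (command, description) pair."""
    return GESTURE_TO_MOTION.get(gesture, ("idle", "no recognized gesture"))
```

The real research question, of course, is the recognition step and the safety envelope around it – but the translation layer itself can be this thin.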

Sensor Build

There is currently a resident at the Autodesk Pier 9 Workshop in San Francisco, Maria Yablonina, whom I’d like to meet and ask about her robotic construction research. I think Yablonina, along with Gannon and Gramazio and Kohler, is spearheading the next age of construction. In one of her previous research projects, she employed two communicating robots that worked together to build a structure out of fiber filament. Through the combination of basic proximity sensors and simple robotic mechanisms (one-degree rotation and x-y translation), Maria was able to compose a coordinated series of movements that intertwined the fiber into an overall structure.[4]
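The core of that two-robot choreography is simple to state: each machine advances to its next anchor point only when its proximity sensor says the other robot is far enough away. A toy planner under that assumption – the anchor coordinates and safety threshold are invented, and this is a sketch of the idea, not Yablonina’s actual control code:

```python
import math

# Hypothetical proximity threshold below which a robot yields its turn.
SAFE_DISTANCE = 1.0

def plan_winding(anchors_a, anchors_b):
    """Interleave two robots' anchor visits, skipping any move that would
    bring one robot within the safety threshold of the other."""
    pos_a, pos_b = anchors_a[0], anchors_b[0]
    path = []
    for next_a, next_b in zip(anchors_a[1:], anchors_b[1:]):
        if math.dist(next_a, pos_b) > SAFE_DISTANCE:
            path.append(("A", next_a))
            pos_a = next_a
        if math.dist(next_b, pos_a) > SAFE_DISTANCE:
            path.append(("B", next_b))
            pos_b = next_b
    return path

# Example: two robots winding from opposite ends of a frame.
path = plan_winding([(0, 0), (2, 0), (4, 0)], [(10, 0), (8, 0), (6, 0)])
```

Each entry in `path` is one robot's next move; the alternation is what lets the fibers cross and intertwine.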

Now imagine this at the building scale, where construction is no longer governed by complicated machinery and a whole army of contractors. Can a building be built via simple mechanisms that intercommunicate (based on proximity and structural analysis) and create complexity rather than on-site complications? Can a structure have nano-sensors embedded into its fibers to help sense a failing structure (and, in turn, actuate the robots to self-correct the structure in real time)? This is something that Nish Kothari and I are interested in testing in a Traveling Pavilion project.
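The sense-and-self-correct idea is a feedback loop: sensors report strain, and when a reading drifts past a failure threshold, an actuation relieves it. A deliberately simple sketch – the threshold, the correction amount, and the one-sensor-at-a-time policy are all illustrative assumptions, not a structural model:

```python
STRAIN_LIMIT = 0.8   # hypothetical failure threshold
CORRECTION = 0.3     # hypothetical strain relieved per robotic actuation

def self_correct(strains, max_rounds=10):
    """Repeatedly actuate at the worst sensor until all readings are safe.

    Returns the corrected readings and a log of which sensor indices
    triggered an actuation, in order.
    """
    strains = list(strains)
    log = []
    for _ in range(max_rounds):
        worst = max(range(len(strains)), key=lambda i: strains[i])
        if strains[worst] <= STRAIN_LIMIT:
            break  # every sensor reads safe; nothing to correct
        strains[worst] -= CORRECTION
        log.append(worst)
    return strains, log
```

In a real pavilion the “correction” would be a physical move by the robots, and the loop would run continuously as the structure carries load.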

With research groups such as the deep reinforcement learning group at UC Berkeley on the forefront of artificial intelligence, it’s not too far in the future before we can embed this “smartness” into sensors. With the integration of technologies such as the Kinect sensor, a robot is no longer just an automated processor, but rather a machine that can sense and learn to build in context.