Monday, February 20. 2017

“My feeling is that the Bauhaus being conveniently located before the Second World War makes it safely historical,” says Dr. Peter Kapos. “Its objects have an antique character that is about as threatening as Arts and Crafts, whereas the problem with the Ulm School is that it’s too relevant. The questions raised about industrial design [still apply], and its project failed – its social project being particularly disappointing – which leaves awkward questions about where we are in the present.”

Kapos discovered the Hochschule für Gestaltung Ulm, or Ulm School, through his research into the German manufacturing company Braun, the representation of which is a specialism of his archive, das programm. The industrial design school had developed out of a community college founded by educationalist Inge Scholl and graphic designer Otl Aicher in 1946. It was established, as Kapos writes in the book accompanying the Raven Row exhibition, The Ulm Model, “with the express purpose of curbing what nationalistic and militaristic tendencies still remained [in post-war Germany], and making a progressive contribution to the reconstruction of German social life.”

The Ulm School closed in 1968, having undergone various forms of pedagogy and leadership, and crises of structure and personality. Neither the faculty nor the student body found a resolution to the problem inherent in industrial design’s claim to social legitimacy – “how the designer could be thoroughly integrated within the production process at an operational level and at the same time adopt a critically reflective position on the social process of production.” But while the Ulm School and the Ulm Model collapsed, the attempt remains an important resource: “it’s useful, even if the project can’t be restarted, because it was never going to succeed; the attempt is something worth recovering. Particularly today, under very difficult conditions.”

Max Bill, a graduate of the Bauhaus and then president of the Swiss Werkbund, arrived at Ulm in 1950, having been recruited partly in the hope that his international profile would attract badly needed funding. He tightened the previously broad curriculum, established by Marxist writer Hans Werner Richter, around design, mirroring the practices of his alma mater.

Bill’s rectorship ran from 1955 to 1958, during which “there was no tension between the way he designed and the requirements of the market”. The principle of the designer as artist, a popular notion at the Bauhaus, curbed the “alienating nature of industrial production”. Due perhaps in part to the trauma of the Second World War, people had not been ready to allow into the home technology that declared itself as technology.

“The result of that was record players and radios smuggled into the home, hidden in what looked like other pieces of furniture, with walnut veneers and golden tassels.” Bill’s way of thinking didn’t necessarily reflect that aesthetic, but it wasn’t at all challenging politically. “So in some ways that’s really straightforward and unproblematic – and he’s a fantastic designer, an extraordinary architect, an amazing graphic designer, and a great artist – but he wasn’t radical enough. What he was trying to do with industrial design wasn’t taking up the challenge.”

In 1958 Bill stepped down having failed to “grasp the reality of industrial production simply at a technical and operational level… [or] recognise its emancipatory potential.” The industrial process had grown in complexity, and the prospect of rebuilding socially was too vast for single individuals to manage. It was no longer possible for the artist-designer to sit outside of the production process, because the new requirements were so complex. “You had to be absolutely within the process, and there had to be a team of disciplinary specialists — not only of material, but circulation and consumption, which was also partly sociological. It was a different way of thinking about form and its relation to product.”

After Bill’s departure, Tomás Maldonado, an instructor at the school, “set out the implications for a design education adequate to the realities of professional practice.” Changes were made to the curriculum to reflect a critically reflective design practice, which he referred to as ‘scientific operationalism’, and subjects such as ‘the instruction of colour’ were dropped. Between 1960 and 1962, the Ulm Model was introduced: “a novel form of design pedagogy that combined formal, theoretical and practical instruction with work in so-called ‘Development Groups’ for industrial clients under the direction of lecturers.” And it was during this period that the issue of industrial design’s problematic relationship to industry came to a head.


In 1959, a year prior to the Ulm Model’s formal introduction, Herbert Lindinger, a student from a Development Group working with Braun, designed an audio system. A set of transistor equipment, it made no apologies for its technology, and looked like a piece of engineering. His audio system became the model for Braun’s 1960s audio programme, “but Lindinger didn’t receive any credit for it, and Braun’s most successful designs from the period derived from an implementation of his project. It’s sad for him but it’s also sad for Ulm design because this had been a collective project.”

The history of the Braun audio programme was written as being defined by Dieter Rams, “a single individual — he’s an important designer, and a very good manager of people, he kept the language consistent — but Braun design of the 60s is not a manifestation of his genius, or his vision.” And the episode became an indication of why the Ulm project would ultimately fail: “when recalling it, you end up with a singular genius expressing the marvel of their mind, rather than something that was actually a collective project to achieve something social.”

An advantage of Bill’s teaching model had been the space outside of the industrial process, “which is the space that offers the possibility of criticality. Not that he exercised it. But by relinquishing that space, [the Ulm School] ended up so integrated in the process that they couldn’t criticise it.” They realised the contradiction between Ulm design and consumer capitalism, which had been developing along the same timeline. “Those at the school became dissatisfied with the idea of design furnishing market positions, constantly producing cycles of consumptive acts, and they struggled to resolve it.”

The school’s project had been to make the world rational and complete, industrially based and free. “Instead they were producing something prison-like, individuals were becoming increasingly separate from each other and unable to see over their horizon.” In the Ulm Journal, the school’s sporadic, tactically published magazine that covered happenings at Ulm and its evolving thinking and pedagogical approach, Marxist thinking had become an increasingly important reference. “It was key to their understanding the context they were acting in, and if that thinking had been developed it would have led to an interesting and different kind of design, which they never got round to filling in. But they created a space for it.”

“[A Marxian approach] would inevitably lead you out of design in some way. And the Ulm Model, the title of the Raven Row exhibition, is slightly ironic because it isn’t really a model for anything, and I think they understood that towards the end. They started to consider critical design as something that had to not resemble design in its recognised form. It would be nominally designed; the categories by which it was generally intelligible would need to be dismantled.”

The school’s funding was equally problematic. While its independence from the state helped it validate its social purpose, the private foundation that provided its income relied on industry commissions and on indirect government funding from the regional legislature. “Although they were only partially dependent on government money, they accrued so much debt that in the end they were entirely dependent on it. The school was becoming increasingly radical politically, and the more radical it became, the more its own relation to capitalism became problematic. Their industry commissions tied them to the market, the Ulm Model didn’t work out, and their numbers didn’t add up.”

The Ulm School closed in 1968, when state funding was entirely withdrawn, and its functionalist ideals were in crisis. Abraham Moles, an instructor at the school, had previously asserted the inconsistency arising from the practice of functionalism under the conditions of ‘the affluent society’, “which for the sake of ever expanding production requires that needs remain unsatisfied.” And although he had encouraged the school to anticipate and respond to the problem, so as to be the “subject instead of the object of a crisis”; he hadn’t offered concrete ideas on how that might be achieved.

But correcting the course of capitalist infrastructure isn’t something the Ulm School could have been expected to achieve, “and although the project was ill-construed, it is productive as a resource for thinking about what a critical design practice could be in relation to capitalism.” What’s interesting about the Ulm School today is its consideration of the purpose of education, and its questioning of whether education should merely reflect the current state of things – “preparing a workforce for essentially increasing the GDP; and establishing the efficiency of contributing sectors in a kind of diabolical utilitarianism.”

Wednesday, February 08. 2017

Note: an interesting exhibition and economic setup (by Fala Atelier) for a show about the Metabolists in Lisbon – the Nakagin Capsule Tower specifically – staged in one of the many marvelous yet rotten “palaces” of the city!

Finding a place to live in Tokyo isn't easy. Most of the available options are expensive and usually located far from the center. Typologically, the Nakagin Capsule Tower continues to make sense. Designed by Kisho Kurokawa and built in 1972, it represented a new typology and a different approach to the idea of urban renewal.

Nevertheless, forty years later, it is clear that something went wrong along the way. The building is getting emptier, and several of the capsules are abandoned, rotten, leaking. Some of the owners want to demolish it; a few offer resistance. Each capsule was supposed to last 20 years, but twice that time has passed.

Metabolism’s biggest icon is sick and stands today only as a remembrance of a future that never happened.

Anticlimax was an exhibition about the contemporary routine of a fallen hero. While presenting its current condition, the exhibition intended to illustrate the contemporary daily life of one of the most iconic buildings of the 20th Century.

Exhibiting the former Metabolist superstar in Portugal was also a provocation, and the layout for the exhibition was a necessary curatorial dead-end: presenting the Nakagin in a traditional way would have been conceptually wrong. The exhibition happened in an almost negligent way, reflecting the condition of the building and bringing its sense of scale and repetition to the Sinel de Cordes Palace.

Monday, February 06. 2017

Now that machines (or should we rather say companies?) are starting to listen continuously, now that they are installed in your home or your everyday vicinity, we can start to see some glitches... can't we? It all sounds quite absurd, once again.

We can then envision situations where machines would be hacked (rather than only trolled) by sounds or phrases. Literally "Spells"!

Or, to reverse the process, we can envision machines pirating other machines, or even reproducing what other machines are doing simply by listening to the noise they make: a colleague recently pointed me to a special case in which a 3D print could possibly be copied and remade just by listening to the printing process in the first place. Reverse-engineer it (the printing process = movements = specific sounds) and you might end up being able to reprint it!
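For flavor, here is a deliberately naive sketch of the idea, not the method from the actual research: a stepper motor emits a tone whose frequency tracks its step rate, so estimating the dominant frequency of each slice of a recording gives an estimate of the print head's speed. Every constant below (the steps-per-mm calibration, the sample rate) is invented for the illustration.

```python
import math

STEPS_PER_MM = 80.0   # hypothetical printer calibration: motor steps per mm of travel
SAMPLE_RATE = 8000    # Hz, of the (synthetic) microphone recording
WINDOW = 1024         # audio samples per analysis window

def dominant_frequency(window):
    """Estimate a tone's frequency by counting zero crossings (two per cycle)."""
    crossings = sum((a >= 0) != (b >= 0) for a, b in zip(window, window[1:]))
    duration = (len(window) - 1) / SAMPLE_RATE
    return crossings / (2 * duration)

def estimate_speeds(audio):
    """Map the motor tone of each window back to a head speed in mm/s."""
    return [
        dominant_frequency(audio[i:i + WINDOW]) / STEPS_PER_MM
        for i in range(0, len(audio) - WINDOW + 1, WINDOW)
    ]

# Synthetic "recording": one second of a stepper running at 1600 steps/s,
# which at 80 steps/mm corresponds to a head speed of 20 mm/s.
audio = [math.sin(2 * math.pi * 1600 * n / SAMPLE_RATE) for n in range(SAMPLE_RATE)]
print(round(estimate_speeds(audio)[0], 1))  # close to 20.0
```

A real attack would have to separate several motors, infer direction as well as speed, and cope with noise; the point here is only that the audio carries enough structure to invert.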

Early during tonight’s game, Google’s ad for the Google Home aired on millions of TVs. We’ve actually seen the ad before: loving families at home meeting, hugging, and being welcomed by the Google Assistant. Someone says “OK Google,” and those familiar, colorful lights pop up.

But then my Google Home perked up, confused. “Sorry,” it said. “Something went wrong.” I laughed, because that wasn’t supposed to happen. I wasn’t the only one.

Poor Dave... at some point, some enterprising TV writer or ad jerk is gonna plant an “OK Google” into something on TV with intent and force everyone to listen to Nickelback. Mark my words. This is a massive troll waiting to happen.

Note: following the two previous posts about algorithms and bots (“how do they ... ?”), here comes a third one.

It is slightly different and not really dedicated to bots per se, but it can be considered related to “machinic intelligence” nonetheless. This time it concerns techniques and algorithms developed to understand the brain (the BRAIN Initiative, or in Europe the competing Blue Brain Project).

In a funny reversal, scientists applied techniques and algorithms developed to track patterns of intelligence in brain data sets to the computer itself. How does a simple chip “compute information”? The results are surprising: the algorithms can’t explain how the computer “thinks” (or rather works, in this case)!

All of which seems to confirm that the brain is certainly not a computer (made out of flesh)...

When you apply tools used to analyze the human brain to a computer chip that plays Donkey Kong, can they reveal how the hardware works?

Many research schemes, such as the U.S. government’s BRAIN initiative, are seeking to build huge and detailed data sets that describe how cells and neural circuits are assembled. The hope is that using algorithms to analyze the data will help scientists understand how the brain works.

But those kinds of data sets don’t yet exist. So Eric Jonas of the University of California, Berkeley, and Konrad Kording from the Rehabilitation Institute of Chicago and Northwestern University wondered if they could use their analytical software to work out how a simpler system worked.

They settled on the iconic MOS 6502 microchip, which was found inside the Apple I, the Commodore 64, and the Atari Video Game System. Unlike the brain, this slab of silicon is built by humans and fully understood, down to the last transistor.

The researchers wanted to see how accurately their software could describe its activity. Their idea: have the chip run different games—including Donkey Kong, Space Invaders, and Pitfall, which have already been mastered by some AIs—and capture the behavior of every single transistor as it did so (creating about 1.5 GB per second of data in the process). Then they would turn their analytical tools loose on the data to see if they could explain how the microchip actually works.

For instance, they used algorithms that could probe the structure of the chip—essentially the electronic equivalent of a connectome of the brain—to establish the function of each area. While the analysis could determine that different transistors played different roles, the researchers write in PLOS Computational Biology, the results “still cannot get anywhere near an understanding of the way the processor really works.”
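The spirit of that limitation can be reproduced in miniature. In the hypothetical sketch below (invented here, not taken from the paper), each gate of a tiny netlist is described purely by its wiring, the way a connectome describes neurons, and functionally different gates turn out to be indistinguishable.

```python
# A 1-bit full adder written as a wiring graph: each gate lists the signals it reads.
# The circuit and the "analysis" are illustrative only.
NETLIST = {
    "xor1": ["a", "b"],
    "xor2": ["xor1", "cin"],
    "and1": ["a", "b"],
    "and2": ["xor1", "cin"],
    "or1":  ["and1", "and2"],
}

def wiring_signature(node):
    """(fan-in, fan-out) of a gate: all the 'structure' this analysis allows itself."""
    fan_in = len(NETLIST.get(node, []))
    fan_out = sum(node in inputs for inputs in NETLIST.values())
    return fan_in, fan_out

for gate in NETLIST:
    print(gate, wiring_signature(gate))
# "xor2" and "or1" share a signature, as do "and1" and "and2", even though each
# pair computes different logical functions: wiring alone can't tell them apart.
```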

Elsewhere, Jonas and Kording removed a transistor from the microchip to find out what happened to the game it was running—analogous to so-called lesion studies where behavior is compared before and after the removal of part of the brain. While the removal of some transistors stopped the game from running, the analysis was unable to explain why that was the case.
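A toy analogue of that lesion experiment (again invented for illustration): knock out the gates of a simulated 1-bit full adder one at a time and record which knockouts change the output. Even at this tiny scale, the lesion map says only that every gate matters for some input, not what any gate actually contributes.

```python
from itertools import product

GATES = ["xor1", "xor2", "and1", "and2", "or1"]

def full_adder(a, b, cin, lesion=None):
    """Compute (sum, carry); `lesion` forces the named gate's output to 0."""
    def gate(name, value):
        return 0 if name == lesion else value
    x = gate("xor1", a ^ b)
    s = gate("xor2", x ^ cin)
    c1 = gate("and1", a & b)
    c2 = gate("and2", x & cin)
    cout = gate("or1", c1 | c2)
    return s, cout

def lesion_study():
    """For each gate, does knocking it out change the output on any input?"""
    return {
        g: any(full_adder(a, b, c) != full_adder(a, b, c, lesion=g)
               for a, b, c in product([0, 1], repeat=3))
        for g in GATES
    }

print(lesion_study())  # every gate turns out to be "necessary"
```

Each knockout breaks the adder on some input, which mirrors the finding that removing certain transistors stopped the games without the analysis being able to say why.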

In these and other analyses, the approaches provided interesting results—but not enough detail to confidently describe how the microchip worked. “While some of the results give interesting hints as to what might be going on,” explains Jonas, “the gulf between what constitutes ‘real understanding’ of the processor and what we can discover with these techniques was surprising.”

It’s worth noting that chips and brains are rather different: synapses work differently from logic gates, for instance, and the brain doesn’t distinguish between software and hardware like a computer. Still, the results do, according to the researchers, highlight some considerations for establishing brain understanding from huge, detailed data sets.

First, simply amassing a handful of high-quality data sets of the brain may not be enough for us to make sense of neural processes. Second, without many detailed data sets to analyze just yet, neuroscientists ought to remain aware that their tools may provide results that don’t fully describe the brain’s function.

As for the question of whether neuroscience can explain how an Atari works? At the moment, not really.

fabric | rblg

This blog is the survey website of fabric | ch - studio for architecture, interaction and research.

We curate and reblog articles, research, writing, exhibitions and projects that we notice and find interesting in the course of our everyday practice and readings.

Most articles concern the intertwined fields of architecture, territory, art, interaction design, thinking and science. From time to time, we also publish documentation about our own work and research, immersed among these related resources and inspirations.

This website is used by fabric | ch as an archive and as a set of references and resources. It is shared with all those interested in the same topics as we are, in the hope that they too will find valuable references and content in it.