Saturday, July 15. 2017

Note: Summer is coming again and, like every year now, it's time to dig into unread books or articles! "Luckily", and due to other activities, we didn't publish much since last Summer, so it won't be too much of a hassle to catch up. Nonetheless, there are now almost 2000 entries on | rblg...

So, I hope you'll enjoy your Summer readings (on the beach... or on the rocks)! On my side, I'll certainly try to do the same and will be back posting in September.

As we lack a decent search engine on this blog and as we don't use a "tag cloud" either... but because Summer is certainly one of the best periods of the year to spend time reading and digging into past content and topics:

HERE ARE ALL THE CURRENT UPDATED CATEGORIES TO NAVIGATE ON | RBLG BLOG:

(to be seen below if you're navigating on the blog's html pages or here for rss readers)

James Bridle entraps a self-driving car in a "magic" salt circle. Image: Still from Vimeo, "Autonomous Trap 001."

As if the challenges of politics, engineering, and weather weren't enough, now self-driving cars face another obstacle: purposeful visual sabotage, in the form of specially painted traffic lines that entice the car in before trapping it in an endless loop. As profiled in Vice, the artist behind "Autonomous Trap 001," James Bridle, is demonstrating an unforeseen hazard of automation: those forces which, for whatever reason, want to mess it all up. Which raises the question: how does one effectively design for an impish sense of humor, or a deadly series of misleading markings?

Monday, March 27. 2017

Note: in direct link with the previous post about VR, an interesting evening discussion next April at the Bartlett School of Architecture about the relation between architecture and videogames (by extension, the architecture of videogames? and/or the architecture in videogames?).

Or, if we go for older references in our own work, this reminds me of projects in which we explored this relation between architecture and the artificial environments of games or interactive 3D spaces, like for example the MIX-m project (2005) or even La_Fabrique (1999 (!))... Hum.

REALMS is an evening discussion on the relationship between video games and architecture held at the Bartlett School of Architecture as part of the London Games Festival 2017. As games become ever more complex and immersive, and architects increasingly adopt game technologies for visualizing and exploring their design ideas, Realms asks what the shared future of the two mediums may be. Might architects turn towards realizing ideas in virtual realms in the face of financial pressures, and what can we learn from the weird and wonderful spatial experiences that games can offer us?

REALMS is an evening of informal talks from architects, writers and game developers followed by a panel discussion and audience Q&A. It will provide a platform for the free discussion of how architecture and video games may develop together both technologically and culturally. As part of Realms we will also showcase architecture student work from the Bartlett that deals with the relationship between architecture and video game space.

Monday, March 20. 2017

Note: Obviously, it was just a matter of time before something like this (virtual virtual reality) happened! "Virtual reality" is part of "reality", isn't it? So why not represent it as well, as part of VR... Etc.

Which brings us to the 20-year-old question: when will we start triggering new experiences with VR that are not necessarily linked to some kind of representation, even if this representation is a "hallucination", or some sort of surrealistic visual narrative, as stated here?

But this question addresses the paradoxical limitations, or presuppositions, of the medium itself, so to speak. It seems to open doors to alternate realities, but at the same time it is entirely based on perspective, human vision and sound perception. These are in fact quite limiting and hard to overcome, but they are nonetheless dimensions of human perception that artistic practices of different sorts have long challenged.

"In the near future, most jobs have been automated. What is the purpose of humanity? Activitude, the Virtual Labor System, is here to help. Your artisanal human companionship is still highly sought by our A.I. clients. Strap on your headset. Find your calling.

Pssst. . . Sure, you could function like a therapy dog to an A.I. in Bismarck and watch your work ratings climb, but don’t you yearn for something more: adventure, conflict, purpose? Escape backstage into Activitude’s system by putting on an endless series of VR headsets in VR. Outrun Chaz, your manager, as he attempts to boot you out PERMANENTLY. Along the way, uncover the story of Activitude’s evolution from VR start-up to the “human purpose aggregator” it is today."

Monday, February 06. 2017

Note: following the two previous posts about algorithms and bots ("how do they ... ?"), here comes a third one.

Slightly different and not really dedicated to bots per se, but one that could be considered as related to "machinic intelligence" nonetheless. This time it concerns techniques and algorithms developed to understand the brain (the BRAIN Initiative or, in Europe, the competing Blue Brain Project).

In a funny reversal, scientists applied techniques and algorithms developed to track patterns of human intelligence in large data sets to the computer itself. How does a simple chip "compute information"? And the results are surprising: the tools can't explain how the computer "thinks" (or rather works, in this case)!

All of which seems to confirm that the brain is certainly not a computer (made out of flesh)...

When you apply tools used to analyze the human brain to a computer chip that plays Donkey Kong, can they reveal how the hardware works?

Many research schemes, such as the U.S. government’s BRAIN initiative, are seeking to build huge and detailed data sets that describe how cells and neural circuits are assembled. The hope is that using algorithms to analyze the data will help scientists understand how the brain works.

But those kinds of data sets don't yet exist. So Eric Jonas of the University of California, Berkeley, and Konrad Kording from the Rehabilitation Institute of Chicago and Northwestern University wondered if they could use their analytical software to work out how a simpler system worked.

They settled on the iconic MOS 6502 microchip, which was found inside the Apple I, the Commodore 64, and the Atari Video Computer System. Unlike the brain, this slab of silicon is built by humans and fully understood, down to the last transistor.

The researchers wanted to see how accurately their software could describe its activity. Their idea: have the chip run different games—including Donkey Kong, Space Invaders, and Pitfall, which have already been mastered by some AIs—and capture the behavior of every single transistor as it did so (creating about 1.5 GB per second of data in the process). Then they would turn their analytical tools loose on the data to see if they could explain how the microchip actually works.

For instance, they used algorithms that could probe the structure of the chip—essentially the electronic equivalent of a connectome of the brain—to establish the function of each area. While the analysis could determine that different transistors played different roles, the researchers write in PLOS Computational Biology, the results “still cannot get anywhere near an understanding of the way the processor really works.”

Elsewhere, Jonas and Kording removed a transistor from the microchip to find out what happened to the game it was running—analogous to so-called lesion studies where behavior is compared before and after the removal of part of the brain. While the removal of some transistors stopped the game from running, the analysis was unable to explain why that was the case.
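The "lesion" logic is easy to reproduce on a toy circuit. Below is a minimal sketch (plain Python, purely illustrative and not code from the actual PLOS study): a 1-bit full adder built from named gates, where "lesioning" a gate forces its output to 0, the software equivalent of pulling a transistor. Counting which input cases break tells you that a lesion matters, but, as Jonas and Kording found at chip scale, not why.

```python
# A toy "lesion study" on a 1-bit full adder built from named gates.
# Forcing one gate's output to 0 mimics removing a transistor.
# (Illustrative sketch only -- all names are invented for this example.)

def full_adder(a, b, cin, lesion=None):
    """Return (sum, carry); if 'lesion' names a gate, its output is forced to 0."""
    def gate(name, value):
        return 0 if name == lesion else value
    x1 = gate("xor1", a ^ b)      # partial sum
    s  = gate("xor2", x1 ^ cin)   # final sum bit
    a1 = gate("and1", a & b)      # carry from a and b
    a2 = gate("and2", x1 & cin)   # carry from partial sum and cin
    c  = gate("or1",  a1 | a2)    # carry out
    return s, c

def behaviour(lesion=None):
    """Full input/output table of the circuit (all 8 input cases)."""
    return [full_adder(a, b, c, lesion)
            for a in (0, 1) for b in (0, 1) for c in (0, 1)]

healthy = behaviour()
for g in ("xor1", "xor2", "and1", "and2", "or1"):
    broken = sum(h != l for h, l in zip(healthy, behaviour(g)))
    print(f"lesioning {g}: {broken} of 8 input cases change")
```

Each lesion changes the observable behaviour in some input cases and not others; from that table alone you can rank gates by "importance", yet you still cannot read off that, say, `or1` computes the carry, which is the gap the researchers describe.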

In these and other analyses, the approaches provided interesting results—but not enough detail to confidently describe how the microchip worked. “While some of the results give interesting hints as to what might be going on,” explains Jonas, “the gulf between what constitutes ‘real understanding’ of the processor and what we can discover with these techniques was surprising.”

It’s worth noting that chips and brains are rather different: synapses work differently from logic gates, for instance, and the brain doesn’t distinguish between software and hardware like a computer. Still, the results do, according to the researchers, highlight some considerations for establishing brain understanding from huge, detailed data sets.

First, simply amassing a handful of high-quality data sets of the brain may not be enough for us to make sense of neural processes. Second, without many detailed data sets to analyze just yet, neuroscientists ought to remain aware that their tools may provide results that don't fully describe the brain's function.

As for the question of whether neuroscience can explain how an Atari works? At the moment, not really.

Thursday, January 26. 2017

Note: I just read this piece of news the other day about Echo (Amazon's "robot assistant"), which accidentally attempted to buy a large amount of toys by (always) listening and misunderstanding a phrase spoken on TV by a presenter (and therefore captured by Echo in the living room, and so on)... It is so "stupid" (I mean, we can see how the act of buying linked to these so-called "A.I."s is automated by default configuration), but revealing of the kind of feedback loops that can happen with automated decisions delegated to bots and machines.

It's nothing new for voice-activated devices to behave badly when they misinterpret dialogue -- just ask anyone watching a Microsoft gaming event with a Kinect-equipped Xbox One nearby. However, Amazon's Echo devices are causing more of that chaos than usual. It started when a 6-year-old Dallas girl inadvertently ordered cookies and a dollhouse from Amazon by saying what she wanted. It was a costly goof ($170), but nothing too special by itself. However, the response to that story sent things over the top. When San Diego's CW6 discussed the snafu on a morning TV show, one of the hosts made the mistake of saying that he liked when the girl said "Alexa ordered me a dollhouse." You can probably guess what happened next.

Sure enough, the channel received multiple reports from viewers whose Echo devices tried to order dollhouses when they heard the TV broadcast. It's not clear that any of the purchases went through, but it no doubt caused some panic among people who weren't planning to buy toys that day.

It's easy to avoid this if you're worried: you can require a PIN code to make purchases through the Echo or turn off ordering altogether. You can also change the wake word so that TV personalities won't set off your speaker in the first place. However, this comedy of errors also suggests that there's a lot of work to be done on smart speakers before they're truly trustworthy. They may need to disable purchases by default, for example, and learn to recognize individual voices so that they won't respond to everyone who says the magic words. Until then, you may see repeats in the future.

Thursday, January 19. 2017

Note: let's "start" this new (delusional?) year with this short video about the ways "they" see things, and us. They? The "machines" of course, the bots, the algorithms...

An interesting reassembled trailer posted by Matthew Plummer-Fernandez on his Tumblr #algopop, which documents the "appearance of algorithms in popular culture". Matthew was with us back in 2014 to collaborate on a research project at ECAL (which will soon end, btw) that worked around this idea of bots in design.

Will this technological future become "delusional" as well, if we don't care enough, as essayist Eric Sadin suggests in his recent book "La silicolonisation du monde" (in French only at this time)?

Possibly... It is without doubt up to each of us (to act), just as it is in our everyday life in common with our fellow human beings!

Wednesday, November 30. 2016

Note: I'll be pleased to be in Paris next Friday and Saturday (02-03.12) at the Centre Culturel Suisse and in the company of an excellent line up (!Mediengruppe Bitnik, Nicolas Nova, Yves Citton, Tobias Revell & Nathalie Kane, Rybn, Joël Vacheron and many others) for the conference and event "Bot Like Me" curated by Sophie Lamparter and Luc Meier.

In particular, it served, in the frame of this research project, as a source of critical inspiration for a workshop we were preparing to lead with students at that time (critical because "magic" in the context of technology means what it means: being tricked and not understanding, therefore believing or being "stupefied").

For documentation purposes, I reblog this post as well on | rblg, as it brings different ideas about the "sublime" related to data or data centers, creation, and contemporary technology in general.

It may be a bit hard to follow without the initial context (a brief by the invited guests, Random International, and the general objectives of the project), but this context can be accessed from within the post -below-, for those interested in digging deeper.

...

As a matter of fact, this whole topic also makes me think of the film The Prestige by Christopher Nolan, in which the figure of Nikola Tesla (played by "The Man Who Fell to Earth" himself, a.k.a. David Bowie) is depicted as a character very close to a magician, his inventions with electricity being understood at the margin between science and magic.

Following the publication of Dev Joshi’s brief on the I&IC documentary blog yesterday (note: 10.11.2015), I took the opportunity today to briefly introduce it to the interaction design students who will be involved in the workshop next week. In particular, I focused on some points of the brief that were important but possibly quite new concepts for them. I also extended some implicit ideas with images that could obviously bring ideas about devices to build to access some past data, or “shadows” as Dev names them.

What comes out in a very interesting way for our research in Dev’s brief is the idea that the data footprints each of us leaves online on a daily basis (while using all types of digital services) could be considered as past entities of ourselves, or trapped, forgotten, hidden, … (online) fragments of our personalities… waiting to be contacted again.

How many different versions of you are there in the cloud? If they could speak, what would they say?

Yet, interestingly, while the term “digital footprint” is generally used in English to depict this situation (the data traces each of us leaves behind), in French we rather use the term “ombre numérique” (literally “digital shadow”). That’s why we decided with Dev that it was preferable to use this term as the title for the workshop (The Everlasting Shadows): it is somehow a more vivid expression that can bring quite direct ideas when it comes to thinking about designing “devices” to “contact” these “digital entities” or make them visible again in some way.

By extension, we could also start to speak about “digital ghosts”, as this expression is also commonly used (not to mention the “corps sans organes” of G. Deleuze/F. Guattari and, previously, A. Artaud). Many “ghosts”/facets of ourselves? All trapped online in the form of zombie data?

Your digital ghosts are trapped on islands around the cloud – is there a way to rescue them? Maybe they just need a shelter to live in now that you have moved on?

… or a haunted house?

And this again is a revealing parallel, because it opens the whole conceptual idea to beliefs… (about ghosts? about personal traces and shadows? about clouds? and finally, about technology? …)

What about working, then, with inspirations from the domain of spiritualism and its rich iconography, and producing “devices” to communicate with your dead past data entities?

Fritz Lang. “Dr. Mabuse, the Gambler”, movie, 1922.

Or even start to think about some kind of “wearables”, and then become a new type of fraudulent technological data psychic?

We could even dig deeper into these “beliefs” and start looking at old illustrations and engravings that depict relations to “things that we don’t understand”, that are “beyond our understanding”… and that possibly show “tools” or strange machinery to observe or communicate with these “unknown things” (while trying to understand them)?

This last illustration could also lead us, by extension and a very straight shortcut, to the idea of the Sublime (in art, but also in philosophy), especially the romantic works of painters from that period (late 18th and early 19th centuries, among them W. Turner, C. D. Friedrich, E. Delacroix, T. Cole, etc.)

Overwhelmed by the presentiment of a nature that dominated humans in every dimension, and that remained at the time mostly unexplained and mysterious, if not dangerous and feared, some painters took on this feeling, named “sublime” after Edmund Burke’s Philosophical Enquiry (1757), and started painting dramatic scenes of humans facing the forces of nature.

It is not by chance, of course, that I end my “esoteric comments about the brief” post with this idea of the Sublime. Recently, the concept has found a new life in regard to technology and its central yet “unexplained, mysterious, if not dangerous and feared” role in our contemporary society. The term was extended on this occasion to become the “Technological Sublime”, implicitly comparing the once dominant and “beyond our understanding” Nature to our contemporary technology.

So, to complete my post with a last question: is the Cloud, which everybody uses but nobody seems to understand, a technologically sublime artifact? Wouldn’t it be ironic that an infrastructure whose aim is to be absolutely rational and functional ultimately contributes to creating a completely opposite feeling?

fabric | rblg

This blog is the survey website of fabric | ch - studio for architecture, interaction and research.

We curate and reblog articles, research, writings, exhibitions and projects that we notice and find interesting in the course of our everyday practice and readings.

Most articles concern the intertwined fields of architecture, territory, art, interaction design, thinking and science. From time to time, we also publish documentation about our own work and research, immersed among these related resources and inspirations.

This website is used by fabric | ch as archive, references and resources. It is shared with all those interested in the same topics as we are, in the hope that they will also find valuable references and content in it.