Yesterday afternoon, I explored the town of Owego. I ran across a used DVD, CD & electronics store and bought an Atari Flashback Console for $25. I didn’t even know these existed.

I can plug it directly into their video synth system. After futzing around with the various patch cables, I came up with this 5-minute composition, which shows me playing the game. The audio sounds like a march through dirty noise.

Also, here is the latest 3D model from my code, which now has a true 3D axis for data-plotting.

In the first full day of the residency at Signal Culture, I played around with the video and audio synthesizers. It’s a new world for me.

While my focus is on the Machine Data Dreams project, I also want to play with what they have and get familiar with the amazing analog equipment.

I started with this 2-minute video, which I shot earlier this summer at the Musée d’Orsay. I had to document the odd spectacle: visitor after visitor taking photos of this famous Van Gogh self-portrait…despite the fact that you can get a higher-quality version online.

I ran this through a few patches and into the Wobbulator, which affects the electronic signal on the CRT itself.

Ewa Justka, the toolmaker-in-residence here, who is building her own audio synthesizer, spruced up the accompanying audio. I captured a 20-minute sample.

What I love about the result is that the repetitive 2-minute video takes on its own life, as the two of us tweaked knobs, made live patches and laughed a lot.

I was notified months ago, but the project was on the back burner until now, when I’m beginning some initial research and experiments at a residency called Signal Culture. I expect full immersion in the fall.

The project description
Machine Data Dreams will be a large-scale sculptural installation that maps the emerging sentience of machines (laptops, phones, appliances) into physical form. Using the language of machines — software program code — as linguistic data points, Scott Kildall will write custom algorithms that translate how computers perceive the world into physical representations that humans can experience.

The project’s narrative proposition is that machines are currently prosthetic extensions of ourselves, and in the future, they will transcend into something sentient. Computer chips not only run our laptops and phones, but increasingly our automobiles, our houses, our appliances and more. They are ubiquitous and yet often silent. The key to understanding their perspective is to envision how machines view the world, in an act of synthetic synesthesia.

Scott will write software code that will perform linguistic analysis on machine syntax from embedded systems — human-programmable machines that range from complex, general-purpose devices (laptops and phones) to specific-use machines (refrigerators, elevators, etc.). Scott’s code will generate virtual 3D geometric monumental sculptures. More complex structures will reflect the higher-level machines, and simpler structures will be generated from lower-level devices. We are intrigued by the experimental nature of what the form will take — this is something that he will not be able to plan.

Machine Data Dreams will utilize 3D printing and laser-cutting, digital fabrication techniques that are changing how sculpture can be created — entirely from software algorithms. Simple, hidden electronics will control LED lights to imbue the artwork with a sense of consciousness. Plastic joints will be connected via aluminum dowels to form an armature of irregular polygons. The exterior panels will be clad in semi-translucent acrylic, adhered magnetically to the large structures. The various installations can easily be disassembled and reassembled.

The project will build on my experiments with Michael Ang’s Polycon Construction Kit, where I’m doing some source-code collaboration. This will heat up in the fall.

At Signal Culture, I have 1 week of residency time. It’s short and sweet. I get to play with devices such as the Wobbulator, originally built by Nam June Paik and video engineer Shuya Abe.

The folks at Signal Culture built their own from the original designs.

What am I doing here, with analog synths and other devices? Well, I’m working with a home-built Arduino data logger that captures raw analog video signals (I will later modify it for audio).

I’ve optimized the code to capture about 3600 signals/second. The idea is to get a raw data feed of what a machine might be “saying”, or the electronic signature of a machine.
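To make the timing concrete: at roughly 3600 samples per second, the logger has about 277 microseconds between reads. The actual firmware is Arduino C++; the following Python sketch only illustrates the capture loop’s logic, and the record format, the `SAMPLE_RATE` constant and the fake `read_fn` are my own assumptions, not the real code.

```python
# Illustrative reconstruction of the logger's timing and record format.
# The real device is an Arduino reading an analog video signal; nothing
# below is the actual firmware.

SAMPLE_RATE = 3600                    # samples per second, from the post
DELAY_US = 1_000_000 // SAMPLE_RATE   # microseconds between reads (277)

def log_samples(read_fn, n):
    """Capture n raw readings, tagging each with a nominal timestamp in µs."""
    return [(i * DELAY_US, read_fn()) for i in range(n)]

# A fake analogRead() stand-in for the real video-signal input:
fake_signal = iter(range(0, 1024, 64)).__next__
records = log_samples(fake_signal, 5)
print(DELAY_US)   # 277
```

At 10-bit resolution (0-1023, the Arduino default), each record is just a timestamp and a raw level; the “electronic signature” comes from the shape of that stream over time.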

Does it work? Well, I hooked it up to a Commodore Amiga (yes, they have one).

I captured about 30 seconds of video and ran it through a crude version of my custom 3D data-generation software, which makes models. Here is what I got. Whoa…
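To give a sense of what the 3D data-generation step does (the real software is custom and unpublished, so the grid layout, row length and height scaling below are guesses on my part), the idea is to lay the captured sample stream out on a grid and let the signal level drive each point’s height:

```python
def samples_to_cubes(samples, row_len=60, cube=1.0):
    """Lay raw 10-bit samples out on a grid of cube positions.

    Each sample becomes an (x, y, z) position: x and y walk across the
    grid row by row, and z is the normalized signal level.
    """
    return [((i % row_len) * cube, (i // row_len) * cube, v / 1023.0)
            for i, v in enumerate(samples)]

cubes = samples_to_cubes([0, 1023, 512], row_len=2)
print(cubes[1])   # (1.0, 0.0, 1.0)
```

With every sample at the same grid height, the result is exactly the kind of flat cube-plot described below; a true third data axis comes from richer mappings.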

It is definitely capturing something.

It’s early research. The forms are flat 3D cube-plots. But also very promising.

The inevitable conversation about evictions at every San Francisco party…art organizations closing, friends getting evicted…the city is changing. It has become a boring topic, yet it is absolutely, completely 100% real.

For the Bad Data series — 12 data-visualizations depicting socially-polarized, scientifically dubious and morally ambiguous datasets, each etched onto an aluminum honeycomb panel — I am featuring two works: 18 Years of Evictions in San Francisco and 2015 AirBnb Listings for exactly this reason. These two etchings are the centerpieces of the show.

This is the reality of San Francisco: it is changing, and the data is ‘bad’ — not in the sense of inaccurate, but in the deeper sense of cultural malaise.

By the way, the reception for the “Bad Data” show is this Friday (July 24, 2015) at A Simple Collective, and the show runs through August 1st.

When I embarked on the Bad Data series, I reached out to the organization, and they assisted me with their datasets. My art colleagues may not know this, but I’m an old-time activist in San Francisco. That background helped me get the datasets, for I know that the story of evictions is not new — though certainly not on this scale.

In 2001, I was part of a now-defunct video activist group called Sleeping Giant, which made short videos in the era when Final Cut Pro made video editing affordable and anyone with a DV camera could produce their own work. We edited our pieces, sold DVDs and held local screenings, stirring up the activist community and telling stories from the point of view of people on the ground. Sure, now we have Twitter and social media, but at the time, this was a huge deal in breaking apart the top-down structures of media dissemination.

Here is No Nos Vamos, a hastily edited video about evictions in San Francisco. Yes, this was 14 years ago.

I’ve since moved away from video documentary work and towards making artwork: sculpture, performance, video and more. The video-activist work and documentary video in general felt overly confining as a creative tool.

My current artistic focus is to transform datasets using custom software code into physical objects. I’ve been working with the amazing fabrication machines at Autodesk’s Pier 9 facility to make work that was not previously possible.

This dataset (also provided through the SF Rent Board) includes all the no-fault evictions in San Francisco. I got my computer geek on…well, I do try to use my programming powers for non-profit work and artwork.

I mapped the data into vector shapes using the open-source C++ toolkit OpenFrameworks, writing code that transformed the ~9,300 data points into plottable shapes, which I could open in Illustrator. I did some work tweaking the strokes and styles.
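The actual mapping code was C++/OpenFrameworks; here is a much-simplified Python stand-in for that step, with made-up coordinates, that turns each eviction record into a point on a canvas and writes out vector shapes (SVG) that Illustrator can open:

```python
def evictions_to_svg(points, width=500.0, height=500.0, r=1.5):
    """Normalize (lon, lat) records onto a canvas and emit SVG circles."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    span_x = (max(xs) - min(xs)) or 1.0   # guard against a zero span
    span_y = (max(ys) - min(ys)) or 1.0
    circles = []
    for lon, lat in points:
        x = (lon - min(xs)) / span_x * width
        y = height - (lat - min(ys)) / span_y * height   # flip so north is up
        circles.append(f'<circle cx="{x:.1f}" cy="{y:.1f}" r="{r}"/>')
    return (f'<svg xmlns="http://www.w3.org/2000/svg" '
            f'width="{width}" height="{height}">' + "".join(circles) + "</svg>")

# Two fabricated San Francisco-ish coordinates, purely for illustration:
svg = evictions_to_svg([(-122.45, 37.75), (-122.40, 37.80)])
print(svg.count("<circle"))   # 2
```

From a vector file like this, the stroke weights and styles can then be tuned by hand before the file goes to the water jet.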

This is what the etching looks like from above, once I ran it through the water jet. It took a lot of settings and tests to get to this point, but the final results were beautiful.

The material is 3/4″ honeycomb aluminum. I tuned the water jet’s high pressure to pierce the top layer but not the bottom one. However, the water has to go somewhere. The collisions against the honeycomb produce unpredictable results.

…just like the evictions themselves. We don’t know the full effect of displacement, but can only guess as the city is rapidly becoming less diverse. The result is below, a 20″ x 20″ etching.

Bad Data: 18 Years of San Francisco Evictions

The Airbnb debate is a little less clear-cut. Yes, I do use Airbnb. It is incredibly convenient. I save money while traveling and also see neighborhoods I’d otherwise miss. However, the company’s effect on city economies is a contentious one.

There also seems to be a long-term effect on rent. Folks (I’ve met several who do this) are renting out their places on Airbnb while being tenants themselves. Some don’t actually live in their apartments any longer. The effect is to take a unit off the rental market and turn it into a vacation rental. Some argue that this also skirts rent-control law, which was designed as a compromise between landlords and tenants.

There are potential zoning issues, as well…a myriad of issues around Airbnb.

BAD DATA: 2015 AIRBNB LISTINGS, etching file

In any case, the location of the Airbnb rentals (self-reported, not a complete list) certainly fit the premise of the Bad Data series. It’s an amazing dataset. Thanks to darkanddifficult.com for this data source.

In 1989, I read Neuromancer for the first time. The thing that fascinated me the most was not the concept of “cyberspace” that Gibson introduced. Rather it was the physical description of virtual data. The oft-quoted line is:

“The matrix has its roots in primitive arcade games. … Cyberspace. A consensual hallucination experienced daily by billions of legitimate operators, in every nation, by children being taught mathematical concepts. … A graphic representation of data abstracted from banks of every computer in the human system. Unthinkable complexity. Lines of light ranged in the nonspace of the mind, clusters and constellations of data. Like city lights, receding.”

It was this graphic representation of data that struck me at first and has stuck with me ever since. I could only imagine what it could be. This concept of physicalizing virtual data later led to my Data Crystals project. Thank you, Mr. Gibson.

In Neuromancer, the protagonist Case is a freelance “hacker”. The book was published well before Anonymous, back in the days when KILOBAUD was the equivalent of Spectre for the BBS world.

At the time, I thought that there would be no way that corporations would put their data in a central place that anyone with a computer and a dial-up connection (and, later T1, DSL, etc) could access. This would be incredibly stupid.

And then, the Internet happened, albeit more slowly than people remember. Now hacking and data breaches are commonplace.

In these examples, ‘bad’ has a two-layered meaning. The abrogation of accepted norms of Internet behavior is widely considered a legal, though not always a moral, crime. The data is also ‘bad’ in the sense that it is incomplete. Data breaches are usually not advertised by the entities that get breached. That would be poor publicity.

For the Bad Data series, I worked not necessarily with the data I wanted, but rather the data I could get. From Information Is Beautiful, I found this dataset of Internet data breaches.

What did I discover? …that Washington, DC is the leader in breached information. I suspect it’s mostly because the U.S. government is the biggest target, rather than lax government security. The runner-up is New York City, the center of American finance. Other notable cities are San Francisco, Tehran and Seoul. San Francisco makes sense — the city is home to many internet companies. And Tehran is the target of Western internet attacks, government or otherwise. But Seoul? They claim to be targeted by North Korea. However, as we found out with the Sony Pictures Entertainment hack, North Korea is an easy scapegoat.

BAD DATA: INTERNET DATA BREACHES (BELOW)

Conversely, there are many lists of banned IPs. The one I worked with is the Suricata SSL Blacklist. This may not be the best source, as there are thousands of IP Blacklists, but it is one that is publicly available and reasonably complete. As I’ve learned, you have to work with the data you can get, not necessarily the data you want.

I ran both of these etched panels through an anodization process, which created a filmy residue on the surface. I’m especially pleased with how the Banned IPs panel came out.

BAD DATA: BLACKLISTED IPs (BELOW)

I recently finished a new artwork — called Genetic Portraits — which is a series of microscope photographs of laser-etched glass that data-visualize a person’s genetic traits.

I specifically developed this work as an experimental piece for the Bearing Witness: Surveillance in the Drone Age show. I wanted to look at an extreme example of how we have freely surrendered our own personal data for corporate use. In this case, 23andMe provides an extensive (paid) genetic-sequencing package. Many people, including myself, have sent in saliva samples to the company, which it then processes. From their website, you can get a variety of information, including their projected likelihood that you might be prone to specific diseases based on your genetic traits.

Following my line of inquiry with other projects such as Data Crystals and Water Works, where I wrote algorithms that transformed datasets into physical objects, this project processes an individual’s genetic sequence to generate vector files, which I later use to laser-etch onto microscope slides. The full project details are here.

Concept + Material
I began my experiment months earlier, before the project was solidified, by examining the effect of laser-etching on glass underneath a microscope. This stemmed from conversations with colleagues about the effects of laser-cutting materials. When I looked at the results underneath a microscope, I saw amazing things: an erratic universe accentuated by curved lines. Even with the same file, each etching is unique. The glass cracks in different ways. Digital fabrication techniques still produce distinct analog effects.

When the curators of the show, Hanna Regev and Matt McKinley, invited me to submit work on the topic of surveillance, I considered how to leverage various experiments of mine, and came back to this one, which would be a solid combination of material and concept: genetic data etched onto microscope slides and then shown at macro scale as 20″ x 15″ digital prints.

Surrendering Our Data
I had so many questions about my genetic data. Is the research being shared? Do we have ownership of this data? Does 23andMe even ask for user consent? As many articles point out, the answers are exactly what we fear. Their user agreement states that “authorized personnel of 23andMe” can use the data for research. This official-sounding text simply means that 23andMe decides who gets access to the genetic data I submitted. 23andMe is not unique: other gene-sequencing companies have similar provisions, as the article suggests.

Some proponents suggest that 23andMe is helping the research front while still making money. It’s capitalism at work. This article in Scientific American sums up the privacy concerns. Your data becomes a marketing tool, and people like me handed a valuable dataset to a corporation, which can then sell us products based on the very data we provided. I completed the circle, and I even paid for it.

However, what concerns me even more than 23andMe selling or using the data — after all, I did provide my genetic data, fully aware of its potential use — is the statistical accuracy of genetic data. Some studies have reported a Eurocentric bias to the data, and the FDA has also battled with 23andMe over the health data they provide. The majority of the data (with the exception of Bloom’s Syndrome) simply wasn’t predictive enough. Too many people had false positives with the DNA testing, which not only causes worry and stress but could lead to customers taking pre-emptive measures, such as getting a mastectomy, if they mistakenly believe they are genetically predisposed to breast cancer.

A deeper look at the 23andMe site shows a variety of charts that make it appear as if you might be susceptible (or immune) to certain traits. For example, I have lower-than-average odds of having “Restless Leg Syndrome”, which is probably the only neurological disorder that makes most people laugh when they hear about it. My genetic odds of having it are simply listed as a percentage.

Our brains aren’t very good with probabilistic models, so we tend to inflate and deflate statistics. Hence, one of the many problems with false positives.

And, as I later discovered, from an empirical standpoint, my own genetic data strayed far from my actual personality. Our DNA simply does not correspond closely enough to reality.

Isolating useful information from this was tricky. I cross-referenced some of the rsids used for common traits on 23andMe with the SNP database. At first, I wanted to map ALL of the genetic data, but the dataset was too complex for this short experiment and straightforward artwork.

Instead, I worked with some specific indicators that correlate to physiological traits such as lactose tolerance, sprinter-based athleticism, norovirus resistances, pain sensitivity, the “math” gene, cilantro aversion — 15 in total. I avoided genes that might correlate to various general medical conditions like Alzheimer’s and metabolism.

For each trait I cross-referenced the SNP database with 23andMe data to make sure the allele values aligned properly. This was arduous at best.

There was also a limit on the physical space for etching the slide; more than 24 marks on one plate would be chaotic. Through days of experimentation, I found that 12-18 curved lines made for compelling microscope photography.

To map the data onto the slide, I modified Golan Levin’s decades-old Yellowtail Processing sketch, which I had been using to generate curved lines on my test slides. I found that he had developed an elegant data-storage mechanism for capturing gestures. From the isolated rsids, I then wrote code that assigned weighted numbers to allele values (i.e. AA = 1, AG = 2, GG = 3, depending on the rsid).

Based on the rsid numbers themselves, my code generated (x, y) anchor points and curves with the allele values changing the shape of each curve. I spent some time tweaking the algorithm and moving the anchor points. Eventually, my algorithm produced this kind of result, based on the rsids.
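The real version lived inside a modified Processing sketch; this Python sketch only shows the shape of the logic. The weight table matches the AA/AG/GG example above, but the anchor-point math and the `bend` parameter are illustrative assumptions of mine:

```python
import re

# Example weighting from the post; the real weights varied per rsid.
WEIGHTS = {"AA": 1, "AG": 2, "GG": 3}

def allele_weight(genotype):
    return WEIGHTS.get(genotype, 0)

def rsid_anchor(rsid, width=400, height=300):
    """Derive a deterministic (x, y) anchor point from the rsid's digits."""
    n = int(re.sub(r"\D", "", rsid))   # "rs4988235" -> 4988235
    return (n % width, (n // width) % height)

def curve_for(rsid, genotype):
    """One curved line per trait: anchored by the rsid, bent by the alleles."""
    x, y = rsid_anchor(rsid)
    return {"anchor": (x, y), "bend": allele_weight(genotype) * 0.5}

# rs4988235 is a real lactase-persistence marker (lactose tolerance):
print(curve_for("rs4988235", "GG"))   # {'anchor': (235, 170), 'bend': 1.5}
```

Because the anchor comes deterministically from the rsid, the same trait always lands in the same place, while the person’s alleles change how each curve bends.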

The question I always get asked about my data-translation projects is about legibility: how can you infer results from the artwork? It’s a silly question, like asking a Kindle engineer to analyze a Shakespeare play. A designer of data visualizations will try to tell a story using data and visual imagery.

My research and work focus on deep experimentation with the formal properties of sculpture — or physical forms — based on data. I want to push the boundaries of what art can look like, continuing the lineage of algorithmically generated work by artists such as Sol LeWitt, Sonya Rapoport and Casey Reas.

Is it legible? Slightly so. Does it produce interesting results? I hope so.

But, with this project, I’ve learned so much about genetic data — and even more about the inaccuracies involved. It’s still amazing to talk about the science that I’ve learned in the process of art-making.

Each of my 5 samples looks a little bit different. This is the mapping of actual genetic traits from my own sample and that of one other volunteer, named “Nancy”.

Genetic Traits for Scott (ABOVE)
GENETIC TRAITS FOR NANCY (BELOW)

We both share a number of genetic trait results, such as the “empathy” gene and hair curl. The latter seems correct — we both have remarkably straight hair. I’m not sure about the empathy part. Neither of us is lactose intolerant (also true in reality).

But the test’s accuracy breaks down on several specific points. Nancy and I do have several differences, including athletic predisposition. I have the “sprinter” gene, which means that I should be great at fast running. I also do not have the math gene. Neither of these is at all true.

I’m much more suited to endurance sports such as long-distance cycling and my math skills are easily in the 99th percentile. From my own anecdotal standpoint, except for well-trodden genetics like eye color, cilantro aversion and curly hair, the 23andMe results often fail.

The genetic data simply doesn’t seem to support the physical results. DNA is complex. We know this; it is non-predictive. Our genotype results in different phenotypes, and the environmental factors are too complex for us to understand with current technology.

Back to the point about legibility. My artwork is deliberately non-legible based on the fact that the genetic data isn’t predictive. Other mapping projects such as Water Works are much more readable.

I’m not sure where this experiment will go. I’ve been happy with the results of the portraits, but I’d like to pursue this further, perhaps with scientists who would be interested in collaborating around the genetic data.

FOUR FINAL SLIDE ETCHINGS (BELOW)

The first day after arriving in Paris, we embarked on a dérive — the French word for a “drift” — an unplanned journey (usually) through an urban space. The idea is to immerse yourself in the moment, the now of a city. No maps, no mobile phones, no direction; just walk and make choices about where to go based on your senses: the smells, sights and sounds of a city. This experiment would hopefully produce some sort of authentic experience, devoid of the usual modes of organization, and give us a subjective view of the city.

I did this once before, in Berlin, while reading Rebecca Solnit’s A Field Guide to Getting Lost. That time was by bicycle, and I spent the first day meandering through the city with no direction. Every couple of hours, I’d stop for a cup of coffee or a snack and read Solnit’s book, which covered themes of mental and emotional wandering. It was profound. I noticed odd things, mostly architectural.

My recommendation is to do this when you first arrive in an unfamiliar city, after getting a night’s sleep but before you’ve done anything else. At this point, your body is still jet-lagged. Daily patterns have yet to be formed. Memories are unestablished. The brain is at its most receptive state.

We started here, near where we were staying. All I knew was that the 6th Arrondissement was on the Left Bank. I’ve since become familiar with the shell-like ordering of the city’s districts.

We picked the direction that we most “liked”, based on whatever looked best down the street.

When you’re not trying to get somewhere or having a conversation about something, you notice funny things, like tons of push-scooters locked with cheap cable locks everywhere.

Or custom-painted tiles like these. Of course, these are “touristy”, but the walk pushed these labels out of my mind.

I wanted to document the dérive but didn’t want to be in a documentation state of mind, so I just snapped photos without much consideration for what I was shooting.

The space-for-women was inviting, but also seemed to be closed. It was some sort of library.

We never would have found this old store on Yelp, but it was incredible. Lots of old science and medical devices and posters were inside! The dérive soon meant that we could go inside shops, and here is where my expectations of the 1950s Paris that Guy Debord lived in quickly got dashed on the rocks. There were distracting shops and restaurants everywhere. I guess that was the case 60 years ago as well, but I’m sure capitalist advertising techniques have advanced significantly since his time.

We found some contemporary art galleries, too.

Though the Jesus spinning on the turntable didn’t “work” for me.

With two people, the dérive meant compromising. Sometimes I wanted to walk on one side of the street while Victoria would walk on the other. And when we disagreed, we had to pick one person’s “way”. I would have been curious to see where my own choices would have led me.

Sure, you notice all sorts of details.

And signs in French, mostly about parking rules.

Interesting chimneys on buildings.

You’re not supposed to stop to do errands, but we had to get some coffee capsules for the espresso machine in our room. And then I noticed the shrink-wrapped cheese.

Wide boulevards with complex intersections. Surprisingly little traffic noise and congestion for a major city.

Street signs and greenery.

Plaques with names of historical figures and where they once lived.

The smell of dog shit everywhere. Cigarettes, lots of cigarette smoke. I still hate getting the exhale of smoke in my face.

Many apartment buildings with exactly the same window dressing on them. Why do only the 2nd story windows have planters on them? Everywhere, ads for various services, including “Tantra Massage” on drain pipes.

A giant old wooden door with intricate carvings.

An old church interspersed amongst the apartment buildings.

Odd urban compositions.

A time portal to the year 1858.

Bubble windows.

Ah, the iron work.

Gold leafing shop. Isn’t it dangerous to leave this in the window, inviting thieves?

Real estate ads everywhere. Prices are comparable to San Francisco.

French flags outside what looks like government buildings.

Lots of small dogs and apparently it’s okay to bring them into the restaurant with you.

Sign for a movie theater…or something else.

The most amazing air vent I’ve ever seen.

Reserving your parking spot with trash.

The stop sign figurine is fatter than the walk sign figurine.

Goats in a park.

One cannot escape the Eiffel Tower as a point of orientation.

Bodily functions rule in the end. The toilets are free, but the lines are long.

I first heard about the Recology artist-in-residence program in the late 1990s. In 2010, I saw the 20th Anniversary show, and later that year, applied and was accepted. I started my residency in February 2011. During this time, I made a series called “2049” — where I played the role of a prospector from the year 2049, who was mining the dump for resources to construct “Imaginary Devices” to help me survive.

Part of the deal with being an artist-in-residence at The Dump is that they get to keep one of your artworks, and exhibitions like this are exactly why. The good folks at Recology put on shows featuring work from their program. The artwork they elected to retain was the Universal Mailbox (below), which will be in tomorrow’s show.

I constructed the Universal Mailbox from a discarded UPS keypad, scrap wood, a found satellite dish and dryer hose. I found the paint at the dump as well. I used a similar technique for the 2049 Hotline, and during the opening, friends of mine played the role of “emissaries from the year 2049”, who would talk to exhibit-goers on the phone. Their only directive was to stay in character — they had to be from the future, but the environment they imagined could be anything they wanted. The artwork later traveled to the New York Hall of Science for their Regeneration show (walkthrough below).

This was a one-way mission for many of my sculptures; they were fragile to begin with, and 4 months at an interactive science museum decimated the work. I knew this would happen. I always viewed the sculptures as temporary. I was even able to save some money on shipping costs. The artwork, after all, came from the dump!

The blueprints survived, as well as rebuilt versions of the Universal Mailbox and the 2049 Hotline, which I will continue to exhibit. The 2049 project and my 4 months at the dump were a lesson in attachment to material things, which flow from hand to hand and eventually to landfill and, hopefully, sometimes, to art.