Jer Thorp is a Brooklyn-based, Vancouver-bred artist and educator who builds software and uses data visualization to explore the intersection between art and science. He is currently the Data Artist in Residence at The New York Times and a visiting professor at New York University’s ITP program, where he takes an interdisciplinary approach to the aestheticization of data. PopTech spoke with Thorp about information overload, Malcolm Gladwell vs. Jeanne Marie Laskas, David Foster Wallace’s predictions for the future and NASA’s Kepler project.

PopTech: You’ve been the Data Artist in Residence at The New York Times since October. What have you been working on? Jer Thorp: The project has a code name, Cascade, and it’s a visualization tool that lets us look at how people are sharing New York Times content over social spaces. We’re looking at Twitter specifically, but it could be applied to any network that grows over time. So we built this tool that shows that sharing in real time, in 3-D.

When it’s released, which should be happening very soon, it’ll be an internal-use tool. We have some opportunities to get it into the newsroom so that people at the Times can track how the stories they’re writing are being shared. It’s more of a diagnostic tool than anything – it’s kind of like a medical tool for social networks.

Sample image from The New York Times project

Are you collaborating with anyone on this project? I’m working with Mark Hansen, who is a professor of statistics at UCLA and a media artist. Most of the data work I’ve done in the past is me hammering my head against some approximation of statistics, but having someone with a mind for that, like Mark, has been really amazing for this project.

Speaking of which, where do you come upon the data that informs your work? Do you encounter a data set you want to analyze or do you approach your work wanting to dissect a specific topic? People ask me where they can find good data sets but I just think that’s the wrong question. When I have an interesting question or problem that gets stuck in my head, I chase a data set to find it. That’s not to say an interesting data set never gets presented to me.

Last month, I had a conversation with Lee Billings, a science writer covering the NASA Kepler project, which discovered all these exoplanets. He sent me that data set and then we wound up collaborating on a video that was used by NASA. I’d already read stories about the project and the numbers can be hard to picture in your head. Like, how many planets are twelve hundred planets? And how do they compare to the planets we know in our solar system?

A lot of my visualizations at the end are kind of self-serving because there’s a question that I want to ask. I end up doing it myself and then hopefully at the end there’s something that comes out of it that might be useful to other people.

Is there a time when a project came about because you read something that left you with a lingering question which you attempted to parse further? I always think that whenever I get a question stuck in my head, I am more likely to sit down and write a piece of software to try to answer it, which maybe makes me a little bizarre.

In the fall of 2009, I read two articles about head injuries in the NFL. One of them was by Jeanne Marie Laskas in GQ, and then I read another article a week later by Malcolm Gladwell that covered the same story. I had this feeling that I really, really liked the first article [Laskas] more than I liked the second article [Gladwell] and I couldn’t quantify why.

I built this tool that visualizes all the word usage in the two articles. You can compare how different words are used, and that brought me to an answer that I maybe could have found on my own: looking below the surface, the Laskas article was essentially about people. It was the story of one specific doctor and a couple of really interesting human-level interactions, whereas the Gladwell story was much more abstract and didn’t really involve people as directly.

When you build this type of software, is it for one-time use or do you typically apply it to different sets of data? The nice thing about what I’d built [above] is that then I had a tool I could use to compare any two sets of text. I looked at some of Obama’s foreign policy speeches – how language used in his speeches in the Middle East compares to those in Asia. You can see what words are used in different circumstances.
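A minimal sketch of this kind of two-text comparison in Python, with tiny hypothetical snippets standing in for the two articles (the function names and the per-1,000-words scoring rule are my own illustration, not Thorp’s actual tool, which was a much richer visual treatment):

```python
from collections import Counter
import re

def word_counts(text):
    """Lowercase the text and count word occurrences."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def distinctive_words(text_a, text_b, top=5):
    """Rank words by how much more often they appear in text_a
    than in text_b (frequency difference per 1,000 words)."""
    a, b = word_counts(text_a), word_counts(text_b)
    total_a, total_b = sum(a.values()), sum(b.values())
    scores = {w: a[w] / total_a * 1000 - b.get(w, 0) / total_b * 1000
              for w in a}
    return sorted(scores, key=scores.get, reverse=True)[:top]

# Hypothetical stand-ins: one "human" text, one "abstract" text
human = "the doctor met the player and the player told the doctor his story"
abstract = "the league faced risk and the data showed risk in the system"

print(distinctive_words(human, abstract, top=3))
```

Even this toy version surfaces the kind of contrast Thorp describes: words like “doctor” and “player” rank as distinctive to the people-centered text.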

I read an article from the Nieman Journalism Lab blog that quotes you as saying, “The art itself is the software.” Can you elaborate on what you mean by that? When I look at my own practice, I always look at the software I’m building as the result. People see images and video from my work, but I just consider them to be documentation of the work rather than the work itself. I think the process of building the work is as important as the work itself. I’ve been building software for almost a decade but I’ve only been working with data for two years, so I still see myself as being very much in the early stages of exploring how these pieces of software can exist in a more contemporary art framework.

Since you haven’t always been working with data in your work, how did you get into it? I’ve been working on this really long-term project called The Colour Economy, which creates a simulated economy whose members trade in color. You see these agents that change colors as they exchange red and green and blue with each other, trying to survive inside of a simulated market. That’s actually the project that brought me to data in the first place, because I built this thing that was using random numbers and it didn’t seem to be very pleasing. I wanted to bring in some external data. I thought, I’ll learn how to bring in data and start to visualize that stuff, and that was a two-year side track.
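The trading rules of The Colour Economy aren’t spelled out in the interview, but the general shape of such an agent simulation can be sketched like this (every mechanic below is an illustrative guess, not Thorp’s actual system):

```python
import random

class Agent:
    """A toy trading agent holding stocks of red, green and blue.
    (These rules are illustrative guesses, not Thorp's mechanics.)"""
    def __init__(self):
        self.rgb = [random.randint(50, 200) for _ in range(3)]

    def trade(self, other):
        # Each agent gives away one unit of its most abundant channel
        give = self.rgb.index(max(self.rgb))
        take = other.rgb.index(max(other.rgb))
        self.rgb[give] -= 1; other.rgb[give] += 1
        other.rgb[take] -= 1; self.rgb[take] += 1

    def alive(self):
        # An agent "survives" while it holds some of every color
        return all(v > 0 for v in self.rgb)

agents = [Agent() for _ in range(10)]
for _ in range(100):  # run the market for 100 ticks
    a, b = random.sample(agents, 2)
    a.trade(b)
print(sum(agent.alive() for agent in agents), "agents surviving")
```

The visual interest Thorp describes comes from watching each agent’s RGB holdings shift as it trades, which is exactly the kind of system where random numbers can feel flat and an external data feed becomes tempting.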

We’re bombarded with data every day. Can data visualization serve as a mechanism to understand or digest massive quantities of information? What’s your take? I was reading an article a week ago – it was an interview with David Foster Wallace. He was saying that one of the reasons he wrote Infinite Jest was because he wanted to fill it with all of those endnotes. He said he wanted to mimic the information flood and data triage he expected to be a big part of American life fifteen years from then. We’re now about fifteen years on, and I think it couldn’t be more true.

I don’t think our brains are necessarily capable of handling the vast amounts of information around us. Data visualization as an idea seems very appealing to us. Now, whether or not it succeeds at minimizing that [information flood] is an entirely different story. I would argue that often it doesn’t and that that doesn’t necessarily need to be the purpose of data visualization.

So what is the purpose? Sometimes the purpose should be to make things simpler, but sometimes it’s to give you an idea of the scale of the complexity. And then we can circumvent those expectations of data visualization in interesting ways. Obviously that’s not something you really want to do in a newspaper, magazine or scientific paper, but when we think of data as a medium for creative expression, those rules don’t really apply anymore. Maybe some of the interest here is to take that expected language of data visualization and infographics and try to play with the idea of trust that it gives us. Some artists have worked in interesting ways in that realm.

Data visualization is all around us these days. I’m wondering if there are some favorite instances you’ve seen of data being interpreted in a visual manner. My favorite person working in the data viz field right now is a German designer named Moritz Stefaner. Moritz just did a project called Notabilia, which looks at Wikipedia edits. The reason I like his work is he tends to use novel approaches, but at the same time, they seem like they should have been there all along.

From Moritz Stefaner's Notabilia

Do you think this data visualization trend is a result of the massive amount of data we’re presented with, the tools, or access to information? There is a cultural gravitation towards data visualization, which may have to do with an underlying broader psychology of the population. Also, media outlets like the Times and the Guardian are getting more advanced with their infographics, so people expect more. People are engaged by these richer ways that newspapers and websites are presenting information, which go beyond the bar graphs and pie charts we’ve seen before. And then the technology to store data has gotten a lot better. Access to data has become vastly better, the tools to work with it are easier to get, and there’s a larger community around it, so in many ways it’s a perfect storm to generate this massive interest in data visualization.

Where do you think this “perfect storm” will take us? I hope it leads to more exploration – people right now are often doing the same thing they did before, only bigger. And I’d love to see some more experimentation. In my mind, what that requires is spreading data viz beyond the scientific and design communities, where it largely lives right now, and bringing some artists into the picture who might do stranger things that end up being novel and useful and keep people asking the right questions.