Immaterial Culture

Author: holly

One hot summer in France I found myself excavating Neanderthal bones in Les Pradelles, a Mousterian reindeer butchering site north of the Dordogne. I was young and restless and relentlessly curious, and the idea of spending long, impassioned days in a gravel pit analyzing and interpreting objects seemed like a great idea at the time.

It was also during this time I first became interested in — edit: obsessed with — material culture.

In university, I took a variety of social science courses that allowed me to think about the symbolic components of complex organizational systems: everything from Mayan death rituals to modern American foodways, Haitian voodoo to Northwest Coast Native art. Anthropology became an approach to problem solving — a way of thinking about domains and their structure — and a reminder that “man is an animal suspended in webs of significance he himself has spun” (Clifford Geertz, The Interpretation of Cultures).

Material culture was different, like Japanese Death Poetry or Cat Stevens. I loved seeing the physicality of a culture embodied in an object from the past. To me that object was everything: it was a keyhole, it was metaphor, it was technology, it was language, it was life.

Hand Axes and Early Technology

One day while excavating in my rocky quadrant thinking about Lorca and his gorgeous “Lament for Ignacio Sánchez Mejías,” I found something. It was recognizably man-made, a tool perhaps, appearing somewhat out of place in the context of sandstone rubble. Snapping out of the depths of that July afternoon, I took out my own tool, a black flat paintbrush, and worked the bristles swiftly around the object. I meticulously unearthed it from the warm dirt. Before removing the object, I sketched it in medias res to identify its original place within the grid.

I let out an exhale. I looked down at the object for a long time, forgetting about the dirt that was making interesting indentation patterns under my knees.

I took this object to the basin at the threshold of the excavation site. Washing my object in water was somewhat of a religious experience. Here in my hands was a hand axe. A tool crafted by someone on the fringes of humanness. The last time this axe was held, I guessed, it was likely by a Neanderthal. I marveled at this object, turning it around in my hand under the afternoon sun. I examined in detail the denticulated edges of the biface. Moving my fingers over the edges, palming the object with force, I let it slip perfectly into the curves of my inner palm.

Biface profiles – early forms of material culture

To me this tool was recognizably material culture. It told a story. It had a presence that was re-adopted — catalogued by myself and reclaimed as an artefact of the site — and yet its past was the evolutionary blueprint behind its creation. It was beautiful.

Material Culture

What is material culture? Material culture is the physical embodiment of a culture in the objects and architecture that culture has made. Material culture studies is the cross-disciplinary field analyzing the relationships between people and their objects: the making, history, preservation, and interpretation of objects. Material culture theory (and practice) draws from the humanities and social sciences — from museology, history, archaeology, historic preservation, folklore, and the like. Anything from monumental buildings and architectural elements to Artificial Intelligence, Cosmopolitan magazine, shoes, and hairbrushes can be considered material culture.

Code as Material Culture

As I continue onward in my journey as a developer, I often reflect on my time at Les Pradelles. How does material culture relate to code? What do a JSON object and a hand axe have in common? What can object-oriented design tell us about human relationships? How do programming and the machines we create give us insight into human logic?


Excavating a Codebase

We can imagine code as material culture because it is application architecture. It is built by humans, modified by humans, preserved by humans. It is both physical and object-oriented. You can hold code in your hand. You can touch code, inspecting it in the console of a website with your cursor. As with any tool, it is adapted to the needs of its users over time.

JavaScript, for example, owes its incredibly light framework to its haphazard birth as a programming language in May 1995. Developed by Brendan Eich in just 10 days some 20+ years ago, the language itself has evolved over time, entering a new realm of standardization and innovation. JavaScript frameworks and libraries like Node.js, Backbone.js, Underscore.js, Handlebars.js, etc., have added complexity and fullness to the stack. Like the development of biface tools across the Palaeolithic landscape, our material culture reflects a movement towards efficiency and standardization.

Coding encodes human logic. This is exciting across multiple programming languages because there are so many different ways of solving the same problem. Viewed in this way, methods and frameworks become lenses into human logic. When I am familiarizing myself with a codebase, as I did in Les Pradelles, I often ask myself the following questions:

What is the logic of production of this tool?

How do humans experience and interact with this technology?

What purpose does this technology serve?

Where am I in the codebase, stack, etc., and how does this part relate to the object itself?

Humans and their Machines

When we imagine code as material culture we can also think of it as a lens into the current state of affairs of technological production. Technology is, after all, as much about human relationships as it is about machines. If a hand axe demarcates a type of human cognition some 40,000 years ago, what about code today? What does it mean if 85% of “producers” of code are male? And if this is the case, how does this affect our experience with technology?

In my next post I’ll discuss this touching on the documentary CODE: Debugging the Gender Gap.

Sometimes the journey from Ruby to JavaScript can feel like diving into a swimming pool of parenthesis and semicolons.

There — I said it.

But JavaScript is really important — and really pervasive. In fact, 78.2% of the top one million websites across the world wide web use jQuery — a JavaScript library — to render powerful online web applications.

Travelling the Information Superhighway

Writing JavaScript as a Rubyist can sometimes burn us noobie programmers because its syntax is so unforgiving. Remembering to include semicolons and brackets when we are dealing with blocks, return statements before variable declarations, variable scope (whether local or global), etcetera and etcetera and etcetera, may initially feel overly declarative — not to mention frustrating. Don’t worry, n00b, there’s hope.

JavaScript’s framework is incredibly light. This owes itself to its unique origins as a functional programming language. Interestingly, JavaScript was originally developed in 10 days in May 1995 by Brendan Eich. Read about the history of JavaScript here.

In Ruby there are many ways to solve a problem because Ruby graciously gives us a bunch of methods to break down a problem. In JavaScript, this is not the case — at least not out of the box. When I first approached JavaScript, poring over each function and semicolon gingerly, I started wondering: “where are all the JavaScript language features?” Where are the .map, the .each, the .reduce — for god’s sake, the .find!? — that allow for easy and efficient iteration over collections of objects?

The horror when I thought: will I really have to use “for” loops ad infinitum to iterate through problems?

Underscore

Enter Underscore. Underscore is a utility-belt JavaScript library that gives the Rubyist a whole mess of useful functional programming helpers — more than 100, in fact — to keep in their arsenal. From the Underscore API:

“[Underscore] is the answer to the question: “If I sit down in front of a blank HTML page, and want to start being productive immediately, what do I need?” … and the tie to go along with jQuery‘s tux and Backbone‘s suspenders.”

Let’s see the brilliance of Underscore in action. (Seriously — prepare to freak out).

Collection Functions (arrays, objects, array-like objects)

Each in Ruby

Recall using each when working with non-transformative functions: each hands every element of a collection to a block so we can perform some action with it. For example: iterating through a list of names (or emails) and interpolating a greeting (or emailing that list from an action mailer).

Identity functions allow us to uncover the status of objects. This is important in programming — and especially in JavaScript — because objects can be conceptually different if they are null, undefined, or NaN. Studying the Underscore API I was struck by the applicability of these functions in JavaScript:

In my last post I discussed the importance of RSpec and test-driven development (TDD) in evaluating Ruby code at the controller and model levels. We write tests to refactor our code, to prevent human error when developing sexy, dynamic web applications, to draft products for white labeling, and to offer guidance to future developers and collaborators of our code.

TDD is great because it ensures our models and controllers align with the overarching goals of our application — that these components communicate with each other and perform the necessary functions to transform our sexy app into a full-blown religious experience for our end user. (Yes – I said it).

But why, then, is testing such a laborious task for programmers? Like doing laundry, testing seems to evoke either pious adoration — something resembling Christianity in 17th c. New England — or vehement hisses at writing even the smallest of tests.

Before elaborating on the above — and the debate surrounding whether TDD has merit at all in the real world — I propose first an exercise in shoshin. Just go with me here.

Throw everything you know about testing in the metaphorical garburator. Wipe those Foucault college midterms and impromptu coding quizzes — thank you Flatiron! — from memory. Relax and get comfortable with TDD; heck, order-in a 12″ pie from Roberta’s and spend this Friday getting to know her better. She’s not going anywhere.

It’s important to understand testing isn’t magic — it’s not the wingardium leviosa! of spells — enabling your program to somehow run on your command. When we write tests, we are in the driver’s seat: we control our application’s engine. We think of testing as the SparkNotes of our application, offering a bit of guidance to our future self as we refactor — and to future developers who fork our code and use our specs as CliffsNotes under the hood. As we know from high school: everybody loves SparkNotes.

A Brief Ethnographic Survey of Testing

Exploring this testing philosophy a bit deeper, I began conducting a little ethnographic surveying of my own. Reaching out to a few developer friends beyond the walls of Flatiron, I sought the hard-boiled truth about testing. Who tests? Is testing a thing? Is what I’m learning actually important to the end goal of being a ferocious femme programmer? Give me the SparkNotes answer to my SparkNotes metaphor!

What I discovered was astonishing: testing, it seems, is a pretty controversial topic, meriting a kind of Jerry Springer retaliation from all sides of the conversation. Never before had I encountered a topic so closely resembling an episode of Maury. And interestingly, everyone offered their opinion as to why testing was either so good or so bad. From one side: TDD or die. From the other: “TDD is hocus — criminal!” – this guy.

Wow.

Different Applications of Testing

As we venture deeper into full-stack development at Flatiron, I have begun to wonder how we might test user interaction across applications. After all, is not our user the end goal of everything we do as technophile humanists? Alas — how do we know our user is interacting with our application in the way s/he is supposed to?

Capybara

Enter Capybara. Capybara is a gem that gives us a bunch of methods that simulate how a user interacts with our Rails application. Capybara talks with many different drivers to execute our tests through the same clean and simple interface. Capybara testing is sometimes referred to as end-to-end testing because it runs higher-level tests that flex the entire stack of our application (our models, views, and controllers).

RSpec and Capybara

This left me questioning the difference between RSpec and Capybara. Behold another metaphor: imagine RSpec as that austere, extremely-French, French teacher you had in Grade 10. Yes – that one, who quizzed you on verb conjugations, keeling over as you occasionally forgot the translations to seminal cultural tokens like “aujourd’hui, maman est morte.” By contrast, Capybara is that gym teacher who once convinced you that line-dancing was an appropriate way to spend an hour of your time. The image of her slapping her hands together as I fell out of electric slide formation will be forever imprinted in memory.

Whereas RSpec tests how models and controllers communicate with one another in various ways — the conjugation of verbs and their correct usage in french sentences — Capybara tests how users interact with the pages of your Rails app. Capybara methods, like clicks on a page, form interaction, finding elements on a page, and the electric slide, are examples of this. OK, maybe not the electric slide, but Capybara gives us a ton of methods to help us test really important user interactions. Peeps the Capybara cheat sheet here.

No setup necessary for Rails and Rack application. Works out of the box.

Intuitive API which mimics the language an actual user would use.

Switch the backend your tests run against from fast headless mode to an actual browser with no changes to your tests.

Powerful synchronization features mean you never have to manually wait for asynchronous processes to complete.

To run Capybara, we must include it in the Gemfile of our Rails application.

group :development, :test do
  gem "rspec-rails"
  gem "capybara"
end

We group RSpec and Capybara under the development testing suite of our application because we are testing user interaction. To write our Capybara tests, we hard code sample user data to populate into our forms. By this measure, we can then test the functionality of our application.

In our testing environment, we need to first load RSpec and Capybara, configure RSpec, include Capybara in our Rails app, define the application we are testing, load the application defined in config.ru, and finally configure Capybara to test against the application we are testing. In our spec/spec_helper.rb we add the following code (as taken from Learn):
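The Learn snippet did not survive the trip to this page, so here is a sketch of what that spec/spec_helper.rb can look like — the require paths and Rack wiring are my assumptions, not the original file:

```ruby
# spec/spec_helper.rb -- a sketch; paths and wiring are assumptions
require 'rspec'
require 'capybara/rspec'
require 'capybara/dsl'

# load the application's environment
require_relative '../config/environment'

RSpec.configure do |config|
  config.include Capybara::DSL
end

# define the app we are testing from the application defined in config.ru
app, _options = Rack::Builder.parse_file('config.ru')
Capybara.app = app
```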

The RSpec Ruby gem is one of the best libraries for testing Ruby code. Testing your code is very important: in fact, tests are essential in setting the parameters of what your program is all about. Imagine tests as the guideposts and scaffolding of your code. We write tests as we build out our code to ensure our models align with the goals of our program.

This post takes a leisurely stroll through the process of testing your code with RSpec.

Setting Up RSpec

The first step in test-driven development (TDD) is to configure RSpec as a dependency via Bundler. Create a new file in your project’s root directory and title it “Gemfile.” Within it type the following code.

# Gemfile
source "https://rubygems.org"

gem "rspec"
gem "pry"

Open this project’s directory in your terminal and type bundle install --path .bundle. This installs the latest version of RSpec and all related dependencies. You’ll see something that resembles the following:

Perfect! You have now installed your test suite with the RSpec and Pry gems. Both gems are necessary for helping us build out our project.

Now we are ready to build a small project. This project will have two classes: a Song class and a Library class. Instances of the Song class (each new song object) contain attributes: a name, an artist, and a genre — just as every song belongs to an artist and a genre. Our Library class stores a library of music — of song objects — and can save these songs to a file and allow the user to query them by their name and genre.

This is what our project directory should look like:
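The original screenshot didn’t survive, but based on the requires that follow, the layout is something like this (spec file names are assumptions):

```
.
├── Gemfile
├── itunes_library.rb
├── song.rb
└── spec
    ├── spec_helper.rb
    ├── song_spec.rb
    └── itunes_library_spec.rb
```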

In our program we write the specifications (the “specs”) in a spec directory. For each class file we have a corresponding spec file, plus a shared spec_helper.rb file. For our specs to run we need to “require” the associated Ruby classes we are testing. We declare this in the spec_helper file like so:

require_relative "../itunes_library.rb"
require_relative "../song"

require "yaml"

Think of require_relative as “require”: instead of scanning your Ruby load path, it resolves the path relative to the directory of the file making the call.

For the purpose of this example we will be requiring YAML. YAML is a very basic text database we will use to store our program data.

The Song Class

Now we can start building out our Song class. Remember to require “spec_helper” at the top of our file to see what’s going on under the hood of our method.

require "spec_helper"
describe Song doend

We start our song_spec.rb with a describe block which introduces what we are testing: this could be a string, but in this example we are using the Song class constant itself.

before :each do
  @song = Song.new("Title", "Artist", :genre)
end

We begin by calling before, passing it the symbol :each to declare that we want to run this code before each test. And what are we doing before each test? We are generating a new instance of Song from a title string, an artist string, and a genre. Note the syntax of these attributes: whereas the title and artist are strings, :genre is a symbol.

When we write specs we reference instance variable names with the @ symbol. We do this to ensure our instance variable is accessible across our tests; if we don’t use “@” we can only access the local variable within its individual code block (between the do and end of that block).

Now we can create our first test which describes the actions of a specific method we are testing. As we write out our first test we are testing to see if our initialize method is making a Song object.

describe "#initialize" do
it "takes three attributes and returns a Song object" do
expect(@song).to be_a(Song)
end
end

It’s convention in test-driven development to include a describe block for each method we test. We use the hash symbol “#” to refer to instance methods, like this: ClassName#method_name. The top-level describe takes the class name, and each nested describe takes the name of the method we want to reference.

Take a look at the syntax in the above example. We use expect(object).to do_something. This form will account for 9.9 out of 10 tests you write: you have an object, you expect it to do something, and you express that something as a matcher passed to .to. This reads well and makes for a fairly intuitive test. Above, as we are testing the instance @song, we reference this variable with the @ symbol.

Let’s run this. In TDD we run our tests in our terminal with rspec spec. Remember: spec is the directory we created to hold our method tests.

Here’s a problem: when we run this program we encounter an error: “uninitialized constant Object::Song.” This means there is no Song class! Let’s add this now.

In TDD we only write enough code to fix the problem at hand. It is important in RSpec to write the smallest possible test case matching what we need to program in our class file (Andrew Burgess, “Ruby for Newbies”).

Moving from our spec/song_spec.rb to our song.rb file, we write our code for the Song class:

class Song
end

Re-run our tests and you see we are still failing. Here’s why: our empty Song class lacks an initialize method accepting three arguments, so calling Song.new with a title, an artist, and a genre raises an ArgumentError. Typically, we follow the process of writing a test (or tests), running these tests, seeing these tests fail, making these tests pass, refactoring our code, and repeating this process until we are done.

Onto more tests for our Song:

describe "#title" do
it "returns the correct song title" do
expect(@song.title).to eq("Title")
end
end
describe "#artist" do
it "returns the correct artist" do
expect(@song.artist).to eq("Artist")
end
end
describe "#genre" do
it "returns the correct genre" do
expect(@song.genre).to eq(:genre)
end
end
end

This logic is straightforward: as you read the above tests you should know exactly what they are doing and referencing.

But run your tests again and you see they are failing! This is because in our Song class we have to initialize new instances of Song with the attributes outlined above: :title, :artist, and :genre. Let’s fix this now in our Song.rb file.
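A minimal sketch of that song.rb — attr_reader names chosen to satisfy the specs above:

```ruby
# song.rb -- the smallest Song class that satisfies our specs
class Song
  attr_reader :title, :artist, :genre

  def initialize(title, artist, genre)
    @title  = title
    @artist = artist
    @genre  = genre
  end
end
```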

We pass in three arguments to our Song class which initializes new instances of Song with a title, an artist, and a genre. Run our tests now and they pass.

You may notice although our tests pass we are not able to track our progress within our test suite. Luckily we can fix this by customizing the way RSpec runs by creating a “.rspec” file in our main directory.
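A common choice for that .rspec file (an assumption here — any of RSpec’s command-line flags will work) is:

```
--color
--format documentation
```

--color turns on the red/green output, and --format documentation prints each it description as it runs, so you can watch your progress.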

Run these tests again and you’ll see a few colorful features added to your testing suite!

RSpecing the Library Class

This is slightly more complicated and involves some hard coding. Let’s build out new instances of Song in our Library object.

We use two before blocks: one for :each and one for :all. In the before :all code block we hard code some of our data to use in our tests. Here I have created an array of Songs, each with a corresponding genre, title, and artist. When we run our program we will open the file “songs.yml” in “w” write mode and use YAML to dump this data as an array into this file.
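The body of that before :all block looks something like this — the song titles here are hypothetical stand-ins, with two songs in :classic_rock to match the counts the specs expect:

```ruby
require "yaml"

# Stand-in for the real Song class defined in song.rb.
Song = Struct.new(:title, :artist, :genre)

# Five hard-coded sample songs (hypothetical titles), two in :classic_rock.
songs = [
  Song.new("Thunder Road",  "Bruce Springsteen",  :classic_rock),
  Song.new("Gimme Shelter", "The Rolling Stones", :classic_rock),
  Song.new("Blue in Green", "Miles Davis",        :jazz),
  Song.new("Teardrop",      "Massive Attack",     :trip_hop),
  Song.new("Respect",       "Aretha Franklin",    :soul)
]

# Open songs.yml in "w" write mode and dump the array as YAML.
File.open("songs.yml", "w") { |file| file.write(YAML.dump(songs)) }
```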

You’ll notice when we run rspec spec in our terminal, YAML zips up this data and dumps it into a new “songs.yml” file in our main directory. Check it out:

YAML is “a human friendly data serialization standard for all programming languages.” Essentially, a text-based database kind of like JSON. We import YAML in our spec_helper.rb by requiring it alongside our other gems.

In our example we use two commands: dump, which spits out serialized data into a string, and load which takes the data string and converts it back into Ruby objects. The YAML load syntax looks something like this:
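A round-trip sketch of those two commands on some simple, hypothetical data:

```ruby
require "yaml"

# dump spits out serialized data as a string...
yaml_string = YAML.dump([{ "title" => "Respect", "genre" => "soul" }])

# ...and load takes the data string and converts it back into Ruby objects.
songs = YAML.load(yaml_string)

puts songs.first["title"]  # "Respect"
```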

So far we have created a file with some data we’ve hard coded. Perfect. Before :each test we create a new Library object, passing it the YAML file name. We write our corresponding tests for this:

describe "#initialize" do
context "initialize Library with no parameters" do
it "Library has no songs" do
lib = Library.new
expect(lib.songs).to eq([])
end
end
context "Initialize Library with a YAML file parameter" do
it "Library has five songs" do
@lib.songs.length == 5
end
end
end
describe "#get_songs_in_genre" do
it "returns all the songs in a given genre" do
@lib.get_songs_in_genre(:classic_rock).length == 2
end
end
describe "#add_song" do
it "accepts new songs and adds to library" do
@lib.add_song(Song.new("A Jealous Heart Can Never Rest",
"The Black Madonna", :techno))
end
end
end

We start these tests with a describe block for the Library #initialize method. Notice the new block above: context, which specifies a particular testing situation for our #initialize method. Here we spell out two contexts: “Initialize Library with no parameters” and “Initialize Library with a YAML file parameter.” In the former we are testing a Library object at initialization without any data. What does this look like? Well, an empty array of course! Thus we write:

expect(lib.songs).to eq([])

When we initialize with a YAML file we expect the length of the library’s songs to equal 5 because we hard coded 5 songs into it. Run these tests and you’ll see they are passing for this particular testing context.

Naturally, we’d like to build out the functionality of our program. Let’s say we want our program to add a new instance of Song to our Library object — or query a song by its title. Let’s write these methods in our Library.rb file:
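A sketch of library code that satisfies the specs above — the method bodies and the YAML-loading detail are my assumptions, not the original file:

```ruby
require "yaml"

class Library
  attr_reader :songs

  def initialize(file = nil)
    # With no parameters the library starts empty; with a YAML file
    # parameter we load the hard-coded songs back into Ruby objects.
    # (unsafe_load is needed for custom classes under Psych 4+.)
    @songs = file ? YAML.unsafe_load(File.read(file)) : []
  end

  def add_song(song)
    @songs << song
  end

  def get_songs_in_genre(genre)
    @songs.select { |song| song.genre == genre }
  end
end
```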

Once we are happy with the functionality of our program we stop. Of course we can continue writing methods and their tests ad infinitum — until our program does what we want it to do.

Let’s stop here and run our tests.

Well looky there: our tests pass!

RSpec is sometimes described as a Domain-Specific Language (DSL) because it describes the behaviour of objects. As programmers we write our tests first because we need to know what to test before learning how to test (Build Awesome Command Line Applications in Ruby: 143).

★

Ruby is a nimble programming language because it is object-oriented. Abstraction gives the programmer the flexibility to be creative and poetic, telescoping in and out of repetitive methodological processes — loops, iterations, logical workflows — to get to the meat-and-potatoes of the problem.

Objects in Ruby are like complex versions of functions. They let you keep an “inside” full of variables and functions that revolve around an “archetype” or system of behaviors (like a Class or a Method). In Ruby, there is a keyword “self” that allows you to gather all the variables of that object around a single point of (self) reference.

Background

As a novice programmer, it is often difficult to understand abstract concepts like “Class” and “Method,” which fail to have grounding in the 3D, physical world. In Ruby, objects have “self-reference,” meaning objects understand what methods they are accessing at a given time. A Ruby object understands it is an object, complete with its own unique identity, blueprint, and object attributes.

This is fine and dandy. But how might we keep track of this metaphor “Ruby object?” What does this mean, when we are tallying method inputs and their outputs? How does “self” translate into hard code and what is the relationship of “self” in the context of its use in ruby classes and methods?

In Ruby, “self” gives us access to the current scope of the object. This keyword is the combination of the input data and method(s) of our object, and it changes according to what “self” is referencing. In a nutshell, “self” is understood as “current scope.”

Deconstructing “self” requires an understanding of what these abstract inputs, classes, and methods are doing in order to know what “self” is referring to. Adapted from “Hackhands,” “The Golden Three Rules of Self” cover 9 out of 10 applications of “self” in Ruby.

Breaking Things Down

In programming, we decompose a problem into smaller problems so it becomes easier to solve. We make sure we have an “inside” and “outside” to a sub-problem so the “outside” can worry about what it needs from the sub-problem and the “inside” can worry about how to solve the sub-problem.

A function is a way of naming a subproblem, passing it some specifics, and letting the inside come up with the answer. Functions are good for a couple of variables. But what about a complex data structure like the one below?
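The original example didn’t survive, so here is a hypothetical stand-in: a tangle of loose variables and a function that all describe the same dog, with everything passed around by hand.

```ruby
# A pile of related variables that all describe one dog.
dog_name       = "Koko"
dog_age        = 3
dog_weight     = 12.5
dog_poop_times = ["07:15", "12:30", "18:40"]

# Every function needs the whole pile handed to it, every time.
def describe_dog(name, age, weight, poop_times)
  "#{name} (#{age} years, #{weight} kg) pooped #{poop_times.length} times today."
end

puts describe_dog(dog_name, dog_age, dog_weight, dog_poop_times)
```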

Scary, I know. The above example needs an object which holds a bunch of variables and a bunch of methods. The variables in an object are related because they describe an object — or at least something that is convenient for us to conceive of as an object. Allow me to illustrate with an example.

Boxes of Dogs

Say you are building a program to retrieve information on Dogs. Picture this: you are in a room with four separate boxes. Inside each of the four boxes is a different type of dog: a Beagle, a Poodle, a Pug, and a Bulldog. Every “box machine” is the same but every dog inside the box machine is different, and the dog inside each “box machine” represents the program data. On each of the boxes are method buttons: “dog_name”, “dog_weight”, “dog_poops.” When you put a Dog in a box and push a box_button, the box_machine gives you information about that dog. For example, if the dog poops, a button will be able to tell you how many poops that dog pooped, even though you can’t see the poops.

The object definition — the code you use to define an archetype — is different from a dog instance, a specific version of that object.
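In code, the box metaphor looks something like this — a sketch of my own, with hypothetical method names matching the buttons above:

```ruby
class Dog
  attr_reader :name, :age, :number_of_poops

  def initialize(name, age)
    @name = name
    @age  = age
    @number_of_poops = 0
  end

  def dog_poops
    # pushing this button updates the count for this particular dog
    @number_of_poops += 1
  end

  def say_hello
    # inside an instance method, self is the dog the method was called on
    "Woof! I am #{self.name}, age #{self.age}."
  end
end

koko = Dog.new("Koko", 3)
koko.dog_poops
puts koko.say_hello         # "Woof! I am Koko, age 3."
puts koko.number_of_poops   # 1
```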

In the above context, self is the object say_hello is called on: the instance “Koko,” itself an object built from the Dog definition. An object definition is like a box_machine that has buttons. When you push the say_hello button, the Dog box_machine performs a calculation and spits out information about that particular Dog object.

When you create the above Dog_Box definition you use “self” to reference information about the Dog. You might use “self.name” or “self.age” or “self.number_of_poops” to obtain information about each of the Dogs. Remember: “self” is the box’s way of getting information about the Dog whereas Dog is the data that fills the Dog’s variables and attributes.

Self is a fundamental concept in object-oriented programming because, as programmers, we put things “inside” of a function to decompose a problem or operation. Accessing the function’s name with “self” is easier than dealing with the whole big chunk of code (block) every time you perform a calculation.

As someone new to the dataviz party, I’ve often contemplated what makes a good visualization. How do data visualizations “work” — or not work? What’s the secret sauce here, anyway?

Thankfully, David McCandless gives us his “What Makes a Good Visualization” model, a delicate and careful marriage of the implicit and explicit, and the tension between the two. See below:

What Makes A Good Visualization, David McCandless 2015.

Successful visualization, then, lends itself to the artful synthesis of information (data), story (concept), goal (function), and visual form (metaphor). What’s especially cool is how new and exciting these fields are within the realm of data visualization / information design.

Visual form (metaphor), as it is currently known in information design, is a relatively new practice, compared to, say, information design in film or animation. Methods, avenues, and templates for visual form are only now being developed and refined. Think, then, what can be accomplished in this field if we orient our focus from 2D to 3D information design! The opportunities for design in a statistical and analytic context abound.

So do the good, smart visualizations on McCandless’ blog. Peeps a few of my favorites (reproduced below for your scrolling pleasure):

Common MythConceptions: World’s Most Contagious Falsehoods. David McCandless. 2015.

Yesterday, I happened across a video by data journalist extraordinaire David McCandless. Within it he discusses his dataviz process and the way he leverages statistical models and large amounts of information into meaningful, artful visualizations.

The video, of course, also stands as a corporate plug for the new Microsoft Office 365 suite (about which, from a UX standpoint, I remain conflicted — more on this later). Corporate plugs aside, the short is a cool introduction to information design. O, the art you can make with data! Pretty cool, you guys.

In Dear Data, Giorgia and Stefanie collect data on day-to-day living. Weekly, they record their data as they experience it, parameterized by a typical component or theme of human experience. These themes range from the amusing and commonplace — “A Week of Swearing” — to the not-so-easily quantifiable — “A Week of Negative Thoughts.” At week’s end, the women translate their data into analog drawings on postcard-sized pieces of paper. They slot these postcards into the mail and send them overseas until they arrive “at the other person’s address with all the scuff marks of its journey over the ocean”: a type of “slow data” transmission.

Week 07, A Week of Complaints

Week 41, A Week of Music

Week 25, A Week of Friends

Week 40, A Week of Meeting New People

On their website, Giorgia and Stefanie describe the method behind Dear Data:

By creating and sending the data visualizations using analogue instead of digital means, we are really just doing what artists have done for ages, which is sketch and try to capture the essence of the life happening around them. However, as we are sketching life in the modern digital age, life also includes everything that is counted, computed, and measured.

We are trying to capture the life unfolding around us, but instead we are capturing this life through sketching the hidden patterns found within our data.

So far we’re having a lot of fun while we learn about our own and each other’s lives – and we’re also trying to get better at drawing in the process.

We’ve also noticed that the data collection and visualization process has become a sort of performance and ritual in our lives, affecting our days and weeks, and inherently changing our behaviour.

But really, we also started this project to show how “data” is not scary, is not necessarily “big”, and that you need to know almost nothing about data to start collecting and representing it (just a pencil, a notebook and a postcard!)

The schematic link between Dear Data and Frances Ha? Besides standing on their own as spectacular projects, the two share narrative similarities. This is exciting within the realm of trans-media storytelling because the two projects treat time in a similar way.

In Frances Ha, scenes are vignettes for Baumbach’s protagonist; chapters can be seen as keyholes into character, revealing aspects of Frances Ha’s emotional architecture.

In Dear Data, one week’s data project is eclipsed by next week’s data project. There are no new developments to old data, except for a more accurate “sum-total” understanding of the artist — the woman — behind the data.

In each story, we gain insight and understanding through accumulation, rather than through continuation.

And what does this mean in trans-media storytelling? Well, for one, a more varied and colorful rhetoric — even if portrayed in black and white.

What does it mean to visualize culture? If information flows through people, how is this animated across time and space? To what extent does information flow enhance or diminish the “identifiable” cultural matrix?

“Charting Culture” is a video based on a paper by Maximilian Schich, an art historian at the University of Texas at Dallas. Essentially, the visualization charts the migratory patterns of historically prominent individuals, with data gathered from Freebase, a Google-owned information repository of people, places, and things. By plotting the birth and death locations of 120,000 individuals important enough to have their births and deaths recorded, “Charting Culture” illustrates the ebb and flow of culture as embodied in the data patterns of a few significant humans from 600 BCE to 2012 CE.
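The mechanics behind the video are simple enough to sketch. As a toy illustration — the names and coordinates below are my own illustrative picks, not records from the Freebase dataset — pairing each notable figure’s birthplace with their place of death, then averaging the two point sets, already hints at the gravitational pull of cultural centers:

```python
# A stdlib-only toy of the idea behind "Charting Culture": each record
# pairs a notable person's birth place with their death place; comparing
# the centroids of the two point sets shows the net "pull" of cultural
# centers. Coordinates are approximate and purely illustrative.

people = [
    # (name, (birth_lon, birth_lat), (death_lon, death_lat))
    ("Leonardo da Vinci", (10.92, 43.79), (0.99, 47.40)),   # Vinci -> Amboise
    ("Frédéric Chopin",   (20.67, 52.25), (2.35, 48.86)),   # Żelazowa Wola -> Paris
    ("Vincent van Gogh",  (4.74, 51.59),  (2.17, 49.08)),   # Zundert -> Auvers-sur-Oise
]

def centroid(points):
    """Mean longitude/latitude of a list of (lon, lat) pairs."""
    lons = [p[0] for p in points]
    lats = [p[1] for p in points]
    return (sum(lons) / len(lons), sum(lats) / len(lats))

births = centroid([b for _, b, _ in people])
deaths = centroid([d for _, _, d in people])

# In this sample the deaths cluster westward, nearer Paris: a
# (miniature) version of the video's flow toward cultural capitals.
drift_lon = deaths[0] - births[0]
print(f"birth centroid (lon, lat): {births}")
print(f"death centroid (lon, lat): {deaths}")
print(f"longitudinal drift: {drift_lon:.2f} degrees")
```

Schich’s team, of course, does far more than this — 120,000 lives, drawn as animated arcs over a world map — but the core gesture is the same: birth point, death point, and the story the aggregate tells.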

The accessibility of this visualization is what, in my opinion, makes the video so enjoyable. And yet as I watched “Charting Culture” I couldn’t help but feel Walter Benjamin rolling over in his grave, an index finger in the air, chiming in with the all-too-familiar, something-is-not-right axiom about historiography:

“There is no document of civilization which is not at the same time a document of barbarism. And just as such a document is not free of barbarism, barbarism taints also the manner in which it was transmitted from one owner to another.” (Walter Benjamin, Theses on the Philosophy of History, 256)

The problem with this video is that it fails to be open about the parameters of its data. Claiming to represent “humanity’s cultural history” is a textbook Eurocentric claim; it ignores other fundamental data, such as the internal migratory patterns of cultural figures never recorded by European courts. And this is, of course, only one such example. The data, albeit pretty, tells a redemptive story.

The visualization, however, is a damn cool melange of art and statistics. It stands as a fine example of what one can do with a question, a penchant for data mapping, and the desire to illustrate by example.

I’m an anthropologist in finance who DJs arts and culture events in Vancouver. Currently, I’m learning to program to do data visualization for social science. My mission is to combine engineering, data science, and design to transform information into art.

This blog is a people-watching-on-the-internet kind of study — an ethnographic and cultural musing on data production, information design, and the transformation of knowledge. Hopefully, in doing so, esoteric subjects like computer science, data visualization, and anthropology become a little less elusive and a little more accessible for everyone involved (myself included).

At best, Immaterial Culture is an investigation into digital and visual culture from the vantage point of the subject of study. And because WordPress is the blogosphere’s platform of choice, it’s only natural I join those blogging ranks who (which?) intend to leave a mark on the proverbial digital story.