Thursday, November 24, 2016

For the past decade or more I’ve fiddled around with learning to code: when I first began I tried to learn some Perl, then later Python, then Ruby, then back to Python again. But I’ve never been able to stick with it for any significant period of time, and I think the chief reason is this: I still have no idea what I would ever do with any of those languages. I can’t imagine a use-case. By contrast, I’ve learned various markup languages — LaTeX, HTML, CSS — because, as someone who writes and presents what he writes to others in various venues, the uses of such tools are obvious. But most people don’t think of that kind of thing as real coding.

The article that most people quote when humanists ask whether they should learn to code is this one by Matt Kirschenbaum. Its subtitle is “Why humanities students should learn to program,” but I don’t think Kirschenbaum addresses that question directly, or with any degree of specificity.

He does describe some particular cases in which code literacy matters: “the student of contemporary literature interested in, say, electronic poetry or the art of the novel in the information age”; “the student interested in computer-assisted text analysis, who may need to create specialized programs that don’t yet exist”; “Procedural literacy, which starts with exercises like making a snowball, will be essential if humanities students are to understand virtual worlds as rhetorical and ideological spaces.” Fair enough. But what about those of us who aren’t studying virtual worlds or electronic poetry?
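Kirschenbaum's "computer-assisted text analysis" case can be made concrete in a few lines of Python. This is a toy sketch with made-up sample text, not anything from the article: it counts the most frequent words in a passage, the kind of specialized little program a literary scholar might write when no off-the-shelf tool fits.

```python
from collections import Counter
import re

def word_frequencies(text, top_n=5):
    """Return the top_n most common words in a passage:
    a toy instance of computer-assisted text analysis."""
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(words).most_common(top_n)

# Invented sample text for illustration only.
sample = "The quick brown fox jumps over the lazy dog. The dog sleeps."
print(word_frequencies(sample, top_n=2))
```

Trivial as it is, scaling the same pattern up to a whole corpus is roughly what "programs that don't yet exist" looks like in practice.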

Typically, we see the value of a particular skill — driving a car; cooking; playing a musical instrument — and because we perceive the value we go about acquiring that skill. But sometimes the arrow may point in the other direction: only when you acquire the skill do you perceive its uses. I have a suspicion that if I got really good at writing Python I would find uses for it. But because I can’t imagine what those uses could be, I have trouble sustaining the discipline needed to learn it.

4 comments:

As someone who does software development as an occupation, I'm not sure that there are a lot of uses. I've yet to come across a problem outside of what I do professionally that's begging to be turned into a CRUD app, and judging by my exposure to the open-source community, most of what my colleagues build in their spare time are tools aimed at solving problems encountered while coding.

I think we've already picked most of the low-hanging fruit for what people generally need to do on laptops / smartphones. Shoehorning computing as the solution to other problems creates as many issues as it solves (as you've noted with journaling). And the problems that I see as interesting and worth solving ("Siri - why do my bread loaves come out so much flatter than the pictures in Flour, Water, Salt, Yeast?") aren't the sort of thing amenable to hobbyist tinkering.

I think rather than learning to code, the goal should be to learn about computation. The easiest entry into understanding computation is (probably) through formal languages. I dislike dynamically typed languages because they miss much of the wonder and mathematics inherent in formal languages and systems, the part that static type systems capture. Languages like Perl, Python, and Ruby miss all of this and hide computational complexity. I think learning about computation informs a technological understanding of the world. This XKCD captures it well I think: http://xkcd.com/1425/
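The commenter's point about what type systems catch can be illustrated even in Python, using optional type annotations. This is a minimal sketch with invented names: the annotations let a static checker such as mypy flag the bad call before the program runs, whereas plain dynamic execution only discovers the mistake at runtime.

```python
def total_pages(chapters: list[int]) -> int:
    """Sum chapter page counts. The annotation says this
    expects integers; a static checker can enforce that."""
    return sum(chapters)

# Without static checking, the mistake surfaces only when run:
try:
    total_pages(["12", "34"])  # strings, not ints
except TypeError:
    print("error discovered at runtime, not before")

print(total_pages([12, 34]))
```

A statically typed language rejects the bad call at compile time; here the same information is available to a checker, but the language itself waits until execution to complain.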

