dueling binaries from today's readings (two poems).

In thinking about how words are being defined and placed, I have come to the realization that all binaries end up replicating themselves as boundary objects across not only academic disciplines, but also across the lived experience. They are performing the very thing that they are:

the mind is a virtual mediatized performer, a listener, a perceiver of the other.

or

the body is a real, mediatized spectator, a speaker, an object of the self.

how is meaning derived?

when we move into 'online' taxonomies, how do our own assumptions (categorizations) affect the way we search, choose, and look? what does it mean when we crowdsource taxonomies - think Flickr, Delicious, etc. - do we derive more, or less, meaning? How will 'efficiency' products such as Google's Instant Search affect our categories of knowledge?

if binaries are boundary objects across everyday life, do they help us to understand across difference, or do they reinforce cultural expectations?

These are not rhetorical questions. I throw this out there hoping to start a conversation.

notes: the binaries were derived from the following:

Grosz, Elizabeth, Ch. 2 "Lived Spatiality (The Spaces of Corporeal Desire)," from Architecture from the Outside

Grosz, Elizabeth, Ch. 5 "Cyberspace, Virtuality, and the Real," from Architecture from the Outside

Auslander, Philip, Ch. 1 "Introduction: An Orchid in the Land of Technology," from Liveness: Performance in a Mediatized Culture

Auslander, Philip, Ch. 2 "Live Performance in a Mediatized Culture," from Liveness: Performance in a Mediatized Culture

Phelan, Peggy, Ch. 7 "The Ontology of Performance: Representation without Reproduction," from Unmarked: The Politics of Performance

13 comments

"If binaries are boundary objects across everyday life, do they help us to understand across difference, or do they reinforce cultural expectations?"

My initial reaction to binaries of any sort is to flail my arms and screech like a baby t-rex.

But that's my gut, and somewhere under the t-rex is a rational person saying, "Shuts it, sometimes binaries are useful."

Without understanding "black" and "white," I'd have no understanding of "slate" or "silver."

In other words, they matter in setting the spectrum, but reinforce societal norms when considered in isolation. Luckily for my inner t-rex, I think our increasingly digital lives make it increasingly difficult to view binaries without looking between them. All of those traditional pairings are fairly well turned on their heads by remix culture, multimodal participation, and rich virtual environments.

Of course, this in itself creates its own binary tension between so-called digital natives, who (might) tend toward flexibility, versus non-natives who (might) prefer stasis to change.

Thanks for your thoughts, Sheepeh. I've been thinking about this differently - in terms of structure, not in terms of end-use - but in trying to understand how knowledges are created, shared, deployed, ruptured, subverted - and am looking at how the materiality of the body translates into the 'posthuman'.

But systematically speaking, there is, of course, the 'digital' binary - as in the 0's and 1's that break all of our 'information' down into binaries. (and what does information include - is it identities, social constructions of reality, how we understand our own embodiment?) so there is the code. (and what about the comments and variable names in the code?)

Then there are the machines and the infrastructure that carries said information- from the people who build the innards of the machine to the (non-benign) corporations that feed you your wireless connectivity.

Next, there are the architects of the commercial web - (generally speaking) - choose your binary gender - except in certain instances where you can 'prefer not to say' - plus avatars that are gendered, raced, (hetero)sexualized. Checking boxes, clicking buttons - these categories are built out for us, using previously acquired knowledges about what counts as important 'information' to collect.

So then there's folksonomy and taxonomy, which all create and support knowledges that pre-exist the systems into which they are being tagged/sorted/created... so though they might get at the in-between, I'm not sure how to visualize the connections and meanings that are being made. The problem with folksonomies is, again, one of scalability. I can't see how to map the in-betweens.

There are text boxes - which can be used to subvert, occupy, deploy new meanings - including using images, not text... as you did so terrifically in your intro post. This kind of information doesn't scale well, though, so it is limited to the humans who look at it. Which is maybe what you are pointing towards?

I'm gonna leave the 'digital native' thing for another post... I think you're calling me old and static.... :)

"But systematically speaking, there is, of course, the 'digital' binary- as in 0's and 1's that break all of our 'information' down into binaries."

I'm going to have to push back against this a bit. This sentence rests on a conflation of two meanings of the term binary. The first of these is, essentially, as an either-or dichotomy. The second is as a particular notation for writing down numbers using two symbols. These definitions do not have anything meaningful to do with each other. The use of binary notation does not actually have anything to do with socially constructed binaries like male/female, black/white, etc.

The first reason for this is that virtually no one ever actually programs in binary. The kind of low-level hackery that many non-computer scientists envision as being done in binary is actually done in what is called assembly language, in which a variety of keywords are assigned to specific processor commands. If for some ungodly reason software developers need to look at these in numeric form (and indeed every assembly command has a number associated with it) they use hexadecimal, or base-16, not binary.* Anyway, essentially the only people in the CS world who deal with 0s and 1s these days are the hardware engineers who produce the actual physical chips that a computer uses. They use binary for a very good reason: because computers are essentially vast networks of microscopic switches which can be either on or off--1 or 0.
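For anyone who wants to see the notations side by side, here is a minimal Python sketch (the specific value is arbitrary, chosen only for illustration):

```python
# The same byte value written in three notations; Python accepts
# all of them and treats them as the identical integer.
value_decimal = 77
value_binary = 0b01001101
value_hex = 0x4D

assert value_decimal == value_binary == value_hex

# One hex digit stands for four binary digits, which is why
# low-level programmers prefer hex: the same information,
# just far more compact.
print(f"{value_decimal:08b}")  # 01001101
print(f"{value_decimal:02X}")  # 4D
```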

The second and more important reason is that binary is simply a notation, a way of counting, an encoding. One can freely convert from binary into any other counting system you'd like. Furthermore, binary notation can represent any kind of discrete information just fine. One only runs into trouble when one tries to deal with continuous information (for instance, color, images, or √2). Computers can't handle that, so the information must be truncated or filtered to put it into a discrete form, which necessarily means that information is lost. Now, because memory is cheap you can often get such fantastically high resolution that, at least for the purposes of Digital Humanities, the information loss isn't significant. For instance, I challenge you to distinguish between the hex colors #000000 and #000001. You can't.
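To make the hex-color example concrete, here is a short Python sketch (the `hex_to_rgb` helper is hypothetical, written just for this illustration):

```python
def hex_to_rgb(color: str) -> tuple:
    """Convert a '#RRGGBB' color string into an (r, g, b) tuple."""
    color = color.lstrip("#")
    return tuple(int(color[i:i + 2], 16) for i in (0, 2, 4))

black = hex_to_rgb("#000000")
almost_black = hex_to_rgb("#000001")

# The two colors differ by a single unit of blue out of 256
# levels per channel -- far below what a human eye can resolve.
print(black)         # (0, 0, 0)
print(almost_black)  # (0, 0, 1)
```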

At the same time, though, the information loss that occurs as information is converted from one medium to another can be quite significant. A speech transcribed as text loses a substantial amount of information and subtlety that the human ear can detect. Similarly, printed text loses a great deal of the subtlety that handwriting can convey. Thus, it can be quite hard (and often ill-advised) to attempt to convey sarcasm, especially subtle sarcasm, in writing, as readers lack the aural cues that indicate whether one is being serious or sarcastic. Even something as completely over the top as A Modest Proposal left readers wondering if Swift was actually being serious.

This is why you are absolutely right to hit on boundary objects and taxonomies. Computers by their very nature encourage discrete thinking, which requires the setting of boundaries and the drawing of lines between categories, as in a Venn diagram. This ultimately results in the loss of information. And while it is possible to implement "fuzzy" categories, it requires a great deal of artifice and effort to do so, both on the parts of developers and users. I think that this boundary between the discrete and the continuous and the more general question of what happens to information as it changes forms could be very interesting topics for humanities research.

PS: Is discrete/continuous an actual solid binary, or just socially constructed? Well... let's just say that's a very tricky question.

*This is because a single digit of hex can represent four digits of binary, making it much more compact and easier to deal with. But no one would claim that, because we represent some information in hex, we're forcing it into a hexadecimalotomy.

perhaps not so much conflation as an entanglement with which, well, I'm entangled…

I was thinking about the word 'binary' as a border object - in that it carries meaning across knowledges, but that meaning is simplified, taking the concept of 'off' and 'on' or 'yes' and 'no' and applying it to social constructedness - as a way of thinking about how these concepts apply themselves socially speaking.

if binary notation - 0's & 1's - is translated to off and on (0 is no, or off, and 1 is yes, or on) - it provides discrete, rather than continuous, information. So by the very virtue of translation from a numerical system into a yes/no, off/on dichotomy, it can become representational of and translatable to the language of social constructedness - (to the extreme of the posthuman, or genetic code)

Though many people may not program in binary, it is an underlying part of the 'computer' and I would therefore argue that it is a mind-set, a way of thinking that translates across those borders and entangles the meanings, making them inseparable. So, looking at this not from a computer science/programming point of view, but as an inter-connectedness between the machine and the human.

You so rightly said-

"binary is simply a notation, a way of counting, an encoding. One can freely convert from binary into any other counting system you'd like. Furthermore, binary notation can represent any kind of discrete information just fine. One only runs into trouble when one tries to deal with continuous information (for instance, color, images, or √2). Computers can't handle that, so the information must be truncated or filtered to put it into a discrete form, which necessarily means that information is lost. Now, because memory is cheap you can often get such fantastically high resolution that, at least for the purposes of Digital Humanities, the information loss isn't significant. For instance, I challenge you to distinguish between the hex colors #000000 and #000001. You can't."

This is what I am thinking about- work with me here a bit, as I'm not a computer scientist- but if the foundation of computer systems is in binary code, we have set up a base language that information must be translated into- causing the other programming languages to be written with the 'discrete' in mind, instead of the continuous. So as a system, then, computers support a binary, on-off dichotomy which translates into programming/scripting languages, which in turn are translated into human-readable language - through which we understand our social binaries, and therefore, can easily entangle 'binary' meanings across web/internet interfaces.

An example, as I mentioned earlier, is gender. If I am a website developer and my marketing team wants to know how many women are registering on the site, what are the main (and usually arrived at) ways I can do this on my website?

text box- fill in the blank with a string of characters

radio button- gives choice between N number of items

check boxes- gives choice between N number of items

'drop down' menu- gives choice between N number of items

Now- do I want the computer to process the information collected, or do I want to do it myself? Chances are, I'd like the computer to analyze the data, and spit out a final number at me so that I can forward it to the marketing team and go back to developing my website. So this leaves out the text box- it's not 'granular' enough to provide the computer with meaningful information.

The other 3 items - radio buttons, check boxes, and drop-downs - all require a human to input language that will be represented by the code behind it. This is where the binary gender construct gets translated. From my life experience, I would argue that many (Western) folk do not think of gender as being more than a binary - it's institutionalized as such, so almost all of the forms that I come across on the web only allow for 2 choices - male or female. Some of the more 'progressive' organizations out there might give "prefer not to say" or "other" - but information is being lost here as well.
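To sketch why the marketing team prefers fixed choices, here is a hypothetical Python example (both response lists are invented purely for illustration):

```python
from collections import Counter

# Hypothetical responses from a free-text "gender" box: the same
# identity arrives under many spellings, so the computer cannot
# aggregate them without human interpretation.
text_box_responses = ["female", "Female", "F", "woman", "femme", "girl"]
print(Counter(text_box_responses))
# Six distinct strings for what a marketer would call one category.

# Responses from a two-option radio button are trivially countable --
# which is exactly why the form is built that way, and exactly where
# everyone outside the two options disappears from the data.
radio_responses = ["female", "male", "female", "female", "male"]
print(Counter(radio_responses))  # Counter({'female': 3, 'male': 2})
```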

To go on to your last couple of lines in that paragraph-

"Now, because memory is cheap you can often get such fantastically high resolution that, at least for the purposes of Digital Humanities, the information loss isn't significant. For instance, I challenge you to distinguish between the hex colors #000000 and #000001. You can't."

I agree- I can't tell the difference between those hex colors - but stepping away from color choice, I would argue that the information loss for digital humanities, or people more generally, is quite significant. But again, I'm entangling meanings of 'binary' across disciplines, trying to find- in the borderlands- how meanings are not discrete, but are deeply engaged with each other - bringing 0's and 1's to the surface was, for me, a way of showing that no system is neutral, and by going to the 'roots' we can begin to understand the layering and interconnectedness of everything we do.

I like what you said about 'fuzzy' categories- and I think that if I understand you correctly, it is in the same borderlands as my 'in-between-ness' that I am thinking about.

I think we are largely on the same page, but I still think that you may be over-emphasizing binaries. Binaries are but a specific case of the wider phenomenon of what occurs when we discretize or digitalize continuous or analog information. What's more, they would result regardless of what encoding computers used. Now, you may well be right that the use of binary code--even as low-level and segregated from everyday programming as it is--encourages thinking in binaries in particular. But I think this is secondary to the broader issue, which is that computers (and for that matter anything based on a language with fixed, finite character sets) inherently encourage the discretization and therefore the distortion of continuous information.

That is because, and this is very important, computers do NOT depend on binary. The theoretical model upon which all modern computers are based is called the Universal Turing Machine, or UTM. Essentially, a UTM is modeled as a machine that can read or write from a tape with symbols written on it, and can transition to different states upon seeing a particular symbol on the tape. A UTM tape can use any set of two or more symbols. In fact, when I was taught the formulations for Turing Machines, typically the examples used contained three symbols: 0, 1, and a symbol for blank spaces on the tape. They could just as easily contain five, ten, twenty-six, or a hundred and fifty--it makes no difference as to what functions the computer can compute. Physical computers only use binary because it is convenient from a hardware perspective to do so, and because the number of symbols used does not affect what the computer can do. Indeed, you can do just as much with two symbols as you could with a hundred and fifty, or a million.
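A quick Python sketch of the point that the symbol set is interchangeable: the same number can be written with two symbols, three, or sixteen, and no information is gained or lost (the `to_base` helper is written just for this illustration):

```python
def to_base(n: int, digits: str) -> str:
    """Write the non-negative integer n using an arbitrary symbol set."""
    base = len(digits)
    if n == 0:
        return digits[0]
    out = ""
    while n:
        n, r = divmod(n, base)
        out = digits[r] + out
    return out

# One value, three symbol sets, identical information.
n = 42
print(to_base(n, "01"))                # binary:  101010
print(to_base(n, "012"))               # ternary: 1120
print(to_base(n, "0123456789abcdef"))  # hex:     2a
```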

What Turing machines cannot handle are infinite or continuous symbol sets (indeed, can you even imagine what a continuous symbol set would look like?). Turing machines and all other digital computers must have a finite set of symbols in order to operate, which means that all the information fed into it must first be converted from analog to digital form. Yes, this encourages the creation of artificial dichotomies and binaries, but it also encourages all manner of other artificial taxonomies, some of which are partitions, others of which are more like Venn diagrams or inheritance hierarchies.*

So, what I'd like to invite you to do is to consider some thought experiments. First, let's consider a distant race of aliens identical in all ways and practices to humans, except for some reason they use ternary computers (that is, computers whose fundamental language consists of three symbols rather than two, as ours do). Like humans, this alien race has two "primary" sexes, male and female, but also a small minority of intersex, transgender/sexual, genderqueer, &c. individuals. They also, tragically, have marketing teams. Using your example of the website, I submit to you that the alien web developer's marketing team would still want a radio button "Male/Female" selection, despite the fact that their computers operate in ternary. Why? Because radio-button responses are easier to aggregate than responses from a text box, and because a text box would serve such a vanishingly small percentage of potential customers while causing their statistical models so much inconvenience that marketing simply does not care. The gender binary here is reinforced not by the number of symbols the computer's basic language uses (remember, our aliens use ternary computers, not binary ones), but by the large majority of users who identify as one of the "primary" genders; by the fact that it simplifies the statistical model, not only by excluding people who don't fit in the gender binary but also by ensuring that people who do fit in the gender binary use uniform terms; and by the fact that, by their digital nature, computers encourage the discretization of continuous data and, by extension, the drawing of hard boundaries.

Next, let's consider another distant alien race, again very similar to humans, only this time possessing three "primary" sexes each making up roughly a third of the population. Like humans, their computers use binary, and like humans, they have a small minority of intersex, transgender/sexual, genderqueer, &c. individuals. And like us, they also have marketing teams. I submit to you that here again, the alien marketing team will want its web developers to use radio buttons for users to select their gender, for the same reasons, only this time instead of two radio buttons to click, there will be three. Again, this artificial gender ternary is not being encouraged by the number of symbols the computer's basic language uses (remember, our aliens use binary computers, but have three "primary" genders), but by the same reasons that encourage the aliens with the ternary computers to use radio buttons.

So the basic symbol sets our remarkably anthropomorphic aliens' computers use would have no impact upon their resulting web design. Speaking generally, the cardinality of the symbol sets used would have at most only a secondary impact on their wider thinking, but I suspect that this would primarily be a matter of form, not one of function.** The primary issue is the use of discrete symbol sets in the first place. And so it is with us. The use of binary code in computers may well have helped to encourage binary thinking, but this is secondary to the wider impact of computers, which is to encourage discrete thinking and hard boundaries as opposed to continuous thinking and fuzzy boundaries. It encourages, nay, virtually enforces the drawing of sharp lines, the formation of rigid--albeit sometimes overlapping--categories, and, often, the loss of nuance.

But again, I maintain that this is not actually something that came about with computers. You can trace it directly back to the development of writing, which forced analog sounds into a discrete character set, and the development of movable type, which did away with the variation and thus much of the nuance present in handwritten documents (where indeed even inkspots, let alone variation in the pen- or brushstrokes, can convey information).

So to sum up, binaries are of only secondary importance here. What matters more are the overarching processes of discretization, codification, taxonomizing, and digitalization--not how many digits are used.

*If you were feeling really masochistic, you could absolutely define these hierarchies as sets of dichotomies or binaries. But that would turn into a real pain very, very quickly.

**For instance, the aliens with the ternary computers might have the following basic operation in their assembly language: cmp x y z, which stores 0 in z if x < y, 1 in z if x == y, and 2 in z if x > y. This would be natural because the result would only take up one "trit". Higher level languages might express the operation as x # y, which might encourage the aliens to use switch statements more frequently and if/else statements less frequently. In other words, their programs would do the same things, but have slightly different forms. But even with ternary computers I imagine binary logic would still remain quite useful. And you would still have radio buttons, though you might also see three-state UI elements more often than you do on our own binary computers. Again, though, this is a matter of form--function would remain equivalent.
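For the curious, the hypothetical three-valued comparison can be sketched in Python on an ordinary binary machine, which is itself a small demonstration that the ternary instruction changes form, not function (the `cmp3` function is invented for this illustration):

```python
def cmp3(x, y):
    """A three-valued comparison like the hypothetical ternary
    'cmp x y z' instruction: 0 if x < y, 1 if x == y, 2 if x > y."""
    if x < y:
        return 0
    if x == y:
        return 1
    return 2

# The same three-way branching a ternary machine would get in one
# instruction; on our binary machines it simply takes two comparisons.
print(cmp3(3, 7))  # 0
print(cmp3(5, 5))  # 1
print(cmp3(9, 2))  # 2
```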

"Next, let's consider another distant alien race, again very similar to humans, only this time possessing three "primary" sexes each making up roughly a third of the population. Like humans, their computers use binary, and like humans, they have a small minority of intersex, transgender/sexual, genderqueer, &c. individuals. And like us, they also have marketing teams. I submit to you that here again, the alien marketing team will want its web developers to use radio buttons for users to select their gender, for the same reasons, only this time instead of two radio buttons to click, there will be three. Again, this artificial gender ternary is not being encouraged by the number of symbols the computer's basic language uses (remember, our aliens use binary computers, but have three "primary" genders), but by the same reasons that encourage the aliens with the ternary computers to use radio buttons."

Have you read any Ursula Le Guin? Her novel The Left Hand of Darkness posits an alien race made up of androgynes who only have a gender identity when they're "in heat," and who can become either male or female. Even thinking of a race with three primary sexes assumes clear boundaries between the sexes, as you point out - an assumption that Le Guin dismantles in this novel. Anyway, this is mostly a recommendation to read Le Guin if you haven't, not a serious point.

This is a fruitful conversation. I think what you have explained in your last post is pointing to what I was getting at in my original post. We are talking about the same thing, just from different points of view and areas of expertise. This is a classic case of 'binary' as boundary object- and the difficulties of doing trans-disciplinary work, such as what is being done in the Digital Humanities.

Right now, in our discussion, we are reinforcing our own academic expectations, and developing our points from these places. How can we arrive at a 'common' trans-understanding of difference so that we can move forward and explore the possibilities within these mind-sets?

If you go back to my original post- you will see that I was speaking of human language- of words- and how word 'binaries' can be used to either reinforce or understand across difference. My questions were around how these move into 'online' spaces where there are taxonomies - specifically user-created 'folksonomies'-

which is essentially what you mention in your last post:

"So to sum up, binaries are of only secondary importance here. What matters more are the overarching processes of discretization, codification, taxonomizing, and digitalization--not how many digits are used."

This can be compared to my original post:

"when we move into 'online' taxonomies, how do our own assumptions (categorizations) affect the way we search, choose, and look? what does it mean when we crowdsource taxonomies - think Flickr, Delicious, etc. - do we derive more, or less, meaning? How will 'efficiency' products such as Google's Instant Search affect our categories of knowledge?

if binaries are boundary objects across everyday life, do they help us to understand across difference, or do they reinforce cultural expectations?"

I'm so excited to find people talking about categories. My own research is on genre theory, using work in the cognitive sciences on categories. If you haven't already, I highly recommend you check out Lakoff's Women, Fire, and Dangerous Things for a detailed discussion of the cognitive processes underlying categorization. I think what you're pushing against when you discuss binaries is the "classical model" of categories:

From the time of Aristotle to the later work of Wittgenstein, categories were thought [to] be well understood and unproblematic. They were assumed to be abstract containers, with things either inside or outside the category. Things were assumed to be in the same category if and only if they had certain properties in common. And the properties in common were taken as defining the category. (Lakoff 6)

This view is, as you can tell, a binary one. Either the thing you're trying to categorize is "inside" or it is "out". This model is, incidentally, the one used for thousands of years when thinking about genre. What Lakoff and others have shown (Eleanor Rosch is the person who discovered these things through a series of experiments) is that the mind, in most cases, doesn't actually construct categories this way. Instead, categories have a structure in which some members are felt to be a better fit than others. Those are the prototypes and are "central"; everything else is farther from the center based on how it diverges from the prototypes. This structure is impossible, however, with a classical, binary model of categories. If the members of a category are defined as sharing a set of characteristics, then all members must have those characteristics in equal degree. This would mean, in turn, that no members can be better examples than others. Yet we do find, repeatedly, these prototype effects structuring categories.

To turn to gender, think of the fact that some men are seen as more "manly" than others; some women are more feminine, right? That means we have a prototype in mind for each gender identity against which we're judging the individuals in question. In turn, it means that there is a continuum of identity inherent in the supposed binary. Evolutionary psychology (of which I'm typically skeptical) argues that this repeated desire to uphold binary categories might result from in-group/out-group identification held over from our more tribal/small-group periods. The argument goes, we readily formed into groups competing over resources, territory, etc. The groups were likely organized around some shared characteristics (again the classical view of categories/taxonomies) that enabled vilification of the other, violence, war. This interpretation is one of the few in evo-psych that I find pretty compelling.

I've seen it play out in medieval Europe, for example, with the vilification of, attacks against, and ultimate expulsion of Jews. There were, for example, regulations about what Jews could and could not wear and where they could live, in an effort to differentiate them clearly from the Christian population. In other words, laws encoding prejudice in an effort to define a binary between Christians and Jews. The fact, however, that they needed these laws suggests the difficulty of telling Jews and Christians apart. If you're resorting to clothing as a marker of difference, you must be having a hard time telling people of different faiths apart by sight. In other words, the laws and cultural practices in this case betray an anxiety over maintaining the boundaries of binary categories. Because these categories are artificial constructs and don't accurately represent how our minds categorize the world, it makes sense that extraordinary measures are required to uphold them.

By the way, even though most people don't program in binary, I do count in it when I need to count higher than ten, but want to save the number on my hands instead of in my head. It's fun. You can count up to 1023 if you use both hands.
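For anyone who wants to check the arithmetic, a quick Python sketch (the raised-finger pattern is, of course, just one hypothetical hand position):

```python
# Each finger is one binary digit: ten fingers give 2**10 states,
# i.e. the numbers 0 through 1023.
fingers = 10
print(2 ** fingers - 1)  # 1023

# Reading a hand position as a number: 1 = raised, thumb-first,
# with each successive finger worth the next power of two.
raised = [1, 0, 1, 1, 0, 0, 0, 0, 0, 0]  # hypothetical hand position
value = sum(bit << i for i, bit in enumerate(raised))
print(value)  # 1 + 4 + 8 = 13
```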

Michael- I love that you're excited to find people talking about categories- and I love even more that we are having this discussion! (Thanks again HASTAC!)

What you say about categories being continuous within themselves is an interesting point. It allows a majority of people in a given society to fall within a construct that allows them to have a liveable life (a little Judith Butler for you...). And - as you showed through your example of the history of Jews in Europe - there are material consequences that occur due to theories of categorization: how societies maintain their worlds is based in the application of theory to the body, to space and place.

A more recent example - entering a public gendered bathroom has real consequences for people when they don't fit the expectation of 'passing' in order to use the space. Investigating this 'matrix' means untangling not only gender, but race, class, architecture, histories of public urination, and of course, the space itself. (which includes phenomenology, architecture, how space is produced, etc).

So, turning to an 'online' space- how do these categorizations get replicated, translated, mutated, opposed, restructured, dissolved? My example that I gave in an earlier post was very concrete- in the shape of a form- but what happens to these categories as they shape the 'web'? What are the strands that need to be untangled?

I'm interested in the production of 'normative space' through taxonomies - more formally structured, top-down website development - such as Drupal's taxonomy module for website hierarchy and organization, or Google's new Instant Search, but also in folksonomies - like crowdsourced tagging, what we do ourselves to give meaning to images or videos on YouTube and Flickr, or our blog posts here on HASTAC.

As I mentioned earlier- we are talking across disciplines, so if I tag this post as simply 'poems' or 'binary' - who might come read this? If I include 'digital humanities' or 'learning' - then who will come? How can the action of tagging be a performance of identity, or categorical inclusion?

I enjoyed your post, Jarah, and all of the responses. As has been touched on in the comments, it seems that part of the question here depends on whether we are attending to difference as a multitude of discrete units or difference as a continuous flow, a multiplicity. This question gets taken up in a number of different ways. In his consideration of unit operations, Ian Bogost claims that Deleuze and Guattari's focus on multiplicity ends up becoming a system operation of sorts - that is, a top-down model that fails to account for discrete, countable things. In his discussion of specificity/generality and singularity, John Muckelbauer offers what can serve as a critique of Bogost, that the attention to smaller, more specific units (as in Bogost's unit operations) does not escape the problem of generality. In other words, unit operations are already system operations, just on a smaller scale.

This discussion of unit and systems operations seems to map onto the difference between something like a top-down website development model and a folksonomy. In both cases, difference is made discrete and (I think) procedural as well. That is, even in a folksonomy, difference gets defined according to specific processes and logics (although the procedure here is perhaps much more open-ended, adaptable, and perhaps discernable only once in place).

The main question at this point seems to focus on the ability of different systems (binary or otherwise) to categorize difference. Another perspective here that might be worth considering would address not the categorization of difference (which some would argue always leaves some excess of difference unaccounted for by the categories) but our exposure to difference and multiplicity. That is, regardless of our ability to categorize it, difference affects us. With this in mind, we can consider how digital spaces structure our exposedness and exteriority in particular ways. (Alex Reid considers this question with reference to digital scholarship and video production in his recent Enculturation piece.) So, we can certainly talk about how normativity gets (re)produced in digital spaces through different acts of categorization, binary thinking, unit/system operations, etc. But we can also attend to the ways that the digital exposes us to difference in unique and historically/technologically situated ways.