It's not the journalists' fault; it's ours. We've failed miserably at public outreach because the "leaders of our field" don't believe the public will ever understand what we do and don't care to try and explain it at a level people will understand. [...] The culture of irrelevance we've created for ourselves can't be dismissed with a hand wave...

That's some tough love, but I'm inclined to disagree. Perhaps linguists, like all academics, have some isolationist tendencies. Doug himself had a lot of trouble drumming up contributions to the Popular Linguistics Magazine. But I think a more severe problem is that linguists' point of view is actively unwanted.

To flesh out what I mean, I think it's worth speculating about why this particular piece of research on vocal fry captured the collective media imagination. The research itself was very modest in its scope, and there is a vast universe of research out there that media outlets could have chosen to report on. Putting aside the academic press, you could fill hours of television with just the postings to Science Now, where the vocal fry piece first got some play. So why did this particular piece of research get reported on TV, and all over the internet?

The answer lies, I think, in the supposed culprits: young women. This is a very simple case of language shaming. The Today Show clip described vocal fry as "animal-like," and buffered the piece with iconic images of female frivolity: shopping, gossiping, talking about boys, and watching Sex and the City. The original MSNBC blog post was updated with the "best comment so far" from Facebook, which said

"These girls sound like a bunch of neurotic dolphins who do not make sense."

"Brilliant," says the MSNBC blogger, "can you top that?" Vocal fry has thus been successfully framed as a negative behavior.

Why is vocal fry framed so negatively? Well, it's almost a tautology: if young women do something, it must be undesirable. Vocal fry is an especially striking case. Before all of this media coverage, no one, except people who work on speech, even knew what it was, or commented on it. Once it was defined and explained, and associated with young women, suddenly it fit snugly into a classic declinism frame, and a linguistic inferiority of women frame.

The supposed motives of young women for doing vocal fry are also a key element in the media coverage. They want to 1) emulate pop artists and 2) fit in with their friends. That is, they are shallow, frivolous, and thoughtless. Really, the tone of the story is only a slightly refined version of this or this.

Perhaps the coverage of vocal fry could be understood as being part of a larger trend of policing the behavior of women. In a lot of ways (dietarily, sexually, physically, professionally, etc.), there is a razor-thin range of acceptability for young women, which now apparently includes their pitch contours. If you end your utterances with a final pitch rise, you're doing uptalk (a.k.a. ending all your sentences with question marks), and if you end them with falling pitches, you're doing vocal fry.

So where does the work of a linguist fit in here? Could we have provided higher quality research and better facts, in an equally digestible manner? Probably, but I submit that media interest in vocal fry has nothing to do with facts, or the quality of the research. The commentary of a linguist would not add grist to the mill of female inferiority, and would therefore just be ignored. In fact, that's exactly what happened with Janet Pierrehumbert's contribution to the Today Show story. What she said was completely lucid, and contained no technical mumbo jumbo, but the point of the coverage was not to educate, but to shame.

The problem is that most people want to be able to use language as a device to separate the inferior from the superior. This kind of desire surfaces in almost every conversation I have about language with a non-expert. It becomes amplified in the media, and it operates at all levels of the social hierarchy. There is the denigration of people who speak non-standard Englishes. Then, there is the denigration of women's and youth's speech. At the higher levels of the cultural elite, self-worth can be determined by your choice of octopuses, octopi, octopodes, or by whether you agree that by saying "A whole wheat bagel, please," you should not have to be asked to specify that you don't want cream cheese.

This is the kind of social work that people want to use language for, and it is a frustrating cultural juggernaut to be at cross purposes with. And that is exactly why, in my opinion, most linguistic research does not gain traction in popular discourse. Before we can get to the interesting stuff, we first have to turn everyone's moral universe upside down.

And that kind of task requires something more than just scientists being open to popularizing their research. We really have to be more aggressive in a way that other sciences don't have to be. Really, it's necessary to be politicized, and I can fully understand that step being a difficult one to take for a researcher.

I see this tension being the biggest roadblock to developing larger social relevance for linguistics. Are we scientists, or are we politicians? Can we be both, effectively?

What is wrong with this video is everything. There is a brief snippet where they interview a real linguist (Janet Pierrehumbert) who says (I paraphrase) "This isn't a new phenomenon, and it's not caused by pop-stars" (see also, the related Language Log post). But see how much air time that gets! The whole premise of the piece is wrong, and she says so, and they power right along like it's irrelevant. If you were to, say, introduce a political figure on air with the incorrect party or state affiliation, you'd have to apologize on air moments later. If you report that the jury found a defendant guilty when they were actually acquitted, you'd be ripped to shreds. You state a bunch of garbage about language, and an expert tells you you've got it all wrong, oh, whatever, it's more fun this way. On this topic, and most others about language, the media coverage is of the same journalistic quality as "Dewey Defeats Truman."

What do I know about vocal fry?

Frankly, I'm not much of an expert on voice quality or register. I'm especially not too familiar with sociolinguistic work on voice quality, and that kind of knowledge seems to be necessary to evaluate the claims of this story.

However, I have had quite a bit of experience dealing with vocal fry. Vowels and their acoustics are my thing, if you didn't know, and a vowel pronounced with vocal fry can be difficult to measure. I've looked at a lot of vowels, which means I've seen a lot of vocal fry, and have my own impressions about where it occurs. Basically, it happens most often when a speaker's pitch drops, like at a phrase boundary, or sometimes when a voiceless consonant follows the vowel.

I'd agree that there is something more than simple mechanics of articulation going on with the use of vocal fry. There is definitely a stylistic component. I'd also agree, impressionistically, that women tend to do a bit more vocal fry than men, or at least it's more noticeable when they do.

But vocal fry is by no means an exclusively female quality. Arguing from anecdotes is poor form, but here is an example of a relatively high profile male doing a lot of vocal fry.

I read the paper.

When watching science reporting like this, there's always the possibility that the researchers' work is being misconstrued, either by the media outlet, or by their institution's press office. So, I made good use of my institutional access to academic journals and read the original paper by Wolk, Abdelli-Beruh & Slavin (2011), published in the Journal of Voice (I even livetweeted the process). Here are the claims that rubbed me so wrong about the Today Show clip.

1. Use of vocal fry is a new phenomenon.

2. Vocal fry is exclusively a female phenomenon.

3. Vocal fry is created and spread by figures in popular media (e.g. Ke$ha, Kim Kardashian).

I read the original paper with the aim of determining whether

1. there is evidence in the paper supporting these claims, and

2. the researchers themselves made these claims.

Wolk et al. recorded 34 women between the ages of 18 and 25, both producing a sustained vowel sound, and reading a short passage. Then, three carefully selected sentences from the reading passage were evaluated by trained speech pathologists for whether the speaker was using vocal fry. About 2/3 of the speakers were judged to use vocal fry. They also did some acoustic analysis of the vocal fry.

That is all the evidence that Wolk et al. collected, analyzed, and presented. Needless to say, it provides no support for any of the three points. On the first, they only analyzed one age group, so there is no way to tell if young people do it more or less than older people. Their discussion of background literature actually cites a number of papers from the mid 60s which argue that vocal fry is part of normal speech. So much for it being a new phenomenon. In the discussion, the authors don't outright claim that vocal fry is a new phenomenon, but they do frame the interesting research question as figuring out how much college students do it. They deserve a pass on this point, I think, but they should perhaps consider reframing their research questions as pertaining to a larger cultural pattern.

On vocal fry as an exclusively female phenomenon, I think the structure of this study presupposes that outcome, rather than investigating it. Why study only female college students if you didn't already think that only women did vocal fry? Part of the answer to that seems to be that male subjects are hard to come by for speech pathologists. Wolk et al. cite a previous study of vocal fry that looked at first year speech pathology graduate students. The sample turned out to be 94% female. Abdelli-Beruh, the second author, told the Today Show reporter that 99% of her students are female. Regardless, without a male sample, it's really impossible to draw any hard conclusions about the gender difference. At any rate, Wolk et al. don't outright say that "men don't do it," so I'll give them a pass there.

Now, for the worst part: the all important influence of popular media figures. There is less than zero evidence presented by Wolk et al. for causal influence of any variety. In fact, they cannot even claim that the patterns they found are primarily social rather than being primarily anatomical, or automatic. However, on page 4, they say

It is possible that these college students have either practiced or observed this vocal register and modeled it to match popular figures.

They said it. On the basis of zero evidence, they went ahead and said it. This is not a case of the big bad media twisting an earnest researcher's words. These researchers went ahead and speculated in an unsubstantiated and, I think, irresponsible manner. Claims require evidence, and on this point, they have none.

Vocal Hygiene

This paper also introduced me to a new range of concepts: "vocal abuse," "vocal misuse", "vocal hygiene." I have to admit, this was all news to me. They sound vaguely familiar as something a professional singer or actor worries about.

I'm not a speech pathologist, but I'd be surprised if even speakers who use vocal fry at a high rate could do so to an extent that injures them. Wolk et al. actually don't report how often their speakers used vocal fry, just how many used vocal fry at all (at least once across the three sentences). But let's go extreme and say some speakers do it once per sentence with a falling final pitch. This would exclude questions, for instance, or sentences produced with a final rise for some other reason, like uptalk (women just can't win, can they?). That's still not a lot.

I mean, there are languages out there with contrastive creaky voice (Jalapa Mazatec, for example). That means that in order to say the word you intend to, you have to use vocal fry.

Stay tuned for next time, where I will talk more about the media's coverage, and why I don't think train wrecks like this one are linguists' fault, a position I suspect is controversial among linguists.

Wednesday, December 7, 2011

Following up on my plurality post, Jon Stevens showed me this video done by Kory Stamper, an associate editor at Merriam-Webster. Based on the comments, it looks like it's gone a little bit viral.

What is so striking to me is the fictional dialogue she presents at the beginning.

So let's say you're swimming in the ocean, and you see some eight-legged cephalopods. You say to your friend, "Hey! I saw a group of octopuses." And your friend says, "Hey! You're an ignorant slob! You saw a group of octopi."

And, I think that the trigger of the "ignorant slob" judgment here is very telling. We're not talking about a non-standard dialect which may, for instance, employ negative concord (a.k.a. double negatives), or feature different verb agreement patterns. Those people are too far gone to even begin engaging with. We're not even talking about misguided prescriptive proclamations, like "don't end a sentence with a preposition," or "don't use the passive voice." That's high school English class material, unworthy of debate.

No, we are talking about the plural form of octopus.

You are unworthy.

Only performance on a task as esoteric and irrelevant to everyday life as forming the plural of octopus is adequate to separate the elect from the damned. Woe unto you who accepts the heresy of octopi. You must accept the Truth of octopodes into your heart if you don't want to sound like a fucking idiot.

On a related note, no matter what their origins were, I suspect prescriptive proclamations like "don't end a sentence in a preposition" and "don't use the passive voice" only continue to be considered virtuous because they are nearly impossible to adhere to. (Hey! A twofer!)

Monday, December 5, 2011

Update: December 8, 2011
I'm going to use this post as a running list of examples of over-latinate plurals.

Almost everyone is familiar with the uncertainty surrounding the plurals of words like platypus, octopus, and syllabus. They look kind of Latin, and a lot of high profile words with this kind of shape form their plural by changing the final -us to -i (alumni, foci, fungi). But in these uncertain cases, prescriptivists tell us we are hypercorrecting, and engaging in pseudo-Latin.

But, I'm not so sure this is simply a case where people are well educated enough to know the -us → -i rule, but not enough to know a Greek word when they see it. For instance, I've seen it overapplied to words which aren't even spelled with -us. At 1:10 in this video, Jon Stewart says

"We cannot allow ourselves, to get complacent, for the face of tyranny has many... orifi."

Ok, clearly this was done for comedic effect, but I think it's only funny because we recognize "orifi" as well formed, but prescriptively incorrect.

Even stranger, I recently had an experience where I wasn't quite sure how to form the plural of danish (as in pastry). I was telling a dinner party that I wasn't very hungry because I'd eaten a few at a coffee shop earlier. I said "I had a few..." and paused, because the first thing that came to my mind was "dani". Even stranger, my sister, who had seen me eat the offending pastries, offered "Dani?" And we are not alone! Check out this Yahoo! Question.

Whats the plural for danish?
Like if you have two danish(es?) is it dani?
Or just danishes?

So for some people, the semi-productive latinate plural rule doesn't care if it's dealing with s or sh.

In some ways, it makes total sense. I'd argue that the sequence [ɨsɨs] isn't the greatest one in the world. Once you've got a rule which would let you avoid it, why not use that all the time?

In a note related to irregular plurals, I was once asked in a question period about what kind of "metrices" I use. This is way more interesting than it initially seems. "Oh, that's just analogy from matrix," you say, but it isn't quite. The singular form is just metric. The word doesn't have the appropriate shape to undergo the irregular pluralization until after you've already added the regular plural suffix! So you wind up with metric → metrics → metrices.
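The feeding order can be sketched as a toy two-step rule cascade (the rules and function names here are hypothetical, just to make the ordering concrete):

```python
# Toy sketch of the two-step derivation: the regular plural applies first,
# and only the resulting form ends in -ics, which feeds the latinate
# analogy with matrix -> matrices. Rules here are invented for illustration.
def regular_plural(word):
    return word + "s"

def overapplied_latinate(word):
    # by analogy with matrices: rewrite a final -cs as -ces
    if word.endswith("ics"):
        return word[:-2] + "ces"
    return word

step1 = regular_plural("metric")      # "metrics"
step2 = overapplied_latinate(step1)   # "metrices"
print(step1, step2)
```

Run the rules in the opposite order and the analogy never gets a chance to apply, which is what makes the attested "metrices" interesting.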

I did this story three different times six months ago on the RidicuList, and some of the video from the Colbert Report that-- Some of the video they used, came from the Third Eagle's video responses to my RidicuLists. I like to call them Ridiculi, but you get the point.

Wednesday, November 23, 2011

Colin Wilson recently gave a talk here at Penn about why speakers don't necessarily say words in a foreign language the way foreign language speakers do. For example, the capital of Georgia (the country) is Tbilisi, which has an initial [tb] onset cluster. Here, listen to the pronunciation on Wikipedia: Tbilisi, then say it back out loud. That's basically the experiment Colin was talking about.

So, I'm guessing that if you didn't manage to say Tbilisi exactly like the recording did, you probably said something like [tɨbilisi], adding in an extra vowel between the [t] and [b]. There are a few different explanations for why you might have added in that extra sound.

1. You hallucinated, and thought you heard [tɨbilisi].

2. You accurately heard [tbilisi], but then when you tried to say it, it came out [tɨbilisi].

Colin is pursuing another kind of analysis, where the way a Georgian speaker says /tbilisi/ sounds more like the way you would say /tɨbilisi/ in English, than the way you would say /tbilisi/ in English (if you were ever to say such a thing).

It's pretty cool stuff, and strangely reminded me of a similar repetition experiment I inadvertently performed with my iPhone. Here's a video re-enactment:

How weird is that! Siri heard me say [ʃəvan], but for some reason repeated it back [sajobən]!

Ok, I guess I really know what's going on here, and it's not phonotactics, but it's fun to pretend. Clearly, the transcription with the highest probability given my speech was the Irish spelling "Siobhan": P(transcription | audio). But, given the text, the text-to-speech system (P(audio | transcription)) produces [sajobən].

It still strikes me as weird that Siri has some kind of dictionary lookup to give me "Siobhan" for [ʃəvan], but then does a procedural text-to-speech.
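The asymmetry can be sketched with a toy noisy-channel model. All of the probabilities and candidate spellings below are invented for illustration; the only real point is that recognition searches over transcriptions while synthesis does not.

```python
# Toy sketch of the recognition/synthesis asymmetry. Recognition picks
# the transcription t maximizing P(audio | t) * P(t), which by Bayes'
# rule is proportional to P(t | audio). All numbers here are made up.
candidates = {
    # transcription: (P(audio | transcription), P(transcription))
    "Siobhan": (0.9, 0.001),    # great acoustic match; rare, but in the dictionary
    "Shavon":  (0.8, 0.0001),   # decent match, much rarer spelling
    "Say bun": (0.2, 0.001),    # poor acoustic match
}
best = max(candidates, key=lambda t: candidates[t][0] * candidates[t][1])
print(best)  # "Siobhan"

# Synthesis runs in the other direction with no dictionary lookup:
# a naive letter-to-sound pass on "Siobhan" yields something like [sajobən].
```

So the dictionary knowledge lives entirely on the recognition side, which is one way the [ʃəvan] → "Siobhan" → [sajobən] round trip could come about.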

P.S. I think that I have an intrusive /l/ after "how" the second time I say "How do you spell Siobhan?".

Friday, October 21, 2011

Robert A. Muenchen is maintaining a report here on the popularity of R, a programming environment for statistics.

He's got a bunch of measures, but these really caught my eye. A site called Rexer Analytics did a survey in 2010 asking respondents which pieces of software they used in 2009. These were the results:

So, R is at the top of the list. KDnuggets did a similar poll, and returned very similar results.

The take-away message so far is that a lot of people who do data analysis use R. The plurality, even. That is the zeitgeist.

Now we come to the results that worry me. Muenchen also did an analysis of Google Scholar citations of software packages, and produced this graph.

Clearly R has a pretty sharply rising slope, but it still comes in fourth after a bunch of software that, frankly, only academics can use because they get institutional licenses.

I'm not worried because I think academics should be using R (even though I do). It has more to do with the fact that people in academia like to think of themselves as the forward thinkers, and the innovators of new ideas. But in this regard they are clearly following behind the trend that everyone else is setting. Maybe it's fitting that the SPSS curve looks not unlike what I'd imagine an ivory tower to be.

Thursday, September 1, 2011

I recently re-watched Battlestar Galactica (the re-imagined series). I had never watched the end after the season 4 mid-season break. Overall, I liked the series a lot, but wasn't a big fan of the decidedly anti-modernity finale. Do you know what is great? Medicine, and good odds of not dying in your 40s. You know what's even better? Space ships and faster-than-light travel.

Anyway, I don't want to give away spoilers (even though that wouldn't ruin it for you, so says science). My point of posting is this cool medical display from season 4 (and maybe earlier, I just noticed it in season 4).

I like this display a lot, because it fits in with the general BSG style of keeping things close to current reality, ish. Sure, they have humanoid robots, but they also still use nukes, not photon torpedoes.

I could almost imagine seeing this display today, maybe in a tech company's speculative design video. It appears to incorporate some contemporary data display ideas, like sparklines. My feeling is that in a lot of sci-fi, data displays like this are a lot more cryptic, and hardly seem practical from the view of an analyst. This display, while definitely looking futuristic, also looks like it's all about practicality.

The element that gets the most screen space is the EKG, which animates and bleeps just like in any medical drama.

Then, there are these little widgets.

I believe the larger number is the current heart rate. It updates fairly regularly, going up or down a few beats-per-minute (I missed a whole bunch of dialogue staring at this in the background). The little blue light above the heart rate blinks with every heart beat, or at least every time the display beeps. I don't know what the smaller number represents. I didn't see it update, so it might not represent dynamic data.

Then, there are these three panels, probably small-multiples of some kind.

They're largely static, except they redraw themselves every few seconds. So maybe they could be density distributions over a time interval, or maybe frequency analyses.

Then there are these bars.

This is maybe the most vexing element on the display for me. At first I thought that they might display blood sugar or oxygen relative to some baseline, but you'll notice that at some points, there are bars that go both above and below the baseline. So, they have to be two kinds of measures that are usually in a complementary distribution, but not always. Either way, it seems to clearly be a time series at a relatively large granularity, since it never redraws itself during a scene.

Lastly, there's this strip at the bottom.

It's relatively understated compared to everything else in the display, meaning it can't be any sort of really vital statistic. It looks like maybe a spectral analysis of some kind, or maybe another time series (sleeping and waking time?). This also remains static during scenes.

There are also a lot of elements of the user interface which are very contemporary. Take these boxes for instance.

I think we all know that if you were to press on the screen on one of those triangles, these little boxes would expand to show more information, or contract and hide the information they're currently displaying. This is definitely something that wouldn't have been incorporated into speculative UI designs 20 years ago.

Sunday, August 28, 2011

Well, here in Philadelphia, we've just braved Hurricane Irene. From what I've heard, damage here was relatively minimal, and we haven't lost power. My friends further north in NYC are in my thoughts, cause it looks like they got really hammered.

The silver lining here for me is that I was able to go collect data from the Weather Underground station about six blocks away from where I live. Here are the numbers.

We got 5.68 inches of rain, which fell most steadily between 6PM and midnight last night.

Barometric pressure, on the other hand, hit the floor at 6AM today.

As for wind speeds, there are two measures from the weather station. Speed is, I believe, average wind speed over the reporting time bin (which varies between 1 and 7 minutes...), and Gust is, I believe, the maximum speed during that time bin. Either way, our max wind speeds were around 11PM last night, and they've stayed pretty high into this afternoon.

Update

Well, I feel a little stupid. It looks like there are two locations on the USGS site for this earthquake, and the one I was looking at is not up-to-date... Maybe I don't feel so stupid, it's not the best kind of design.

The real data to download is here. I've already updated the links above.

Wednesday, August 17, 2011

I've been wondering if blogging does me any good. I don't mean for the heart and soul. I enjoy blogging and am going to keep it up (except for those end-of-semester hiatuses). But I've been wondering if blogging does me any good professionally, or whatever. Obviously, "a professional or whatever good" is hard to define, so I'll define it according to the data that I have.

I maintain, along with this blog, an academic website where I have all of my more serious research stuff. I've got Google analytics set up on both my blog, and my academic site, keeping track of page views. So, if I can detect that page views of my blog drive some page views to my academic website, then I'll conclude that blogging is doing me some professional good. This makes a certain kind of sense, since what matters to me at this particular stage of my professional life is getting my ideas out there, and my ideas are catalogued on my academic site.

Now here is the traffic from my academic site, and my research page on that site from the same time period.

As you can see, my academic site gets far fewer page views than my blog. Prospects are not very bright.

Autocorrelation

My first step of analysis was to figure out how correlated page views of each site were within each site. That is, how correlated are page views on my blog with page views from one day later on my blog, or two days later, etc. To calculate this, I used the acf() function in R. Here's the autocorrelation function from my blog. The x-axis represents how many days into the future you're comparing page views, and the y-axis represents the correlation between page views separated by that many days.

It looks like page views on my blog are pretty well correlated with the page views from one day before (0.45). After that, the correlation drops off, which I'll interpret as new-post decay. It seems like the influence that a single new post has on my blog traffic is fairly minimal after five days.
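I computed this with R's acf(), but the same quantity can be sketched in Python with just numpy (the page-view series below is invented):

```python
import numpy as np

def acf(x, max_lag):
    """Sample autocorrelation at lags 0..max_lag, the quantity R's acf()
    plots: lagged covariances normalized by the lag-0 variance of the
    whole series, so the value at lag 0 is always 1."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    xbar = x.mean()
    denom = ((x - xbar) ** 2).sum()
    out = []
    for k in range(max_lag + 1):
        num = ((x[: n - k] - xbar) * (x[k:] - xbar)).sum()
        out.append(num / denom)
    return out

# An invented series with some day-to-day carryover.
views = [30, 80, 60, 45, 35, 90, 70, 50, 40, 85, 65, 48]
print([round(r, 2) for r in acf(views, 3)])
```

The lag-1 value is the "new-post decay" number I'm reading off the plot: how much today's traffic predicts tomorrow's.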

Here's the autocorrelation function for my academic site.

As you can see, the over-all size of the correlations is much smaller than for the blog. This is most likely because each new post is a new event that happens on my blog, which can have an effect lasting a few days, whereas nothing happens on my academic site in the same way. However, there is an apparently cyclic pattern, where page views are most positively correlated at 7 day intervals, and most negatively correlated at 3 to 4 day intervals.

Duh! Who does work on the weekends?

To factor out this cyclic pattern, I fit a linear regression of page views for my academic site and research page with weekday as a categorical predictor. I'll use the residuals from these regressions for doing the cross-correlation.
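Taking the residuals of a regression with weekday as the only (categorical) predictor is the same as subtracting each weekday's mean, which can be sketched like this (page-view numbers invented):

```python
import numpy as np

def weekday_residuals(views, weekdays):
    """Residuals of a regression of views on weekday as a categorical
    predictor; equivalently, each day's views minus that weekday's mean."""
    views = np.asarray(views, dtype=float)
    weekdays = np.asarray(weekdays)
    resid = np.empty_like(views)
    for day in np.unique(weekdays):
        mask = weekdays == day
        resid[mask] = views[mask] - views[mask].mean()
    return resid

# Two invented weeks of page views: weekends are systematically low.
weekdays = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"] * 2
views    = [40, 42, 38, 41, 39, 10, 8,
            44, 40, 42, 39, 43, 12, 10]
resid = weekday_residuals(views, weekdays)
# The weekly cycle is gone: residuals for each weekday sum to zero.
print(np.round(resid, 1))
```

Whatever correlation structure survives in the residuals is then not attributable to the shared "nobody works weekends" cycle.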

Cross-correlation

Next, I checked the cross-correlation of (residualized) page views. This checks to see how correlated page views are between any two of the sites at different time lags. First, here's the cross correlation of my main academic site and my research page. I knew these would have to be highly correlated, since my research page is the most clicked link on my main page.

Correlations at negative lags indicate that visits to my research page were correlated with visits to my main academic site a few days later. Positive lags indicate the reverse: visits to my main academic site were correlated with visits to my research page a few days later. The correlation at lag 0 indicates how correlated visits to the two pages were on the same day.

Unsurprisingly, the only strong correlation between visits to my main academic site and my research page is on the same day. That spike around 10 days makes no sense, so it's probably just noise.
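Cross-correlation at a lag is just the ordinary correlation between one series and a lagged copy of the other. A minimal numpy sketch, with toy data in which the second series echoes the first two days later:

```python
import numpy as np

def ccf(a, b, lag):
    """Correlation between a[t] and b[t + lag]. A strong value at a
    positive lag means a leads b; at a negative lag, b leads a."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    if lag >= 0:
        x, y = a[: len(a) - lag], b[lag:]
    else:
        x, y = a[-lag:], b[: len(b) + lag]
    return np.corrcoef(x, y)[0, 1]

# Toy series: b is a copy of a shifted two steps later (wrapping at the
# edges), plus a constant, so the cross-correlation should peak at lag 2.
a = np.array([5.0, 9, 2, 7, 1, 8, 3, 6, 4, 10])
b = np.roll(a, 2) + 1
best = max(range(-3, 4), key=lambda k: ccf(a, b, k))
print(best)
```

If blog posts were driving academic-site traffic, the blog-to-academic-site version of this plot would show a clear peak at some small positive lag; mine shows bupkis.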

So, drum-roll please, how correlated are visits to my blog and my main academic site?

I would analyze this as bupkis. Likewise for my research page.

To sum up

It looks like blogging is just a fun diversion for me right now. Even though it would have been a lot of fun to come to my advisor or department chair with strong results that blogging is professionally fruitful, I'm fine with the way things turned out.

However, I shouldn't have been surprised. If I was trying to use blogging as a platform for promoting my professional work, I wasn't doing it very well. If you're looking at my blog now (vs an RSS subscription), you may notice that I've added some links to the right, which lead to my academic site, and to my github site. Why not try to make blogging work for me a little bit?

Sunday, August 14, 2011

In the process of moving, I've come across a bunch of books from my undergrad Sociology minor days, including a book of collected works by Max Weber. You may know him best for the notion of the Protestant work ethic.

At any rate, the volume includes text from a lecture called Science as a Vocation (available free online here), which I've decided to read through because of its personal relevance, and I've come across this wonderful paragraph.

"Nowadays in circles of youth there is a widespread notion that science has become a problem in calculation, fabricated in laboratories or statistical filing systems just as 'in a factory,' a calculation involving only the cool intellect and not one's 'heart and soul.' First of all, one must say that such comments lack all clarity about what goes on in a factory or in a laboratory. In both, some idea has to occur to someone's mind, and it has to be a correct idea, if one is to accomplish anything worthwhile. And such intuition cannot be forced. It has nothing to do with any cold calculation. Certainly calculation is also an indispensable prerequisite. No sociologist, for instance, should think himself too good, even in his old age, to make tens of thousands of quite trivial computations in his head and perhaps for months at a time. One cannot with impunity try to transfer this task entirely to mechanical assistants if one wishes to figure something, even though the final result is often small indeed. But if no 'idea' occurs to his mind about the direction of his computations and, during his computations, about the bearing of the emergent single results, then even this small result will not be yielded."

This seems to me to be a nice enough refutation, 90 years prescient, of that strange Wired article from a few years ago which claimed that big-data is going to kill the scientific method.

It also resonates with an issue near and dear to my heart: promoting statistical literacy within linguistics. And that takes a two-pronged approach. The first is developing statistical competency to be able to run and analyze your own statistics, without relying on semi-automated techniques, like stepwise regression, or, put slightly differently, without transferring the task entirely to mechanical assistants. The second is to be sure to treat statistical methods as tools for investigation, not to reify them as the objects of inquiry themselves, nor their results as god's truth, spoken by its R-acle.

Tuesday, August 9, 2011

I've already blogged about what I didn't like about Mark Pagel's TED talk. I'm not going to beat up on it more, specifically. Rather, I'd like to problematize the meme that he kicked it off with.

"Each of you possesses the most powerful, dangerous and subversive trait that natural selection has ever devised. It's a piece of neural audio technology for rewiring other people's minds. I'm talking about your language, of course, because it allows you to implant a thought from your mind directly into someone else's mind, and they can attempt to do the same to you, without either of you having to perform surgery." [emphasis added]

Hopefully by now, you've caught on to my own subversive juxtaposition. Briefly, I think this meme is cuter than it is true.

I call it a meme, because I seem to recall it showing up in Steven Pinker's The Language Instinct, and I'm sure it's popped up other places too. Obviously, this meme brushes right up against other issues regarding language and thought. For instance, is language the structure of thought, and does language somehow constrain our thoughts? I'm not well versed enough in these issues to comment, and I only mention them here in order to say that I won't be saying anything about them, except for what I have already said.

Did that make sense? If so, I have succeeded in externalized telepathy. If not, that's sort of my point. Unsuccessful thought implants are a pervasive fact. Just ask the customer and the project leader, or the teacher and the student. If it were so easy to implant thoughts in others' minds, would schooling really take so long? Perhaps thought implant rejection can be blamed on external factors, like inattention on the hearer's part, or the complexity of the thought being transmitted, but I'd be surprised if that was all there was to it.

I'd guess, and this is where I enter into purest speculation, that successful communication between a speaker and hearer has a lot more to do with the fact that people are willing to attribute minds and intentional stances to just about anything, including other people, than with the design specifications of language.

In fact, the ability to implant (false) beliefs in someone else's mind is most definitely not only possible within the domain of language. Just ask Marcel Marceau.

Or, puzzle over this interesting item.

Perhaps language is better than other natural forms of communication at transmitting propositional content, but it's certainly not ideal for it either. If it were, then there wouldn't have been any need to develop formal logic, or propositional calculus.

So there is the problem that I want to create for this meme. Language does not really "implant a thought from your mind directly into someone else's mind," and insofar as it does, it doesn't do so uniquely above all other forms of communication. It's a pretty meme though, sort of like a poem about linguistics, and it's attention grabbing. But if it matters whether it's true and accurate, I don't think it stands up.

Wednesday, August 3, 2011

I'm a bit of a caffeine junky. Every day, regardless of where I am, I need to get my fix. I've also been very lucky to do some international traveling, which has put me in the situation where I need a coffee, but I don't speak the local language. And you know what? I've always successfully ordered and paid for my coffee, and even gotten what I intended to order.

Ok, enough speaking in parables. My point is that communication is not the same thing as language, and even complex economic transactions can be successfully carried out with only communication and no language.

I think Pagel's introduction is far too simplistic, especially with regard to his passing comments about language acquisition. He says

"Just imagine the sense of wonder in a baby when it first discovers that merely by uttering a sound, it can get objects to move across a room, as if by magic, and maybe into its mouth."

It is obvious that there must be more to the secret sauce of language acquisition than that. Even Nim Chimpsky was able to work out that by merely waving his hands around, he could get things into his mouth. Just read his quotations: Wikipedia/Nim Chimpsky/Quotations. But Nim never acquired language.

There's also something strangely self-defeating about his entire evolutionary argument. He seems to say that humans evolved language as a means to the end of creating large, modern societies. I'm sure he doesn't really think it worked like that; evolution isn't goal-oriented, and he's a biologist. Anyway, the last part of his talk is devoted to the "problem" of language diversity, and how we use it to build barriers between populations. The whole talk, laid out in one sentence, becomes:

Humans evolved language in order to encourage cooperation and to build large societies, but then, we actually used it to build divisions between population groups, and that's a problem because of globalization.

How on earth could language be failing at the very goal for which it was apparently evolved?

Now, I'm not saying the world would be exactly the same if there was no language. We probably wouldn't have an iPhone, as Pagel playfully illustrated in his talk. But how much language do we really need to achieve the goal of a large society, and arrive at iPhone? Does language really need to be recursive? If we couldn't say

I know [that you hate me].

could we still have arrived at iPhone? Who really needs relative clauses anyway? On the flip side, what if language were more "permissive," and we could say

What_i did you see the man who bought t_i?

These are technical properties of language I'm talking about. They may seem like little details, but they're actually fundamental to the very nature of language. And it's almost impossible to connect them directly to the evolutionary story Mark Pagel is telling. All that story needs is some means of communication; it says nothing about why we have the specific system of language that we do, out of all the possible systems that could have existed.

Needless to say, linguists never concern themselves with questions like "is the evolutionary consequence of high applicatives an iPhone?" and good thing too.

* * *

One thing that I did like was that he said "Tower of B[ei]bel." That's the way I say it.

Sunday, July 31, 2011

Project Nim is a new documentary out about the life of Nim Chimpsky, the chimpanzee that a group of researchers at Columbia tried to teach sign language. Here's a brief synopsis.

"Let's take a chimpanzee, put it in a house in the upper west side with a psychoanalyst who doesn't know anything about chimpanzees, language, language acquisition, or sign language. Also, she has 7 other children in that house. What could go wrong?"

To put Project Nim in some perspective, Nim Chimpsky was born in 1973, which is two years after the Stanford Prison Experiment, and one year before the first legislation requiring Institutional Review Boards for institutions carrying out human subjects research. This is not to say that most social science research was so by-the-seat-of-their-pants back then, but it was a different time.

I came away from this film with a few different lessons.

Don't sleep with your advis(or/ee).

Just don't do it. Twice in the film, two different interviewees said about two different sexual entanglements, "I don't think it affected the science." But, as I heard Christopher Hitchens once say about interview subjects, a guilty mind wants to confess.

The movie starts out with Nim being placed in the home of Stephanie LaFarge to be raised as a human child. Stephanie had 3 children of her own, and her husband had 4, bringing the total residency of her Manhattan brownstone to 7 human children, 2 adults, and 1 baby chimp. This frankly sounds a lot more like a reality TV show than a scientific experiment. Add to that the fact that they gave baby Nim alcohol and pot, and that Stephanie breastfed Nim, and I'm not sure MTV could even air it.

Why on earth was Stephanie LaFarge recruited to be Nim's mother? As far as I can tell, her only qualification was her sexual history with Project Nim PI, Herb Terrace. Her graduate degree was in psychoanalysis. She had no experience with chimpanzee research, or language research of any kind, and in fact, she was hostile to the scientific goals. She wouldn't keep logs, didn't have a project plan, and eventually tried to restrict the other researchers' access to Nim.

The second affair which came up was, again, between the PI, Herb Terrace, and the head teacher on the project, who was only an undergrad at the time. The fallout of this brief relationship led to the head teacher leaving the project.

First of all, I just don't think it's possible to pursue a relationship between a professor and an advisee (especially an undergraduate) in an ethical way. Given the power dynamic, some form of coercion is nearly impossible to avoid. I feel a little uneasy saying so in a public forum, which I think goes to say that this is not a problem that academia has left behind in the 70's.

Secondly, all sorts of strange and bad things happened to the science because of these relationships. Nim would never have had such a strange early childhood, and would have had greater constancy of care within the project, if the PI had not pursued inappropriate relationships.

Beware those with media savvy.

One frequently hears that scientists in general, and linguists in particular, don't do enough to popularize their research. Occasionally, we are scolded for holing up in our ivory towers, since we are too arrogant to try to share our love of science broadly.

However, I think Project Nim has a lot to say about the perils of researchers who are a little too keen to popularize their research. One of the ASL teachers on the project described Herb Terrace as an "absentee landlord," who only showed up for photo-ops and media interviews. All in all, the project appears to have been planned far better from a media perspective than from a research perspective.

In case you were unaware, research, even really cool and good research, doesn't just show up on TV out of nowhere. It takes deliberate attempts on the part of the researcher or the university to drum up attention. And everything about this project seems perfectly constructed to be media fodder.

In the meantime, there were serious problems with the project, mostly having to do with Nim mauling research assistants, which Herb Terrace didn't really address and had a hard time recollecting in the documentary interviews. After the most serious incident, in which Nim nearly bit through an interpreter's face, Terrace's reported reaction was to worry that she would sue him, or that "it would get out."

It was a little hard for me not to think of Marc Hauser during the movie, another high profile non-human primate researcher who has recently fallen on hard times due to questionable ethics. The connection between Terrace and Hauser is tenuous, but they run together in my mind, I guess, because they both worked hard to popularize their research.

And this is why I, at least, am frequently wary of active researchers who are also active popularizers of their own research. It seems almost synonymous with sloppy research and compromised ethics in my mind.

Humans are not socialized chimpanzees

This certainly isn't a new lesson for me, because I've never really thought that humans are just socialized chimpanzees. However, I really like how this point was hammered home in a real way.

In discussions about "human nature," the notion that our "true" nature is somehow more brutish and violent seems to come up a lot. In this conception, society is merely a veneer over top our inner chimp.

Well, society didn't do too much to cover over Nim's external chimp. Our "true" human nature is manifest in the activity of all humans, meaning it must be very broad, and non-uniform, but non-arbitrary at the same time.

Interestingly, I've also heard of research trying to figure out if dogs are just socialized wolves. A bunch of researchers tried to raise wolf pups as if they were dogs, a much more achievable task, I think, than raising a chimp as a human. The results were much the same as for Nim. After infancy, the wolves went nuts and tore the place apart, and the experiment had to be abandoned.

Conclusion

I really liked the movie, and would suggest it to anyone who appreciates a good documentary.

Tuesday, July 26, 2011

Before there was Youtube and the accent meme, there was, I guess, punk rock.

In this music video from 1988, the Dead Milkmen, a Philadelphia area punk band, give a rather hyper-Philadelphian performance. For the most part, Philadelphians aren't that aware of what marks their dialect as distinct from other regions, nor are most non-Philadelphians aware that there is a unique Philadelphia dialect.

Now, I say hyper-Philadelphian for a few reasons. The lead singer for this song, Joe Genaro, is definitely a Philadelphia dialect speaker, born about an hour outside of the city in Wagontown, PA.

But local dialect features are one of those things that tend to get leveled a little in singing, and there is no hint of that in this performance. Some features even seem exaggerated to me, which is fitting for the song itself: the video was shot in Philadelphia, and the lyrics reference culturally relevant locations.

/ow/ fronting

/ow/ fronting is, perhaps, the most salient dialect feature on display in this song. It's certainly not unique to Philadelphia. In fact, it's what qualifies Philadelphia as the northernmost Southern city. While Philadelphia has many other Northern features, like a very raised /ɔ/, stereotyped in coffee talk, we depart from the rest of the North by fronting /ow/, and Joe Genaro does this to an extreme degree in this song. Right off the bat at 0:28, he says

And she almost knocked me dead.

Then he immediately follows this up with

I tapped her on the shoulder
And said do you have a beau?
She looked at me and smiled and said she did not know

In fact, all of his /ow/s in this song are incredibly fronted, except for the two tokens in rollin and stolen which, of course, are affected by the following /l/.

Canadian Raising

The song isn't filled with Canadian Raising tokens. In fact, there are only two, but one is so stressed and clear and wonderful. At 1:01, the waitress says

Well no, we only have it iced.

Canadian Raising continues to be a favorite variable of mine.

Short-a pattern

Philadelphia is known for its complicated pattern of tensing /æ/, which is similar to New York City's. The tense version pops up where expected.

Unfortunately, mad, bad and glad, which are exceptionally tense, don't appear anywhere in the song. However, at 1:29, he says dad, which is definitely lax as expected.

Tokens of /æ/ which are lax in Philadelphia where they are tense in many other dialects show up in

1:03
So we jumped up on the table and shouted anarchy

and

1:24
Her father took one look at me and he began to squeal

and

2:26
Eat fudge banana swirl

/ey/ split

This one is pretty subtle. Most of his tokens of /ey/ don't sound very different from standard, but one word final token at 1:15 is pretty low, almost [æɪ].

On such a winter's day.

The data suggest that all /ey/ used to have this quality in Philadelphia, which is another reason why it's related to the Southern and Midland dialects. A sound change has been raising /ey/ higher and higher, but not in word-final position.

on = dawn

Philadelphia maintains the distinction between cot and caught by raising the vowel in caught, similar to New York City. One way in which Philadelphia differs from New York City is in the vowel class of the word on. In most locations North of Philly, on rhymes with the man's name Don. But in most locations South of Philly, at least where a contrast is maintained, on rhymes with the woman's name Dawn. You can hear this in

0:38
I tapped her on the shoulder

1:03
So we jumped up on the table and shouted anarchy
And someone played a Beach Boys song on the jukebox
It was "California Dreamin"
So we started screamin
On such a winter's day

l-vocalization/darkening

Now, if you think you can reliably code l-vocalization embedded in a punk rock song, god bless you. But, there are a few tokens that are pretty clear. For instance, I don't think there's any /l/ in

0:38
I tapped her on the shoulder

The thing that makes Philadelphia pretty unique is our tendency to darken and vocalize /l/ intervocalically (so balance is pretty confusable with bounce) and in initial clusters (like cluster). I don't want to make any strong claim about being able to reliably hear it in this song, but listen to

2:12
We got into her car away we started rollin
I said how much you pay for this
Said nothin man it's stolen

and compare it to

0:49
Let's go slam dance

There is definitely not as much /l/ in rollin and stolen as there is in let's.

* * *

So, do you think I missed anything important? As a side note, I think I have the same shirt as the drummer.

Tuesday, July 19, 2011

This is the visualization of language change that I've always wanted to produce! And now that I've made it, there are all sorts of aesthetic things I'd like to change, but c'est la out-of-the-box-tools-from-google!

I should note that the data underlying this graph would not exist but for the sweat, blood and tears of Bill Labov, Ingrid Rosenfelder, a team of undergraduate transcribers, the NSF, and 3 decades' worth of graduate research teams.

Depicted below is data from the in-development Philadelphia Neighborhood Corpus. We have analyzed 235 speakers who were interviewed as part of the Researching the Speech Community course between 1973 and 2010. That gives us dates of birth between 1889 and 1991, a 102-year timespan! Actually, it's not the raw data that's depicted; rather, it's the smoothing curves that I fit to F1 and F2.
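For the curious, the smoothing step works roughly like the following sketch, with fake data standing in for the corpus measurements (the actual smoother I used isn't shown here, and was probably not a cubic polynomial, but the idea is the same):

```python
import numpy as np

# Fake apparent-time data: one vowel's F2 drifting upward across
# dates of birth from 1889 to 1991, plus measurement noise.
rng = np.random.default_rng(1)
dob = rng.uniform(1889, 1991, size=235)
f2 = 1400 + 3.0 * (dob - 1889) + rng.normal(0, 80, size=235)

# Fit a smooth trend of F2 against date of birth; the animation plots
# points along curves like this one, rather than the raw tokens.
trend = np.poly1d(np.polyfit(dob, f2, deg=3))
print(trend(1900), trend(1980))
```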

Hit play to watch it go. You can select particular vowels, and toggle on and off trails. You can also adjust how the bubbles are colored in the top right corner.

The particular vowels on display are /ay/ and /ay0/. /ay0/ is the pre-voiceless allophone, a personal favorite, and look at that thing go! I've also split up men and women, since that has been an important factor in this particular change. The other vowels are there just for context, and are held at fixed points.

Not displayed is the extreme uniformity of this change across speakers. This thing is changing fast, and everyone in our corpus is marching along in surprising uniformity. Can you say "speech community" anyone?

Data munging

So, I took some of the Atlas of North American English data which labels cities and their dialect classification. I don't think I'll look at finer grained ANAE data, like particular vowels' quality, because I don't think that would be too great with the granularity of the data available from Senseable. I had to associate city names with counties to merge the data with the .svg, and thankfully Google Refine + Freebase was able to get me 2/3 of the way there. There are a few strange errors in the .svg file that no amount of automation was going to get around ("Orandge County, FL"? Really?). I also pulled the coordinate data out of the .svg so that I could do this all in R, which is where I feel the most comfortable.

For the ANAE data, I collapsed some sub-dialects together, like Inland North and North, and Inland South and South.

Mis-match Measure

So, I have counties with dialect classification, and counties with calling and sms-ing classifications. I want to come up with a way of evaluating the mis-match between these. Here's a sketch of how I did that.

So, "Within" is the set of counties that are both in dialect D and calling community C. "Outside" is the set of counties that are in calling community C and in some other dialect than D. You might have thought that I'd also include the set of counties that are in dialect D and in some other calling community than C, but that's actually not so important. As I said before, these dialect regions are rather large, so I'd expect there to be many calling communities within one dialect. What's stranger is calling communities which span dialects.

So, for interpreting the ratio, as it reaches 0 or ∞, the fit between dialects and calling communities is pretty good. At 0, a calling community is contained entirely within a dialect. As it approaches ∞, a dialect is more and more marginally part of a calling community.

Next step: I took abs(log(ratio_{d,c})). Now I have a measure that runs from 0 to ∞, and the closer it is to 0, the bigger the mismatch. I also wanted to boost the match score of smaller dialect regions. I forget why, but it made sense at the time. So, I weighted these absolute log ratios by 1/|D|.
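A sketch, in Python rather than R, of how the whole measure fits together. One assumption: I'm taking ratio_{d,c} = |Outside| / |Within|, which matches the interpretation above (0 when a calling community sits entirely inside a dialect).

```python
from math import log, inf

def mismatch(dialect_of, community_of, d, c):
    """Weighted |log ratio| fit between dialect d and calling community c.

    dialect_of and community_of map county -> label. Higher scores mean
    a better fit; scores near 0 mean a bad mismatch.
    """
    within = [k for k in dialect_of
              if dialect_of[k] == d and community_of.get(k) == c]
    outside = [k for k in community_of
               if community_of[k] == c and dialect_of.get(k) != d]
    if not within or not outside:
        return inf  # perfect containment, one way or the other
    ratio = len(outside) / len(within)
    size_d = sum(1 for v in dialect_of.values() if v == d)
    return abs(log(ratio)) / size_d  # the 1/|D| boost for smaller dialects

# Toy example: counties a, b in dialect D1; county c in D2;
# all three share one calling community C1.
dialects = {"a": "D1", "b": "D1", "c": "D2"}
communities = {"a": "C1", "b": "C1", "c": "C1"}
print(mismatch(dialects, communities, "D1", "C1"))
```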

Results

Here are the median results per dialect compared to calling communities, from best to worst match:

West - ∞

St. Louis Corridor - 0.45

Florida - 0.35

Western New England - 0.19

Eastern New England - 0.08

Western PA - 0.07

Texas - 0.06

South - 0.03

North - 0.02

Midland - 0.01

Mid-Atlantic - 0

NYC - 0

And for the sms data:

West - ∞

South - ∞

St. Louis Corridor - 0.5

Florida - 0.34

Eastern New England - 0.17

Western New England - 0.15

Western PA - 0.07

Texas - 0.06

Midland - 0.05

North - 0.02

Mid-Atlantic - 0

NYC - 0

I wouldn't put much stock in the Mid-Atlantic and NYC scores. To a large degree they're cannibalizing each other, and they're not that different dialectally anyway.

What's really interesting is the poor Midland and Northern scores. While I haven't worked out a measurement for which dialects are most mixed within calling communities, I suspect their poor scores are related to each other.

Graphs!

In this first graph, each facet is for a calling community in which there is a Northern dialect county. The filled in bits are the counties which are within the calling community, and the colored counties are ones we have dialect data for.

Calling data

In 4 out of 7 calling communities in which there is a northern dialect county, there is also a Midland dialect county. That's basically along the entire border region between the two dialects.

Here's the same graph for sms-ing communities.

SMS data

Conclusions

Yup, these communication communities don't line up with dialect boundaries like you'd expect.

Monday, July 11, 2011

One linguistics topic which non-specialists are almost always interested in is dialect geography, and I don't think that's strictly due to their desire to have regional biases confirmed. It seems like almost everybody has a genuine interest in where and how people speak differently from themselves. Granted, once you move away from fairly shallow lexical differences into phonetic and phonological ones, a lot of people's eyes glaze over.

When it comes to explaining why dialect boundaries are in one place rather than another, dialect geographers tend to have two answers. First, different regions have different historical settlement patterns. Bill Labov frequently points out that the current phonological boundary between the North and the Midland in the United States coincides with the boundary between where log cabins were built versus A-frame houses, which itself coincides with two different immigration streams with different points of origin on the East Coast.

Second, there are differential rates of communication between regions. Language appears to be transmitted crucially by face-to-face communication. If two regions have stronger ties of communication between themselves than with other regions, then we think they're probably going to have more similar dialects. This was basically Keelan Evanini's argument for why Erie, PA basically has a Western Pennsylvania dialect, even though it had historically been part of the North.

Given this second hypothesis about why dialect boundaries exist where they do, I was pretty excited to see these results coming out of the Senseable City Lab, which in collaboration with AT&T and IBM Research, has produced maps illustrating how US counties cluster together in terms of cell phone traffic and sms traffic.

The lines between communication clusters are exactly those that I would expect to define dialect boundaries. So, I took the call and sms community maps, and superimposed the major dialect boundaries from the Atlas of North American English. Here are the results.

Communication clustering by Calls

Communication Clustering by SMS

Honestly, I'm a little disappointed with the outcome. I expected that very large dialect regions, like the West and the South, would contain many different communication clusters, so that's fine. Where both a dialect boundary and a communication boundary line up with a state boundary, I don't think it should be counted as an alignment. If there's any tendency for people to be more likely to move within state lines than across state lines, then this alignment along state lines is probably better explained by the first factor, settlement history, than by communication density.

The crucial place to look for an alignment between communication and dialects seems to be the Ohio, West Virginia, Pennsylvania trifecta. In neither map does it look like communication density lines up quite right. Certainly, Pennsylvania is cut in half into a Western and an Eastern region, but the Western PA dialect seems to extend further East, almost to the threshold of Philadelphia.

Ohio doesn't seem to be sliced up quite right either. In the calls data, Cleveland clusters with the rest of the state, while with the SMS data, it clusters with Western PA. Dialectally, Cleveland is neither like the rest of Ohio nor Western PA. Rather, it is more similar to Toledo and Detroit to the West, and Buffalo to the East.

There are other unfortunate non-alignments, like how Baltimore is clustered with Virginia, while dialectally it's more similar to Philadelphia, and New England isn't chopped up communicationally the way it is dialectally.

I'll conclude by saying that first, pat answers to explain natural phenomena don't always work out, and second, these communication clusters make some dialect boundaries pretty mysterious. If everyone in Ohio is clustered together into a cell phone calling community, then why don't they all talk the same? The answer to this probably has to do with a third factor: meaningful social divisions which are distinct from communication divisions, but remember what I said about pat answers?

Sunday, July 10, 2011

I recently learned about the "fraternal birth order effect," where apparently for every older brother a man has, his probability of being gay as an adult increases. Here's a wikipedia entry.

Now, apparently there's some debate over how real or how strong this effect really is, so I'm almost certainly taking some numerical result a little too seriously. But, it occurred to me that data such as total fertility rate, and birth sex ratios are attainable international statistics. If this fraternal birth order effect is pretty strong and reliable, you should be able to estimate what percent of the male population of a country is gay.

So, I grabbed some data on international total fertility rate from here, and data on birth sex ratios here. Now, I have to make some assumptions. First, all of these calculations take the average total fertility rate as a country level descriptor, but there's almost certainly a unique probability distribution for different fertility rates for every country. Second, I have to treat the probability of having a male baby as being independent from the sex of the prior babies a woman has had. Third, and most importantly, I'm treating fraternal birth order as the only determinant of sexual orientation.

These are all pretty drastic assumptions. For instance, there's some evidence that my second assumption (birth sex of babies from the same mother are independent processes) is false. From the UN data I have, here's the total fertility rate of the country by the sex ratio:

So, I'm thinking about this as a very rough back of the envelope estimate, not to be taken too seriously, but maybe some sort of indicator of the shape of the world.

Here's the math:

babies = 1, 2, ..., total.fertility.rate

boy.probability = male.ratio / 2

boy.babies = boy.probability ^ babies

prob.gay.first.born = 0.12 (more on this below)

prob.gay.n.born = prob.gay.(n-1).born * 1.3 (from Wikipedia)

prob.gay = sum(prob.gay.n.born * boy.babies, for n = 1 ... total.fertility.rate)

I hope that makes some sense. I grabbed 1.3 from wikipedia, which says "each older brother increases a man's odds of developing a homosexual orientation by 28–48%." I basically made up the probability that a first born son is gay. This was the one number that I couldn't seem to find, so I adjusted and played with it until the predicted percent of gay men in the United States was about 10%.
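For concreteness, here's that back-of-the-envelope calculation as runnable Python. This is my reconstruction of the sketch above, not the original script; male_ratio is males born per female, and I round the fertility rate to a whole number of births.

```python
def percent_gay(total_fertility_rate, male_ratio,
                p_first_born=0.12, multiplier=1.3):
    """Back-of-the-envelope estimate of the fraction of men who are gay,
    treating fraternal birth order as the only determinant."""
    p_boy = male_ratio / 2  # rough chance any given birth is a boy
    total = 0.0
    for n in range(1, int(round(total_fertility_rate)) + 1):
        # nth-born son: baseline probability times the 1.3 multiplier
        # for each older brother
        p_gay_nth = p_first_born * multiplier ** (n - 1)
        total += p_gay_nth * p_boy ** n
    return total

# Calibration check: US-ish inputs (fertility rate 2.0, sex ratio 1.05).
print(percent_gay(2.0, 1.05))
```

With those US-ish inputs, this lands right around the 10% I calibrated to, and it rises with fertility rate, which is why high-fertility countries top the list below.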

Here are my results for the top 10 countries for percent of gay men.

Afghanistan (19%)

Niger (18%)

Liberia (18%)

Mali (18%)

Nigeria (18%)

Burkina Faso (17%)

Guinea (17%)

Yemen (17%)

Iraq (17%)

Uganda (17%)

Unsurprisingly, the percent of gay men in a country is highly correlated with total fertility rate. I think this top 10 list highlights the importance of gay rights activism in Africa, especially in Uganda, which is considering making homosexuality a capital offense.

Wednesday, April 13, 2011

Most of my research involves examining and reasoning about data. I'd say that in the course of my education as a linguist, I have developed some pretty ok quantitative and statistical reasoning skills. What's so great about having these reasoning skills is that they are very broadly applicable.

Occasionally, I'll observe, or hear second hand, someone with no quantitative reasoning skills discussing a topic that calls for them, and more often than not I'm blown away by the simple errors they make that lead to large confusions. For example, there was the time that George Will claimed that Obama is narcissistic because he used first person pronouns at a high rate.

"I," said the president, who is inordinately fond of the first-person singular pronoun, "want to disabuse people of this notion that somehow we enjoy meddling in the private sector."

Of course, George Will didn't actually count how often Obama used first person pronouns. And crucially, he didn't compare Obama's usage to any other president's. Mark Liberman, calling himself "one of those narrow-minded fundamentalists who believe that statements can be true or false," counted and compared, and found that Obama's rate of first person pronouns was actually lower than the previous two presidents', not that it even really means anything.

Percent of words which are first person pronouns, by president:

Obama - 2.65%

Bush II - 4.49%

Clinton - 3.87%
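Counting, it turns out, is not hard. Here's a toy version of the count (this is not Liberman's actual methodology, and a real comparison needs comparable speech samples and more careful tokenization, but it's the whole idea):

```python
import re

FIRST_PERSON_SINGULAR = {"i", "me", "my", "mine", "myself"}

def fps_rate(text):
    """Fraction of word tokens that are first person singular pronouns."""
    words = re.findall(r"[a-z']+", text.lower())
    return sum(w in FIRST_PERSON_SINGULAR for w in words) / len(words)

# 1 first person pronoun out of 8 words
print(fps_rate("I want to disabuse people of this notion"))
```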

More recently, you have Senator Jon Kyl stating on the Senate floor that over 90% of what Planned Parenthood does is abortions. Of course, when fact checked, it turned out that about 3% of what Planned Parenthood does is abortions. Kyl's defense? "It was never intended as a factual statement." I think the Daily Show coverage says it best.

Now, in the next bit the Daily Show did, they called what Senator Kyl did "lying." I have a different take. Worse than lying, I think Senator Kyl has no notion that numbers mean anything. That "90%" is just an emphasis marker, like "so," or "extremely". Of course, this can't strictly be true, because there are some pretty important percentages that determine whether or not he keeps his job, and surely he attends to those.

The question remains, how can Jon Kyl or George Will think they can just say things without any regard to the actual facts of the world? I think the problem is not just isolated to these individuals, and the consequences are potentially severe.

Take the debate that was raging before the passage of healthcare reform. One topic that really caught my eye was rescission, which is when health insurance companies drop individuals' coverage. Insurance company representatives testified that rescission is very rare, affecting only one half of one percent of customers.

And no one called them out on the uselessness of that number! 0.5% of all insurance customers is entirely uninformative! What really matters is what percent of people who file claims are dropped. And even then, what really matters is how often people with really severe and expensive illnesses get dropped. This blog post estimated that close to 50% of people who file large claims get their health insurance dropped.
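The arithmetic is worth spelling out. With made-up but plausible numbers (the 1% large-claim figure is purely my assumption for illustration):

```python
customers = 100_000
rescinded = int(0.005 * customers)        # "one half of one percent" = 500
large_claimants = int(0.01 * customers)   # suppose 1% ever file a large claim

# If rescission reviews are triggered by large claims, the denominator
# that matters is large claimants, not all customers:
rate_all = rescinded / customers              # 0.5%: sounds reassuringly tiny
rate_claimants = rescinded / large_claimants  # 50%: the number that matters
print(rate_all, rate_claimants)
```

Same 500 rescissions, two wildly different stories, depending entirely on the denominator.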

And that's not rocket science! Yet, no one called these representatives out on the (probably intentional) uselessness of their data! I remember thinking to myself "What's wrong with all of you!?"

Here's what it comes down to. As I see it, if you don't understand data, then you don't understand the world, and you will make bad decisions, and be taken advantage of.

In conclusion, I think it would be a great idea to overhaul high school mathematics to make statistics the end game, instead of calculus, as proposed by Arthur Benjamin in this TED talk.