March 30, 2012

As girls have become more socially avoidant and bored by others, their voices have become creakier. A creaky voice, where lots of tension nearly blocks the flow of air, is the opposite of a breathy voice, where laxity allows an above-normal amount of air through. (Breathy and whispery voices are the same in that respect.) People use a creaky register when they want to distance themselves from their conversation partner, and a breathy or whispery voice when they want to draw them near.

The group where a breathy voice is pathologically common is queers, who cannot speak except in a super-breathy voice. It's one of their defining features, much more diagnostic than the sissy "s". This fits with their compulsion to constantly seek out more cocks to suck. To project that they're open to opening up to anyone, they broadcast their breathy voice to all within earshot. "Hey boysss, I'm over heeeere, come talk to me alreadyyyy..."

Just as the too-common creaky voice of girls shows how indiscriminately avoidant they are toward others, the too-common breathy voice of faggots shows how indiscriminately eager they are to bare it all to anyone at all. Normal people strike a balance between these opposite vocal registers, discriminating between situations where you want a person to get closer or farther away. But faggots don't care who showers them with attention, as long as someone does, so they constantly speak in a breathy voice to invite others -- any others -- over to get close.

The two principles that explain most of gay deviance is their Peter Pan-ism and their brains being like those of an addict. They pitifully seek out anyone who'll feed their drama queen craving for an attention fix, just as a junkie would do anything degrading for anyone as long as they gave them some drug money in return. Always using the breathy voice is then an aspect of their addictive tendencies, perhaps more usefully called "co-dependent" in this context. I don't see a Peter Pan angle here.

It also goes to show how phony the so-called friendships are between fag hags and their gay bffs. Listen to them speak and see if the girl reciprocates the gay's breathy voice with a breathy or whispery voice of her own, like two people sharing secrets while close together. I have to suffer hearing these conversations in (where else) Starbucks all the time, and you rarely hear the girl using a breathy voice -- just her usual creaky-croaky voice.

Fag hags don't see their gay friends as real people, but more like robots or simulated humans that they can't truly empathize with, something that gives the illusion of emotional connection without actually having to make themselves vulnerable (a signal of trust). That's obvious for sexuality -- their gay friend can never fall for them, ask them out, or make an awkward pass at them. But since they rarely use a breathy register with their gay friends, I don't think fag hags open up that much even to them. This way they can continue their socially avoidant cocooning while rationalizing it as social bonding: "Well in general people suck, but I'm incredibly close to all my gay friends."

I also notice very little touchy-feeliness between fag hags and their gay bffs, whereas normal friendships between males and females involve somewhat regular physical closeness. I don't think I've ever seen a fag hag sprint and leap into the arms of her gay friend, hold his hand, sit on his knee, or plop next to him and drape her legs over his. That may not happen every time a normal guy and girl friend meet up, but it's noticeable vs. absent over the long run.

And it's not just because normal guys and girls are channeling their sexual tension (which does not exist between a queer and a girl) into more approved behavior, though there is an element of that. This is the same kind of stuff a pre-pubescent girl would do with her father, as a form of daddy-daughter bonding. Fag hags don't see their gay friends in either way -- as a potential romantic partner or as someone who they trust and expect support from, at the level of close kin.

March 27, 2012

On Facebook a friend born in 1989 posted a link to 10 things '90s kids will have to explain to their children. In the comments there, someone posted a link to an even more extensive list of things '90s kids realize, a nostalgia site. Last, among the highest-rated articles at Retro Junk, another nostalgia site, is a three-parter about growing up in the '90s. The writer was born in 1987 and wrote them when he was 22 to 23 (here, here, and here). I poked around more of the site's high-rated articles about being a '90s kid, and his take seems pretty representative.

What jumps out is how they must have never gone outside as children, or even done anything physical while inside, let alone interact with other people. Just about everything in these lists is TV shows, video games, and movies (I hope they at least left the house to see them in theaters). That degree of solipsism is a huge change compared to children of the 1980s. *

Consider the list of about 140 items from the Things '90s Kids Realize site. (Note: as of fall 2011 when I tallied them up.) I count roughly 14 things that are not TV, movies, and video games.

There are only two real references to toys or fads: pogs and gel pens. (I group Tamagotchis under video games.) In the first link's comments, several people mentioned those dorky trading card games like Magic. I assumed that toys would have made up a huge part of a guy's boyhood memories -- action figures, building / engineering toys, weapons, or whatever else. Millennials were too busy leveling up their Pokemon to be shooting cap guns or making their own world out of Legos.

Just one reference to clothes (although it is written by a guy): L.A. Gear shoes. Those are actually from the later '80s, but maybe they changed the look in the '90s. Obviously girls from the '80s would list a lot of clothing-related memories, but I think even guys would include Reebok Pumps or Nike Air shoes, Hypercolor, slap bracelets, neon Swatch watches, etc., in their list.

Only two items about music. One is the "Jump On It" dance from an episode of Fresh Prince; the other is the "I'm Too Sexy" song. That first link mentioned the Spice Girls, Backstreet Boys, and 'NSYNC. However, none of these writers express how in love they were with the music -- they just report that so-and-so was trendy, and that some of the guys were cute. I never got attached to '90s music either, but you'd think they would have at least gotten into some kind of music. They just don't dig it at all, whereas music is one of the first things people mention when you ask them what about the '80s they're aching to return home to.

There's one mention of slang -- "da bomb", although they didn't invent that. It was either Gen X-ers or the early '80s cohort; Millennials are not creative enough to come up with even a lame slang phrase like "that's da bomb." In fairness, it's usually teenagers who make up new slang, not children. And yet Millennials didn't do anything there once they became teenagers, in contrast to the Valley girls and surfer dudes, who were cooking up hot new buzz words every week.

And just two references to books: the Goosebumps series, and ordering from the Scholastic Book Club. I was surprised not to see Harry Potter books show up, but as I recall that was when little kids' books started to be aimed more at high school and college students instead of little kids themselves. The explosion in good children's books from the '60s through the '80s would surely show up on a nostalgia site for having grown up then. Dr. Seuss, Scary Stories to Tell in the Dark, Bridge to Terabithia, just to name a few. Certainly authors have continued writing kids' books, but they're either not as good or the would-be audience isn't as excited to explore new worlds as they used to be, since books have left little impression on '90s kids.

The exceptions above are as fun as their childhood ever got, while about 90% of it was boring TV shows, video games, and movies. Part of this is not their fault, but due to the overall plummeting of creativity and innovation over the past 20 years. And another part is not their fault either, but due to their helicopter parents banning anything from the home that would develop their character, bones, or muscles, allowing only things that inculcate passivity (like video games) or that superficially exercise the brain (like flashcards and shallow types of non-fiction -- maybe Wikipedia by now).

Still, even if we had been subjected to loony parents, we would have disobeyed them. Millennials bear the blame for welcoming their own imprisonment. And even if the culture-makers had not given us exciting new things to play with, we would have made it ourselves, like when we invented games like ball tag and Butt's Up, when some other generation invented the cushion-and-blanket fort, or when we pretended that tree branches were guns and swords. Not to mention when kids made up their own songs ("99 Bottles of Beer on the Wall"), urban legends, and other folklore. So Millennials also bear the blame for not inventing a culture of their own.

If it was this bad in the '90s, it could only have gotten worse in the 2000s. When I see him, I try to help my nephew enjoy real childhood experiences, like trekking through the woods and bashing open rotten logs to look at the squirming bugs inside. (He got a real wide-eyed kick out of that -- "An' granma! An', an', an' we saw BUGS!!!") But I only get to see him a handful of times a year. Shoot, even if he were my own son you can only work so hard against the broader societal forces pushing in the sissy shut-in direction. But you still have to do it.

* During our nostalgia trips, my generation would bring up things like the roller rink, mini-golf, hiking home from school through the woods, play-fighting in the woods, cruising around the mall, camping in the back yard, riding our bikes everywhere, etc. Even activities that are now solitary used to be social, like going to the arcade or a friend's house to play video games, rather than play them only while being holed up alone indoors.

March 26, 2012

An earlier post looked at the apparent decline in making home movies over the past 15-20 years, even though it's cheaper than ever to make them. But socially avoidant people, in this case parents, don't enjoy stuff like that, since it would attach them emotionally to their kids. And it would create lasting memories, which avoidant people do not want, as that is just another form of connecting them with others, albeit across time.

People should not mistake the trend toward helicopter parenting as a trend toward greater affection between parents and children. They hover over and elaborately plan everything out about their kids because they view them as blank slate robots -- if they don't program them perfectly, they'll blow up. This approach to child-rearing is emotionally distancing.

Back when parents had real affection for their kids, and let us have our own lives (even as toddlers), they not only made home movies of us, but also recorded conversations on a tape player. A good portion of it was the mother, aunt, or grandmother speaking to whoever would receive the tape -- typically another female "alloparent," but my mother also made tapes for my dad when he was away at sea. Most of the talking was just basic reporting of what's been going on around the home in the past couple weeks, sometimes recorded in several sessions to fill up the entire tape. It wasn't too different from writing a letter.

Aside from the caretaker's reporting, though, one or more of the kids would be recorded as well. The mother might ask the kid over to sing their ABC's, recite some Mother Goose nursery rhymes, or talk back and forth about what they've been up to at preschool, what they did at the park, had for lunch, etc. Tape recording was superior to letters here since infants and toddlers, even elementary school kids, can't express themselves well or at all in writing. Not to mention that a lot of what we ended up doing was just goofing off vocally, which carefree parents back then found endearing, but which today's parents would probably see as an embarrassing malfunction.

As in the home movie case, this hunch comes more from personal and observational experience than from data. I don't know anyone who makes audio recordings of their kids at any length, either to preserve for later or to send around right now. Whereas in the good old days, my mother, aunt, and grandmother must have made dozens of tapes when we were growing up (not too far past 6 or 7 years old, though). I found a box of them over Christmas vacation and we huddled around to listen to them for sometimes more than an hour at a time. What they lacked in visual appeal, they made up for with a greater conversational flow with the parent (in a home movie, the parent is usually too focused on operating the camera).

My nephew is now 4 years old, and I have as many audio recordings of him as I do home movies -- zero. There are handfuls of 30-second or less video clips taken from a phone, but nothing like a home movie or a tape recording. As far as I know, people don't record or save their Skype calls with their relatives either. So as with movies, there's just little interest in preserving memories or even forming a closer attachment here and now.

This shows how little it means to praise your kids as a way to make them feel loved and lovable. That has been the approach of helicopter parents raising Millennial kids. The kids aren't fooled and, consciously or not, come to understand that their parents aren't very emotionally invested in them as real, unique human beings, and view them instead as robots to be rewarded for carrying out their programs properly. Even then, don't the kids wise up to the reality that "Good job!" and "You did it!" are not so much heartfelt praises directed toward the child, but shouts of relief to themselves for programming the kid without a malfunction?

Making a home movie or audio tape shows the kid that you think he's interesting and important enough to be the focus of a recording. "Yeah but that'll just turn them into attention-seeking drama queens!" Reality check -- that's Millennials, not us. Our home movies and tapes don't feature our parents incessantly "praising" us (i.e. congratulating themselves on their own parenting skills), but more like ethnographically observing us in our natural habitat, interviewing us as informants, and so on. It was more respectful and affectionate.

The truly bizarre thing is that Millennials never rebelled or got angry at being told as toddlers that they weren't worth loving, only worth programming. Instead of acting out, they withdrew into themselves and stayed that way as they got the same message from the rest of the grown-up world, not only their parents. And bumping into each other, only to find withdrawn peers, meant that they would rarely break out of that mode. They don't even feel very cared for by others in their own cohort.

On a cerebral level, they may know they're separate from previous generations and share that abstract sense of togetherness-in-contrast. But they don't feel the tangible, palpable groupiness that the Boomers and Gen X-ers did (and still do), that comes from acting together toward shared goals, no matter how grand or trivial any particular episode may be.

March 22, 2012

Empathy among group members can foster a heightened sense of community, seemingly regardless of what particular emotion they are experiencing -- joy, relief, gratitude, (righteous) anger, fear, etc.

So, with snark, derision, and insincerity having become so widespread over the past 20 years, what has kept people from using these states of mind as the basis for empathy, leading to a tighter group feeling? One remarkable thing to observe about the people who are steeped in that culture is how shallow they prefer their social connections to be.

It doesn't look like there's some basic defect in their ability to take another person's perspective -- they understand why their group-mate sneers at the butt of the joke -- or their capacity to resonate emotionally -- they are both feeling the same scorn, issuing the same through-the-nose laugh, and so on.

Instead it seems like the intensity of the shared experience is just too minimal to serve as a reminder of how psychologically close the group members can get. The culture of snark is characterized by glib dismissal, the near total absence of emotional strength in any direction. It's the opposite of group members pumping each other up through shared intense emotions of any sort. With snark, you don't feel that others have charged you up with electricity, nor that you have charged them up in your turn. The strength is far below the threshold, so you don't feel truly connected to the others in the group.

There's nothing wrong with giving the out-group a good blasting every once in a while to bond the in-group more strongly through shared mockery. But when that becomes the predominant or only shared experience, empathy and communion are effectively no longer possible. It seems just as unstable as the other extreme, where only compassion and charity unite a group, never sharing the occasional good laugh from ridiculing the out-group. Obviously, though, the derision-only group will become more spiritually corroded than the compassion-only group.

This isn't the familiar warning not to unify only around what you do not stand for. If that can provoke intense enough anger, anxiety, fear, as well as joy, gratitude, and so on, then that'll do fine for social bonding. Rather, the warning is to not get sucked into the trend toward autistic, flat-affect dismissal. Look at how wispy thin the bonds were among the professional ridiculers during the Age of Reason, before the Romantic-Gothic era restored a measure of solidarity both with your fellows today and your ancestors back in time.

Snark, lampooning, skewering, roasting -- all of that was openly allowed in comedy clubs in Communist countries, just as we take pot shots at our politicians in ours. The authorities must have sensed that no group could unify itself and present any threat to the established order if it was only held together by a culture of dismissive ridicule. The same goes for us -- we only frighten the elites when we're capable of collective joy and anger, not when we're snickering along with some superficial clown like Jon Stewart.

That also explains the link between cocooning and snarkiness: when people don't want to invest much emotionally in others, they can't allow themselves to get too worked up, so they only let shallow emotions like snarkiness be shared.

These ideas stem from reading Philip K. Dick's novel Do Androids Dream of Electric Sheep? last week. It features a Jon Stewart-like talk show host, Buster Friendly, who's always glibly dismissing the religion of Mercerism, whose adherents experience communion through sharing in the pain of a Christ-like figure. They do so with the aid of empathy boxes that put each one of them psychologically in Mercer's place, and at the same time.

Ultimately Mercerism will outlast the steady barrage of jibes and skeptical exposés from Buster Friendly and his followers. It's a very keen observation that a would-be religion based only on shared snarkiness could never unite the followers against one based on shared pain (let alone the more realistic mix of shared pain and shared joy). Sharing the perspectives and feelings of other group members doesn't matter if the emotional level is so flat that you don't feel a common current running through you.

March 21, 2012

Browsed around some Human League songs to see if there was an entire album I'd enjoy, and it didn't look like it. Their first two are a bit too experimental just to be experimental, not much that grabs you. Dare, with the mega-hit "Don't You Want Me?", had better songwriting but still wasn't that catchy. And Phil Oakey's voice on that album is too far in the angry-sobby direction. The groovier Hysteria sounded pretty good, just not great enough to make me go out and get it now.

Then I followed a link from Wikipedia or Amazon to Heaven 17, a group formed by two of the original members, plus a new singer, when the Human League split after their second album. I don't remember hearing them in the '80s, at '80s night (ever), on the '80s radio station, or anywhere else. Seems like they were big in Europe but not here. Our loss.

I went right out and picked up The Luxury Gap from the nearby used record store and have been listening to it for the last five days straight, and probably will be for the next week. The melodies are all catchy, the song structure isn't very repetitive (just about all have "thru-composed melodic style" in their Pandora entries), memorable motifs abound, the synths have a really warm timbre, the disco grooves make it impossible for you not to respond physically to it, and Glenn Gregory's voice is expressive without going to the emo extremes that Phil Oakey's does.

The only thing that wakes me up from the dream while listening is the occasionally very self-conscious and forced lyrics. When singers get to those kinds of lyrics, they should just muddle their enunciation -- that way a really stilted line won't jar the listener awake. A garbled line just goes in one ear and out the other.

Two videos and two links to videos (to save space). First, the somber yet upbeat "Let Me Go", and the somewhat wistful "Come Live With Me":

March 19, 2012

Blathering endlessly about abstract rights is the opposite of acting regularly to provide care, aid, and support to tangible individuals. Both claim to be helping others, have a pro-social orientation, etc., but one is fake and the other real. I'm not trying to shame people into giving up everything and becoming Mother Teresa, but we should not rationalize our lack of commitment to others by defensively saying, "Yeah but we support their rights."

As a side note, look at how fake the homophile movement is -- they support abstract rights for gays, but won't take the time or effort to urge them back into the closet where they'll be safer to themselves and others. Or even get them to dial down the number of cocks they suck in a week, amount of drugs they blow through in a weekend, etc. Totally fake. All that hot air is just to win a status contest within their faggot-friendly social circle.

Returning to the animal rights people, they come in a variety of flavors, but the most visible and audible are the vegans. They are nerdier and more anti-social than non-vegans, suggesting another case of people who want to distance themselves from some sympathy group, while rationalizing it with intellectual posturing, and claiming a moral high-ground because they support some set of abstract rights.

Are vegans more likely to care for animals? Then they should be over-represented among veterinarians. Bla bla bla about how they might have to administer to a sick animal a drug that has been tested on other animals. Just a bunch of intellectual rationalizing -- the sick animal needs medicine to get better.

I couldn't find any formal studies, but at least three informal sources say that, if anything, vegans or vegetarians are less likely to be animal doctors, and that they're happy ordering a steak so rare it moos. Here is a Yahoo question to this effect, and all respondents who know veterinarians say zippo are vegetarians, except for one who says that 2 of the 10 vets she knows are. Here is an interview with a vegan veterinarian, where both the interviewer and the vet agree that vegans are very rare among vets. That is rationalized away by a strained analogy to doctors who smoke -- but smokers know it's bad and smoke anyway, whereas the vets aren't plagued by guilt about eating meat in the first place. Finally, here is a list from some big vegan website of 10 professions that need more vegans, which includes veterinarian, agreeing with the other sites that it's damn rare to find them.

What about caring for animals as pets? Again no formal studies showed up. (The General Social Survey did have a question about not eating meat, but no questions about owning pets.) The forums I browsed didn't give a clear picture, but it did sound like there's at least a sizable minority who are against owning pets. This doesn't make them look as bad as their near absence among animal doctors, but it still shows how much farther in the callous direction they are compared to normal people.

Here for instance is a FAQ on veganism from Animal Equality, which includes this passage under "Domesticated animals" (I realize not all vegans are this doctrinaire):

Vegans do not believe in the breeding of domesticated animals such as horses, dogs, cats, hamsters, rabbits, birds or fishes. Domestication is not in the animals' best interests, as they are dependent on humans for everything that is important to them in their lives. Humans decide what and when they will eat, where they will live, whether they receive affection or exercise, if they are allowed to socialise with members of their own species, and as property, they can be bought, sold, given away or abandoned. 'Pets' may bring us pleasure, but the animals themselves belong in their world, not ours, with the freedom to live as they choose. Vegans do rescue and adopt abandoned animals however, seeing them as refugees deserving of care while they are in this world, but they do not perpetuate the institution of 'pet' ownership.

I doubt that vegans rescue abandoned animals at greater rates than non-vegans, but data or even impressions on that will be hard to come by. In any case, just listen to the wording -- "the animals themselves belong in their world, not ours." Leave me alone and I'll leave you alone -- how's that for mutual affection, empathy, and care-giving? Only to an inveterate cocooner does that sound like good cheer and concern. You know that's how these misanthropes feel toward their family, too: "Mom and Dad, I'd like to show my love and respect for you by cutting you out of my life, and I hope you'll show the same by cutting me out of yours."

What fucking planet did these people come from? Or did I fall asleep and somehow wake up on theirs? Damn, man.

I can't help but touch on one of the other points above, not related to whether they're more caring toward animals or not. Domesticated animals are highly dependent on their owners, but so are we on them. It's not asymmetrical or exploitative. Aren't vegans always whining about how dependent the non-vegans are on domesticated animals for food, clothing, textiles, medical testing, etc.? Woah dude, it's like the animals are exploiting us the way an only-game-in-town monopoly gouges the community. They've got us hooked on their butter and leather -- it's not fair, and we've got to liberate ourselves.

So they cannot be concerned with one species exploiting another, since the animals are exploiting us, making us rely so much on them for our well-being -- especially cats and their owners! It is the very dependence of one species on another that vegans are repulsed by, as though any enduring ties would always wind up being chains. Again it is only the deranged mind of the cocooner that pushes self-reliance to such an extreme.

March 18, 2012

Ever since blocky and fudgey 3-D graphics hit video games, we've heard about how "cinematic" they began to look. In the rare places where I've read people talking about the visuals of video games as a medium, that's what they tend to compare them to -- especially movies, since they show moving pictures, but also painting or still photography.

But video games are interactive, and in particular the player has a fair degree of control over the camera. This makes it totally unlike movies, except for having a time arrow.

In movies, the visual team selects the set of images to show, in what order, at what tempo, etc. In a video game, the player takes over those roles. Even for a single still image within the larger flow of images, the visual team making a movie determines what the composition will be -- what's in frame, how the pieces relate to each other and the background, and so on. In a video game, that is all up to what the player decides to have the camera do at some point in the flow.

Naturally, then, the visual experience of video games will pale in comparison to watching a movie, as the movie experience is the result of more talented minds thinking over the outcome much harder, while the video game experience is the ad hoc choices of the average person. Cherry-picking the best-looking video game and the worst-looking movie does not disprove the overall point.

If they're not like movies, then the closest medium I can think of is landscape architecture. There too the visual designers have only minimal influence on what the experience will be like, mostly populating an empty space with the basic objects that the spectator will arrange into a stream as they please, and in relation to each other as they please. Will that tree be in their field of view or not? If so, will it be seen from a direction that includes that hill over there or not? This nearly total lack of control over the most fundamental aspects of composition must drive landscape architects crazy, and could explain why visual people go into fields where they have greater control.

This also explains why video games looked better in 2-D -- it gave the player less control, since an entire dimension of where to point the camera was not available. The designers knew that you'd be looking at the world head-on, so they could anticipate a good deal about what their compositions would look like. You could still scroll left-right or up-down, splitting up a field of objects that they intended to be seen together, or picking up two clusters at once that they intended to be seen in separate viewings. Still there was more influence from the people who had a better eye and who were working more tirelessly to put out a pleasing product.

Earlier I pointed out that video games will probably not endure as a narrative medium because they are too interruptible, like radio programs or comic books. This harms them as a visual medium too, since the flow of images is too interruptible, unlike the pre-ordained flow of images in a movie. Giving the audience member too much choice, as video games do, is another aspect of making them more interruptible: they have to decide what order the events of the narrative will unfold in, and they have to decide what the composition of a visual "shot" will be.

A choice here or there might not be so bad, but having to shoulder most of these responsibilities keeps the spectator from drifting into a dream-state where they can just absorb the video game. Just as they're about to get lost in the experience, the player keeps getting snapped awake by having to make yet another decision.

There was a widespread Art Deco revival during the later 1970s and '80s, which like the original took place during the second half of a rising-crime period. Similar environments breed similar outcomes, whether consciously or not on the creators' part.

And yet the neo-Deco buildings didn't look nearly as awe-inspiring as the originals, although they were a real breath of fresh air after the soulless mid-century style and the mid-century revival of the past 20 years.

At first I thought it was just a case of a somewhat weaker visual culture, so that although there was a similar outcome in the '80s as in the '20s, its peak wasn't quite as high. But after watching Blade Runner again recently, it struck me that maybe all those people who would've been contributing to an architectural culture that would've been just as inventive and spellbinding as Art Deco, had instead switched over to working on the visual design of movies.

It takes an awful lot of people contributing to the overall look of a movie, multiplied by all movies being made in a year. And some of them are the very same jobs needed to make a building, from the production designer (a general overseer) down to the carpenters who build sets and craftsmen who make relief sculpture for decoration.

The movies of the Jazz Age were heavily visual, just like they would become once more during the New Wave Age, but they still weren't as sublime as their descendants would be. Blade Runner looks better than Metropolis, and the Star Wars or Indiana Jones movies look better than The Thief of Bagdad, as wonderfully epic as the '20s movies already look.

So if you combine both architecture and movies into the visual culture, the '20s and the '80s don't seem so different in how high they soared. It's just that more of that excellence went into buildings in the Jazz Age, and more into movies in the New Wave Age.

That gives me hope that during the next such phase in the cycle, we'll enjoy yet another period of a spectacularly moving visual culture. Whether the bulk of that energy goes into movies, buildings, or something new, doesn't really matter so much -- it'll be a pleasant surprise. The important thing is that we'll eventually pull ourselves out of the ever blander world of the past 20 years.

March 16, 2012

So reads part of a NYT headline, although the rest -- "as World Watches Online" -- is just a rationalization of the dorky tendencies of young people today. The story is about Spring Breakers being more wary of acting wild when so many camera-equipped phones could be pointed at them.

The capability and prevalence of technologies that can capture an embarrassing moment while someone is out carousing has only gone upward since the invention of the camera. Yet young people behaving like young people goes in cycles, so technological changes don't explain much of those differences across the decades. Nobody seemed to pay much attention to cheap, widespread cameras during the heyday of streaking, for example. Or that someone might find an embarrassing box of nude Polaroids from your college days.

The main picture and slideshow give a pretty accurate view of how segregated the sexes have become by now among Millennials, with smaller or larger groups of girls avoiding the yucky boys, and the boys forming bro-circles to take their minds off of how boring the girls are being. And of course plenty of hover-hands and leaning-away on the rare occasion where a boy and girl do get close.

Interesting to read that wet t-shirt contests are nearly extinct, even in southern Florida during Spring Break. If that's only due to an unwillingness among the girls, you'd expect the bar and club owners, or whoever, to provide the next best thing, like hiring some girls to stage a contest, or at least project the video for "Girls on Film". It didn't sound like anything along those lines was making up for it, though, so we conclude that there is also falling demand among the male spectators. I know it sounds crazy to suggest that young dudes on Spring Break don't have sex on the brain, but they really are more asexual and afraid of the natural female body these days.

On a generational note, it's a 28-year-old bartender chick who describes today's Spring Breakers as "very prudish," while the ones freaking out about being well behaved during what should be a carnival are 26 and under. That's more tentative evidence for my hunch that people born in 1984 were the last to mature into recognizable human beings, with '85 and '86 births being a hazy limbo area that mostly tilts toward the Millennials, who clearly show up with '87 births and after.

Today's 18 year-olds have lived entirely within falling-crime times, and probably were conceived then as well, so they're growing up to be even weirder than the older Millennials, who at least had some exposure to the good old days, even if that environment only had an influence on their developing brains as toddlers.

March 15, 2012

Two recent articles at CNN do a good job of distilling the kinds of anxieties and fears that most people have about today's new technologies. One looks at the wide variety of personal data that cell phones can collect, and who exactly can access it; the other at Facebook's privacy policy and its push to integrate everyone into its network.

The common theme is the invasion of privacy, but more deeply the desire to be left unseen in solitude. You see this when people freak out upon learning that their computer broadcasts an IP address, among other examples. Since we live in an age of cocooning, I guess it's no surprise that that's people's main fear relating to new technologies -- that they will somehow allow others to get around or through the cocoon.

In rising-crime times, when people are more outgoing, they had the opposite fear -- that the increasingly popular gizmos might cut us off socially and emotionally from one another. Here, for example, is George Segal's encapsulation of this anxiety in the context of personal computers, right on the cover of Time magazine in 1983:

The nightmare of the early 1980s has become the utopia of recent decades, where by now even coffee houses -- supposedly places to hang out -- have been converted into joyless hive-like computer labs, each junkie seeking to wall themselves off from the others:

People who lived at least some of their formative years in more outgoing times are somewhat more immune to these changes. My worry about social network sites is not what personal data they're collecting about me -- BFD, it's not like I'm an undercover agent and they'll blow my cover. It's more about how they're evolving away from simulating back-and-forth conversations and toward individual broadcasts directed to and drawing responses from no one.

As for cell phones, Virginia Heffernan did a better job than I could of detailing how interactive and relationship-building it was to talk on the landline analog phone. It's rare to see anyone talking on their phone anymore -- unless they're driving. Increasingly it's just a more portable form of feeding their internet addiction. But even when used for communication, it's almost always texting.

Some things are fine to be taken care of by texting, things that don't have an emotional component and that don't require more-or-less undivided attention. Pointed, specific questions that are in no great rush to be answered. "Do we have any eggs left in the fridge?" "No, I think we're all out." "OK, I'll pick up a carton on the way home then." Or something similar at work, where email isn't a better option (like if they're more on-the-go).

But for most other contexts that we used to talk about over the phone, texting's shallow emotional depth and unpredictable delays between exchanges make it nearly impossible to really connect with another person that way. Remembering back to IM'ing, it wasn't so deep, but at least there were quick responses and an unbroken rhythm to the conversation. So it looks like the main defect of texting is just how interruptible the conversations are. And even during those interruptions, the person who has yet to respond usually does not have their mind on the conversation; they're off doing or thinking about something else.

These features prevent two people from lulling themselves into a dream-like state, where they're enjoying themselves so much that the flow of time goes by unnoticed. Instead it's like constantly trying to fall back asleep and return to a dream, only to keep getting woken up.

Our anxieties about technology always take the form of "such and such will leave us less human," but what that defining human quality is changes dramatically with the rises and falls in the crime rate, which shapes how outgoing or withdrawn we are. In outgoing rising-crime times, we fear that our sociability may be severed, while in cocooning falling-crime times, we fear that our solitude may be breached.

March 14, 2012

I've discovered that this is one of the most tell-tale signs that the songs of some music group are no longer worth listening to. Generally it means that the lyrics are no longer engaging or provocative, the overall energy has been drained, the variety of moods has narrowed, the melody has been cut out of the song's guts, and there are no more ornamental motifs. And it is no longer danceable at all, in the sense of taking over your body and making you tap your feet, drum your fingers, or want to get up out of your seat and move around.

What's left is basically a sonic recording of a heroin ride -- the lyrics are ponderously self-indulgent, a hazy space-out replaces a driving feeling, the mood is monotonously melancholy, the over-emphasis on harmony with hardly any melody heightens this spacey time-come-to-a-standstill quality, and so do the motifs, which are more like slowly surfacing kaleidoscopic flickerings, not catchy riffs or whatever that help you place a song in long-term memory. ("Oh yeah, that song that starts off -- " and cue the opening riff of "Satisfaction".)

No surprise that a good deal of this kind of music tends to be done under the influence of heroin itself, like the varieties of jazz that followed the Jazz Age (such as bebop), or the strains of rock that followed the Rock 'n' Roll Age (such as indie or "post-rock").

The function of this kind of music seems to be to allow the performers and listeners of music to go through the motions of performing and listening to what should be a socially and emotionally engaging experience, but isolating everyone into their own little spaced-out worlds. Thus the link to periods of cocooning (falling-crime times). They don't want to feel like there's a real interaction or shared experience going on, just a room or a hive full of closed-off individuals shooting up opiates through their earbuds.

Some talk about entire genres developing a "more mature sound" in the above way, but usually it's in the context of particular groups changing their sound over time. What are some examples that I've gotten burned by and learned these facts the hard way? That assumes that some of the group's earlier work was engaging, and that their "more mature sound" is a fall from grace. So I don't mean to rag on the group's output as a whole here.

I thought about including YouTube links to compare the earlier and "mature" sounds for each group, but that's going to take too long.

Disintegration by The Cure was the first time I remember consciously reflecting on the disconnect between "critical acclaim" -- i.e., fanboy gushing by loner nerds -- and human response. There are a few memorable songs ("Lullaby," "Lovesong," and the uplifting "Pictures of You"), but overall it is mind-numbing and emotionally off-putting. Sold it back used the next week.

After reading so much unqualified praise for the Eurythmics' album Savage, I checked out some songs on YouTube before buying it (now a standard precaution when the reviews are so effusive). Just couldn't dig it. Like the others here, it wasn't bad or unlistenable, since the core talent was still with the band, and yet -- I feel like saying they should just lighten up, it's only pop music, but their original hits weren't exactly upbeat. They were engaging and memorable, though.

Talk Talk got off to a great start in the heyday of new wave, their first album The Party's Over being about as good as any other from that era. However it was their follow-up, It's My Life, that blew most of the others out of the water -- why doesn't that album exemplify a "more mature sound" in the minds of music geeks? As with the Eurythmics, I've mostly sampled their later three albums before plunking down the money to buy them, and again I just can't groove to it. These are the ones that the reviewing class praises as helping rock music evolve past itself. Who knew suicide was so praiseworthy?

I doubt it's just the luck of the draw for the ones I've sampled either. The Pandora entries for the songs on their last three albums hardly mention melodic qualities at all, whereas 6 of the 9 songs on It's My Life have a sufficiently strong "thru-composed melodic style" that their entries mention it as an identifying feature. That's easily the highest rate of songs on an album with that quality. In plain English, it means that the somewhat largish units of the melody (like phrases or clauses in a sentence) tend not to be repeated very often. To sustain an entire song, that requires a lot more catchy phrases to be written, not to mention woven one into the next. Song composers tend not to use it that much because it sacrifices easy memorability (less repetition makes it harder to memorize) for greater sophistication. By the end you feel like you've taken a tour through many more melodic places.

That's probably enough to give you an idea. Some more widely understood examples would be Sgt. Pepper's and Abbey Road compared to Help! and Rubber Soul by the Beatles. But I figured I'd be more fair and subject the groups and styles I cherish to the criticism. Again I don't really hold it against the groups too much that an army of try-hard nerds has sanctified their less successful efforts just because it has the self-conscious stamps of Really Serious Music, which out of a deep insecurity they rely on as guides for what is good and bad, rather than just let the music speak to and move them -- or not -- and judge it that way.

March 12, 2012

The comments below brought up the topic of gays being more empathetic. That does seem like a widely held stereotype, but it only shows how debased our standards for "empathizing" have become.

Gays are nearly incapable of empathy with women (the only ones who seek out relationships with gays), something I touched on in a larger post about empathizing with imaginary people. I suspect that the rumor that gays are so good at empathizing initially spread from a self-selected group of women so profoundly disturbed that faggots could effortlessly understand where they were coming from, and resonate emotionally with them.

For empathy involves both a cognitive and an emotional component. The first is the ability to understand the other person's state of mind -- their goals, perhaps their history, their strategies, and so on. Once comprehended, the other's state of mind must give the empathizer a similar emotional feel. It doesn't mean they have to identify with the other, sanction their goals or feelings, or approve of their behavior -- and it doesn't mean they have to do the opposite either. It simply means they can both understand and emotionally resonate with another person's mind.

In the context of gays and women, there is a huge obstacle to comprehending the woman's state of mind because the goals and strategies to achieve them are so antithetical to the queer lifestyle. Women's looks and youthfulness are only part of what they worry about in themselves, and only to the extent that they impact their chances to find a monogamous stable relationship, even a lifelong partner with whom raising a family is at least a thought in the back of their minds. Gays focus almost exclusively on these qualities, and only insofar as they help get a hotter bod into their bed this weekend.

Put bluntly, women care about how marriageable they are, homosexuals obsess over how fuckable they are.

Still, there must be some more basic defect in their ability to empathize, since having opposite perspectives is only an obstacle, and truly empathetic people can understand and resonate with a perspective that's a long ways from their own.

Remember that the two principles that explain most of gay deviance are 1) having the mind of an addict, and 2) Peter Pan-ism -- being mentally stunted at around elementary-school age, aside from some further IQ development and obvious sexual interest.

Impaired ability to empathize could be chalked up to their addictive personalities: they are too focused on scoring their next fix that the concerns, thoughts, and feelings of others are just irrelevant. That pattern of lacking interest does not seem to be what we have here with gays. They socialize a lot with others, so the interest is there.

That leaves their Peter Pan-ism as the cause, and that sounds a lot more realistic. They aren't as profoundly lacking in empathy as a psychopath -- more like it never fully matures, and gets stuck in the quasi-narcissistic, somewhat autistic, and mildly sociopathic phase of children. Gays cannot get what makes other people tick, nor resonate with a random sample of humanity (only fag hags who share many of the same disturbances of the homos themselves). But it comes more from childish brattiness -- "Like omigod, why would anyone think that way or want that stuff? I mean it totally makes like no sense AT ALL. What a bunch of IDIOTSSSS."

I'm sure they'd pass the basic false belief tasks to test for empathetic development and autism, so they're not emotional toddlers. Again I'd say somewhere around elementary-school age.

What's going on when they occasionally do claim to understand and resonate with another's state of mind that is quite contrary to their own? It is what is observed in people with a highly anxious-preoccupied attachment style, or very clingy-needy in layman's terms. They tend to imagine similarities between themselves and the other person that don't really exist, probably as a way to feel more closely attached than they actually are.

This tendency tends to make others all the same -- namely, all like me, and therefore easy to understand and resonate with. "Omigod, I totally know that feeling -- it's JUST LIKE when I..." Of course, it's probably not just like that, and the clingy-needy person is erasing the other's unique features, putting his own in their place, and hallucinating a similarity. This is overwhelmingly the shape that the interaction takes when a homosexual doesn't vapidly dismiss another's state of mind, and at least claims to get them and feel them.

That shows up in what shallow interest they have in history or the future, which as I detailed in the post below is minimal. They can't accept the characters in literature, or the figures in history, for who they are. Everything has to have some kind of gay angle to it, or else they can't sustain interest. In a documentary for the DVD of Double Indemnity, some faggot critic almost welled up with tears talking in a breathy voice about the relationship between Fred MacMurray and Edward G. Robinson's characters -- how intimate a tale it was of "these... two... guys," alone and relying on each other, like it was a flick about prison sex or something. And totally lacking in self-awareness that this is how he came off. Similarly, every cool historical or fictional character just had to have been gay -- just had to have been.

The extensive mental abnormalities of gays are worth thinking about at length because it shows just how sick the culture has become in encouraging them to do as they please. We're telling addicts to indulge their vices, and puffing up the egos of so-called grown-ups who watch My Little Pony, so it's only fitting that the super-hero of today would be defined by both addictive and Peter Pan-ish qualities.

Whatever you think about how far the Civil Rights movement went -- did egging on blacks vs. whites lead to riots, etc.? -- at least we were dealing with a class of human beings. Same with feminism, ugly as that got. We've taken a truly bizarre turn with the homophile movement, holding up the profoundly mentally disturbed as paragons of humanity, when they lack man's most defining quality -- the ability to empathize.

March 9, 2012

The two principles that explain most of gay deviance are their Peter Pan-ism and having the minds of addicts. Little children are interested in ancient civilizations, and fascinated by the future, so the Peter Pan tendencies of gays does not tell us why they are uninterested in the non-present.

Rather, this seems to be due more to their addictive mind -- only out cruising for this moment's fix, whether scoring drugs, sucking cock, shopping, or getting showered with attention from their fag hag friends. Earlier fixes don't matter once their effect has worn off, and fixes too far into the future won't give them a rush right now.

What's the evidence for their not caring about the past or future? Lots, but just to throw out a few examples:

They've never been big in the environmentalist or other preservationist movements, which are usually manned by upper-middle class European do-gooder types. They don't contribute to futuristic fiction, whether in its nerdier or dreading genres. Not to mention how heavily they discount the future by ignoring how decrepit they will become so quickly because of AIDS and the thousands of other diseases they'll pick up, just to indulge their kinky sex lives.

Gays also have no sense of their own past, even the recent past and even when it was political and confrontational, similar in tone and spirit to the racial-ethnic civil rights movements. No fag under 30 or so knows what ACT UP was, and probably wouldn't be able to say much about the Stonewall Riots. They did make a homophile movie about Harvey Milk, though, so they may know about him.

I know there's a stereotype that gays are the preservers of old show tunes, but I don't see that. It may have been true for some small cohort of queers, but today's younger gays couldn't even tell you who the gay singers from the 1980s were, and they were very popular. They are just too caught on the fashion treadmill to wander back in time -- anything earlier than Lady Gaga, Pink, and Katy Perry are all kind of a blur to them.

This shows just how little sense of collective memory they have, as none of their favorite singers today -- or anyone's favorites -- are actually gay. Fag hags maybe, but not gays. As recently as the '80s there was Boy George, Rob Halford, Freddie Mercury, Pete Burns, Marc Almond, Neil Tennant, George Michael, Michael Stipe, just to name a few. In fact, I rarely see gays at '80s night compared to a random night out at a club -- one of the great things about it -- even though you'd think that music would be right up their alley, not just for danceability but also starring more of their own.

March 6, 2012

In the reading I've done since posting my argument that ornament is for enhancing memorability, I still haven't found anyone who's noticed the connection to memory. Some arguments were variants of the lazy economic interpretation about ornament being one weapon used in the struggles between and within classes, obviously wrong since it shows up too much in primitive societies where it's used only to distinguish ethnic groups at war with each other, not as weapons themselves in inter-group conflict. Plus ornament, where it's practiced, begins too early in life -- little girls trying on lipstick and little boys getting temporary tattoos are not part of any class conflict.

The mainstream view, though, was more about ornament as giving pleasure to the audience. Functionally the thing could exist without ornament, but adding it on top makes it so much more pleasing. And when certain movements want to strip surfaces of ornament, it's because of some Puritanical drive against the indulgence in pleasure, seen as sinful, decadent, corrupting, etc.

I don't deny that pleasure plays a role, but it works at a later level than what is ornament for, which is memorability. Some ornaments are more memorable than others, and some of that variation could be explained by differences in how pleasurable they are. Still, if anything pleasure is a means to the end of memorability.

How about a quick test, though? If we consume ornamentation for pleasure, then there should be something analogous to sensory overload and a de-toxing period afterward. Consider clear cases of pleasure like delicious food or no-strings sex. There is such a thing as too much delicious food and too much sex. If pushed beyond that limit, we experience an overload of the organs and senses involved in the pleasurable activity. Soon we go into a refractory period where we can't eat any more ice cream or go for another roll in the hay, even if we wanted to (which we don't).

We feel nothing like that when we're confronted with excessive levels of ornamentation. We feel puzzled about what exactly it is we're looking at -- there is just too much detail to clearly make out what is underneath it all, at a specific level. We may know that it's a building underneath all the high-relief sculpture, or that there's a melody somewhere under all the warbling, but we don't get a feel for the unique building in front of us or the unique melody we're hearing.

Excessive ornamentation is therefore an overload of the learning and memory systems -- we can't learn what specific thing it is that we're looking at, and so cannot store or retrieve it from long-term memory. We turn away feeling un-satisfied -- we wanted something memorable! We do not nearly collapse from our senses being over-satisfied.

What about those recurring movements that want to abolish ornament? Again I think there is some kind of revulsion they have toward pleasure per se, but that isn't the main motivation. They mostly want for buildings, melodies, etc., to be forgettable. Of course they still want them to be remembered -- not because of any memorable features, but because you're just supposed to worship this building and that melody. The arbiters of architectural and musical taste said so. It is always the mark of a low point in culture when the mainstream insists that we remember forgettable things.

This view explains the authoritarianism of such movements, which is hard to account for if they were merely against pleasure. Edit: Perhaps the anti-pleasure view can capture this too, like if people need authorities to steer them away from their inborn inclination toward pleasurable things.

It also goes with the heavily socially avoidant personalities that these people have -- let me be in my cell in the hive, and I'll let you be in yours. Avoidant people need attention and esteem like everyone else, but they don't want to connect with others, so that only leaves attention-whoring. Look at me and tell me how awesome I am, but not because I've reached out to you and done something worth praising.

On the other end of the spectrum, truly ornamental movements have no leaders, issue no manifestos, and have no desire to force their policies on the masses. Being more socially out-reaching, they lack the pretentiousness and self-consciousness that would kill the mood of belonging to a wild crowd atmosphere. They want to be remembered only for having made something memorable.

This shows why the Victorian pro-ornamental school never achieved the greatness of the earlier Romantic-Gothic period (i.e. Regency in England and Empire in France), the Art Nouveau and Art Deco periods not too long after, or the New Wave look of the 1970s and '80s (I mean the whole visual culture, including movies by that point). They may have genuinely wanted a return of a healthy level of ornament -- neither bare nor bewildering -- but the zeitgeist was too far in the authoritarian, avoidant, manifesto-writing direction. That hyper-activated sense of self-awareness kills any chance you have of falling into the dream state and letting the ideas, words, and pictures just flow out.

March 5, 2012

Here is an NYT article on the rise of "vocal fry," or creaky voice, among young women. Technically the original study did not compare how prevalent it was in two different time periods, but it is very common among young females today, and you didn't use to hear it that much. Future research, as they say, could compare its prevalence in popular teen movies from the '80s vs. the 2000s, or chart-topping pop singers from both times. The difference would be there, the question is how wide the gap is.

First, what does vocal fry sound like? Basically they keep their vocal cords so close together that it repeatedly and irregularly cuts off the stream of air. When the airflow is interrupted so much, it sounds like a creaking door or a croaking frog.

As an example, here is a Millennial girl using vocal fry for minutes on end. She mentions that she's taking public speaking classes, and that her teacher said she has terrible vocal fry. Yet here she is in a YouTube video slipping right back into it, that's how ingrained it is. Here is more of a caricature of what it sounds like (up through 0:40 anyway; you can skip the rest).

I didn't know there was a name for this annoying way of speaking until I read the NYT article -- fry-talkers! Just last week I heard a Millennial student give a 15-minute class presentation using vocal fry almost the entire time. It's not restricted just to younger females, though, as some fat middle-aged hag near me in Starbucks today yakked on and on into her phone with that voice. You may not have even noticed how common it is, but just listen for it and you'll be shocked.

Turning to what it all means:

So what does the use of vocal fry denote?...It can also be used to communicate disinterest, something teenage girls are notoriously fond of doing.

“It’s a mode of vibration that happens when the vocal cords are relatively lax, when subglottal pressure is low,” said Dr. Liberman. “So maybe some people use it when they’re relaxed and even bored, not especially aroused or invested in what they’re saying.”

The muscles that pull the cords together are innervated by the parasympathetic nervous system, which is the "rest and digest" system, the complement to the sympathetic nervous system ("fight, flight, fright, fucking"). Young people today are clearly more vegetative and unexcitable than young people used to be, and this over-activity of their parasympathetic nervous system is probably having side effects on their speech, making them sound robotic.

It is even more useful to see vocal fry in terms of its opposite. Its defining feature is the repeated blockage of airflow through the vocal cords, so the opposite is when too much air flows through them. The scientific jargon for this is "breathy voice." (Normal speech, with neither too much nor too little airflow, is "modal voice.") So take all of those associations with a voice that is very breathy, and then think of the opposite for vocal fry.

Breathy voice signals arousal, creaky voice signals hibernation. Breathy wants to bring the listener closer, creaky wants to keep them at a distance. In fact, breathy voice shades into whispering when the vocal cords don't vibrate much. During a whisper, there is still lots of air flowing through -- just not much vocal cord vibration to provide "voicing." Both breathy and whispery voices are less intelligible, making closeness to the speaker even more important.
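For readers who like to see the mechanics spelled out: the creaky/modal/breathy/whispery continuum can be caricatured as a glottal pulse train with two knobs -- how irregular and sparse the pulses are (creak) and how much aspiration noise is mixed in (breathiness, shading into pure noise for whisper). The following is only a toy sketch, not a real speech synthesizer; the function names and parameter values are my own inventions for illustration.

```python
import numpy as np

SR = 16000  # sample rate in Hz

def glottal_source(f0=120.0, dur=0.5, jitter=0.0, noise_mix=0.0, seed=0):
    """Crude glottal source model.

    f0        -- nominal pulse rate (Hz); creak is typically much slower
    jitter    -- fractional random variation of each pulse period
                 (creaky voice has large, irregular period variation)
    noise_mix -- 0.0 = fully voiced, higher values add aspiration noise
                 (breathy voice), approaching pure noise for a whisper
    """
    rng = np.random.default_rng(seed)
    n = int(SR * dur)
    voiced = np.zeros(n)
    t = 0.0
    while t < dur:
        voiced[int(t * SR)] = 1.0          # one glottal pulse
        period = (1.0 / f0) * (1.0 + jitter * rng.uniform(-1, 1))
        t += period
    noise = 0.05 * rng.standard_normal(n)  # aspiration component
    return (1.0 - noise_mix) * voiced + noise_mix * noise

modal   = glottal_source()                        # regular pulses, no noise
creaky  = glottal_source(f0=40.0, jitter=0.4)     # slow, irregular pulses
breathy = glottal_source(noise_mix=0.5)           # pulses plus aspiration

def pulse_count(x, threshold=0.4):
    """Count samples that look like glottal pulses."""
    return int((x >= threshold).sum())
```

The point of the sketch is just that creak and breathiness sit at opposite ends of one physical dimension: the creaky signal has far fewer (and unevenly spaced) pulses than the modal one, while the breathy signal keeps the pulses but buries them in continuous airflow noise.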

So it looks like the key factor driving the increased use of vocal fry is the increase in the avoidant types of social-emotional attachment styles. The avoidant types don't want to depend on others and don't want others to depend on them. They don't let themselves get excited -- if something begins to set something else off, they rush to shut it down. They're control freaks. Part of that is that when any kind of intimacy comes into view, they either have to peel out in the other direction, or if backed against the wall, stiffen their arms out in front to keep the closeness from getting any closer.

To keep others away, they adopt an irritating voice that says you're boring them. In the good old days, the creaky-croaky voice was mainly associated with snobs. Now everybody has a condescending view of everybody.

The attachment styles called "secure" and "anxious-preoccupied" (the clingy or needy ones) are more likely to use a breathy or whispery voice. They want to establish connections, and a whisper says "come here, I have something special to tell you." We immediately relate breathy voice to seduction, for example Michael Hutchence's delivery in "Need You Tonight". However, another second's thought shows that's only one instance of the broader pattern of striving for intimacy. Barry White isn't trying to seduce someone he's just met, but trying to set the mood of closeness and not holding anything back around someone he's been with for awhile.

And Suzanne Vega isn't conjuring up anything romantic or sexual in "Luka", in which a boy whose parents beat him up talks to a neighbor about what's going on. Here the breathy voice is a signal that he's trusting the listener, opening up, and hoping that they can empathize enough so that they might help him out. Imagine how easily you could wipe out that emotional coloring by having it sung with vocal fry a la Britney Spears or Kesha.

Female voices are breathier than male voices, on average, although the difference isn't huge like it is for pitch. The dimorphism emerges around puberty, though again we shouldn't be blinkered by thoughts about courtship and mating. Puberty is also when you start becoming more independent of your parents and more reliant on a crowd of peers to belong to as your social support.

Adolescent males can protect themselves better than females can, so girls probably develop breathier voices because they're more dependent on others for security, and need to reach out and maintain intimacy more. That includes their female friends, but also the males who they'll have to recruit into their circle. Like I said, the difference isn't huge, and males will have to do this stuff a lot too.

So although creaky voice could be viewed as a move toward androgyny, it's more simply explained by girls' retreat from relying on others, and thus not having to use breathier voices to reach out and cement intimate relationships.

Lab experiments that manipulate female voices consistently find that breathier voices are rated the most attractive by males. So here's a case where young women, who, being at their peak of reproductive potential, will supposedly go crazy to make themselves more appealing, are in fact making themselves more repulsive. And as that Millennial's YouTube clip shows, they don't try to correct it even when it's pointed out and harped on endlessly.

It's hard to make sense of that unless we remember that social avoidance is the rule, so they don't care if others find them off-putting. Hence all of the pop songs with "I love haters" messages. Girls also stopped wearing make-up, stopped volumizing their hair, and stopped showing skin, so the gross-sounding voice is not an isolated case. They just don't want boys to notice them anymore... and the strategy is working.

I wrote several paragraphs about how this all applies to homosexuals, but that reads like a post of its own, so I'll save it for a little later.

March 4, 2012

Dejavu is a misfiring of some brain process related to memory. But is it a sign of a healthy or a worsening memory system? Athletes get more sprains than people who play video games all day, yet those occasional unusual states of their joints are a sign of their overall healthier and more active condition.

Back when paranormal phenomena were taken more seriously by average people, the General Social Survey asked five questions about how often the respondents had experienced them. One of them (called DEJAVU) asked how often you "Thought you were somewhere you had been before, but knew that it was impossible." (The other questions are SPIRITS, GRACE, ESP, and VISIONS.)

Here is how frequently people had dejavu across the lifespan, where red means "never" and higher bars mean more frequent experiences:
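The tabulation behind a graph like this is just a crosstab of the DEJAVU responses by age group. A minimal sketch, assuming the GSS extract has been exported as rows of (age, response) -- the age bins and response labels here are illustrative, not the actual GSS codes:

```python
# Tabulate dejavu frequency by age group from (age, response) rows.
# Age bins and response labels are hypothetical stand-ins for the
# actual GSS coding of the DEJAVU variable.
from collections import Counter

def dejavu_by_age(rows, bins=((18, 29), (30, 44), (45, 59), (60, 89))):
    """rows: iterable of (age, response) pairs; returns {bin: Counter}."""
    out = {b: Counter() for b in bins}
    for age, resp in rows:
        for lo, hi in bins:
            if lo <= age <= hi:
                out[(lo, hi)][resp] += 1
                break
    return out

# Tiny hypothetical sample, for illustration only:
rows = [(25, "often"), (27, "never"), (50, "never"), (52, "once or twice")]
table = dejavu_by_age(rows)
print(table[(18, 29)]["often"])  # 1
```

Dividing each bin's counts by the bin total would give the stacked percentages a chart like the one above displays.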

People experience dejavu less frequently as they age, although the frequency doesn't really start to fall until their mid-40s, and it never recovers. So dejavu is a side effect of a better-functioning memory system.

Perhaps a young and healthy brain so rapidly and efficiently codes all sorts of details into long-term memory that once in a while it does so too fast, giving you the impression that you've already been someplace before. If most of the details are not coded in this too-fast way, just the occasional one or two, that would explain why you don't have a very rich sense of what the previous experience was like -- just a vague feeling that one or two details are familiar.

The age curve for dejavu didn't match the other four questions above, which looked like some mix of age and time-period factors. Nothing simple like a steady rise over the lifespan. So dejavu is not really a paranormal phenomenon. People are more willing to accept that it happens (probably because most people, though apparently not everyone, have experienced it first-hand), and can think of plausible naturalistic explanations for it without much trouble.

The more typical paranormal experiences, such as seeing events at an impossible distance or being in touch with the dead, appear to result from an over-active empathy system. They feel connections that are not there, kind of like "phantom limb" patients who feel that a missing limb is still attached to their body. The predicted sex difference shows up in paranormal experiences, where females (more empathetic) have them more often.

March 3, 2012

Individuals choose whether or not to adopt a certain technology, and how to use it if they do. I'll never get a smartphone, and even my cell phone I use way more for talking than texting.

The distribution of people along some psychological trait can change within several generations. So even if the same technology were introduced into the different time periods, its adoption and use would differ as well, perhaps dramatically.

In recent decades, we've shifted more in the autistic direction, so that we find people lining up outside Apple stores long in advance of the launch for the newest iPod, iPhone, Macbook, or whatever. Back in the good old days nobody was disturbed enough to do that for the original Walkman or its successors, VCRs, the '80s Macintosh computers, and so on.

Most views of virtual reality agree that if it were available, it would be used for individual escapist purposes. That's what just about everyone does with the internet and video games, for example. It would be even easier to pretend to be interacting with others in virtual reality, so presumably if it existed right now, what residue of social cohesion there is would dissolve.

But in 1983, the makers of the movie Brainstorm toyed with the idea of virtual reality that would allow the user to directly experience another person's thoughts, feelings, and memories, i.e. to maximally empathize with them. These qualities would be recorded by the same device, just as a VCR can both record and play back video.

It's not a very good movie, although some of the special effects look great. Happily, they don't try to explain the scientific backstory of how the device works -- only that it somehow records all of those qualities from one person's brain, and can transmit those into the brain of another. The movie focuses instead on what the implications would be.

One man involved in the project uses it to experience sex with a babe, where the guy who recorded it has no uniquely identifying features -- he's just there while the woman rides him. So, the user doesn't empathize with a particular person, just some generic dude who serves as his escapist conduit. That's what it would turn into if adopted today.

But the lead character uses it to experience the death of a close colleague who recorded her brain as she passed away. He re-lives the personal memories that flashed before her eyes, looks down on her body as her soul floats upward, and beholds a sublime sight of angels heading toward a great cosmic light source. Earlier he had used it to see how his estranged wife felt about him, and to give her a recording of his own warm memories about the two of them, so that she could re-live their better times.

He uses the device to empathize with specific other people, not the unidentifiable and interchangeable recorders of a porn episode or a rollercoaster ride experience.

Again it's probably not worth seeing unless you can find it cheap. It struck me more for its historical value, namely that movie-makers once thought of how virtual reality might be used to help us empathize more directly, rather than retreat even further from social interactions.

March 2, 2012

Met another Millennial chick at '80s night tonight with a name that has no clear nickname, Austin. "Aussie" makes her sound Australian, which she is not, and "Austie" doesn't exactly roll off the tongue. Then there was McCall... "Cally"? That sounds like the dog breed.

As much as I love ragging on Millennials, I haven't talked about their helicopter parents in awhile, and they're just as complicit, fully so in this case. In case you missed the original post, here's the punchline graph of how common girls' names ending in a long "ee" vowel have been, in blue, along with the homicide rate in red (they correlate at +0.5).

This suffix (the "diminutive") suggests cutesiness and informality, so it might tell us how much the parents wanted their daughters to grow up in a buddy-buddy social environment, rather than one where everyone is formal to the point of not connecting.

It's almost as though the parents of Millennials don't want their kids to be able to get up close and personal with each other, or else they'd give them nicknameable names. This would be an unconscious choice of course. But it certainly fits with their child-rearing style of locking them indoors, only letting them out on scheduled and supervised "play dates" (the stupidest parenting idea I ever heard of).

Unless you've been around them a lot, you'd be surprised how hard it is to find a suitable nickname for most of them. It's not the biggest obstacle to getting close to someone, but when it happens with nearly every single one, it does get a little depressing.

Even after years of exposure, it still boggles my mind that the sweet "Becky" has lost out to the dull "Becca" as the nickname of Rebecca.

I met a girl named Madison once, and that can be shortened to Maddy, which is cute enough. (She preferred "Madd," but that sounded too androgynous to my ears.) The cashier's name at the supermarket today was Lexi, which I assume is short for Alexis or something. That's not too bad either, although it has an unidentifiable porno sound to it, perhaps because it rhymes with "sexy"? Those are the only exceptions that spring to mind, although granted my mind's not in the right state for much to spring to it at all...

March 1, 2012

Earlier I looked at the popularity of duets between people who normally aren't in a group together, and showed that they track the trend in the violence rate over the past 50-some years. This is just one of many examples of people coming together more often, and across wider social distances, as coping with the rising level of crime gives them more of a reason to have larger and tighter social networks.

Another way to see the fragmentation of groups is to look at popular songs with more than one name attached as performers, but that are not duets. That is, there's no interaction, no addressing each other, and so on -- just a bunch of individuals doing their own isolated thing. Kind of like how in night clubs these days people don't dance with each other but only near each other. The mood they project is the same as well, namely "look how awesome I am" boasting.

Returning to the Billboard Year-end Hot 100 singles, I counted all songs that had more than one artist listed, and subtracted the number of duets already accounted for. Here is how this measure of group fragmentation has changed from 1959 to 2011:
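The counting method is simple enough to sketch: flag every song listing more than one artist, then drop the ones already classified as duets. The mini-list below is hypothetical, for illustration only -- the real input would be the typed-up year-end charts:

```python
# Count "fragmented" songs: multiple listed artists, but not a duet.
# The example entries are made up; real data would come from the
# Billboard Year-end Hot 100 lists.

def count_fragmented(songs, duets):
    """songs: list of (title, [artists]); duets: set of duet titles."""
    multi = [title for title, artists in songs if len(artists) > 1]
    return len([t for t in multi if t not in duets])

songs = [
    ("Song A", ["Artist 1"]),                  # solo act
    ("Song B", ["Artist 2", "Artist 3"]),      # a duet
    ("Song C", ["Artist 4", "Artist 5"]),      # featured guest, no interaction
]
duets = {"Song B"}
print(count_fragmented(songs, duets))  # 1
```

Running this per year and plotting the counts against the crime rate would reproduce the comparison described below.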

Once again we see a perfect fit with the trend in the crime rate: there are only 4 cases during all of rising-crime times, and the steady increase does not begin until 1993, so that in recent years they account for roughly one-third of the most popular singles in a year.

Just about all of these songs are rap and what passes for R&B; white-people music groups haven't splintered so badly. The lower in status a group is, the harder they get slammed by the falling-crime era dissolution of social bonds. Seems like another world, but I remember when black musicians were either solo artists with a backing band or part of a band or group, not free agents who have no team loyalty and only join up occasionally to add hype value to the song.

As late as the early '90s, most rappers and R&B singers were part of a group -- Boyz II Men, DJ Jazzy Jeff and the Fresh Prince, Naughty by Nature, Jodeci, Digital Underground, Onyx, Wreckx-n-Effect, etc. When Dr. Dre and Snoop Doggy Dogg's songs from The Chronic caught fire during '93, it was the beginning of the end of black music groups. That was also when the black gang problem began subsiding, showing again the general nature of this retreat from group-mindedness. Psh, fuck all y'all, I got ta do mah own thang...

Someone else (not me) could look at how many of the top 100 singles were made by solo artists over time.

I also wonder about the earlier 20th C. crime cycle. I don't have a very good feel for how team-oriented the jazz musicians were from ragtime through the early '30s (the rising-crime period). But I do get the impression from what little history I've read that it wasn't a bunch of free agents or solo acts with a background band. When big band and swing took over during the later '30s and '40s, it was more about the bandleader. The #1 weekly singles from the '40s and '50s that weren't big band do look like mostly solo acts. In the less popular directions that jazz went in (like bebop), it also looked like one star occasionally collaborating with another star, rather than forming a stable team.

White non-jazz musicians of the falling-crime mid-century don't seem to have been part of a cohesive group either, mostly solo singers and a backing band. It wasn't until rising-crime era rock music that groups with more equal-status members came back. Even early rock was by solo musicians like Elvis, Little Richard, Chuck Berry, and so on.