from the shameful dept

Well, you knew it was coming. First, law enforcement trotted out random low level "law enforcement officials" to freak out about Apple and Google's announced plans to make encryption the default on mobile phones. Then it got taken up a notch when FBI boss James Comey lashed out at the idea, bizarrely arguing that merely encrypting your data made individuals "above the law" (none of that is accurate). And, now, Comey's boss, Attorney General Eric Holder has stepped up to issue a similar warning. However, Holder has cynically chosen to do so at the Biannual Global Alliance Conference Against Child Sexual Abuse Online.

At this point, it's all too predictable that when anyone in power is getting ready to take away your rights, they'll figure out a way to claim that it's "for the children!" The statements over the past week by law enforcement, Comey and now Holder are clearly a coordinated attack -- the start of the new crypto wars (a repeat of what we went through a decade and a half ago), designed to pass some laws that effectively cripple encryption and put backdoors in place. Holder's take on this is to cynically pull on heartstrings about "protecting the children" despite this having nothing, whatsoever, to do with that.

When a child is in danger, law enforcement needs to be able to take every legally available step to quickly find and protect the child and to stop those that abuse children. It is worrisome to see companies thwarting our ability to do so.

Again, as stated last week, the same argument could be made about walls and doors and locks.

It is fully possible to permit law enforcement to do its job while still adequately protecting personal privacy.

The key issue here is "adequately" and forgive many of us for saying so, but the public no longer trusts the DOJ/NSA/FBI to handle these things appropriately. And, just as importantly, we have little faith that the backdoors that the DOJ is pushing for here aren't open to abuse by others with malicious intent. Protecting personal privacy is about protecting personal privacy -- and the way you do that is with encryption. Not backdoors.

But Holder used this opportunity to cynically pile on about criminals using encryption, rather than noting any of the important benefits to privacy it provides:

Recent technological advances have the potential to greatly embolden online criminals, providing new methods for abusers to avoid detection. In some cases, perpetrators are using cloud storage to cheaply and easily store tens of thousands of images and videos outside of any home or business – and to access those files from anywhere in the world. Many take advantage of encryption and anonymizing technology to conceal contraband materials and disguise their locations.

The DOJ has long wanted to restart the crypto wars that it lost (very badly) last time around (even though that "loss" helped enable parts of the internet to thrive by making it more secure). For years it's been looking to do things like expand wiretapping statutes like CALEA and mandate wiretap backdoors into all sorts of technology. Now it's cynically jumping on this bit of news about Apple and Google making it just slightly easier to protect your privacy to try to re-open those battles and shove through new laws that will emphatically decrease your privacy.

from the media-hype dept

Since we originally predicted and then witnessed major media outlets losing their minds over the horrific attempted murder committed by two young girls in Wisconsin, you may have heard that there has been a second attempted killing by another young girl that's also being billed as a "Slender Man killing." In this latest case, a young girl wore a white mask and attempted to stab her mother. She too is supposedly involved in the Creepypasta community and was interested in the Slender Man story. Media outlets that really ought to know better have since been knocking each other over in an attempt to shine the largest spotlight on an internet community that enjoys telling each other ghost stories, all in an attempt to assign blame for the horrors on which they report. It turns out that, had they delved a little deeper to understand what the Slender Man community is all about, they might have realized they should be turning that spotlight on themselves instead.

Fruzsina Eordogh, who appears to be well informed about how the Creepypasta community and the Slender Man myth operate, outlines the folly that is mass media reporting on the two attacks. She claims there have been two glaring omissions in the reporting, the first of which is an absolute refusal to mention the attention multipliers of online myth disseminators like PewDiePie. PewDiePie is something of a YouTube celebrity and his work includes Slender Man videos.

The little girls first tried to stab their friend in a public restroom, then in the woods. Both scenes, of Slender Man catching up with you in a public restroom and in the woods, happen in the first episode of PewDiePie's Slender Man video series. Is PewDiePie responsible for the stabbings? No, of course not.

In the case of the 13-year-old girl, her mother mentioned her daughter plays Minecraft, and the ultimate bad guy in the game is Enderman, a creepy figure the creator of Minecraft admitted to being inspired by Slender Man. Is Minecraft responsible for the stabbings? Again, no, of course not, but yet again every outlet has failed to mention the Slender Man-inspired monster in Minecraft.

Understanding the point of the above is key: nobody is blaming PewDiePie, Minecraft, or anyone else using the myth to tell stories for the killings. The sole point being made is that you have to understand the way the Slender Man community works: it is built off of gaining recognition in wider audiences. That's the whole point of the community, to expand upon the myth by creating doctored pictures, telling wilder stories, and getting Slender Man out there as widely as possible. The whole myth started with fictional photographs and news reports. That's the point. And guess how you take that to the next level?

Think of Slender Man as a community art project, where for years now adults, teens and tweens have been fabricating fake news articles, photographs, and even video games and comics, about Slender Man, this all-powerful, all-knowing spectre-monster with long arms (shaped like claws, or tree branches, or tentacles depending on the artist) that mind controls and kills people. When these two Wisconsin girls say they wanted to honor the Slender Man myth, to make him "real" and prove the "skeptics wrong," it sounds more like they wanted to participate in the community by creating the most credible news article about Slender Man ever. They didn't want to make him real by doing another photoshop, that's already been done. So how do you make the most credible news article about Slender Man? You actually go out and make Slender Man happen in a way the news can cover.

And, because these kids have a basic understanding of the world in which they live, they likely damn well knew the mass media would gobble this up like pigs at a trough. The girls in Wisconsin did more to propel the Slender Man myth into the public consciousness than anyone had before and the news outlets were their tools and unwitting conspirators. By the time the ink was dry on the Wisconsin story, a young girl in Ohio would have all the confirmation required on how to keep the news multiplier going in favor of Slender Man. Slender Man has jumped out of the fictional realm now, like the Poltergeist leaping from a television, and the news is the highway it took to get there.

Any media outlet that reports on this story and fails to mention that it as an outlet is now contributing to the Slender Man myth is laughably ignorant and dangerous. The press does not exist in a vacuum. The press cannot blame CreepyPasta and memes but not blame itself or PewDiePie. In fact, the primary driver of the Slender Man myth is no longer PewDiePie, Minecraft, CreepyPasta, reddit, 4chan or Something Awful, but the press itself.

At some point all the people of our world are going to have to sit down and have a long conversation about how our news media operates to glorify some of the horrors we face. Murderers, school shooters, and terrorists all rely on the clockwork-like commitment of mass media to turn them into what they most want to be: a spectacle. We don't have to play along, but we do. That part's on us. But it's also on a media culture that prefers to keep the important role that they play out of the story.

from the what-a-waste-of-time dept

So, this weekend's news in the tech world was flooded with a "story" about how a "chatbot" passed the Turing Test for "the first time," with lots of publications buying every point in the story and talking about what a big deal it was. Except, almost everything about the story is bogus and a bunch of gullible reporters ran with it, because that's what they do. First, here's the press release from the University of Reading, which should have set off all sorts of alarm bells for any reporter. Here are some quotes, almost all of which are misleading or bogus:

The 65 year-old iconic Turing Test was passed for the very first time by supercomputer Eugene Goostman during Turing Test 2014 held at the renowned Royal Society in London on Saturday.

'Eugene', a computer programme that simulates a 13 year old boy, was developed in Saint Petersburg, Russia. The development team includes Eugene's creator Vladimir Veselov, who was born in Russia and now lives in the United States, and Ukrainian born Eugene Demchenko who now lives in Russia.

[....] If a computer is mistaken for a human more than 30% of the time during a series of five minute keyboard conversations it passes the test. No computer has ever achieved this, until now. Eugene managed to convince 33% of the human judges that it was human.

Okay, almost everything about the story is bogus. Let's dig in:

It's not a "supercomputer," it's a chatbot: a script made to mimic human conversation. There is no intelligence, artificial or otherwise, involved. It's just a chatbot.
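To make concrete what "just a chatbot" means: a script that pattern-matches on the input and spits back canned responses. Here's a minimal ELIZA-style sketch (the patterns and replies are invented for illustration; this is not Eugene Goostman's actual code):

```python
import random
import re

# A chatbot is a lookup table with a thin disguise: match a pattern
# in the user's input, emit a canned reply. No understanding involved.
RULES = [
    (re.compile(r"\bhow old\b", re.I),
     ["I am thirteen years old.", "Thirteen. Why do you ask?"]),
    (re.compile(r"\bwhere\b.*\bfrom\b", re.I),
     ["I live in Odessa, it is a big city in Ukraine."]),
    (re.compile(r"\?$"),
     ["That is a difficult question.", "Why do you want to know that?"]),
]

# Deflections for anything the rules don't cover -- exactly the kind of
# non-answer that a "13-year-old non-native speaker" persona helps excuse.
FALLBACKS = ["I don't understand. Ask me something else.",
             "My English is not so good, sorry."]

def reply(user_input: str) -> str:
    for pattern, responses in RULES:
        if pattern.search(user_input):
            return random.choice(responses)
    return random.choice(FALLBACKS)

print(reply("How old are you?"))                # canned age answer
print(reply("Tell me about quantum physics."))  # deflection
```

The persona does all the heavy lifting: whenever no pattern matches, the fallback deflections get mentally excused by the judges as a kid struggling with English.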

Plenty of other chatbots have similarly claimed to have "passed" the Turing test in the past (often with higher ratings). Here's a story from three years ago about another bot, Cleverbot, "passing" the Turing Test by convincing 59% of judges it was human (much higher than the 33% Eugene Goostman claims).

It "beat" the Turing test here by "gaming" the rules -- by telling people the computer was a 13-year-old boy from Ukraine, priming judges to mentally explain away odd responses.

The "rules" of the Turing test always seem to change. Hell, Turing's original test was quite different anyway.

As Chris Dixon points out, you don't get to run a single test with judges that you picked and declare you accomplished something. That's just not how it's done. If someone claimed to have created nuclear fusion or cured cancer, you'd wait for some peer review and repeat tests under other circumstances before buying it, right?

The whole concept of the Turing Test itself is kind of a joke. While it's fun to think about, creating a chatbot that can fool humans is not really the same thing as creating artificial intelligence. Many in the AI world look on the Turing Test as a needless distraction.

Oh, and the biggest red flag of all. The event was organized by Kevin Warwick at Reading University. If you've spent any time at all in the tech world, you should automatically have red flags raised around that name. Warwick is somewhat infamous for his ridiculous claims to the press, which gullible reporters repeat without question. He's been doing it for decades. All the way back in 2000, we were writing about all the ridiculous press he got for claiming to be the world's first "cyborg" for implanting a chip in his arm. There was even a -- since taken down -- Kevin Warwick Watch website that mocked and categorized all of his media appearances in which gullible reporters simply repeated all of his nutty claims. Warwick had gone quiet for a while, but back in 2010, we wrote about how his lab was getting bogus press for claiming to have "the first human infected with a computer virus." The Register has rightly referred to Warwick as both "Captain Cyborg" and a "media strumpet" and has long been chronicling his escapades in exaggerating bogus stories about the intersection of humans and computers.

Basically, any reporter should view extraordinary claims associated with Warwick with extreme caution. But that's not what happened at all. Instead, as is all too typical with Warwick claims, the press went nutty over it, including publications that should know better. Here are just a few sample headlines. The absolute worst are the ones who claim this is a "supercomputer."

Anyway, a lot of hubbub over nothing special that everyone seemed to buy into because of the easy headlines (which is exactly what Warwick always counts on). So, since we just spent all this time on a useless nothing, let's end it with the obligatory xkcd:

from the this-network-runs-on-nonsense dept

When it comes to wireless networks, no amount of hype is too much when it comes to promoting looming standards or the next generation of wireless technology. You probably remember WiMax, which Intel hyped as "the most important invention since the Internet itself." The press gladly grabbed Intel's claim and ran with it, insisting repeatedly that the technology was going to change absolutely everything. As it turned out, WiMax wound up being a niche solution that barely made a dent before being made irrelevant by other standards, like HSPA+ and LTE.

You might also be familiar with the constant marketing distortions that herald the arrival of the latest "next-generation" (third generation=3G, fourth generation=4G) wireless standard, whether it's the way Verizon initially pretended that their old network was 3G, or the way that all carriers currently pretend to offer the largest 4G network. Ultimately carriers "fixed" complaints about them being misleading by convincing the ITU that they should be allowed to call pretty much everything 4G, regardless of whether we're talking about LTE or carrier pigeon.

Enter the fifth generation of wireless (5G), which hasn't even been defined yet, but which people are already fairly sure is going to wash your dishes, cure cancer, and help John Travolta with his pronunciation problems. The generations generally come in ten year increments, and while 5G is just a vague outline currently, South Korea appears prepared to lead the charge, spending $1.5 billion to research next-generation 5G networks (whatever they wind up being) that they claim will provide speeds 1,000 times faster than what's available today.

"Neelie Kroes, vice president of the European Commission, sees 5G as a potential cure for youth unemployment, which has reached 70 percent in some areas of the European Union. It's also going to be key for e-health services and the automotive industry, she said at a news conference in Barcelona."

Is there anything the next, entirely ambiguous incarnation of wireless technologies can't do? The best part moving forward is, even if you're not actually offering "5G" any time in the next decade, you can always just pretend you do. Put "5G researcher" on your next resume update even if you're a janitor. Sell "5G" burgers! Insist your company's network is the only network that's 5G, and everybody else's network actually runs on pudding! Go ahead! Nobody will fact check. Enjoy!

from the no-expectation-of-accuracy dept

Okay, so as a bunch of folks have been sending over today, there's been a bit of a furor over a press release pushed out by Consumer Watchdog, a hilariously ridiculous group that has decided that Google is 100% pure evil. The "story" claims that Google has admitted in court that there is no expectation of privacy over Gmail. This is not actually true -- but we'll get to that. This story is a bit complex because the claims in most of the news coverage about this are simply wrong -- but I still think Google made a big mistake in making this particular filing. So, first, let's explain why the coverage is completely bogus trumped up bullshit from Consumer Watchdog, and then we'll explain why Google still shouldn't have made this filing.

First off, you may recall Consumer Watchdog from previous stunts, such as putting together a hilariously misleading and almost 100% factually inaccurate video portrayal of Eric Schmidt, which was all really part of an effort to sell more copies of its founder's book (something the group flat out admitted to us in an email). They're not a consumer watchdog site -- they're a group that makes completely hogwash claims to try to generate attention for a campaign to attack Google.

The press release from Consumer Watchdog fits along its typical approach to these things: take something totally out of context, put some hysterical and inaccurate phrasing around it, dump an attention-grabbing headline on it and send it off to the press. In this case, it claimed that Google had said in a court filing that you have no expectation of privacy with Gmail. That got a bunch of folks in the press to bite with wildly inaccurate headlines:

The first three of those headlines are simply flat-out factually incorrect. I mean, not even close, and it's fairly incredible that those come from the three more "established" or "mainstream" news publications. The last three are slightly more correct, but still completely miss the point. The best debunking of these claims so far comes from Nilay Patel at The Verge, who breaks down the details. The filing, which is from over a month ago, is a response to an absolutely, monumentally bogus class action lawsuit filed against Google, arguing, hilariously, that it's a violation of wiretap laws to put ads next to emails based on the text of those emails. No, seriously.

As Patel points out, first, if you put the argument back into context, it's not even about Gmail users -- as the top three headlines above falsely state. Google is arguing that non-Gmail users are consenting to the fact that when they send an email, the ISPs who receive the email will automatically process them. This should not be controversial. At all. Without that concept email doesn't work. As the filing states (which the folks hyping this ignore):

Non-Gmail users who send emails to Gmail recipients must expect that their emails will be subjected to Google's normal processes as the [email] provider for their intended recipients.

In other words, there's no "there" there. All Google was arguing was that courts have held that if you are using a communication service, there's a perfectly reasonable (in fact, expected) recognition that the service provider will have the right to process some information about that communication. In the context of the case that Google cites, the infamous Smith v. Maryland, the argument is that the service provider is reasonably expected to be able to track the user's activity. That's not controversial. The controversial step that Smith v. Maryland then makes is to argue that because the service provider has a right to that basic information, the end user has no expectation of privacy with regards to the government getting access to the same info. That's the problem with Smith v. Maryland -- the failure to recognize the massive difference between me (1) consenting to let my phone company record who I make phone calls to in exchange for the ability to make calls and (2) the expectation that it's okay for the government to collect that very same info without a warrant.

Google's citation of Smith v. Maryland is to make the first half of that argument -- showing that courts recognize the obvious: that when you use a communication service, there are certain aspects of information that you know the service provider is going to have access to. Without that you don't have email, or (realistically speaking) the internet.

So, this is all much ado about nothing.

Except... I still think it was a mistake for Google to use this legal argument, and I'm somewhat surprised Google's legal team let it through in the first place. First, Google does not need this citation to make this point. There are other cases that can make this point effectively without touching on the government spying aspect. But the real reason why this is a mistake is that Google has given fairly strong indications in recent statements that it's willing to fight back against certain government requests for user info (and that it's done so in the past). In those cases, the government is absolutely going to cite Smith v. Maryland as its evidence that users have no expectation of privacy in their communications -- and now it will also point out that Google cited the case approvingly. Google will want to argue that Smith v. Maryland is outdated law that was decided wrongly and/or in a different time under a different technology ecosystem. And this is a very, very strong argument that has a good chance of winning. But the ability of the government to point out that Google has, in other cases, cited the Smith precedent approvingly -- even if it was really only part of the Smith precedent -- could undermine Google's arguments against Smith in future cases down the road.

Either way: the freakout here is totally manufactured by a bogus, laughable group that is spreading ideas that would do massive harm to the internet based on a near total ignorance of how things work. Yes, people are on edge given the NSA revelations, but this "gotcha" is no "gotcha" at all. It's just more evidence of the sheer duplicity of Consumer Watchdog. That said, it was still short-sighted for Google to make this claim in a filing. They didn't need the citation, and while it may help them win this ridiculous class action lawsuit, it may come back to bite them down the road in more important cases.

from the ridiculous dept

We've already discussed just how bizarre it is that the US's big terror alert and embassy evacuation has already involved revealing details of how the government figured out what Al Qaeda is up to. It appears that plenty of experts in these fields are completely mystified by the government's actions here, both in their reaction to the threats and in revealing the specific way they found out about it (at the same time that they insist secrecy is needed over their data collection methods).

“It’s crazy pants – you can quote me,” said Will McCants, a former State Department adviser on extremism who this month joins the Brookings Saban Center as the director of its project on U.S. relations with the Islamic world.

“We just showed our hand, so now they’re obviously going to change their position on when and where” to attack, said Nada Bakos, a former CIA analyst who was part of the team that hunted Osama bin Laden for years.

“It’s not completely random, but most people are, like, ‘Whaaat?’ ” said Aaron Zelin, who researches militants for the Washington Institute for Near East Policy and blogs about them at Jihadology.net

Of course, now it's come out that it wasn't "email" that the US found out about, but a conference call between various Al Qaeda leadership, but that doesn't really change very much. In many ways, it makes the story even more bizarre. We've already heard from Pentagon-friendly reporters claims that the terrorists were changing how they communicate after the Snowden revelations -- and yet suddenly they all jump onto a conference call that the US government can easily monitor? Really?

from the gone-baby-gone dept

If you remember, about five years ago, a bunch of astroturfing and front groups for the broadband companies started spreading this myth that the internet was facing a catastrophe known as the exaflood, in which internet traffic would swamp capacity and the internet would sputter to a crawl. They talked about things like "brown outs" where so much traffic would make the internet difficult to navigate. Of course, it was all FUD and scare tactics to hide the real intent: to allow the telcos to put more tollbooths on the internet, to double charge some popular internet companies, and to generally try to avoid investing in basic infrastructure. It was easy to debunk those claims at the time, but five years later, Broadband Reports takes a look at some of the latest data to note that the feared exaflood never showed up, the predicted clogged pipes never appeared -- and the data on internet growth shows little likelihood of that ever happening.

Cisco's latest numbers are an even further cry from what telecom sector lobbyists and think tankers were predicting in 2010 and before, when they were using a looming "exaflood" to scare regulators and the press and public into buying into bad telecom policy. Companies like Nemertes Research and The Discovery Institute (the latter a PR firm paid directly by carriers, the former long accused of having a rather cozy relationship with AT&T) insisted we'd be seeing Internet "brown outs" by this point courtesy of unsustainable growth rates of up to 100% or more.

The scary predictions were effective. Said lobbyists, think tankers, astroturfers and "fauxcademics" convinced many people that if the telecom industry wasn't given "X" (X being anything from fewer consumer protections and more subsidies to the right to bill by the byte or avoid network neutrality rules), that the Internet would collapse. That obviously never happened and intelligent engineers and networks adjusted, but few of the people who massaged data for their own financial ends over the last five to eight years were ever really held accountable.
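It's worth seeing just how fast those scare numbers compound, which is what made them so effective. A quick back-of-the-envelope comparison -- the 100% annual rate is the figure from the predictions quoted above; the ~30% "actual" rate is an illustrative assumption roughly in the ballpark of what Cisco has reported, not an exact figure:

```python
# Compound growth: total traffic multiplier after n years at annual rate r.
def growth_multiplier(rate: float, years: int) -> float:
    return (1 + rate) ** years

years = 5
predicted = growth_multiplier(1.00, years)  # the "exaflood" scenario: 100%/yr
actual = growth_multiplier(0.30, years)     # illustrative ~30%/yr

print(f"Predicted: {predicted:.0f}x traffic in {years} years")   # 32x
print(f"Actual-ish: {actual:.1f}x traffic in {years} years")     # ~3.7x
```

A 32x surge sounds apocalyptic; a ~4x rise is the kind of thing network engineers routinely plan for -- which is exactly why the networks adjusted and the brown outs never came.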

Of course, there's always more fear and FUD to go around, so expect plenty more stories about looming problems if we don't give the big broadband guys whatever anti-competitive thing that they want going forward...

from the what-is-it-good-for?--absolutely-nothin' dept

You would have to be a deaf and blind person with a penchant for head-burying to have missed the drum beats of a supposed cyber war the American government has been touting over the past year or so. It's a one-sided conversation that has been hyperbolic on a level normally associated with sketch comedy. Terms like "Cyber Pearl Harbor" are thrown around without any sense of historical context. In fact, many are questioning whether the entire production is simply a political game, with no real threat existing at all. Unfortunately, many more Americans have now incorporated this manufactured fear into their psyches. Still, the drum beat continues, with the United States labeling Iran as our chief enemy in this inevitable, or perhaps already occurring, cyber war.

The first shot was probably the release of Stuxnet sometime during or before 2009. Even though no one has officially claimed responsibility, everyone knows who was behind it. Stuxnet hit with a bang and did a whole lot of damage to Iran's uranium-enrichment capabilities. The United States followed that up with Flame -- the Ebola virus of spyware.

What did the Iranians fire back with? A series of massive, on-going and ineffective DDoS attacks on American banks. This is a disproportionate response but not in the way military experts usually mean that phrase. It's the equivalent of someone stealing your car and you throwing an ever-increasing number of eggs at his house in response.

That's what makes all of this seem so monumentally silly. The government is making use of an American public that is massively ignorant about who Iran is and what it's capable of to go legislatively nutbars in our own country. Don't ask me why they're doing it, but they are. Perhaps more importantly, we're being told that we need legislation to protect against an incapable enemy in a war that we started. If that makes sense to you, chances are you need psychiatric care.

And even more problematic, and frustrating for me personally, is that our government isn't even putting in the effort to fool me properly. It's one thing to have Colin Powell waving a test tube at the UN and shouting "We're all going to die!", but it's quite another to have folks like Gen. William Shelton talking about potential risks in a potential war that we potentially started with a potential threat that we created by attacking it. That's entirely too much potential and not enough blatant falsehood. If the government wants to bullshit us, they can't go in halfway. I need real creative lying, not nonsense reports that they have to subsequently pull because they're...you know...made up.

ProPublica reported yesterday that a widely cited Defense Department study claiming Iran's Intelligence Ministry constitutes "a terror and assassination force 30,000 strong" has been "pulled for revisions." It seems there's no proof whatsoever that the 30,000 number wasn't pulled out of thin air.

See, it's not that I'm siding with the pea-shooters here, it's that I'm more scared of the guys that started this war with their tanks. Particularly when the result is poorly-conceived legislation.

from the style-and-substance dept

To describe the adoption and sales of 3D TVs as underwhelming would be an understatement. Sales may not be absolutely abysmal, depending on your definition, but this was supposed to be the next big thing, and it turns out most consumers don't give two poops about 3D television. (We really love that TechDigest.tv paid us homage with their logo, btw.) Despite awful gimmicks, 3D TVs have always felt like a product created for a market that manufacturers intended to produce through marketing, rather than one that existed organically.

Well, if you've been paying attention to the news coming out of CES, one of the new buzzwords you may recognize is "4K TV," which offers picture resolutions up to four times what was previously available on big screen TVs. But, just as with 3D TVs, questions abound about whether or not a large enough market exists for these products.

The word 3D is barely being uttered at CES 2013, but just about all the major TV makers are talking about 4K or ultra-definition HDTV that has four times the resolution of those 1080p sets many of us now own. That's a lot of pixels, which means the picture will be sharper not just when you're sitting several feet away from the set but even if you get up close.

But most of us don't get all that close to big screen TVs. The 4K sets being shown at CES are big. Samsung has an 85 inch set, Sony is already selling an 84 inch model. About the smallest set you'll find is 55 inches but even with that size screen, people tend to sit a bit back from the screen. I have a 55 inch 1080p set perched several feet in front of my living room couch so I rarely get close enough to my TV to notice any gaps between pixels.

The idea here is that at some point, there are going to be diminishing returns on resolution. Whether 1080p represents that point remains to be seen, but it may be reasonable to think that we've reached a level where more needs to be done to generate interest in higher resolution TVs besides just announcing them and showing the normal demos. They tried that with 3D TVs, along with a few movies that lent themselves well to the 3D experience, and we know that wasn't enough. The real opportunity here is content, specifically good 4K TV content that really takes advantage of all that the technology has to offer. That means content shot with the higher resolutions in mind, including mind-blowing shots that will simply pop with the higher resolution. Sony is the big player here, so you can probably already guess the route they've decided to go.
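The diminishing-returns point can actually be roughed out with some napkin math. Human visual acuity is commonly approximated at about one arcminute per pixel; under that assumption (and a 16:9 panel, using the 55-inch example from the quote above), you can estimate the distance beyond which extra pixels simply stop being resolvable:

```python
import math

def max_useful_distance_ft(diagonal_in: float, horizontal_px: int) -> float:
    """Distance beyond which individual pixels subtend less than one
    arcminute -- a common approximation of the eye's resolving limit."""
    width_in = diagonal_in * 16 / math.hypot(16, 9)  # 16:9 panel width
    pixel_pitch_in = width_in / horizontal_px        # center-to-center pixel spacing
    one_arcmin_rad = math.radians(1 / 60)
    # Small-angle approximation: distance = pitch / angle; convert inches -> feet.
    return pixel_pitch_in / one_arcmin_rad / 12

# On a 55" set, 1080p pixels blur together beyond roughly 7 feet...
print(f"1080p: {max_useful_distance_ft(55, 1920):.1f} ft")
# ...so 4K's extra pixels only register if you sit closer than that,
# and 4K itself saturates the eye beyond roughly 3.6 feet.
print(f"4K:    {max_useful_distance_ft(55, 3840):.1f} ft")
```

By that estimate, a viewer parked several feet from a 55-inch screen is already at or past the point where 4K's extra pixels are visible at all, which is exactly the skepticism in the quote above.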

Taking advantage of the fact that it owns its own movie studios, Sony is trying to jump start content by re-rendering some of its own films into 4K and encouraging short film makers to create content. But it will still be a while before there is enough native 4K content out there to give viewers a lot of choice of programming.

Sorry, but rehashing old content isn't going to do the trick here, and a lack of early adoption and interest may doom 4K TV to 3D TV's fate. It's all about the implementation. Your new release should show us why we already want the product, not try to generate interest that wasn't natively there. Higher resolutions could be a selling point, if there were content that took advantage of them. Given that Sony, as already mentioned, owns its own movie studios, I would have expected it to time the product release to something it had created to take advantage of the technology. Sadly, it looks like the $20k+ 4K TVs won't be off to a hot, or useful, start.

from the LEAVE-THE-INTERNET-ALONE!-*sob* dept

The Internet gets blamed for so much. This loose collective of millions of users and websites is blamed for everything from killing off major industries to turning the world's children into short attention span txt fiends. The Internet will kill art via piracy, we are assured repeatedly. A new generation of children will be raised by the wan glow of LCD monitors and nurtured by thousands of ethereal Facebook friends.

Tastemakers heard it, then moguls who were de facto tastemakers, and it spread to listeners who knew nothing about the singer except this beautiful thing she'd written. They fell in love at first listen. They gushed. They sang along. They recorded karaoke videos and public swoon mobs and re-enactments of its summer-love video. They sent it to No. 1 for seemingly the entire summer and sent its singer to what looked an awful lot like dazed stardom.

Doesn't all of that sound absolutely horrible? Apparently St. Asaph would prefer that Jepsen had wallowed in obscurity so that she'd never have to be disappointed by having found and then lost fame. Better to have never had it at all, if I'm following the logic here correctly.

Jepsen and her two bandmates recognized it was best to strike while the iron was still tepid and ventured into the studio with enough co-producers and songwriters to choke a "Tribute to Lou Pearlman" compilation. Jepsen's debut album was released and promptly fell off the public radar, failing to surpass 100,000 sales. This sort of situation is hardly unique. Plenty of big hits have been followed by a loud sucking noise as fans rush off to examine the Next Big Thing, creating a temporary vacuum in their wake.

St. Asaph discusses the internet's well-chronicled role in Jepsen's rapid rise to fame. It's not so much the rise to stardom that concerns St. Asaph (and leads toward murder charges being brought against the Internet), though; it's what happened during the rise. In her estimation, the homicidal Internet took the spotlight off the talented Jepsen and shone it on itself, taking something vital away from the actual artist with its endless stream of remixes, lip dubs, image macros, covers and other forms of audience participation.

This sounds counterintuitive; shouldn't it help Jepsen for thousands of people to remix, recreate and otherwise rejoice over her song? But the meme's not about Jepsen; it's about her song, and she is secondary... This is the problem Carly Rae Jepsen's facing: loving "Call Me Maybe" as a meme hasn't made people invested in her as a musician.

That may seem unfortunate, but it's hardly unique and it's hardly new. It certainly isn't an "Internet" problem. In fact, throw quotes around "problem" as well. Super-popular pop stars are rarely embraced as artists. They're embraced as temporary phenomena, a momentary distraction to be enjoyed until the next groundswell displaces them.

Long before the Internet was meming artists to death on a regular basis (and in broad daylight!), people were picking up and discarding pop phenomena nearly as quickly. (If you don't want a bunch of horrible songs stuck in your head, you might want to skip ahead to the next paragraph.) Remember the "Macarena"? Did anyone ever care about the musicians behind the devilishly circuitous hook, or the "choreographer" who crafted a dance so easily emulated your grandmother has probably attempted it? How was the album, I ask rhetorically, as if anyone outside of the artists involved has ever listened to the entire thing? How about Right Said Fred, whose "I'm Too Sexy" took clubs by storm for an entirely unreasonable amount of time before vanishing into the pop ether? Lou Bega, temporary mambo king who finally hit it big with his fifth attempt? How about Jesus Jones, who had two singles hit the US Hot 100 but managed to leave the charts untroubled for the next four albums? Chumbawamba were a frickin' anarchist collective, and yet all anyone in the US knows is that they cranked out the perfect drinking song about drinking. The list could go on and on, and that's only covering a small part of a single decade.

The Internet doesn't split artists from their creations. It certainly provides more avenues for interpretation, but it doesn't change anything about humanity's relationship with charting artists. Very few artists enjoy continued mainstream success, no matter how artistically valid their non-hit offerings are. To lay this at the feet of an inherently participatory culture, one whose participation was previously limited to drunkenly bellowing a 75%-correct karaoke interpretation or performing a 75%-correct interpretive dance, is to take a few steps into elitist territory and chastise people for only liking the "hits." The tool set the Internet provides may bring a much wider variety of participation (and bring it much faster), but it's not anyone's "fault" that Carly Rae Jepsen's album isn't racking up hundreds of thousands of sales. That's simply the nature of pop culture. The phrase "15 minutes of fame" has been around since before you had an internet connection.

And while you're fitting the Internet for a Murder One charge, you might want to step back and consider that Jepsen's rise to superstardom, however brief, was largely due to this very same Internet. While it's true that the Internet wears many hats -- some white, some black -- you can't just hold it responsible for destroying artists and ignore its star-making power.