from the pumping-up-the-numbers dept

Years ago, when I worked for a company that was trying to do digital distribution of software apps, we had a competitor that used to claim it had agreements to distribute 300,000 apps. We, on the other hand, had agreements for more like 3,000 apps, which certainly made us look a lot smaller. The problem? There weren't even 300,000 apps in existence at the time. The other company had done some deals with clip art providers, and counted each piece of clip art as an "app." But, in the numbers game, it really looked good (and bad for us).

I'm reminded of that story as Om Malik digs a bit into Apple's claim of 65,000 apps in its iPhone App Store, and points out how misleading that number is, because a few providers are uploading apps in bulk. These are really a single app each, differentiated only by the content they pull from the web:

These are typically local search or travel apps written by a single publisher. Molinker is one such example. It pulls content from Wikipedia and Flickr for a country or travel destination and renders it for viewing offline. Molinker offers more than 800 of such applications, at 99 cents a pop. Another bulk apps provider is GP Apps; it has 380-plus apps, each of which essentially takes a search word and marries it to Google Maps.

In reality, each of these is one app, with a single distinct instruction concerning what content to pull. But Apple gets to count each one as a separate app to puff up the numbers (which is useful, given the growing competition from other phone app stores). But Om is correct. Such apps should be counted as a single app, and the number of apps in the store should reflect that. Otherwise, someone could (for example) create an RSS-reader-type app where each copy pulls a specific RSS feed, upload one copy for each of the millions of different RSS feeds out there, and boost the app store's app count into the millions in no time. But that would be incredibly misleading.
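To see just how cheap that trick is, here's a toy sketch: one template function plus a list of content URLs yields as many "apps" as you have URLs, even though only one piece of code exists. All the names and URLs below are invented for illustration; this is not any real publisher's or store's code.

```python
# Hypothetical sketch: a single "RSS reader" codebase stamped out as many
# "apps" by swapping in a different feed URL. Invented for illustration.

def make_app(feed_url):
    """Return one 'app': identical logic, differentiated only by its feed."""
    def app():
        # A real app would fetch and render the feed; here we just report it.
        return f"Reading articles from {feed_url}"
    app.feed_url = feed_url
    return app

# One template plus a list of feeds yields an arbitrarily large "catalog".
feeds = [f"https://example.com/feed/{i}" for i in range(1000)]
catalog = [make_app(url) for url in feeds]

print(len(catalog))   # 1000 "apps" -- but really one app, 1,000 parameters
print(catalog[0]())   # same code, different content source
```

The point of the sketch is that the marginal cost of "app" number 1,000 is one list entry, which is exactly why counting each entry as a separate app inflates the store's numbers.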

from the it's-not-like-we've-got-computers-that-can-count dept

You know, the one thing that computers are supposed to be good at is counting things accurately. So why is it so hard to do so when it comes to counting votes? We recently wrote about the case in Washington DC's primaries where election officials were struggling to figure out the source of an awful lot of votes for a non-existent write-in candidate. Sequoia, the makers of the e-voting machines in question, were quick to deny any and all responsibility with a hilarious "thou dost protest too much" statement: "There's absolutely no problem with the machines in the polling places. No. No."

Either way, it appears that officials in DC still can't properly add up the votes, and are noting that 13 separate races all show the exact same number of overvotes -- 1,542 -- though no one can explain why. Sequoia continues to stand by its original statement that the problem must be one of human error -- though it fails to explain how simple human error would create 1,542 extra votes in 13 entirely separate races, or why it didn't design a system that would prevent "human error" from creating such votes.

from the are-they-serious? dept

For all the trouble surrounding e-voting, some folks believe that optical scan technologies that simply count the paper ballot votes are a decent solution. Of course, those optical scan technologies are often made by the same companies that make the e-voting equipment, and have been shown to have numerous problems going back many years. And, as per usual with these e-voting companies, they've been highly resistant to independent inspection of the systems. Perhaps that's because the machines can't do the one thing they're supposed to do properly: count the votes.

Down in Palm Beach County, Florida (yes, the home of the infamous 2000 election year "butterfly ballot" with its hanging chads), officials are admitting that they've somehow lost about 3,400 ballots. But they don't seem to be saying they physically lost the ballots -- they're saying that the optical scan machines, provided by Sequoia Voting Systems (no stranger to e-voting counting problems), count the ballots differently when the same ballots are run through different machines. In trying to explain how a "recount" showed 3,400 fewer ballots than the original count, a county official offered:

The seven high-speed tabulating machines used in the recount are much more "unforgiving" than those that process votes on election day

Does that not seem highly problematic to people? Isn't part of the point of these optical scan machines that they'll count the ballots consistently? If everyone seems to admit that there's an element of near-total randomness (chalked up to how "unforgiving" the machines are) in these machines, isn't that reason enough to question their use at all? As for the election in question, it appears that officials have decided to throw up their hands at the controversy and certify the election, despite the fact that this "unforgiving" recount changed the results of the election. Update: Well, now officials are claiming that it wasn't a technology problem, but that they simply didn't feed some ballots into the machine. That's not particularly comforting either -- and it's still troublesome that they would suggest the machines count the votes differently in the first place.