Of the 93 million persons without broadband identified by the study, about 80 million are adults. Small numbers of them access the Internet by dial-up connections, or outside the home at places like offices or libraries, but most never log on anywhere.

…

Asked about the reasons for not having broadband at home, almost half of respondents cited a prohibitive cost, and almost as many said they were uncomfortable using a computer. Forty-five percent answered “yes” to the statement, “I am worried about all the bad things that can happen if I use the Internet.” Others said they viewed the Internet as a waste of time.

A more accurate headline might be “One-Third of U.S. Without Computers, Mostly By Choice.”

This is fascinating and worthy of discussion in our industry. Collectively, we’ve screwed up. Badly. What can we do to make computers attractive to the third of our country who don’t use any of our stuff?

The F.C.C. was mandated by Congress to produce a detailed plan with specific recommendations to hasten the national adoption of broadband in the United States. […] It will recommend, among other elements, an expansion of broadband adoption from the current 65 percent to more than 90 percent, Mr. Genachowski said in a blog post on an F.C.C. Web site last week.

It’s not a question of broadband versus dial-up, but of computer users versus non-users.

How does the FCC intend to make more people care about computers? How will the FCC address those who can’t afford a computer and internet service?

Without good answers to both questions (and I don’t believe good answers are possible), I don’t see how we can significantly raise this metric. And even with great solutions to both, 90% seems unrealistic.

Instead of trying to raise broadband penetration to impossible levels, why not try to improve the broadband options that the majority of the country uses? That’s the sort of thing that the FCC is supposed to do, despite failing miserably to do so for the last decade.

Foreignness was a means of escape—physical, psychological and moral. In another country you could flee easy categorisation by your education, your work, your class, your family, your accent, your politics. You could reinvent yourself, if only in your own mind. You were not caught up in the mundanities of the place you inhabited, any more than you wanted to be. You did not vote for the government, its problems were not your problems. You were irresponsible. Irresponsibility might seem to moralists an unsatisfactory condition for an adult, but in practice it can be a huge relief.

The anonymous comments section of any major media site or popular blog will be so crammed with bile and bickering, accusation and pule, hatred and sneer you can’t help but feel violently disappointed by the shocking lack of basic human kindness and respect, much less a sense of positivism or perspective.

If you’ve never owned or used an iPhone, you’ll probably find the Nexus One to be a very adequate device and will assume that the minor annoyances are just part of owning a smart phone. If you’ve owned an iPhone for any length of time, you’ll likely feel, as I do, that it’s a rather half-baked device with some good ideas but generally weak execution.

So let’s see what happens if [an inventor] tries to use a patent to stop them. He says “Oh No, IBM. You cannot compete with me. I’ve got this patent. IBM says let’s see. Let’s look at your product. Hmmm. I’ve got this patent and this one and this one and this one and this one and this one, which parts of your product infringe. If you think you can fight against all of them in court, I will just go back and find some more. So, why don’t you cross license with me?” And then this brilliant small inventor says “Well, OK, I’ll cross license”. So he can go back and make these wonderful whatever it is, but so can IBM. IBM gets access to his patent and gets the right to compete with him, which means that this patent didn’t “protect” him at all. The patent system doesn’t really do that.

I’ve considered the arguments by Stallman, John Gruber, and Tim Bray on software patents, and I side with Stallman in that software patents are inherently problematic and are a net loss for society.

The major difference in their arguments is that, while all three mention the realities and dysfunctions of the patent system, Stallman focuses strongly on the difference between what it’s intended to do and what actually happens. He also illustrates the reality of trying to develop any nontrivial software in a patent-filled landscape.

Many argue that inventors should be protected and incentivized by patents; otherwise, they would stop inventing. It’s a nice theory, but it doesn’t hold up for software.1

We can argue about what the system should do, or what it theoretically does, or what it ideally does, but that’s an academic exercise at best. To evaluate whether software patents are a net gain for society, we need to evaluate their reality, which differs quite a bit from most arguments for why patents are necessary.

The USPTO has repeatedly shown that they do not possess the ability to issue software patents responsibly. This isn’t the agency’s fault — it’s impossible in practice. As Stallman says, it often takes advanced computer scientists to even realize that two given patents are functionally identical, or that a patent application represents something trivial and already in widespread use.

Trademarks and copyrights are much easier for an agency to evaluate, but software patenting has resulted in a mess of trivial, invalid, and duplicate patents being issued and dysfunctionally “enforced” by threats and settlements at a tremendous cost to society.

As a working software developer, the thought of accidentally and unknowingly stumbling into someone’s patent is terrifying. There’s no question that it has hurt our industry in the past and will continue to artificially restrict progress indefinitely, and there’s little convincing evidence that the supposed benefits exist in practice at a large enough scale to maintain the status quo.

It’s highly questionable whether it holds up in most fields, but software is a particularly poor fit. Other extremely poor fits include business methods and genetically modified crops. The lack of enforcement and maintenance requirements for patents is also problematic and promotes dysfunction and fraud. ↩︎

A popular blog truncated its RSS feeds to boost site pageviews. This is just like last week, when The Atlantic switched to partial-content RSS feeds. And that was like every other week, when some publisher did something that some readers didn’t like in order to make a few more cents.

I dislike the intrusive advertising on Salon, so I don’t read Salon. I dislike Michael Arrington, so I never read anything on TechCrunch (even when they write about me or my products) and have taken technical measures to ensure that I never even land there accidentally and give them whatever tiny profit that one pageview is worth. I don’t like the timebombed, Unicode-breaking Clickability print-friendly view for New York Magazine, since I like reading NYMag-length pieces in Instapaper and Clickability doesn’t work well in it, so I just don’t read NYMag’s articles. I don’t like Ars Technica’s paginated articles, but since I don’t want to pay for a subscription, I just read every page separately, give them all of their separate-page ad views, and save each page to Instapaper if I want to read them that way.

One reaction I’ve never had is to think that I deserve anything from these publishers.

Valid point: [Publisher] should consider doing it some other way because this will alienate some readers.

Invalid point: [Publisher] should do it my way because all content deserves to be free/ad-free/full-RSS/single-page.

I see a staggering amount of entitlement every day in the form of arguments and blog posts like the latter.

We don’t deserve anything. Publishers can do whatever they want. If you don’t like it, don’t send them nasty emails or browse their sites with ad-blockers: just don’t support them. Don’t read their content, don’t link to them, and don’t talk about them. Since money’s not usually involved, vote with your attention and read elsewhere.

We’re often told that we should design our websites and software to mimic real-life objects. The iPhone strengthened this idiom, and Apple has been driving this home hard for the iPad.

But it’s not absolute, and it’s not always the best idea. My favorite counterexample is the typical calculator app:

Nearly everything about a real calculator is faithfully reproduced, but with the good comes the bad: nearly every limitation and frustration has also been reproduced. There’s very little reason to use the software facsimile over its real-world equivalent, and in some ways, the physical object is better.

Despite being faithfully designed to look and work like a real-world object, the Calculator app hasn’t made any progress. It hasn’t advanced technology. It hasn’t made anything more useful or created new interaction models.

My preferred calculator, which I will keep blogging about until it’s ubiquitous, wasn’t modeled on any physical object because there’s no physical equivalent to what it does.

Please ignore the two glaring errors I made while cobbling this together for the picture.

Functionally, it’s almost a calculator. But it’s also almost a spreadsheet and almost a list pad. By not constraining its design to that of a common physical object, it’s able to be and do much more than anything in the physical world ever could.

It handles a number of critical features much better than the Calculator app, such as multipart calculations, parentheses, editing existing values, and dynamic value references. Even trivial operations are so much nicer that Soulver converts rarely even open Calculator (or use a physical one), preferring instead to keep a Soulver window open somewhere as a scratch pad.
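To illustrate the “dynamic value references” idea, here’s a minimal sketch in Python. The names and syntax are my own inventions for illustration — Soulver’s actual syntax and implementation are different — but it shows the core concept: each line is an expression, and later lines can reference earlier results.

```python
def evaluate_pad(lines):
    """Evaluate a list of expressions in order, exposing each
    earlier result to later lines as line1, line2, and so on."""
    results = []
    for expr in lines:
        # Build a namespace of prior results: line1, line2, ...
        env = {"line%d" % (i + 1): v for i, v in enumerate(results)}
        # eval() keeps the sketch short; a real app would use a proper parser.
        results.append(eval(expr, {"__builtins__": {}}, env))
    return results

# Edit any line and re-run, and everything downstream updates --
# the "dynamic" part that a one-line calculator display can't do.
print(evaluate_pad(["2 + 3", "line1 * 10", "line2 / 4"]))  # [5, 50, 12.5]
```

Because each line stays visible and editable, fixing a typo in the middle of a multipart calculation doesn’t mean hitting Clear and starting over.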

So last week, when good writers (1, 2, 3, 4) started discussing the merits of emulating page-turning, I took notice. Especially since I added pagination to Instapaper Pro 2.2 and had to make some difficult decisions in the process. There was no question in my mind that it was better for reading than scrolling — even better than my semi-automated, low-effort tilt scrolling.

But I didn’t implement it because books have pages and lack scrolling. Books aren’t even the right physical-object equivalent for Instapaper. Not all reading happens in books.

Instapaper is more like a magazine than anything else, but I’m not about to try to reproduce the soggy, wrinkled covers from being shoved in the mailbox, the perfume samples, the ten-page “continued on” jumps in the middle of articles, or the subscription cards falling out as you’re trying to read.

(The iPad version of Instapaper that I’ve made so far, incidentally, doesn’t resemble any physical objects. I haven’t shoved huge newspaper or book graphics in there in a misguided effort to win an ADA. Just as Soulver looks like nothing but Soulver, Instapaper on iPad just looks like Instapaper.)

I implemented pagination because it improves reading, not because a related physical item separates text into pages.

Improving the product, not faithfully reproducing the physical object, always gets priority. I passed on a long, complex page-turning animation because it didn’t make sense (you’re paging up/down, not left/right) and it would have been distracting. And I opted for an extremely brief cross-fade, rather than a slide, because slides take longer and are more visually jarring.

DVD players don’t make fake whirring noises for five minutes before letting you eject a disc to simulate rewinding. Similarly, nobody should need to perform a full-width swipe gesture and wait two seconds for their fake page to turn in their fake book1, and nobody should need to click the fake Clear button and start their calculation over because their fake calculator only has a one-line, non-editable fake LCD.

It’s important to find the balance between real-world reproduction and usability progress. Physical objects often do things in certain ways for good reasons, and we should try to preserve them. But much of the time, they’re done in those ways because of physical, technical, economic, or practical limitations that don’t need to apply anymore.

UPDATE: I’m fully aware that the iBooks app supports a tap to change pages — it doesn’t require a full swipe, and its animation is quick. I’m speaking more generally here, not specifically about iBooks. ↩︎

VoiceOver can read any book’s text aloud in iBooks. Apple must have cleared that with all publishers as part of the standard iBooks distribution deal.

The iPad plays AVI files with Motion-JPEG video. This is the format used by many consumer-level digital cameras when recording video clips. Given the iPad’s camera-card-reader attachment, this is almost certainly the reason for AVI-MJPEG support.

I’ve been asked a lot today which iPad model I reserved and why. I didn’t think anyone would care, but apparently you do, so thanks I guess, and here’s what you asked for:

I’m getting the 16 GB, WiFi-only model.

Why only 16 GB?

I opted for “only” the 16 GB iPhone 3GS as well last summer, and it has proven to be a good idea for me, given:

Due to my developer responsibilities and my gadget addiction, I will probably be buying every version of these devices, so I’m really only buying it for ~1 year.

I never watch video on anything but my TV, so I don’t store much (if any) video on portable devices.

I don’t sync my whole music collection, and I can fit as much as I could possibly want on-the-go or on a trip within about 8-10 GB.

So I figure the 16 GB model is a safe bet, and I can save the $100 for next year’s iPad.

Why WiFi-only?

I really need it ASAP for Instapaper development.

I’ll be using it mostly at home, where I have WiFi, and on trips, during which I’m often on airplanes, outside of cellular reception areas, or on other WiFi networks anyway.

I have a Verizon USB EVDO modem that I’m very happy with. Its contract is about to expire, and I’m probably going to replace it with a contract-less MiFi, which I can get for only a bit more than the AT&T iPad’s $130 price premium. Verizon’s service costs a lot more, but it really is so much better for data, especially in the New York area, that it’s worth the cost.

Any light mobile data needs can be served by my iPhone.

Obviously, your needs will vary from mine, so you might decide differently.

It’s time for the tech industry to distribute itself beyond the Silicon Valley power center. … Ultimately, I think it’s important that our industry supports people’s ability to live and work wherever they choose, because we need to be around real people in order to understand what the real problems that need solving are.

As much as I’ve always reflexively loathed the “FUD” acronym, if it quacks like an SCO lawyer and all that. As I said, Bray’s not stupid. When he refuses to distinguish between the iPhone App Store and the Internet, he knows what he’s doing.

While I think Bray makes some good points in the referenced piece, I have to agree with this Coyote Tracks post: Bray’s conflation of the internet and the App Store is careless at best, and could plausibly be malicious.

Whatever the intent, it’s a misleading application of partial truths to imply something that differs significantly from the complete truth. In other words, it’s bullshit. And it’s therefore a weak argument that doesn’t help anyone.

The phrase ‘I don’t have time for’ should never be said. We all get the same amount of time every day. If you can’t do something it’s not about the quantity of time. It’s really about how important the task is to you. I’m sure if you were having a heart attack, you’d magically find time to go to the hospital. That time would come from something else you’d planned to do, but now seems less important. This is how time works all the time. What people really mean when they say ‘I don’t have time’ is this thing is not important enough to earn my time. It’s a polite way to tell people they’re not worth your time.

Can you imagine a better reform? Sure. … But an ideal plan isn’t on the table. And what is on the table, ready to go, is legislation that is fiscally responsible, takes major steps toward dealing with rising health care costs, and would make us a better, fairer, more decent nation.

David Barnard posted the left screenshot, saying, “This is why I haven’t complained about AT&T in a while.”

The right screenshot is why I’m still going to complain, why I’ve never cared about iPhone tethering, why I’d rather pair my iPad with a Verizon MiFi, and why I’ve always been happy with the Verizon stick-modem that the MiFi will replace for my laptop.

Obama’s legislation comes from an alternative idea, begun under the Eisenhower administration and developed under Nixon, of a market for health care based on private insurers and employers. Eisenhower locked in the tax break for employee health benefits; Nixon pushed prepaid, competing health plans, and urged a requirement that employers cover their employees. Obama applies Nixon’s idea and takes it a step further by requiring all Americans to carry health insurance, and giving subsidies to those who need it. So don’t believe anyone who says Obama’s health care legislation marks a swing of the pendulum back toward the Great Society and the New Deal. Obama’s health bill is a very conservative piece of legislation, building on a Republican rather than a New Deal foundation. The New Deal foundation would have offered Medicare to all Americans or, at the very least, featured a public insurance option.

The fact that NOT ONE SINGLE REPUBLICAN voted against the party line is damning (more than thirty Democrats crossed the other way, by comparison). Today’s GOP has abandoned all pretense of serving the people or attempting to redress the country’s problems. Today’s GOP belongs to the religious fundamentalists, the loonies and the haters, the lobbyists for the banks and corporations, and the very military industrial complex that Eisenhower warned against in his farewell speech. They proved that tonight.

This is what I do to test my new drive before tomorrow, when I can conveniently buy the Molex-to-SATA-power cable that I need to install it properly in my 2008 Mac Pro. (Which, by the way, I’m still immensely satisfied with and is nowhere near needing any other upgrades except maybe a 25nm SSD later this year.)

This retired hard-drive enclosure was kind enough to lend its SATA connection cables to a more modern drive a few inches away.

It actually works. I now have a USB-mounted Blu-ray drive and a very messy desk.

If there’s a single feature that elevates the iPhone from the rest of the pack, it’s the way that it urges and enables me to maximize the amount of time I spend thinking and doing and creating, each and every day. I’ve got ten minutes while I wait for a burger to arrive, three minutes at the post office as a clerk explains the concept of a “forever stamp” to the unenlightened, six minutes waiting in the subway…it all adds up. I leave the house with my iPhone in my pocket, and I come home with new photos, new drawings, a few tiny things written, many pages of books read, and a better sense of the news of the day.

I accomplished (okay, “accomplished”) all of that in crumbs of time that otherwise would have gone to waste. I don’t get that sort of effect from other phones…

I’ve been sitting on this idea forever, but the chances that I’ll ever do anything with it are close enough to zero that I’m letting it go. (It’s not original, either, but it has yet to make it into a widespread calendar product.)

The basic premise is obvious: Calendar software overdoes the metaphor and carries too much baggage from its physical-object predecessor.

I find myself always keeping my calendars in “month” view, since most weeks only have a few items. (I work the same schedule every weekday and I rarely meet with people.)

The problem is obvious when it’s near the end of a month, like today:

(The same problem applies to the Day and Week views at the end of their intervals.)

There are two problems here:

I don’t care about the past. It can be hidden in a separate view for the rare occasions that I want to look at past items. Yet the past is consuming the majority of the interface.

I don’t care about present-and-future items with equal granularity. I wouldn’t mind seeing today in an hour-by-hour view, but I don’t need the same granularity when showing events three days from now.

If I switch to a more granular view for today, I lose the ability to see any of what’s happening next week.

The ideal view1 would contain today’s events in great detail, then events from the next few days in less detail, then an overview of events in the next 3-5 weeks.
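As a rough sketch of that idea — and the thresholds below are my own guesses, not anything from a shipping calendar — the view could bucket each event by its distance from today and render each bucket at a different granularity:

```python
from datetime import date, timedelta

def granularity_for(event_day, today):
    """Pick a display granularity by distance from today:
    nearer events get finer detail, and the past is hidden."""
    delta = (event_day - today).days
    if delta < 0:
        return "hidden"    # past events live in a separate view
    if delta == 0:
        return "hourly"    # today: full hour-by-hour detail
    if delta <= 3:
        return "daily"     # the next few days: less detail
    return "overview"      # the next 3-5 weeks: compact summary

today = date(2010, 3, 29)
for offset in (-1, 0, 2, 14):
    day = today + timedelta(days=offset)
    print(day, granularity_for(day, today))
```

The payoff is that the whole screen is always spent on things you can still act on, regardless of where in the month today happens to fall.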

Bonus idea

The same problems, with the same potential solution, apply to driving directions and navigation screens.

Pageviews, as a metric used for directly billing advertisers, are a scam. Publishers game it with sensational link-bait articles and bullshit tricks like breaking articles into multiple ‘pages’. Advertisers get stuck paying for valueless impressions.

Let’s start with the developer environment itself. It only runs on a Mac. WHY? If Google can release Mac, Windows and Linux versions of their Android Developer Kit, why the heck can’t you make an emulator and compiler stack with XCode and Interface Builder that runs on PCs?

For one, the iPhone simulator (not emulator — big difference, technically) depends on a lot of underlying shared OS X functionality, so porting it to other platforms is a major undertaking.

The Android Developer Kit is nowhere near this level of sophistication. Its Java-based emulator is barely usable even on a Mac Pro, and its tools are much more buggy and unintuitive than Apple’s. (And Xcode has its share of bugs and unintuitive functionality, so that’s saying a lot.)

But the biggest reason why there’s no iPhone SDK on Windows or Linux is that it doesn’t need to exist. The iPhone is the premier platform where the most money is being made. Developers will come to Apple — Apple doesn’t need to come to developers. (Google does, as the underdog.) It’s the same reason why there’s no OS X or Linux port of Microsoft Visual Studio, and you don’t see a lot of Mac owners yelling at Microsoft for not porting its sophisticated development environment to their chosen operating system.

And it’s not like anyone is stopping you from buying a Mac.

It’s bad enough that they have to learn Objective-C, which is a completely unique programming language to your products.

So? If I want to program for Android or BlackBerry, I need to learn Java — and each platform, presumably, has very different interface libraries even within the same language. If I want to program for Microsoft’s platforms, I need to learn the .NET Framework and C# or ASP.NET. In all of these cases, I haven’t yet needed to use these languages in my professional life, so I’d need to learn them.

Google came to the race late. Why are they making me learn Java? Why don’t they let me write my Android apps in PHP, the language I know best?

Right. It’s a ridiculous argument.

Every major platform is going to require a vastly different library, and probably a different language, from what many programmers have already used. That’s why we have the ability to learn new languages, a process that many of us find enjoyable and enlightening.

… So now some of these developers have gone ahead and invested in a Mac to write iPhone and iPad apps, which were accepted through your process. Then months later, you then yank 5000 applications off the App Store because they are “Overtly Sexual”. … A lot of people spent time and energy and resources into writing these apps, “Overtly Sexual” or not. And then you go ahead and rip out all the wireless discovery apps that use undocumented APIs, or those that have “Minimal Functionality”. Can we have some sort of codified Halakha for this? Like the Talmud of iTablets? So we can stop wasting everybody’s time?

No undocumented APIs: Stated very clearly in the developer agreement since the beginning.

No porn or adult-themed apps: Right there in the keynote, plus in the developer agreement.

“Minimal functionality”: To the best of my knowledge, this has only ever been used to prohibit apps that do absolutely nothing except show a static image or color, or something similarly useless, that could be constructed with less than 10 minutes of work.

But keep writing your angry rants instead of competing with the rest of us in this massive, booming software market.

The state of iPhone development, briefly

iPhone developers have complained a lot (myself included), but that’s because we’re highly motivated to make iPhone software and there have been a lot of rough edges.

But the problems keep getting fixed, and there’s very little left to complain about. Even Apple’s app-review process has dramatically improved over the last few months to be much faster and offer more detailed feedback for rejections, which eliminates or trivializes most1 of the problems with app review.

Meanwhile, the biggest problem with other platforms is insurmountable: Not nearly enough people want to use them. That’s not a problem that we’ve ever had over here.

Not all, of course. But the only way to do that would be to eliminate app review altogether, which introduces other problems that I’m not sure are worth the tradeoff. See: Android Marketplace. ↩︎