So far, Lenovo hasn’t had the best 2015. After preinstalling pernicious adware from a company called Superfish on many of its laptops, the company faced a deserved backlash from concerned customers and the security community. Now it’s trying to make amends.

In a statement on Friday (the day for buried news), the company said it will drastically reduce the amount of software on its machines that doesn’t directly provide customers with services and capabilities they desire. Such additional software is often called “bloatware,” or adware in the case of Superfish. Lenovo PCs will now ship with just an operating system, software that coordinates with hardware, security software, and in-house applications made by Lenovo itself.

The statement says (emphasis preserved):

The events of last week reinforce the principle that customer experience, security and privacy must be our top priorities. With this in mind, we will significantly reduce preloaded applications. Our goal is clear: To become the leader in providing cleaner, safer PCs.

It's good to see Lenovo talking the talk. And the company says that it will release more information about its plans in the next seven days. But it’ll be a while before we can assess the extent to which the company has cleaned up its security practices. Especially when you consider that all of this happened in the first place over a paltry $250,000 contract with Superfish, according to an estimate by Forbes. That doesn’t indicate great judgment.

Why are corporations inclined toward denials when embarrassing problems are discovered? Two cases in the past week offer some clues—and illustrate how companies can respond vastly better in such situations.

Both revelations involved technology companies. Lenovo had installed some awful third-party software on a number of consumer-marketed personal computers running Windows, resulting in a genuinely horrific violation of customers’ security. Meanwhile, according to a story in the Intercept based on the Edward Snowden leaks, U.S. and British spies hacked Gemalto, the biggest manufacturer of mobile-phone SIM cards, as part of a campaign to undermine the security of users’ phones.

In both cases, the initial reaction from the companies was, essentially, a denial that anything serious was wrong. But Lenovo changed that stance after being confronted, in often harsh ways, by people who knew better—notably security experts who pointed out the absurdity of the company’s what-me-worry claims. Now, in its latest public statements, Lenovo is saying it’s going to do everything in its power to ensure that it never lands in a similar position in the future.*

Gemalto says it investigated, and that actually everything is fine. A corporate statement includes the following: “No breaches were found in the infrastructure running our SIM activity or in other parts of the secure network which manage our other products such as banking cards, ID cards or electronic passports. Each of these networks is isolated from one another and they are not connected to external networks.”

That was the rough equivalent of “move along, nothing to see here”—and it led to further skewering from security experts. “This is an investigation that seems mainly designed to produce positive statements,” Matthew Green of Johns Hopkins University told the Intercept. “It is not an investigation at all.”

If the security gurus are correct, Gemalto doesn’t even realize the danger it faces. Let’s hope that company’s customers will demand more clarity, though the world’s telecommunications carriers are traditionally joined at the hip with spy-happy governments, and it may not bother them much that once again they’re giving their customers’ security little or no thought.

As Slate’s David Auerbach suggests, Lenovo’s worst offense was ineptitude, which isn’t something you want to see from a company that tens to hundreds of millions of customers rely on for their personal-computing platforms. (As Auerbach also points out, the third-party software companies at least as responsible for this debacle—Superfish and Komodia—are disgustingly unapologetic or silent.)

When the Lenovo story broke late last week, I tweeted about how bad this looked for the PC maker, and how that pained me as a longtime customer, because it had diminished the likelihood that I would buy from the company again. In an early email exchange with a senior Lenovo official, who seemed genuinely perplexed by my reaction, I expressed amazement at the company’s blatant misstatements about the security implications. He later acknowledged, as the company’s chief information officer did publicly, that the harsh reactions had been fair.

In doing that, Lenovo was taking a cue, though somewhat belatedly, from the school of public relations called “crisis communications,” which is mostly common sense when applied with integrity. PR practitioners spend a lot of time helping clients prepare for what seem like inevitable crises, and one of the most important rules, of course, is don’t do stupid stuff that will get you in trouble. But since humans run enterprises, problems are likely anyway.

So how can companies handle these kinds of cybersecurity cases?

First, they should never, ever lie about the situations. If we know anything at this point, it’s that digital security is—at best—a moving target. Saying “We don’t know for certain, but we’re looking into this as fast as is humanly possible” makes a lot more sense. Perhaps lawyers are often involved in decisions to brazen it out in crises of this kind, because companies just hate to admit anything that might give class-action lawyers any ammunition.

Second, they should make public mea culpas if they screwed up. Again, the lawyers probably freak out at the possibility, but then again the lawyers work for the company, not vice versa.

Third, they should publicly explain how they’re going to avoid recurrences. This is easier when the issue is malware you install yourself, and much more difficult when you’re fighting off some of the best-equipped spies on the planet.

Lenovo has taken a further, and valuable, step: It vows to “become the leader in providing cleaner, safer PCs”—eliminating “what our industry calls ‘adware’ and ‘bloatware.’” It would be great if this move sparked a race to the top, with vendors competing to offer systems that don’t compromise users’ security and privacy.

One of the most responsible admissions of a screw-up followed by strong action to prevent a recurrence came a few years ago from Consumers Union, the nonprofit that operates Consumer Reports. A report on infant car seats was so deeply flawed that it threatened the magazine’s sole basis for survival: the trust of its audience. Consumer Reports retracted the article in a letter to its audience, and then, after a genuine internal investigation, published a long and instructive piece on what had gone wrong, and how it intended to prevent something like that from happening again.

I’m more skeptical now of Consumer Reports. But I still generally trust it. And I still buy it. There’s a lesson there.

*Correction, Feb. 27, 2015: Due to an editing error, this post originally misstated that Lenovo was saying the company would do everything in its power to ensure that it land in a similar position in the future. It actually said it’s going to do everything it can not to land in such a position.

Hyperloop Transportation Technologies, the company that wants to move the revolutionary transit system out of Elon Musk’s brain into the real world, plans to start construction on an actual hyperloop next year.

OK, it will only run five miles around central California, and it won’t come anywhere close to the 800 mph Musk promised, but it’s a start.

The Hyperloop, detailed by the SpaceX and Tesla Motors CEO in a 57-page alpha white paper in August 2013, is a transportation network of above-ground tubes that would span hundreds of miles. Thanks to extremely low air pressure inside those tubes, capsules filled with people zip through them at near supersonic speeds.

The idea is to build a five-mile track in Quay Valley, a planned community (itself a grandiose idea) that will be built from scratch on 7,500 acres of land around Interstate 5, midway between San Francisco and Los Angeles. Construction of the hyperloop will be paid for with $100 million Hyperloop Transportation Technologies expects to raise through a direct public offering in the third quarter of this year.

They’re serious about this, too. It’s not a proof of concept, or a scale model. It’s the real deal. “It’s not a test track,” CEO Dirk Ahlborn says, even if five miles is well short of the 400-mile stretch of tubes Musk envisions carrying people between northern and southern California in half an hour. Anyone can buy a ticket and climb aboard, but they won’t see anything approaching 800 mph. Getting up to that mark requires about 100 miles of track, Ahlborn says, and “speed is not really what we want to test here.”
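A quick kinematics check shows why speed isn't the point of a five-mile track. Assuming a comfortable acceleration of half a g (my assumption for illustration; the company hasn't published a figure), a pod would need far more than five miles just to reach 800 mph and brake back to a stop:

```python
# Back-of-the-envelope check on why a five-mile track can't approach 800 mph.
# The 0.5 g comfort limit is an assumed figure, not one from the company.
G = 9.81                    # gravitational acceleration, m/s^2
MPH_TO_MS = 0.44704         # miles per hour -> meters per second

v_top = 800 * MPH_TO_MS     # target cruise speed in m/s
a = 0.5 * G                 # assumed comfortable acceleration

# Distance to accelerate to v_top is v^2 / (2a); braking takes the same again.
d_accel = v_top**2 / (2 * a)
d_total_miles = 2 * d_accel / 1609.34

print(round(d_total_miles, 1), "miles just to speed up and slow down")
```

Even under this generous assumption, acceleration and braking alone eat up roughly three times the length of the Quay Valley track, which squares with Ahlborn's point that testing top speed would take on the order of 100 miles.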

Instead, this first prototype will test and tweak practical elements like station setup, boarding procedures, and pod design. “This is a very natural step,” Ahlborn says, on the way to building a longer track that allows for higher speeds and testing freight shipping. It’s also a way to prove that yes, this thing can be built.

Those designs were put together by a group of nearly 200 engineers all over the country who spend their free time spitballing ideas in exchange for stock options, and have day jobs at places like Boeing, NASA, Yahoo, and Airbus. They and a group of 25 students at UCLA’s graduate architecture program are working on a wide array of issues, including route planning, capsule design, and cost analysis.

The partnership with Quay Valley makes sense for both parties. It’s a chunk of private land where Ahlborn doesn’t have to grapple with the right-of-way issues that have plagued California’s high-speed rail project. Quay Hays has been trying to build his housing and commercial development project for nearly a decade (the 2008 recession put the plan on hold). The hyperloop fits with his vision of a place where cars take a back seat to nonpolluting public transit systems. (Ahlborn says the track and station will run at least partly on solar power.)

For Quay, it doubles as advertising: The chance to ride in the world’s first Hyperloop is a great reason for people driving down I-5 to take their bathroom break in the settlement he’s evangelizing, take a look around, maybe buy a house.

During rambling remarks Thursday afternoon, James Inhofe of Oklahoma, the chairman of the Senate Environment and Public Works Committee, used a snowball as a prop on the Senate floor. The apparent purpose of this stunt: to show the recent spate of cold weather in the Northeast is a sign that human activity isn’t causing climate change.

The snowball was brought to the Senate floor in a sealable plastic bag.

Inhofe began his speech with the snowball at his side on the speaker’s podium. After he was introduced, he removed it from the bag, held it in his hand, and said, “I ask the chair, you know what this is? It’s a snowball, just from outside here. So it’s very, very cold out. Very unseasonal. Mr. President, catch this.”

Inhofe then underhand tossed the snowball in the direction of Republican Sen. Bill Cassidy of Louisiana, who was presiding over the Senate at the time.

In his comments, Inhofe was his typical climate-denying self—which is frustrating because he wields significant power on U.S. climate policy in the newly Republican-controlled Senate. “I’m not a scientist, and don’t claim to be,” Inhofe said on Thursday. He then cited, among other things, a Newsweek article from 1975 (whose author recently lamented the way climate change deniers use his work), archaeological evidence, and Scriptures, in addition to the snowball, as evidence that refutes the claim that “somehow man is so important that he can change [the climate].”

The day is finally here. In a 3-2 decision Thursday, the FCC voted on an Open Internet Order that reclassifies broadband as a utility—under Title II of the Communications Act—as a way of allowing the FCC to prohibit net neutrality faux pas like fast and slow lanes, throttling, and content blocking.

The FCC has said that it will have a light-touch approach to implementing Title II and, for example, isn't interested in regulating things like pricing. For the FCC the crucial issue is having the ability to enforce its authority against “unjust and unreasonable practices.”

The telecom industry is expected to challenge the decision in court. A similar challenge succeeded in 2014, when Verizon fought the FCC's 2010 net neutrality protections and won. At that time, though, the court did say that the FCC could reclassify broadband under Title II, and the goal is for that precedent to be upheld.

Net neutrality advocates are celebrating. Michael Weinberg, the senior vice president of Public Knowledge, said in a statement:

By embracing its Title II authority and creating clear, bright-line rules against blocking and discrimination, Chairman Wheeler and the FCC have earned a reputation as defenders of an Open Internet. ... [A] bipartisan wave of Open Internet supporters from across the country came together to make it clear to their government that it had a crucial role in protecting an Open Internet.

But others are concerned. Commissioner Mike O’Rielly, who is, shall we say, not the most thrilled with the decision, said in a statement, “I am sorry to the staff members that were forced to prepare a half-baked, illogical, internally inconsistent and indefensible document. For an institution that prides itself on quality of work and legal and technical expertise, this document is anything but.”

Telecom lobbyists and Republicans have already been thinking about the scenario in which the Title II reclassification passed, and they support legislation rather than agency regulation as the solution to the net neutrality problem. Opponents of the Title II reclassification say that it will discourage investors from putting money into U.S. infrastructure, because future administrations and iterations of the FCC may use the utility status in ways the current FCC doesn't intend. On this point, Republican Sen. Mitch McConnell said in a statement, “The Obama Administration needs to get beyond its 1930s rotary-telephone mindset and embrace the future.”

Scott Belcher, the CEO of the Telecommunications Industry Association, said, "Everybody is in general agreement about having an open Internet. ... We’re in violent agreement on almost everything that’s underway here, but we [need] a legislative solution that makes sense."

But advocates say that the industry created the problem itself by forcing the FCC to resort to utility status. The digital rights group Fight For the Future writes that Title II reclassification was, “The only option that let the FCC stop Team Cable from breaking the key principles of the Internet we love.”

In a statement, Commissioner Ajit Pai said, “If we are going to act like our own mini-legislature and plunge the Commission into this morass, we need to use a better process going forward.” He added that it’s still extremely rare to get bipartisan agreement about net neutrality. “It brings to mind a Texas politician’s observation that there is nothing in the middle of the road but yellow stripes and dead armadillos,” he said. (Pai voted against the proposal, in case that wasn't clear.)

Handling the decision with notable composure and dignity, Verizon just published its response in Morse code, so it would be understandable for 1930s technologists.
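For anyone who wants to reply in kind, Morse encoding is trivial to script. A minimal sketch (the lookup table here covers letters and spaces only, and the sample text is mine, not Verizon's actual release):

```python
# A minimal Morse encoder, in the spirit of Verizon's stunt.
# Letters and word spaces only; an illustrative sketch, nothing official.
MORSE = {
    'A': '.-',   'B': '-...', 'C': '-.-.', 'D': '-..',  'E': '.',
    'F': '..-.', 'G': '--.',  'H': '....', 'I': '..',   'J': '.---',
    'K': '-.-',  'L': '.-..', 'M': '--',   'N': '-.',   'O': '---',
    'P': '.--.', 'Q': '--.-', 'R': '.-.',  'S': '...',  'T': '-',
    'U': '..-',  'V': '...-', 'W': '.--',  'X': '-..-', 'Y': '-.--',
    'Z': '--..', ' ': '/',    # slash conventionally separates words
}

def to_morse(text):
    """Encode text as Morse, skipping any character not in the table."""
    return ' '.join(MORSE[c] for c in text.upper() if c in MORSE)

print(to_morse('Net neutrality'))
```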

As the net neutrality debate rages on, it’s easy to forget that there are people who have never experienced the injustice of an endlessly buffering Netflix movie. And it’s staggering to be confronted with the reality that only 37.9 percent of humans have access to the Internet once a year or more. That’s right: More than 60 percent of us have never connected.

Of course, for net neutrality advocates, the goal is to have a stable, open Internet available whenever this population can gain access, and that’s what Facebook’s Internet.org initiative is working on. On Tuesday, the group released its State of Connectivity report for 2014, which shows progress, but also challenges. The report points out that only 32 percent of people in developing countries have Internet access, compared with 78 percent in the developed world.

The report also says that people are gaining Internet access at a slower rate, a trend that has been going on for four years. The Internet added users at a rate of 6.6 percent in 2014, compared with 14.7 percent in 2010. Though the number of people connected will reach 3 billion in 2015, “at present rates of decelerating growth, the Internet won’t reach 4 billion people until 2019.” That's a while from now.
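The compounding arithmetic behind that kind of projection is easy to sketch. A minimal Python model (the 0.9 decay factor below is my own illustrative assumption, not the report's methodology):

```python
# Toy compound-growth model of global Internet users. The decay factor is
# an assumed illustration of "decelerating growth," not the report's math.
def years_to_reach(users, target, rate, decay=1.0, start_year=2015):
    """Return the first year `users` reaches `target`, compounding an
    annual growth `rate` that shrinks by `decay` each year."""
    year = start_year
    while users < target:
        year += 1
        users *= 1 + rate
        rate *= decay
    return year

# Steady 6.6 percent growth from 3 billion users in 2015...
print(years_to_reach(3e9, 4e9, 0.066))
# ...versus growth that loses a tenth of its rate each year (assumed)
print(years_to_reach(3e9, 4e9, 0.066, decay=0.9))
```

Even a modest deceleration pushes the 4 billion milestone out by a year or more, which is exactly the report's point.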

The report says, “Without the cooperation of industry, governments and NGOs working together to improve the global state of connectivity by addressing the underlying reasons people are not connected to the Internet, connectivity may remain permanently out of reach for billions of people.”

Internet.org breaks the problem down into three categories: infrastructure, affordability, and relevance. The report also discusses how you need both a data connection and a device to actually access the Internet. These organizing principles may seem simplistic, but they are a useful way to see both barriers and potential jumping-off points for improvement. For example, the report says that almost 92 percent of people could connect to 2G data coverage if they had a device and/or a data plan that was affordable.

The report admits that “connecting the world is not an easy task.” It's a refreshingly frank evaluation.

On Friday, Sept. 26, 2014, a telecommunications contractor named Brian Howard woke early and headed to Chicago Center, an air traffic control hub in Aurora, Illinois, where he had worked for eight years. He had decided to get stoned and kill himself, and as his final gesture he planned to take a chunk of the U.S. air traffic control system with him.

Court records say Howard entered Chicago Center at 5:06 a.m. and went to the basement, where he set a fire in the electronics bay, sliced cables beneath the floor, and cut his own throat. Paramedics saved Howard's life, but Chicago Center, which controls air traffic above 10,000 feet for 91,000 square miles of the Midwest, went dark. Airlines canceled 6,600 flights; air traffic was interrupted for 17 days.

Howard had wanted to cause trouble, but he hadn't anticipated a disruption of this magnitude. He had posted a message to Facebook saying that the sabotage “should not take a large toll on the air space as all comms should be switched to the alt location.” It's not clear what alt location Howard was talking about, because there wasn't one. Howard had worked at the center for nearly a decade, and even he didn't know that.

At any given time, around 7,000 aircraft are flying over the United States. For the past 40 years, the same computer system has controlled all that high-altitude traffic—a relic of the 1970s known as Host. The core system predates the advent of the Global Positioning System, so Host uses point-to-point, ground-based radar. Every day, thousands of travelers switch their GPS-enabled smartphones to airplane mode while their flights are guided by technology that predates the Speak & Spell.

If you're reading this at 30,000 feet, relax—Host is still safe, in terms of getting planes from point A to point B. But it's unbelievably inefficient. It can handle a limited amount of traffic, and controllers can't see anything outside of their own airspace—when they hand off a plane to a contiguous airspace, it vanishes from their radar.

The FAA knows all that. For 11 years the agency has been limping toward a collection of upgrades called NextGen. At its core is a new computer system that will replace Host and allow any controller, anywhere, to see any plane in U.S. airspace. In theory, this would enable one air traffic control center to take over for another with the flip of a switch, as Howard seemed to believe was already possible.

NextGen isn't vaporware; that core system was live in Chicago and the four adjacent centers when Howard attacked, and this spring it'll go online in all 20 U.S. centers. But implementation has been a mess, with a cascade of delays, revisions, and unforeseen problems. Air traffic control can't do anything as sophisticated as Howard thought, and unless something changes about the way the FAA is managing NextGen, it probably never will.

This technology is complicated and novel, but that isn't the problem. The problem is that NextGen is a project of the FAA. The agency is primarily a regulatory body, responsible for keeping the national airspace safe, and yet it is also in charge of operating air traffic control, an inherent conflict that causes big issues when it comes to upgrades. Modernization, a struggle for any federal agency, is practically antithetical to the FAA's operational culture, which is risk-averse, methodical, and bureaucratic. Paired with this is the lack of anything approximating market pressure. The FAA is the sole consumer of the product; it's a closed loop.

The first phase of NextGen is to replace Host with the new computer system, the foundation for all future upgrades. The FAA will finish the job this spring, five years late and at least $500 million over budget. Lockheed Martin began developing the software for it in 2002, and the FAA projected that the transition from Host would be complete by late 2010. By 2007, the upgraded system was sailing through internal tests. But once installed, it was frighteningly buggy. It would link planes to flight data for the wrong aircraft, and sometimes planes disappeared from controllers' screens altogether.

As timelines slipped and the project budget ballooned, Lockheed churned out new software builds, but unanticipated issues continued to pop up. As recently as April 2014, the system crashed at Los Angeles Center when a military U-2 jet entered its airspace—the spy plane cruises at 60,000 feet, twice the altitude of commercial airliners, and its flight plan caused a software glitch that overloaded the system.

Even when the software works, air traffic control infrastructure is not prepared to use it. Chicago Center and its four adjacent centers all had NextGen upgrades at the time of the fire, so nearby controllers could reconfigure their workstations to see Chicago airspace. But since those controllers weren't FAA-certified to work that airspace, they couldn't do anything. Chicago Center employees had to drive over to direct the planes. And when they arrived, there weren't enough workstations for them to use, so the Chicago controllers could pick up only a portion of the traffic.

Meanwhile, the telecommunications systems were still a 1970s-era hardwired setup, so the FAA had to install new phone lines to transfer Chicago Center's workload. The agency doesn't anticipate switching to a digital system (based on the same voice over IP that became mainstream more than a decade ago) until 2018. Even in the best possible scenario, air traffic control will not be able to track every airplane with GPS before 2020. For the foreseeable future, if you purchase Wi-Fi in coach, you're pretty much better off than the pilot.

A big, high-risk infrastructure upgrade like NextGen will never move as fast as change associated with consumer technology, but the real hurdles are not technical, they're regulatory. In the private sector, new technologies can be developed freely regardless of whether the law is ready for them. Think of Uber, Lyft, and Airbnb: Outdated regulations slowed them down, but consumer demand is forcing the law to evolve. This back-and-forth is what lets tech companies move fast and break things without risking our safety. But when the government upgrades its technologies, regulations intercede before a single line of code is written.

The government procurement process is knotted with rules and standards, and new technology has to conform to those rules whether or not they're efficient or even relevant. These issues screwed up HealthCare.gov and are screwing up the Department of Veterans Affairs and a dozen other agencies that need computers and software that work. The current process stifles innovation from the start and mires infrastructures like NextGen, which need to carry us far into the future, in the rules of today.

The government needs to change its procurement process, and it's got to let go of its stranglehold on air traffic control. Privatization isn't necessarily the answer. Canada, the UK, Germany, Sweden, and Australia operate air traffic control through various separate entities, from semiprivate to nonprofit to government corporations, that help facilitate the necessary push and pull between technological risk-taking, regulatory caution, and pressure from end users.

The first real pressure on the FAA to show results came, ironically, from Howard. He forced what was essentially the first real-time operational test of the new system. When NextGen faltered, the program faced a level of widespread public scrutiny that it had previously evaded, and the FAA had to respond. The agency published a review of its contingency processes, including new plans to enable control centers to assist each other in emergencies. Brian Howard, hell-bent on destruction, was the best thing to happen to our air traffic control system in years.

On Thursday, the FCC will vote on whether to treat Internet service as a utility. But ahead of this meeting, the Republican stance on net neutrality and Title II utility classification has become kind of muddied.

Net neutrality isn't a completely straightforward partisan issue, so there has always been some uncertainty about what the party line is exactly. Opposing government regulation of a huge industry (that has lots of lobbying money) makes sense for the Republicans, but it's been hard to figure out the best tactic for defending this position when voters on both sides are getting fired up about freedom of information on the Internet.

On Tuesday, the New York Times published an article headlined, "F.C.C. Net Neutrality Rules Clear Hurdle as Republicans Concede to Obama." It outlined some potential sticking points for pro–net neutrality regulation, but basically concluded that Republicans wouldn't stand in the way.

The piece quoted Republican Sen. John Thune, the chairman of the Senate Commerce Committee, as saying, “We’re not going to get a signed bill that doesn’t have Democrats’ support. ... This is an issue that needs to have bipartisan support.” And net neutrality advocates were shown doing a victory lap. Mozilla Director Dave Steer told the Times, “We’ve been outspent, outlobbied. We were going up against the second-biggest corporate lobby in D.C., and it looks like we’ve won.” Steer apparently doesn't know how jinxing works.

Thune struck back swiftly, denying that Republicans will go quietly on the issue. He tweeted, "Claims that Republicans conceded on #NetNeutrality are a mischaracterization. I am committed to a legislative solution to @FCC power grab."

Frederick Hill, a Senate Commerce Committee aide, told the Washington Post that even though no bill materialized to address the net neutrality debate before the FCC vote, Republicans still plan to introduce legislation that would override FCC authority. "Once the rules are made public for review, Sen. Thune is committed to pushing ahead," he said. "The FCC's direction is bad for the Internet and bad for consumers."

Republican representatives Jason Chaffetz of the House Oversight and Government Reform Committee and Fred Upton, the chairman of the House Energy and Commerce Committee, released a statement in response to FCC chairman Tom Wheeler's refusal to testify at an Internet regulation hearing on Wednesday. They said:

We are deeply disappointed in Chairman Wheeler’s decision. As Chairman Wheeler pushes forward with plans to regulate the Internet, he still refuses to directly answer growing concerns about how the rules were developed, how they are structured, and how they will stand up to judicial scrutiny. After hearing from over four million Americans on such an important topic to our economic and cultural future, it's striking that when Congress seeks transparency, Chairman Wheeler opts against it.

Perhaps Wheeler's proposal doesn't have the votes to pass, but it seems like Republicans are resigned to the idea that it does, even as they recapture their will to fight.

Global Voices Advocacy’s Netizen Report offers an international snapshot of challenges, victories, and emerging trends in Internet rights around the world. This week’s report begins in Sudan, where Internet users welcomed new regulations on technology exports from the United States. Last month, the U.S. government loosened sanctions on Sudan to allow for the export of communications technologies such as smartphones, tablets, laptops, network devices, and a variety of online services like web hosting, mobile app stores, and cloud storage services. The policy change comes after years of campaigning by activists like Mohamed Hashim Kambal, a lead coordinator of the Lift U.S. Digital Sanctions on Sudan campaign. Nevertheless, problems remain.

In an interview with Global Voices, Kambal described how the sanctions affect Sudanese citizens, and pointed to persisting problems with the newly loosened regulations.

Readers gonna read, and meanwhile, the page-vs.-screen debate continues to rage. Both online and off, we’re awash in contradictory, inconclusive studies about the flowering and/or decline of pleasure reading. There is, too, ample evidence for the convenience of consuming text on the Internet, and for the resulting loss in comprehension: We appear to access more and absorb less. We are told that social reading—on Goodreads, for instance—is great! It also predicts widespread intellectual devastation. (“When we allow another person into the discussion, our dialogue with the author dissipates immediately,” sayeth Proust.) And the immediate proximity, online, of the entire manifold of human knowledge feeds us a lot of useful context, not to mention a form of attention-kryptonite that makes the pre-Web tyranny of choice look like an artists’ collective.

Linguist Naomi Baron’s new book, Words Onscreen, takes up these issues and more, promising to unfold “the fate of reading in a digital world.” Baron is no Chicken Little—her concerns (mostly, that e-perusal might fracture focus and disrupt deep engagement with the written word, even as it brings more content to more people at affordable prices) make sense. This is, of course, well-trodden ground: hyperlinks good, lack of ability to annotate bad. Portability good, headaches and eyestrain bad. Efficiency of skimming for answers good, loss of a rewarding, continuous, and holistic relationship with the text bad.

So! Instead of going back over the research, we convened a séance with the Ghosts of Reading Past, Present, and Future. Their free-form conversation is below.

Ghost of Reading Past: Verily, I read the hard copy version of Baron’s book cover to cover, at a slow and deliberate pace, as is my wont. I also reread passages that I liked or wished to understand more thoroughly. It took me two weeks.

Ghost of Reading Present: I read the e-book. I tried to read it your way, but my eyes ended up tracing an F-shape onscreen. I read the top line left to right, in other words, and then as I went down started skipping more and more of the right half of the “page.” Throughout, I was less likely to reread. The book took me three days to finish.

Ghost of Reading Future: Chose the e-book. Used CTRL-F to search for keywords and then skimmed the one or two sentences around them. Took 20 min, and I was simultaneously on Snapchat, Gawker, and an elliptical machine. #YOLO.

Past: Forbear your smugness, Future! College kids (at least the ones Baron surveyed) actually prefer reading in hard copy. The longer the text is, the more they say they want print. They say they understand the material better and take more pleasure in it when they can hold it in their hands. And these are digital natives!

Present: Yeah, but. E-book sales are through the roof, while print sales are down. Colleges are increasingly putting textbooks online to help defray costs. In an ideal world, maybe students (and people!) would prefer print books, but environmental and financial concerns coupled with market realities are moving their (and our) reading irrevocably screenward.

Future: Wah-wah. Stop weeping over the decline of horse-drawn carriages and ADAPT, you guys. Just because e-books don’t promote so-called “slow reading” (by situating readers in a physical geography, giving them something they can hold and touch and smell) doesn’t mean they make it impossible. Studies show you can train yourself to focus on a Kindle, especially if you disconnect from the Web.

Past: You underestimate haptics! Writes Baron: “We don’t just decipher words on pages. We also sense them.” Consider all the writers who have said they feel closer to their work when they can touch it. Pablo Neruda: “The typewriter separated me from a deeper intimacy with poetry, and my hand brought me closer to that intimacy again.” Iris Murdoch: A word processor is “a glass square which separates one from one’s thoughts and gives them a premature air of completeness.” Umberto Eco: “The book is like the spoon, scissors, the hammer, the wheel. Once invented, it cannot be improved”—

Present: A pithy quote from Umberto Eco isn’t science. I was more worried by Baron’s suggestion that authors are composing shorter pieces, or ones lacking in richness and complexity, to harmonize with our new reading habits. As for physicality, experts do say that print books give you a sense of ownership—not only over the physical object but over its intellectual content—whereas ephemeral screens position you as a “visitor” in the text. You’re in, you’re out, you no longer carry the words with you. Baron thinks this shallower relationship to reading material encourages a focus on data rather than truth. Remember when she describes trying to teach students about the sociologist David Riesman’s theory of inner- and outer-directed cultures? One young woman was so busy using her iPhone to correct Baron’s spelling of Riesman that she missed the distinction entirely!

Future: But again, anecdata city over here. Didn’t you see the part where Baron compared the simplifying processes you find online (algorithmic shortening services like Summly) to print abridgments, book reviews, summaries, and encyclopedia entries? Even the word magazine was adapted from the French “a place to store things”—those “things” being tidbits from longer works you don’t have time to read in their entirety. None of this is new, is what I’m saying.