Writing this essay, I am doing what I do with most of my time: sitting at a desk, ensconced in my iMac’s blue glow, creating, reading, sorting, or sending some immaterial thing or another. I talk to my friends about this sometimes. And despite the radically different lives they lead—some are world-famous CEOs, others are painters—they all do the same thing. We are computer operators who specialize in email.

A strange thing about this work is that it is difficult to estimate its consequence. Working on an assembly line, it would be quite obvious how many widgets one produced in a day, and the value of those widgets. Building a house, one could look at the progress each week, the rooms framed, the concrete poured, the windows installed. But this relatively new digital work leaves in its immediate wake only fatigued hands and eyes, and a cascade of digital activity that is more difficult to apprehend. And I don’t just mean this metaphorically: For all the trackability of digital activity, there is so much activity that it is hard to track, and even harder to analyze.

What are we all doing all day? What is the impact of this work? What is attached to the other side of the pulley?

Our economy is fraught with informational gaps of this character. Take health care, an industry in which I have recently been investing. Amazingly, almost no hospital in America can tell you what it costs to perform a given medical procedure. Sure, it has a price it charges, but its actual cost is unknown. The hospital hires a bunch of people and buys a bunch of machines. It then does things and charges for them, hoping it all adds up to a profitable operation. But it is unclear, both to each worker and to management, what it costs to do what is done. A nurse or cardiologist has no idea how five minutes spent on this or that task affects the overall operation.

Hospitals are supremely out of date, but these informational absences are emblematic of inefficiencies across the information economy. And the digital revolution—clichéd as the term might be—is just beginning to go to work on these inefficiencies. What might the path forward entail?

First, I think we’re going to see what I would call a “granulation” of work.

Let’s begin with education, which is the precursor to most people’s work. Today, we have a workforce divided between the college-educated and the not. A degree is a single credential meant to distinguish one as generally capable. Then one’s jobs accumulate into a résumé meant to give a fuller picture of the skills one develops afterward. It would make far more sense to credential individuals granularly, as their lives evolve. What if, from age 10, each of us were to collect a badge every time we learned a skill and proved our mastery of it? We could earn these badges in formal educational environments, on our own, or in the workplace. One’s badge collection would be a granular snapshot of one’s proven capabilities. This would mitigate an enormous amount of uncertainty in hiring, and also push education toward a more continuous, lifelong model. After finishing a job, one could earn a few more badges—at 18 or at 72.

And earning badges could itself become far easier as a result of technologies like augmented reality. One company I advise—DAQRI—has built a “smart helmet” that superimposes real-time work instructions over a worker’s field of vision, allowing the worker to understand complex tasks immediately. Instant knowledge transfer of this sort may be one of the most empowering technologies within our reach.

This granulation of skills would serve as a foundation for a granulation of worker time and employer tasks. We have already started to see this trend in the sharing economy of Uber and TaskRabbit—services that allow workers to work when they want. Right now, these services are reinventing relatively low-skilled labor, but I predict that we’ll witness an “Uber-ization” of higher and higher skilled work. I can imagine a majority of information economy workers becoming free agents, competing in real time for tasks that others require, forming temporary, loose affiliations for specific projects.

But granulating the economy of work is only a first step toward empirically understanding the interactions of its component parts. I started this essay by lamenting the lack of trackability of my own—entirely digital—labor. Yet the analog economy is even further behind. Luckily, we are starting to sensorize and connect everything from tractors to manufacturing facilities, creating a digital map of the analog world. Once both analog and digital activity are measured, the true revolution begins: trying to understand how things connect, and ultimately produce value.

In the near future, I imagine, each action a worker takes will be precisely quantified and contextualized within the larger effort. And this information will be relayed to everyone working together on something, including the worker herself. A traveling saleswoman will be able to see how an expensed lunch affects her firm's income statement—and whether, in retrospect, the return justified the expense. Each worker decision will be informed by considerably more information, and decision quality will consequently improve. Start-ups like DOMO are already building the tools to integrate real-time business data in this manner, and innovative employers like Valve are finding great success empowering each worker with more comprehensive insights.

I don’t think these efficiencies represent a utopia, and if they do develop they will surely invite ethical predicaments. At what point should a worker’s privacy rights stand in the way of exhaustive performance tracking? If labor does end up more granulated, who should pay for what are now employer-provided benefits like health insurance?

On balance, however, the future I’m describing is one that would empower those who do work of real value to our species. And I see the technological underpinnings of it solidifying daily.