As AI Takes Off, A Dire Demand for GPUs

Artificial intelligence is replacing human blood, sweat, and tears in countless industries around the globe, but this comes as no surprise. Computers can emulate aspects of the human brain, but more quickly and at greater scale, by processing huge sets of visual, spoken, or numerical data and recognizing cause-and-effect patterns with striking accuracy. Consider, for example, an AI system that has studied thousands of CT scans from both healthy patients and cancer patients. Such a system (cancer detection is a common application of AI) can not only spot tiny tumors in a scan, with accuracies reported around 95% in some studies, but also learn to estimate which tumors are likely to be benign and which are not.

AI has the same accelerating effect on other sectors such as finance, which employs it for predictive market analytics and other profit-driven work. Geneticists also rely on AI and deep learning to draw new conclusions from the vast, convoluted data stored in the genomes of plants and animals. By using these computer-powered models to study genetic base pairs, their patterns, and the traits that result, researchers are letting AI help draw the blueprint for new pharmaceutical drugs and medicines. Clearly, STEM experts and other industry leaders are relying on AI to push innovation faster, but that push has come at a cost.

An amplified demand for AI also means an increase in the computational power required by the machines that host these systems. As "going digital" drives a massive influx of data everywhere, it is reshaping the cloud computing and chip-making industries. To deliver compute more cost-effectively, especially for AI-suited tasks, the industry has made the GPU, or Graphics Processing Unit, its default chip type.

Why GPUs?

A GPU's architecture differs from a CPU's in ways that make it more suitable for deep learning and AI, which pursue multifaceted, complex goals rather than simple ones like loading a webpage. Instead of solving problems sequentially, AI requires rapid parallel processing. Even teaching an AI to reliably recognize a dog in a picture involves seriously complicated computations running in parallel. Such a system might take a picture of a dog, convert it to black and white, and assign each pixel a number based on its level of lightness or darkness, producing what amounts to a giant matrix of numbers. As it is repeatedly fed thousands of photos, each with the correct label, it learns over time to recognize the patterns that mean "dog" and to pick the dogs out of new pictures with high accuracy.
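The pixel-to-matrix step described above can be sketched in a few lines. This is a toy illustration, not any vendor's actual pipeline; the helper name and the tiny hand-built "image" are invented for the example, and the brightness weights are the standard luminance coefficients:

```python
# Toy sketch: turn a grid of (R, G, B) pixels into the "giant matrix of
# numbers" the article describes, one brightness value (0-255) per pixel.

def to_grayscale_matrix(image):
    """Map each (R, G, B) pixel to a single brightness number
    using the standard luminance weights (0.299, 0.587, 0.114)."""
    return [
        [round(0.299 * r + 0.587 * g + 0.114 * b) for (r, g, b) in row]
        for row in image
    ]

# A hypothetical 2x2 "image": white, black, mid-gray, pure red.
tiny_image = [
    [(255, 255, 255), (0, 0, 0)],
    [(128, 128, 128), (255, 0, 0)],
]

matrix = to_grayscale_matrix(tiny_image)
print(matrix)  # [[255, 0], [128, 76]]
```

A real training set would hold thousands of such matrices, each paired with a label, which is exactly the data a network learns patterns from.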

To accomplish this, an AI must reference millions of data points to find those that correlate with what it is "looking" at. This demands intensive parallel processing, an easy task for even an average retail GPU, which typically has hundreds of times as many cores as a CPU, far more memory bandwidth, and, by common estimates, roughly ten times the computing efficiency of an industrial-grade CPU from Intel. CPUs remain necessary for tasks that require sequential computation and can be used to better effect in conjunction with GPUs, but the latter are now the new default workhorse of the computing industry.
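A minimal sketch of why this work parallelizes so well: in a matrix-vector product, the core operation of a neural network layer, each output value depends only on one row of the weight matrix, so every row can be handled by a separate core. The thread pool here merely stands in for a GPU's thousands of cores; the numbers and function names are invented for illustration:

```python
# Sketch: the same matrix-vector product computed sequentially and in
# parallel. Each row's dot product is independent, so the tasks can be
# farmed out to separate workers, which is what a GPU does at huge scale.
from concurrent.futures import ThreadPoolExecutor

def dot(row, vector):
    return sum(r * v for r, v in zip(row, vector))

weights = [
    [1, 0, 2],
    [0, 3, 1],
    [4, 1, 0],
]
inputs = [2, 1, 1]

# Sequential: one row after another, as a single CPU core would.
sequential = [dot(row, inputs) for row in weights]

# Parallel: each row is an independent task.
with ThreadPoolExecutor() as pool:
    parallel = list(pool.map(lambda row: dot(row, inputs), weights))

print(sequential == parallel)  # same answer either way
```

For a 3x3 matrix the parallel version buys nothing, but when the matrix has millions of entries, as in a real network layer, spreading those independent tasks across thousands of GPU cores is what makes training feasible.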

Manufacturing companies like Nvidia have profited significantly from the move to GPU technology, as have large cloud computing providers like Amazon and Google, which now offer virtual machine environments backed by GPUs. Firms specializing in graphics animation and rendering, AI deep learning, and cryptocurrency mining dedicate large budgets to this need. The most popular cloud GPU platforms offer monthly and even by-the-hour computing subscriptions to meet it, but convenient sales models and centralized banks of GPU power aren't enough on their own. New, custom infrastructure is also being built to make graphics-based computing more cost-effective.

What Can Blockchain Do for GPUs?

The rise of AI is forcing an expansion of the world's computational capacity. The first stage of that expansion was the move from CPUs to GPUs; the second will be a move toward decentralized power. Centralized GPU data centers are good enough for now, but they aren't the most cost-effective arrangement. Blockchain ledgers can track and organize a potentially endless number of distributed power contributors. Innovators are therefore looking at the blockchain as a better delivery method for compute, both for more natural denomination and price discovery and for its ability to unlock the idle GPU power trapped in retail machines.

Tatau is one of the first blockchain companies to pursue this idea: anyone with a PC can connect and begin offering their computer's power to the network. Tatau's blockchain denominates the GPU power streamed from users' PCs in tokens and pays suppliers for the power it draws. As a marketplace, it primarily connects those who need GPU power at a certain price with groups of users willing to supply it.
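The bookkeeping such a marketplace needs can be sketched very simply. This is a hypothetical illustration, not Tatau's actual protocol: the ledger structure, the function name, and the token rate are all invented for the example:

```python
# Hypothetical sketch of marketplace accounting: suppliers stream
# GPU-hours to the network, and a ledger credits them tokens at an
# agreed rate. (Illustrative only; real systems settle on-chain.)

LEDGER = {}  # supplier -> token balance
TOKENS_PER_GPU_HOUR = 5  # assumed rate for illustration

def record_contribution(supplier, gpu_hours):
    """Credit a supplier for compute streamed to the network."""
    LEDGER[supplier] = LEDGER.get(supplier, 0) + gpu_hours * TOKENS_PER_GPU_HOUR

record_contribution("alice", 2.0)  # 2 GPU-hours -> 10 tokens
record_contribution("bob", 0.5)
record_contribution("alice", 1.0)

print(LEDGER)  # {'alice': 15.0, 'bob': 2.5}
```

The point of putting this record on a blockchain rather than in one company's database is that no single party controls the balances, which matters when thousands of strangers are supplying the compute.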

“Market demand for AI compute doubles every 3.5 months, but supply isn’t keeping up. Suppliers are using price as a lever to control usage, and these dynamics are holding back innovation,” wrote Andrew Fraser, the CEO of Tatau.

Other blockchain-based companies have also capitalized on the tokenized denomination and distribution of computing power. Many new players help users contribute under-utilized power from their otherwise idle PCs, and the fledgling industry is currently being carved up by companies that specialize in serving specific sectors such as rendering and animation. Rendering an animated movie is indeed a task well suited to decentralized GPU power, but it is the potentially endless demand from AI-driven advancement that is the true catalyst.