In the last five years the advertising sales market has changed radically, Dag Liodden, co-founder and CTO at Tapad, said in a Wikibon Peer Incite meeting Tuesday, November 27. Just five years ago the online ad sales market was basically static, with ads developed for specific sites based on general profiles of the user population and placed based in part on a user's recent activity. So if you had been searching Google for information on new cars, for instance, you might see car ads the next time you checked your Gmail account. That was the extent of the customization.

Today, Liodden said, Tapad places ads based on a number of variables, including what device the consumer is using at that moment, where the consumer is, the consumer’s demographics, what ads the consumer has seen recently, and the specifications of each ad campaign, such as whether that consumer has already viewed that ad more than a certain number of times within a specific period. Based on those and other data, the system determines the consumer’s value at that moment to any number of ad campaigns, and the ad agencies behind those campaigns enter bids for the ad space based on those computations. The ad exchange then places the ad of the highest bidder.

All of this must happen within 100 milliseconds!
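To make the mechanics concrete, here is a minimal sketch, in Python, of the kind of bid-selection loop described above. The field names, campaign rules, and weighting factors are invented for illustration; they are not Tapad's actual logic, only the general shape of evaluating a consumer against many campaigns and returning the highest bid inside the time budget.

```python
import time

# Hypothetical sketch of the bid-selection flow described above -- field names
# and weights are illustrative assumptions, not Tapad's implementation.

AUCTION_BUDGET_MS = 100  # the exchange must return an ad within ~100 ms


def eligible(campaign, consumer):
    """Apply campaign rules such as frequency caps before bidding."""
    views = consumer["recent_views"].get(campaign["ad_id"], 0)
    return views < campaign["frequency_cap"]


def score(campaign, consumer):
    """Estimate this consumer's value to the campaign at this moment."""
    value = campaign["base_bid"]
    if consumer["device"] in campaign["preferred_devices"]:
        value *= 1.2
    if consumer["region"] in campaign["target_regions"]:
        value *= 1.5
    return value


def run_auction(consumer, campaigns):
    start = time.monotonic()
    best_bid, best_campaign = 0.0, None
    for campaign in campaigns:
        if (time.monotonic() - start) * 1000 > AUCTION_BUDGET_MS:
            break  # never blow the 100 ms budget; serve the best bid so far
        if eligible(campaign, consumer):
            bid = score(campaign, consumer)
            if bid > best_bid:
                best_bid, best_campaign = bid, campaign
    return best_campaign  # the exchange places the highest bidder's ad


consumer = {"device": "tablet", "region": "US-NY",
            "recent_views": {"ad-7": 3}}
campaigns = [
    {"ad_id": "ad-7", "base_bid": 1.00, "frequency_cap": 3,
     "preferred_devices": {"tablet"}, "target_regions": {"US-NY"}},
    {"ad_id": "ad-9", "base_bid": 0.80, "frequency_cap": 5,
     "preferred_devices": {"phone"}, "target_regions": {"US-NY"}},
]
print(run_auction(consumer, campaigns))  # ad-7 is capped out, so ad-9 wins
```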

Tapad did not invent this incredibly fast, fully automated custom ad placement environment by itself. However, it added a new degree of sophistication by being the first ad exchange to track users across multiple devices, keeping track in real time of what ads a user views and what actions the user takes in response to an ad on all of those devices. Ad campaigns often limit the number of times an ad should be shown to an individual within a given period, and Tapad is the first ad exchange that can provide that information across devices. Other ad exchanges now use Tapad to track this data for them.
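The cross-device piece can be illustrated with a toy example. The sketch below, with an invented device-to-user mapping and identifiers, shows why a frequency cap only works if impressions recorded on any of a person's devices are counted against the same user.

```python
from collections import defaultdict

# Illustrative sketch only: a toy cross-device view of frequency capping.
# The device graph and counts are invented for the example.

device_to_user = {"phone-123": "u1", "tablet-456": "u1", "laptop-789": "u1"}

impressions = defaultdict(int)  # (user_id, ad_id) -> views in current window


def record_view(device_id, ad_id):
    user_id = device_to_user.get(device_id, device_id)
    impressions[(user_id, ad_id)] += 1


def over_cap(device_id, ad_id, cap):
    """True if this *user* -- across every linked device -- has hit the cap."""
    user_id = device_to_user.get(device_id, device_id)
    return impressions[(user_id, ad_id)] >= cap


record_view("phone-123", "ad-42")
record_view("tablet-456", "ad-42")
print(over_cap("laptop-789", "ad-42", cap=2))  # True: same person, third device
```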

From an IT standpoint, one of the largest challenges, Liodden said, has been building an infrastructure that can support random, real-time reads of and writes to huge numbers of data sets, each representing an individual consumer, in millisecond time-frames. Spinning disk, which he said is reasonably efficient at sequential reads and writes, breaks down very quickly in a random-access workload such as this (the rough arithmetic after the list below shows why). RAM disk can support the speed but has two major problems that make it impractical:

Building a RAM disk large enough for the Tapad database, which is over 1.5 Tbytes and growing, is very expensive.

Reloading that database from disk, if a node loses power or a new node is added, takes too long.
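Some rough, assumed latency figures show why the storage medium matters so much here. Taking commonly cited ballpark numbers (roughly 10 ms per random read for spinning disk, roughly 100 microseconds for NAND flash, roughly 100 nanoseconds for RAM), the arithmetic below counts how many random reads fit inside a single 100-millisecond auction window.

```python
# Back-of-the-envelope numbers (assumed, typical published latencies) showing
# why random access dominates the choice of media for a 100 ms ad auction.

LATENCY = {                     # rough per-random-read latency, in seconds
    "spinning disk":  10e-3,    # ~10 ms seek plus rotation
    "NAND flash SSD": 100e-6,   # ~100 microseconds
    "RAM":            100e-9,   # ~100 nanoseconds
}

BUDGET = 0.100  # the 100 ms auction window

for medium, latency in LATENCY.items():
    reads = int(BUDGET / latency)
    print(f"{medium:>14}: ~{reads:,} random reads per 100 ms budget")

# spinning disk : ~10 reads      -- only a handful of profile lookups
# NAND flash SSD: ~1,000 reads   -- headroom for many concurrent auctions
# RAM           : ~1,000,000 reads, but at far higher cost per byte
```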

As a result, Tapad has turned to NAND flash in a big way. Its total NAND installation is in the range of 3.5 Tbytes, which allows mirroring so that the system can drop two nodes without losing any data. Liodden knows this because it happened recently, when someone unplugged the wrong piece of equipment.

Physical storage is only part of the issue; Tapad also needs a database fast enough to serve all those reads and take in all those writes in real time. This requires a stripped-down, specialized system. An RDBMS, for instance, is too encumbered by its many features to handle this specific application quickly enough. Hadoop is similarly not fast enough.

Tapad found its answer in Aerospike, a NoSQL database that keeps the keys in memory but the values on SSDs. This provides the speed of reads and writes, and of the specific kinds of analysis, that Tapad needs.
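The sketch below is a toy illustration of that design idea: keys held in an in-memory index, values written to flash-backed storage. It is not the Aerospike API; the class, file layout, and record format are invented for the example.

```python
import os

# Minimal sketch, assuming a "keys in RAM, values on SSD" layout similar in
# spirit to a hybrid store such as Aerospike. Not the Aerospike API.


class HybridStore:
    def __init__(self, path):
        self.index = {}                # key -> (offset, length), kept in RAM
        self.data = open(path, "a+b")  # values live on flash-backed storage

    def put(self, key, value: bytes):
        self.data.seek(0, os.SEEK_END)
        offset = self.data.tell()
        self.data.write(value)
        self.data.flush()
        self.index[key] = (offset, len(value))  # only the small index stays in memory

    def get(self, key) -> bytes:
        offset, length = self.index[key]
        self.data.seek(offset)
        return self.data.read(length)           # one random SSD read per lookup


store = HybridStore("/tmp/profiles.dat")
store.put("consumer:42", b'{"device": "tablet", "recent_ads": ["ad-7"]}')
print(store.get("consumer:42"))
```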

Liodden is quick to say that Aerospike is not a “silver bullet” replacement for all other database technologies, nor are SSDs a practical replacement for disk for all data. Big data analysis, for instance, is better done on disk, which is cost effective for handling very large amounts of data that are not actively transactional.

As for Aerospike versus an RDBMS or Hadoop, Liodden says, “Today you cannot buy just a single system; you have to look at your use cases, your data growth, and other needs and pick the technology that best fits. The growth of NoSQL solutions is based on the proliferation of specific use cases where they work well, but you also will need RDBMS and Hadoop for applications where they provide the best solution.”

The twice-a-month Wikibon Peer Incite meetings present unique opportunities to hear top IT professionals from the user, rather than the vendor, community speak about the challenges they face and the solutions they use. These frank discussions, with opportunities for attendees to ask questions and add comments, are open without charge to interested IT professionals. To receive notices of upcoming meetings, IT professionals are invited to register for a free membership at Wikibon (www.wikibon.org).

Bert Latamore is a freelance writer covering the intersection of IT and business for SiliconANGLE. He is a frequent contributor to CrowdChats focused on theCUBE coverage of major IT industry events and is site editor at Wikibon.org. He has 35 years’ experience covering the IT industry, including four with Gartner, five with Meta Group, and eight with Wikibon. He lives in the Virginia Blue Ridge Mountains with his wife, Moire, and their dog, cat, and macaw. In his spare time he enjoys reading, hiking, and photography.

Premium Research

Wikibon argues strongly against a revolutionary move to the 3rd platform. The conclusion from this analysis is that applications will evolve; wholesale conversion should be avoided like the plague. The greatest opportunity is to continuously adapt today's operational applications by adding real-time or near-real-time analytics applied directly to current organizational processes and the applications that support them. This is likely to deliver the greatest value to most organizations while avoiding, where possible, the risks of converting systems. Studies of organizations that have applied real-time analytics to their current operational systems show significant reductions in cost and gains in adaptability. Business and IT executives should understand the enormous potential of adding decision automation through real-time analytics to the operational applications already in their organizations. New technologies should be judged by their ability to support real-time analytics applied to operational systems and to support incremental improvement over time.

In a recent web-based survey conducted by Wikibon, 300 North American enterprises that had either been using, or were considering adopting, the public cloud answered questions about their perceptions and usage of IaaS (Infrastructure as a Service). The questions varied in topic but centered on which workloads are best suited to the public cloud. This research examines a few additional key insights that shed light on the growing IaaS market.

Today's technology infrastructure management is largely non-differentiated and wasteful. Technology executives must rethink the strategic role of human capital and begin to implement new ways to consume IT as a service. This post draws on the learnings of senior executive Alan Nance of Royal Philips, who is dogmatic in his approach to transforming the company's infrastructure to a service model.

There have only been two successful volume memory introductions into the marketplace in the last 50 years: DRAM and NAND flash. There has to be a clear volume case with good economics for 3D XP to gain a foothold in consumer products. Without volume in the consumer space, there is unlikely to be much volume traction in the enterprise space. CIOs, CTOs, and enterprise professionals should take a wait-and-see stance and monitor the adoption of 3D XP in the consumer and military spaces. If and when 3D XP reaches volume production, enterprise adoption should start about two years later.

The use of open source software continues to accelerate and expand in the marketplace, especially in areas where technology is significantly disrupting established business models. IT organizations should be actively seeking to understand how open communities operate, how different licensing models work, and how they can be more actively engaged with both the vendors and communities that are shaping open source software.

CIOs understand that a clear cloud strategy is critical for IT today. Wikibon believes the biggest mistake organizations can make is converting major applications to the public cloud (including SaaS) without thinking about the implications for their existing business process workflows. Wikibon recommends that IT develop and implement a hybrid cloud strategy using the existing management workflows and compliance processes for both the public and private cloud components in the hybrid cloud.

In 2014, Wikibon defined a new category, "Server SAN," that sits at the intersection of software-defined storage, hyperscale methodologies, and converged infrastructure. This article is the executive summary of primary research that assesses the status of the market, examines the vendor ecosystem, lays out the revenue and 10-year forecast, and gives direction for expansion beyond simple "hyperconverged infrastructure." The executive summary is available for public consumption; the full research is available to Wikibon clients.

In this research paper, Wikibon looks back at the introductory Server SAN research, adjusts the Server SAN definition to include System Drag, and raises the projected speed of Server SAN adoption based on very fast adoption from 2012 to 2014. Overall Server SAN growth is projected at about a 23% CAGR from 2014 to 2026, with faster growth of 38% from 2014 to 2020. The total Server SAN market is projected to grow to over $48 billion by 2026. The traditional enterprise storage market is projected to decline at a 16% CAGR, leading to overall growth in storage spending of 3% CAGR through 2026. Traditional enterprise storage is being squeezed in a vise between a superior, lower-cost, and more flexible storage model in Enterprise Server SAN and the migration of IT toward cloud computing and hyperscale Server SAN deployments. Wikibon strongly recommends that CTOs and CIOs initiate Server SAN pilot projects in 2015, particularly for applications that require either low cost or high performance.

If containers are at the center of a shift in how applications are developed and delivered, and their pace of growth and change is unprecedented in IT history, this could have a massive ripple effect on both suppliers and consumers across the ecosystem of IT technologies.