Without doubt, the demonstration of the capabilities of the IPC’s CFX software interface grabbed the attention of visitors and non-visitors alike at APEX 2018. Some of the numbers displayed on the IPC monitoring screen were staggering: the system handled over 500,000 messages over the course of the week!

This led one of our commentators to make the astute observation: “Well, who can make sense of that much data?” Very true, and a strong indication that we will need AI (Artificial Intelligence) much sooner than we think if we are to stand a chance of harnessing this new technology paradigm.

But AI, Deep Learning (DL) and Machine Learning (ML) development are still in their nascent stages. Following the recent Uber crash in Tempe, AZ, it has become clear that AI systems still have difficulty diagnosing latent, or even hard-to-detect, flaws and defects. Top researchers[1] have admitted that “debugging is an open area of research”, meaning, in engineering terms, that they need to find a way to limit the false calls – sound familiar?

In former days, an engineer would learn throughout his career and amass expert knowledge about processes, systems, chemistries and so on. Traditionally, he would publish articles and maybe even write a reference book to pass his knowledge on to the next generation before retiring. Today, researchers and scientists are trying to capture this expertise and bottle it in the form of AI/DL/ML to continuously grow the knowledge base and depth of understanding.

Trust in AI systems will develop and grow over time. There will no doubt be cases where the system gets it wrong, as with the unfortunate Uber crash victim, but if you compare the autonomous vehicles currently being tested against a similar number of manually driven vehicles in similar environments, I would be willing to bet the autonomous vehicles’ current safety record outperforms that of their human counterparts. And it will only get better. There are huge challenges along the road – too many to detail in this short editorial – but it is an area that will be thoroughly debated at our upcoming eSmartFactory 2018 conference in Sunnyvale, CA, on May 24th.

In other data-related news, Facebook or “Fakebook”, as I sometimes call it, is making headlines for lax data security, enabling multiple bad actors to hack personal data on a massive scale. In my opinion, this is the tip of the proverbial iceberg.

I do not usually use my Editorial page to write about advertising-related matters, but on this occasion I feel compelled. Publishers large and small have faced massive competition in recent years as the industry transitions from print to digital media. Facebook and Google are at the forefront, together accounting for approximately 60 cents of every dollar spent on digital advertising, and both own and have access to massive pools of rich data on individuals.

In Facebook’s case, however, the company has allowed bad actors to abuse its system, defrauding advertisers by recording many thousands of hits from individuals with no connection to the product or service advertised. These fake Likes come mainly from young workers at Asian ‘click-farms’, where they are paid one dollar per 1,000 clicks.

Although I believe these click-farms act illegally and outside Facebook’s direct control, Facebook has so far been unable to police the estimated 15 million fake Facebook accounts, or the defective menu options that advertisers unwittingly use to define the targeting criteria for their advertisements.

As publishers, Global SMT & Packaging is happy to work alongside, complement and compete against Facebook for digital advertising revenue, but we hope to do so on a fair and level playing field. This practice represents a multi-billion-dollar fraud in an industry desperate for fair legislation.