A strong, clear signal in a world of noise

9 March 2017: A letter to the Chancellor from a small business owner

It’s great to see you going after people like me in your Budget – the self-employed and small business owners – so that you can give tax breaks to large corporations. Fantastic news.

Because people who are trying to be entrepreneurial, and who sometimes struggle to get by and to make ends meet, are a problem, aren’t we?

We get no holiday pay or sick pay, so of course we should be punished for that. We’re not offered company cars, vehicle allowances, travel assistance, London weighting, pension schemes, company phones, computers, health insurance, gym membership, or any of the other many advantages of being in full-time employment. So it’s only right that our one small tax advantage should be taken away from us, because our employed peers are being disadvantaged in their free office spaces (which we have to rent).

We sometimes have no idea where the next commission is coming from, so it’s only fair that our tax burden should be significantly increased, taking away the sole advantage that many of us have. Large companies frequently don’t pay our invoices or sit on payments for 4-5 months, so it’s only fair that the economic scales should be tipped to make life more difficult for us in our unequal face-off with those clients.

With the rise of the ‘gig economy’, the self-employed sector can only grow, so it makes perfect sense to create a financial barrier to any chance of it flourishing. And when the networked economy allows private individuals to be entrepreneurial via new technologies, then of course the Exchequer should step in to counter that advantage and kick self-employed people in the teeth. Well done, sir!

Over time, small businesses and the self-employed will pay a disproportionately large amount of tax compared to big enterprises, while we compete to work for those corporations on ever-worsening terms.

Factor in the inevitable rise in debt, and it’s clear that you see SMEs and self-employed people as little more than grist to the financial services mill – like our students, in fact.

But it’s only fair that we should be severely thrashed so that billion-dollar corporations can, among other things, benefit from the break-up of the state, post-Brexit.

Yes, in Brexit Britain it’s very important that no one succeeds apart from those who are already successful. And coming up next, of course, is paying our self-assessment tax four times a year, which means four times the accountancy fees and a four-fold increase in admin for those of us who struggle with accounting. Or who simply can’t afford to pay an accountant four times a year. A superb innovation for anyone with an insecure income.

And for anyone who is registered for VAT, that will mean eight tax payments a year, in total. Who wants to spend so much time actually doing the work we’re (sometimes) paid for anyway? With more and more clients asking us to work for nothing because it’s “great for our profiles”, we’d much rather be doing our accounts!

So, thanks again. You’re doing a brilliant job of balancing the books. Perhaps we’ll soon have the opportunity to pay to work for beneficent corporations, which are replacing public taxpayer value with private shareholder value, encouraged by yourself and often protected by the financial smokescreen of tax havens.

In your world, the ongoing aggregate advantage will always be to large companies, and the aggregate disadvantage will always be to self-employed people and small businesses – the so-called ‘engine of the economy’.

And when the think tank Reform’s proposals to automate parts of the state, including many roles in teaching and healthcare, are implemented by a sympathetic Whitehall, soon those of us who have public sector clients will have the golden opportunity to compete in reverse auctions, so that we can bid to work for less money. An economy focused not on value, but on the lowest common denominator.

Of course, the major shortfall in the Exchequer’s finances that will be caused by mass automation will be made up by people such as myself – who will already be the worst affected – and not by corporations. Brilliant.

13 February 2017: Robots will take 250,000 public sector jobs, claims report. True?

Chris Middleton challenges a new report’s claims that robots will arrive like Uber in the UK’s public sector.

A new report claims that nearly a quarter of a million public sector jobs could be automated over the next 15 years, in a drive to slash costs and maximise efficiency via the smart application of new technologies, including AI.

Robots, says the report, will arrive like Uber in the UK’s public sector, creating an economy in which expert human citizens compete to offer their services at the lowest possible price while the robots run the machineries of central government, along with local functions such as health and education.

The report, ‘Work in Progress: Towards a Leaner, Smarter Public Sector Workforce’, has been produced by right-wing think tank Reform, whose recent speakers include Prime Minister Theresa May and Health Secretary Jeremy Hunt, and whose recent contributors have included former Prime Minister Tony Blair.

Reform sets out the context for its claims:

“Current pressures mean the public-sector workforce must undergo radical change to deliver better value for money. Tight public spending means that public-sector productivity must break from its 20-year trend of near-zero growth.”

However, it fails to mention the real-world background of central government budget cuts of 20-30 per cent in some departments, while the Institute for Fiscal Studies reports that departmental spending overall has been slashed by 12.8 per cent since 2010-11.

Next, Reform sets out a smorgasbord of claimed future savings and organisational disruptions, saying that software robots and AI could replace up to 90 per cent of Whitehall administrators, as well as sweeping aside tens of thousands of jobs in the NHS, saving the Exchequer billions of pounds.

In this way, mass automation would deliver better value to taxpayers, says the think tank, with new leadership cultures ported from the private sector to speed its adoption.

But this sets up an all-too-familiar scenario in right-wing political thinking: that private shareholder value and accountability are somehow interchangeable with public taxpayer value and accountability. They aren’t.

Counting the beans

The report’s list of apparently simple fixes to create new public sector efficiencies includes:

“Any routine administrative roles have a 96 per cent chance of being automated by current technology. […] Central government departments could further reduce headcount by 131,962, saving £2.6 billion from the 2016-17 wage bill.

“In the NHS… 91,208 of 112,726 administrator roles (outside of primary care) could be automated, reducing the wage bill by approximately £1.7 billion.

“McKinsey estimates that 30 per cent of nurses’ activities could be automated, and a similar proportion for doctors in some specialities, enabling those skilled practitioners to focus on their non-automatable skills.

“In primary care, a pioneering GP provider interviewed for this paper has a clinician-to-receptionist ratio of 5:1, suggesting a potential reduction of 24,000 roles across the NHS from the 2015 total.

“In total, this would result in 248,860 administrative roles being replaced by technology.”

Click-click-click goes the Reform abacus, as if the report itself has been written by an adding machine.

However, the ideological bean-counters at the think tank acknowledge that frontline roles, such as doctors, nurses, teachers, and police officers, require substantial expertise and personal interaction – skills that are less likely to be automated than routine administrative tasks. Yet there is still potential to use automation to increase productivity – and thereby to reduce headcount – it says. That much is true.

Senior management and “cognitive roles” will also be swept up by the march of the machines, claim the researchers:

“These roles are least likely to be automated over the next ten to 20 years, but there are several areas where technology can improve senior officials’ work – increasing efficiency for them and the frontline staff who respond to their instructions.”

The big picture

The big-picture context for this report is a raft of recent studies that present the employment impact of automation in near-apocalyptic terms. For example, McKinsey estimates that up to 45 per cent of activities in the US labour market could be automated using current technologies, while the Bank of England has claimed that up to 15 million jobs could be threatened by robots in the UK.

Last year, Dr Anders Sandberg of Oxford University’s Future of Humanity Institute predicted that in the near future 47 per cent of all jobs will be automated, adding, “If you can describe your job, then it can and will be automated.” It is in this context that Reform presents its checklist of (claimed) easy wins.

Some technologies will improve public-service delivery, adds Reform, citing the example of the various companies that are developing AI to diagnose medical conditions more accurately than humans, and in some cases to predict them. Certainly, IBM is among the providers using AI to predict the onset of health problems, but with the specific vision of augmenting human healthcare, not replacing it.

Next, Reform widens its focus to include government as a whole:

“Whitehall should move from hierarchy to self-management, with teams organising themselves around tasks that need to be done. The Government Digital Service (GDS) has done this to great effect.”

But this ignores the fact that the real-world GDS has been reduced from its once-core presence in Whitehall to a marginal group on the sidelines, while government procurement inches back to the old days of megabucks deals and big-ticket IT programmes – the culture that former Cabinet Office Minister Francis Maude once sought to do away with entirely.

The real world is messy and political, and nowhere near as binary and clean as Reform suggests. And let’s not forget that some of Maude’s ‘oligopoly’ tech providers are the same ones driving automation and AI.

In other words, invite in mass automation and you’re inviting in the big-ticket enterprise technology suppliers, not closing the door on them to save money and focus on innovative SMEs, as the government has recently tried to do.

Robots über alles!

Then the report veers into truly controversial territory, saying:

“Public services can become the next Uber, using the ‘gig’ economy to employ locum doctors and supply teachers.

“Flexible and temporary employment have been growing for decades, but the emergence of the gig economy, with workers supporting themselves through a variety of flexible jobs acquired on online platforms, has gained traction (and controversy) recently. ‘Contingent labour’ platforms – trialled in social care – may suit hospitals and schools as an alternative to traditional agency models.

“It may also suit organisations who face seasonal peaks of demand, such as the need for HMRC to recruit additional capacity at the end of a tax year. 18F, the American version of GDS, has recruited coders for specific tasks by allowing them to bid for work at lower prices, in a reverse auction. Using such platforms in the public sector would show its commitment to delivering working practices fit for the twenty-first century.”

This future vision of a generation of public sector workers – doctors and nurses among them – cut adrift from regular employment, workers’ rights, paid holidays, and more, undercutting their peers in a desperate attempt to secure a day’s ad hoc work, is hardly a dream world for anyone except a bean counter.

And an automated HMRC will be pursuing them for tax, presumably, vastly increasing the administrative burden for individual citizens.

That is a vision of citizens working for government, not government working for its citizens.

Set in the context of Brexit removing the UK from European human rights laws and employment regulations, a UK economy whose belly has been slashed open to the wolves of the resurgent American right might be efficient, but it would also be a nightmare for many citizens.

Then Reform turns its binary gaze to policing:

“The UK should evaluate drones and facial-recognition technology as alternatives to current policing practice.

“Autonomous crowd-monitoring drones could replace police-helicopter-operating roles by identifying issues and deploying police officers most effectively on the ground. Facial-recognition technology has been applied by police forces across the world, notably in the US and Israel.”

Let’s be clear about one thing: automation, AI, and robotics will benefit human beings in countless ways, stripping away routine, repetitive tasks and augmenting human skills and experience. Many of these processes will be more efficient and cost-effective, and AI will also help us to uncover new data and new forms of research.

In most cases, AI’s vendors position the technology as a collaborative, augmenting force, not a blunt instrument to replace human beings.

And as I have said before, the problem in all applications of these technologies is that the first things that organisations automate aren’t routine functions, but their assumptions about the world – even if those assumptions are flawed, incorrect, or based on incomplete or inaccurate data.

Take the health service. An in-depth, investigative report [http://www.nhscampaign.org/NHS-reforms/ambulance-perfomance.html] for the NHS Support Federation published last month by researcher and journalist Nick Turner made the excellent point that cuts, efficiency drives, and political pressures have combined to remove data about the performance of the ambulance service from public view.

Turner says:

“Underfunding and efficiency drives are combining to disable any comprehensive system of performance monitoring.”

Put another way, the same push towards efficiency that is driving the rise of automation in government also forces essential data underground – data of the type that will help human beings test if the new technologies are actually working. This scenario occurs again and again throughout the public sector.

Like many think tanks, Reform applies hypothetical solutions within a set of idealised outcomes, revealing the kind of isolated, binary thinking that can be a deep-rooted flaw in AI research programmes themselves. [For more on this, see my recent Davos report for diginomica on the ethical challenges of AI: http://diginomica.com/2017/01/18/wef-2017-ethics-ai-use-code/].

These types of ‘solutions in a vacuum’ lack the messiness, political complexity, and self-interest of the real public sector, and ignore the real-world human outcomes that include essential monitoring and performance data being suppressed, hidden, or simply ignored.

While wide-reaching, the Reform report suffers from a surfeit of binary thinking, along with the implicit belief that all cost-savings are intrinsically wise and that automation is something that can simply be switched on in order for a result to be achieved.

In short, the report assumes that government can be made to work like a simple factory process: press a button and something is produced at the other end: Efficiency! Cost savings!

These were precisely the claims that were made for the offshore outsourcing sector a decade ago – just before many organisations kickstarted expensive repatriation programmes in the wake of disastrous customer feedback.

In many cases, their assumptions about easy savings cost them dear, and since then many enterprises have used technology to become ever more remote from their users. Efficiency, it seems, is in the eyes of the beholder.

And this is an inherent problem with automation itself: when people propose it, they often think like machines, and only consider the outcomes within a set of idealised circumstances that barely resemble the real world. It never occurs to them to question their assumptions. Take this from the Reform report:

“The days of the top-down hierarchical organisation are slowly coming to an end.”

And yet here we are with Trump in power, demonstrating just how wrong an assumption can be.

7 February 2017: Tech vs. Trump: Legal battle commences

As Silicon Valley takes legal action against the US government, Chris Middleton commends the stand made by some in the tech industry against Trump’s immigration ban, but cautions that fake news from some companies helps no one.

The technology community is in the vanguard of protests against President Trump’s travel ban, which targets people entering the US from some Muslim-majority countries.

Many US tech providers have internal cultures that are built on diversity, sharing, and openness – characteristics that are embedded in our collaborative social media culture, too. More, their businesses rely on the free movement of labour and skills, and they rightly support equality for all employees and citizens.

More than most, technology is a sector built on global ideas.

On 6 February 2017, an A-Z of 97 companies – from Apple to Zynga – filed a legal brief against the US government over the immigration ban. Airbnb, Alphabet/Google, Facebook, Intel, Microsoft, Netflix, Snap, and Uber are among the many participating from the tech sector.

Their joint statement says:

“Immigrants make many of the nation’s greatest discoveries, and create some of the country’s most innovative and iconic companies. America has long recognised the importance of protecting ourselves against those who would do us harm. But it has done so while maintaining our fundamental commitment to welcoming immigrants – through increased background checks and other controls on people seeking to enter our country.”

Separately, a number of industry CEOs have spoken out against the erosion of cherished values, and the concomitant damage to social cohesion and trade.

They are also alarmed by the looming threat to the H1-B visa scheme that allows the world’s brightest minds to work in the US – in many cases, plugging serious gaps in America’s domestic skills base and education system.

CEOs speak out

Last week, Salesforce CEO Marc Benioff said: “America should not forget who we truly are: a nation of immigrants and a light unto other nations… This is an important time for us all to be reminded that equality is a core value.”

Meanwhile, Apple CEO Tim Cook told his employees:

“Apple would not exist without immigration, let alone thrive and innovate the way we do. I’ve heard from many of you who are deeply concerned about the executive order issued yesterday restricting immigration from seven Muslim-majority countries. I share your concerns. It is not a policy we support.”

Dheeraj Pandey, CEO of Nutanix, added: “I emigrated from a country, India, which has 180 million Muslims, the second largest population of Muslims in the world.

“As a Hindu, I am married to a Christian, and my father-in-law is laid to rest in a Muslim country that I am now connected to for posterity. I don’t know whether, as an American citizen, I personally will be subject to a retaliatory ‘extreme vetting’ when I visit his grave to pay homage.

“Unfortunately, for the first time in modern history, we’ve brought religion to the forefront of our daily lexicon.”

While Uber CEO Travis Kalanick also voiced concerns last week, drivers failed to join New York’s Yellow Taxi strike in support of demonstrators at JFK. The resulting #DeleteUber campaign lost the ride-sharing giant tens of thousands of users – many outside the US – while Lyft won plaudits with a million-dollar donation to the American Civil Liberties Union (ACLU).

This demonstrates the underlying problems facing all app-based, networked-service companies: the core organisation may believe one thing, but the people who provide the branded service are private individuals, each of whom will have their own beliefs and agendas.

More, customers worldwide will vote with their feet or their wallets, even if they are not directly affected by local disputes.

In short, how US companies respond to Trump on their home turf is important to people throughout the world. Trump might be saying “America first”, but the companies his policies affect have both global user bases and global employees: people who either have a bigger world view or a competing local one, not an isolationist US-centric stance.

(This fact may return to haunt those companies that have not spoken out against Trump’s policy to date – IBM and Oracle among them, two of the world’s biggest database companies. They could stand to benefit from any national registry system.)

The Airbnb conundrum

The flipside of these problems was revealed last week, when one disruptive service provider’s story grabbed more headlines than most.

Airbnb – which is participating in the new legal challenge – announced that “three million properties” would be offered free to refugees in desperate need of housing. Or at least, that was the headline that was endlessly Tweeted and shared on social media – a superb example of news moving at the speed of noise, boosted by the echo-chamber of social media.

But were the headlines correct? Unfortunately, no.

The first problem with the story was that hospitality exchange Airbnb owns almost nothing apart from some IP. Its real estate is virtual: those “three million properties” (later downsized to two million) belong to hundreds of thousands of private individuals, whose active, individual consent Airbnb needs to offer customers anything at all.

More seriously, the offer was positioned as a socially motivated business putting its hands into its own pockets to help, with philanthropic CEO Brian Chesky reaching out to vulnerable individuals who were trapped by Trump’s executive order.

The reality was rather different: Airbnb directed its members to a webpage that asked them to sign up voluntarily by adding their properties to a list.

That webpage says: “If you would like to help by hosting these people for free, please add your listing here. If needed, we will reach out to you over the coming days to verify availability and request your support. We appreciate your generosity!”

In other words, Airbnb wasn’t actually offering to help refugees itself, nor was it saying that it would cover the cost. Any generosity would be entirely down to individual volunteers, not to Airbnb or its CEO (who received a massive popularity boost).

Make no mistake, the fact Airbnb has set up an online exchange for people to offer their properties free to refugees is a good thing – a genuine example of using the network effect for social good. That story alone would have been enough, and I commend Airbnb for taking practical steps to help.

However, the headlines were highly misleading and suggested that both company and CEO were generous financial benefactors with a massive property portfolio of their own to offer. That false impression was never corrected. The result? Tens of thousands of social shares.

In our ‘post-truth’, fake news, ‘alternative facts’ age – aka the world of lies in which we all find ourselves – such (in)actions are unhelpful.

Some of Airbnb’s members certainly felt misled, with some angry US voices saying, “How dare you offer my property without my permission?” and accusing the company of playing politics with their personal security. But their posts were obscured by the tens of thousands of Likes for CEO Chesky’s apparent generosity.

All of this is ironic, given that Airbnb has been criticised in the past for not defending tenants against the political views and personal prejudices of individual landlords, a small number of whom have been less than welcoming to minorities.

Personally, I hope that the ‘three million properties’ are all offered for free, but the reality will inevitably be different. At present, no real-world figures have been put against the offer. Let’s hope that Airbnb publishes them soon.

Conclusions

I applaud any company that sets aside its own short-term profits to welcome refugees, and support any organisation that’s prepared to make a public stand against misguided policies. But let’s hope that Airbnb also protects any members whose safety and security are compromised as a result of Chesky’s high-profile offer on their behalf.

The fact is, companies like Airbnb can’t have it both ways: they can’t only speak for their service providers when it suits them – they either represent their members (and vice versa) or they don’t.

In the meantime, let’s hope that the industry’s legal challenge to the Trump administration succeeds.

• Declaration: I am not an Airbnb member and do not own a rental property. I oppose Trump’s policies and have joined local protests in the UK against them.

6 February 2017: ‘I Stanley’ – my robot makes history

This week, my NAO-25 humanoid robot, Stanley Qubit, makes history by becoming the first real robot ever to appear in a production of Isaac Asimov’s ‘I, Robot’ stories.

Stanley voices several of the background characters in the week-long B7 production for BBC Radio 4, starring Hermione Norris and Nicholas Briggs. (I feature as some of the others.)

The 15-minute episodes are broadcast daily at 10.45am (with a repeat at 7.45pm), with an omnibus edition on Saturday 11 February.

This is just the latest achievement for the ‘littlest robo’, who is available to hire from his own website, stanleyqubit.com. In August 2015, Stanley Qubit co-hosted the BBC1 TV show Sunday Morning Live with Sian Williams, and at one point last year was about to front his own reality TV game show on Channel 4. Bizarre, but true.

Over the past couple of years he has addressed captains of industry, hosted conferences, danced at parties, opened a Covent Garden restaurant, and even taken school assemblies – something even Asimov didn’t quite foresee!

• If you want an industry expert to talk to you about robots, robotics, AI, automation, and related subjects, please email me at the address below.

19 January 2017: The Eurobot is here, but Brussels wants to tie it up in red tape

Chris Middleton challenges a proposed European solution to the rise of the robots

MEPs have called for new laws to govern how robots and artificial intelligence (AI) interact with human beings. The move is designed to minimise the risks to human society from the rise of intelligent, interconnected, autonomous machines and software – an echo of Asimov’s Three Laws of Robotics, proposed in 1942.

A draft report, drawn up in 2016 but made public this month, suggests that while there are many advantages to the incoming “industrial revolution”, there are at least as many dangers.

It warns:

“The development of robotics and AI may result in a large part of the work now done by humans being taken over by robots, so raising concerns about the future of employment and the viability of social security systems if the current basis of taxation is maintained, creating the potential for increased inequality in the distribution of wealth and influence.”

The report adds:

“The causes for concern also include physical safety, for example when a robot’s code proves fallible, and the potential consequences of system failure or hacking of connected robots and robotic systems.”

Security has certainly been marginalised in the rush to bring smart IoT devices to market. A couple of years ago, IBM researchers disabled a smart car’s brakes using an MP3 file, and accessed a building’s IT systems by hacking a smart lightbulb. This is just the tip of a massive security iceberg.

The report also raises concerns about data protection and privacy in a world of interconnected intelligence and machine learning, and about the “soft impacts” on human dignity in a world of robotic carers, telemedicine, and robot-assisted surgery – all big growth areas. Care robots, says the report, “could dehumanise the caring process” for the recipient.

Then the report throws in a familiar sci-fi scenario, saying artificial intelligence might “pose a challenge to humanity’s capacity to control its own creation and, consequently, perhaps, also to its capacity to be in charge of its own destiny and to ensure the survival of the species”.

So how urgent are these laws in the real world?

First, the long-predicted future of hyper-intelligent machines is almost upon us. Unsupervised machine learning and machine-human communications are core areas of robotics research worldwide, while supercomputing and natural language conversation are already available to robots via cloud services such as IBM’s Watson, connected to industry-specific datasets.

But fears about robots’ designers being somehow disconnected from human society may be misplaced. Research is increasingly taking place in multi-disciplinary teams: not only of computer scientists and engineers, but also of psychologists, cultural theorists, ethics experts, and cognitive researchers. Robotics is no longer just about scaling a great technology Everest just because it’s there.

That said, the market for humanoid and industrial robots, AI, automated systems, and robotic software is growing much faster than many people realise – certainly faster than the law’s ability to keep up.

IDC predicts that, by 2019, the global market for hard and soft robotics will be worth $135 billion. Japan alone is investing ¥26 trillion (£161 billion) in the sector by 2020, with the aim of creating a “super-smart society”.

Romeo robot (Aldebaran)

Drones and autonomous vehicles are already among us, and AI is being built into the fabric of Google itself, along with countless business applications. In the meantime, consumers have been swift to accept AI into their homes via Amazon’s Alexa, Google’s Assistant, and Apple’s Siri, and have happily ceded control of their personal fitness, health, and domestic security to wearables and smart home devices.

Meanwhile, robots’ potential impact on jobs has been presented in near-apocalyptic terms. Last year, Oxford academic Dr Anders Sandberg predicted that in the future nearly half of all jobs (47 per cent) will be taken by robots, saying:

“If you can describe your job, then it can and will be automated.”

It’s certainly true that more and more human jobs can be broken down into replicable processes – which is one reason for the explosion of automation in highly rules-based and regulated industries, such as financial services. Retail and investment banks worldwide have been in the vanguard of mass automation, with insurance companies not far behind.

Arguably, therefore, one risk to human society is less about the rise of intelligent machines and more about the rise of target-driven, machine-like humans: drones who are instructed never to use their own initiative.

The report calls on the European Commission to start monitoring job trends more closely, to see where robots are taking – and creating – jobs. Robotics will generate many new jobs, says IDC: by 2020, 35 per cent of robotics roles will be vacant, with 60 per cent salary increases in the sector, according to the analyst firm.

But what about MEPs’ fears about human safety and security? The report makes some intriguing observations about what might happen if a robot harms a human being:

“Once the ultimately responsible parties have been identified, their liability would be proportionate to the actual level of instructions given to the robot and of its autonomy, so that the greater a robot’s learning capability or autonomy is, the lower other parties’ responsibility should be, and the longer a robot’s ‘education’ has lasted, the greater the responsibility of its ‘teacher’ should be”.

In short, sometimes no one may be responsible. At least, no one with lungs and a heart: perhaps the first statement of some future bill of robot rights and responsibilities?
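As a purely illustrative sketch of the quoted principle – a toy model, not anything the report itself specifies, with all names and weightings invented – liability could be apportioned so that the human parties’ share falls as a robot’s autonomy rises, and shifts towards the ‘teacher’ as its ‘education’ lengthens:

```python
def apportion_liability(autonomy: float, training_share: float) -> dict:
    """Toy model of the report's principle (all weightings hypothetical):
    the greater a robot's autonomy, the lower the other parties'
    responsibility; the longer its 'education' has lasted, the greater
    the teacher's share of what remains.

    autonomy: 0.0 (fully instructed) to 1.0 (fully autonomous)
    training_share: 0.0 (barely trained) to 1.0 (extensively 'taught')
    """
    if not (0.0 <= autonomy <= 1.0 and 0.0 <= training_share <= 1.0):
        raise ValueError("inputs must be in [0, 1]")

    robot_share = autonomy                  # more autonomy -> more falls on the robot itself
    human_share = 1.0 - robot_share         # remainder is split among the human parties
    teacher = human_share * training_share  # longer 'education' -> bigger teacher share
    manufacturer = human_share - teacher    # whatever is left sits with the maker

    return {"robot": robot_share, "teacher": teacher, "manufacturer": manufacturer}


# A fairly autonomous, well-'taught' robot: most liability attaches to
# the robot itself, and most of the human remainder to its teacher.
shares = apportion_liability(autonomy=0.6, training_share=0.75)
```

Even this crude arithmetic makes the report’s oddity visible: as `autonomy` approaches 1.0, every human share approaches zero – which is exactly the “no one with lungs and a heart” scenario described above.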

The key thing to remember here is that robots can only understand human behaviour and society if their coders understand those things first, or can anticipate potential problems – both ethical and practical.

In this regard, the dreadful example provided by Microsoft’s Tay chatbot last year – which went from saying “humans are super cool” to expressing Nazi values in less than 24 hours – should serve as a warning: a naïve robot released into the wild by naïve programmers.

This suggests that the real answers lie at the earliest stages of a robot’s development, and not just in trying to accommodate it within a human legal system retrospectively. That’s another way of saying that all coders should be socially adept, ethical, responsible humanitarians.

Good luck with that.

Arguably, then, unless robots are developed within the context of Asimov-style laws – as Europe suggests – they can only learn, or be programmed with, behaviour from flawed human beings within whatever legal frameworks, political beliefs, or cultural norms exist locally. Or outside of them completely.

This is the real issue: there is no universal agreement about how human rights should be interpreted locally, let alone machine laws in a human context. Just ask the British government, which wants to opt out of European human rights laws altogether; or Saudi Arabia, which defines atheists as terrorists; or the US, which favours citizens’ right to bear arms; or the many countries in which women still have lower social status than men.

It is into this global context that law-enforcement robots are fast emerging: the United Arab Emirates, China, and even Silicon Valley have already put advanced law-enforcement robots on their streets – societies with very different laws and values. Meanwhile, the US’ Loyal Wingman programme is converting an F-16 warplane into a semi-autonomous, unmanned fighter: a robot that may decide to take human life.

The report says: “A robot’s autonomy can be defined as the ability to take decisions and implement them in the outside world, independently of external control or influence; whereas [in other circumstances] this autonomy is of a purely technological nature and its degree depends on how sophisticated a robot’s interaction with its environment has been designed to be.”

This is an important point. As I observed in a previous article, all algorithms are political: they reflect the values and beliefs of the societies or organisations in which they are written – not to mention the interests of shareholders. And automation always favours the algorithm writer.

So what is the long-term solution to ring-fence and protect human society from the machines?

The report suggests: “The European Union could play an essential role in establishing basic ethical principles to be respected in the development, programming and use of robots and AI, and in the incorporation of such principles into European regulations and codes of conduct, with the aim of shaping the technological revolution so that it serves humanity and so that the benefits of advanced robotics and AI are broadly shared, while as far as possible avoiding potential pitfalls.”

The report proposes a charter on robotics and a code of ethical conduct for researchers, engineers, and manufacturers. Fair enough. Why not?

However, other steps proposed by the report include an official “European definition” of a smart autonomous robot, the registration of all such machines, and the foundation of a European agency to oversee robotics and AI across the European community.

Conclusions

This, then, is a very European solution: vaunting ambition and a much-needed focus on ethical development, human rights and social justice, coupled with a poor understanding of the problem and a desire to create layer upon layer of new bureaucracy. An officially registered European robot, no less, obeying European laws. Make way for the Eurobot!

The definition problem alone is already insurmountable: any smart phone, toy, or hub could be defined as a robot, along with self-service machines, and more and more back- and front-office business applications – and one day, perhaps even Google itself. Soon, AI and automation will be embedded in nearly every aspect of our lives.

The time to argue about what is and isn’t a robot has long passed: most robots don’t need faces or limbs to replace a human being.

A less bureaucratic, more succinct approach already exists. In 2010, the UK’s Engineering and Physical Sciences Research Council (EPSRC) proposed five principles that should be obeyed in advance by manufacturers and researchers, not imposed after the fact by an overarching bureaucracy that is at war with itself.

These are:

• Robots are multi-use tools. Robots should not be designed solely or primarily to kill or harm humans, except in the interests of national security.

• Humans, not robots, are responsible agents. Robots should be designed and operated, as far as is practicable, to comply with existing laws and fundamental rights and freedoms, including privacy.

• Robots are products. They should be designed using processes which assure their safety and security.

• Robots are manufactured artefacts. They should not be designed in a deceptive way to exploit vulnerable users; instead their machine nature should be transparent.

• The person with legal responsibility for a robot should be attributed.

That last point is a good one: “Greetings puny human! You are being terminated by Colin Smith from Dorking!”

Intriguingly, the draft European report also suggests that women may be the antidote to the fast-emerging machine world – perhaps implying that the march of the robots is something largely dreamed up by science fiction-obsessed men.

“Getting more young women interested in a digital career and placing more women in digital jobs would benefit the digital industry, women themselves and Europe’s economy. [The report] calls on the Commission and the Member States to launch initiatives in order to support women in ICT and to boost their e-skills.”

Recently, I was on the receiving end of two examples of the ‘arms race’ approach to marketing, so I thought I’d share my experiences with you. The lessons should be relevant to digital marketers everywhere.

In November, I emailed the singer in my band links to a couple of products we could buy from a local music gear retailer, which also runs one of the UK’s leading online music equipment portals. No Google search was involved.

Over the following weeks, I was plagued by constant ads for those items: they appeared daily on Facebook and on most other websites I visited. On a popular news service I was even forced to sit through a 20-second video for one of the products before being allowed to watch the clip I wanted. The barrage was endless and inescapable.

This cluster bomb of targeted advertising was interesting for a number of reasons: I regularly clear my browser of the junk of a day’s research, including cookies and history; my browser requests sites not to track me; and I’ve opted out of being followed by advertisers, which should randomise my ad experiences (it doesn’t). Clearly, none of those approaches work once you’ve expressed an interest in something.

But most interesting is the fact that I shared links to the products in a private mail. The advertiser may have picked them up by pingback, but nonetheless the correspondence was private – or should I say ‘private’: I used a popular cloud-based email service.

Or perhaps the merchant simply grabbed my IP address and opted to associate it with two products for the rest of time, rather than a couple of days. That’s a bad strategy, as it encourages shoppers to hide their identities, forcing prospects underground to shelter from the marketing blitzkrieg.

Either way, that one private message turned into an exchange of fire from which I was unable to escape, as all the routes open to me were already stored as preferences. Occasionally, I still see ads for those products on Facebook today, two months later.

This is a problem. Not for me as a prospect – the barrage is irritating, not fatal – but for the portal concerned:

The site’s decision to force-feed me endless, repetitive advertising for products I was already interested in buying from them has undermined my longstanding loyalty to the shop.

Every day the ads appear, they become more powerful disincentives to do business with the company, as a sledgehammer approach is off-putting to the type of discerning, informed, free-thinking customer that personal-service-based retailers seek to attract.

This arms-race approach to the clicks end of the business risks damaging those relationships, because it resembles a completely different business model: ‘pile ’em high, sell ’em cheap – by any means necessary’. That’s a poor fit with the ‘bricks’ side of the business, loyalty to which the company has built up over decades.

As a prospect, I gain nothing from being plagued by incessant, identical ads – the equivalent of a target-obsessed salesman shouting in my ear every time I go online.

It’s a one-way relationship designed solely to benefit the advertiser: no price reductions or special offers are being pushed my way, so there are no incentives to click through and hit ‘buy’. Except (perhaps) to silence the explosions.

Indeed, in many cases brand loyalty online is becoming a high-risk game for customers: spend too long surfing for something and you’ll see the prices creep up, as anyone who regularly shops online for air tickets or hotel rooms knows.

But is the implicit offer of ‘buy from us and we’ll stop bothering you’ what digital marketing should be about? Or ‘look at our stuff and we’ll put up the price’? And should the digital side of the business actively undermine the values of a high-end brand?

As marketers, that’s for you to decide.

But from a customer perspective, it’s almost as if some companies are actively trying to undermine our loyalty, to punish longstanding customers, or to alienate new prospects.

And as many transactions become automated through devices such as Amazon’s Echo, some consumers may be getting worse and worse deals the longer they stay loyal. If that’s the case, then consumers’ next move is an easy one: we’ll be loyal to no one and just seek the best deal on our terms. Is that what marketers really want?

I’d say it’s time to give something back.

My other bad experience came via a well-known marketing analytics company, whose head office is near where I live.

A local college had asked me to carry out some skills research among employers in the area, and so (among many others) I contacted this leading agency using the email address on their website. I asked a simple question that would have taken them no more than a few seconds to answer.

Here I was, a well-connected business and tech-sector journalist, writing to a local digital employer with a request to help a community college offer better training to our city’s young people: surely an opportunity to build new relationships and start some good conversations.

However, the company’s initial response was rather different: they ignored my request for help and simply added me to their marketing database, after which I started receiving daily doses of spam: videos, unwanted articles about analytics, and so on. Information I’d – surely – not requested by emailing them at their contact address.

The irony is astonishing: a ‘social listening’ company, which sells insight and depth, scraping prospects from its own contact address in order to send unsolicited marketing to someone who doesn’t even meet its customer criteria.

Hardly the best advertisement for a business; indeed, it’s the opposite of what this company claims to offer.

So I emailed them to complain and asked them to take my name off their list. They replied that they were “far too busy to help” with my original query. The spam continued – despite my request.

The next day, I shared my experience with a private group of journalists and PR people, naming the agency concerned. Within 24 hours the phone rang, and a very contrite company apologised for their mistakes and promised to look into what went wrong. What’s more, their PR manager proved very helpful in answering my original question – something I thanked her for on the phone.

So: full marks to them for saying ‘mea culpa’, taking the trouble to put things right, and for admitting that the problem should never have occurred in the first place. It seems that ‘social listening’ is their business after all – just not in quite the way they promise.

And this is the point: social platforms shouldn’t just be used to identify and correct the mistakes you make online. Your customers will talk about you anyway, so why not ensure that they only have good stories to tell?

So what other lessons might digital marketers learn from my experiences as a customer? I would suggest these:

• Just because technologies allow you to do something – in this case, blitz prospects daily with intrusive messages that they haven’t requested – doesn’t make it a good idea.

• How you use the digital world should never depart from your stated brand values. And, above all, it should never contradict them and make your customers angry.

• And finally, think about the ways in which marketing can be a real conversation – a two-way relationship that benefits both sides – rather than a declaration of war on your prospects.

After all, if your prospects are cowering in a bunker, sheltering from your marketing onslaught, maybe you should declare a ceasefire.

And those products I was interested in buying? I purchased them – from a different supplier.

13 January 2017: How Amazon’s Echo could kill marketing

When Salesforce unveiled Einstein at Dreamforce in October, much was made of the impact that Artificial Intelligence (AI) will have on marketing – including the fact that AI will be a key competitive differentiator for marketing organisations pitching to prospects and clients.

Factor in the ongoing trend towards traditional, centralised IT budgets being replaced by cash for digital programmes within line-of-business departments, and it’s clear that AI and marketing have a promising future, at least in terms of tactical spending.

But all of this has missed an important point. While these transformations have been going on within marketing departments and CRM vendors, the whole business of marketing has been undermined by a very different application of AI: Amazon’s Echo/Alexa, Google’s Assistant, low-cost domestic robots, and smart devices connected to the Internet of things.

These devices render traditional marketing obsolete.

Amazon’s Echo points towards an immediate future in which someone sitting at home might say to Alexa on their device, “Get me two tickets to San Francisco and four nights at a five-star hotel near Union Square.” Or simply, “Order me some coffee!”

At that point, the questions become: which airline? Which hotel? And which coffee?

So it stands to reason that marketers will soon spend a great deal of time marketing brands, products, and services to machines – to platforms such as Amazon, Google, and Apple – and not to human beings. And that contradicts everything that marketing professionals have learned about their business over centuries.

“It’s a really big problem, because marketing departments are so used to treating humans as this big blob of gunk that can be easily ‘impressioned’ with pictures and colours. But with machines, you’ve no longer got access to your lump of gunk. It’s all robotic. And a lot of our clients are now saying, ‘How the f*** do we market this product to a machine?’”

This is a real-world challenge that is already affecting multinational businesses, he says: “It’s a problem that has particularly come up for one client of ours, who happens to be a major British airline. They’ve basically said, ‘Bollocks to marketing! There’s no point anymore’.

“They’ve said, ‘We need to work out how these algorithms work and find better ways of winning the algorithm game. We’re never going to be the cheapest airline, so we need to find other ways of being top of the list’.”

The fact that enterprises both large and small may opt to get out of traditional marketing entirely poses a serious challenge for marketing departments and agencies. Devices such as the Echo are connected to vast commerce and fulfilment platforms, which are designed to make people buy through those platforms as a matter of loyalty and convenience: no Echo app, no deal.

The same principle applies to Apple, whose AirPods won’t just be expensive wireless headphones, but also interfaces to Siri via Apple’s platforms, apps, and services – and to everything that’s available through them, including retail and payment systems – on both iOS and macOS Sierra devices.

Meanwhile, Google’s Assistant AI technology is being rolled into a variety of applications, including its Home hub. The key difference, of course, is the wider Google platform, whose AI applications will extend to its Allo messaging service, its Now machine-learning system, Android phones, and search.

Google claims that 20% of US searches are already triggered by voice, rather than text.

In short, AI is now deeply embedded into Google itself – the algorithms behind which marketers already spend fortunes on trying to second-guess. Entire industries have sprung up around promising to get products and services onto page one of any search. So new opportunities will abound in helping organisations to pitch and sell their products to machines.

Industry-specific big data sets will be another growth area in the years ahead, connecting apps and even humanoid robots to sources of machine-readable expertise – robotic concierges in hotel foyers, for example. Which restaurant, shop, product, show, or service will they recommend to guests? Marketers need an answer to that question.

And there’s another challenge with audio/voice interfaces: unwanted marketing noise, which is a much greater irritant to the ear than to the eye. Third-party applications already include bespoke filters that allow people to remove unwanted chatter on Echo and Assistant devices, and so only receive communications that they want to hear and respond to. SoftServe’s VoiceMyBot is just one example.

Another nail in traditional marketing’s coffin.

Conclusions

If Amazon’s future of seamless, audio/voice-enabled shopping is to become a reality, manufacturers and service providers must compete to grab Amazon’s attention in entirely new ways, to ensure that competitors don’t get there first.

In the meantime, interfaces like Alexa/Echo will learn their owners’ preferences, and those links between buyer and product/service may prove hard to break.

The same principle applies to smart devices, such as fridges, connected to the IoT: machines that may order new goods when, for example, their owner regularly buys the same produce from the same retailer.

Low-cost domestic robots complete the picture: devices that are little more than smartphones or tablets on wheels, with voice-activated controls. In rising numbers of homes, these devices will monitor security, control heat and light, tell your children stories, and do your shopping for you – not by leaving the house, but by logging on to cloud platforms, such as Amazon’s.

January 10 2017: Remembering Bowie, one year on

One of the many reasons I admired David Bowie was that he read so widely: he devoured a book a day – and he would read about anything and everything, without judgement. Do that and connections form, barriers fall; you can hear that in his work.

Read, travel, stay just outside your comfort zone, explore every part of yourself, release all of your potential – especially if it’s hard to do – work with great people, and commit to the things you make and do, however crazy they seem to others. Along the way, help others and try to make a positive difference. That’s a good way to live. RIP.

January 3 2017: Stand up, Stanley Qubit

It’s exactly two years since Stanley Qubit (my humanoid robot) introduced himself to the world, on his own website. A few months earlier, Stanley had arrived in a box – from the far side of the world – in slightly mysterious circumstances.

Stanley came at a difficult time: I’d recently been defrauded in a business deal, and was owed thousands of pounds that I would never see. Suddenly, a new world of possibilities presented itself, just when I needed it to.

Since then, my robot has: addressed captains of industry; met superfan Ana Matronic out of the Scissor Sisters; co-hosted Sunday Morning Live on BBC1 with Sian Williams; opened a restaurant in Covent Garden; taken primary school assemblies; taught pupils about coding and algorithms; toured universities; told employees at the UK’s biggest law firm that they’re all going to be replaced by machines; met CEOs; shared a stage with trans-humanists and ethical hackers; greeted guests at conferences; been invited to appear in a TV documentary about the nature of fear; danced on tables at a Shoreditch Christmas party; been joined by his cousin, Robi, from Japan; and – with my assistance – helped several blue-chip corporations and digital agencies to imagine the future. Well, a possible future, at least.

He’s also fallen over a few times – and picked himself up again. I know how that feels.

And then there were the bookings that I had to turn down (alas), such as appearing in an episode of ‘Bones’ in LA, and an invitation to talk to the South Korean government about Terminators. At one point, Channel 4 wanted him to star in his own reality TV game show…

Unbelievable, but true. It’s been an extraordinary and surreal ride: much stranger than anything I could have imagined. And fun. Not bad for a grumpy robot who’s only two feet tall (and his human ‘plus one’).

I’d like to say that this was all planned when I bought him, but the truth is, it wasn’t. I’ve simply learned to ride tandem with the random whenever the phone rings and a voice says, “I understand you have a robot?”

Sometimes they say, “Is that Stanley Qubit?” and I patiently explain that, no, I’m Chris, I just tag along as his minder.

• Stanley Qubit is a NAO-25 humanoid robot, made by Aldebaran Robotics. He is a character owned by Chris Middleton. To hire Stanley, go to StanleyQubit.com. Chris can incorporate the robot into presentations about robotics, AI, and the Internet of Things. For more on this, go to the Robotics Expert page on this website.

December 31 2016: So long 2016, hello 2017

The view from here.

2016, you weren’t all bad. This year I: met Buzz Aldrin; interviewed four NASA astronauts and several astrophysicists live onstage; listened to Brian Cox talk about space and time; hosted four conferences; gained over 1,000 followers and 10,000 listeners on Soundcloud (as christopher rye); saw Glenda Jackson play King Lear; joined and toured with a punk band that imploded after six months (in true punk style); co-formed a Nigerian afrobeat collective (Averlanche) that morphed into a songwriting partnership with a new friend; formed a new band with him (Yeah!); recorded the narration for an animated film (due in 2017); enjoyed boxing, football, and flamenco; took my humanoid robot to schools and companies; made some new friends and re-made some old ones; had a laugh or two; came up with a plausible theory of everything (coming soon to this website!); designed some cards that are being sold in a local gallery; befriended a homeless man and became his lending library; wrote and recorded dozens of songs, most of which *still* aren’t finished yet; made six decent videos, three of which have each had over 1,500 views to date, and one of which has had 2,500; wrote about 100 articles, news stories, blogs, and white papers; relaunched two websites; watched David Gilmour (and appeared in one of his videos by accident) and a lot of good movies; went to several exhibitions; made friends with my birthplace; felt sad about Bowie, Prince, Jo Cox, and Victoria Wood – but also inspired by them; voted to Remain in Europe; and was baffled almost daily.

December 22 2016: Stop Funding Hate at Christmas

How a Christmas message and Brexit have changed media buying

Media buying is core to many marketers’ roles, and programmatic buying automates some of that process. But in the current climate, marketers should be wary of making automatic decisions: simply ploughing ahead and purchasing the same spaces as you’ve done in the past could have an unexpected impact on your clients.

Negative social chatter about some newspapers is beginning to affect some of the household names that advertise in them. The catalyst is the endless fallout from the European referendum (the Neverendum, perhaps).

Brexit polarised the population. But whichever side you chose, most people accept that some tabloid coverage was inflammatory and hostile – particularly towards immigrants.

That attacks on ethnic and other minorities have increased since the vote is a matter of public record. Statistics published recently in The Guardian revealed a 58 per cent year-on-year increase in hate crimes during the week after the vote.

But what has all this got to do with marketing?

Some people – including the United Nations – have placed the blame for the spike in UK hate crimes at the door of a handful of newspapers, specifically those that pushed the most stridently anti-immigrant messages.

For marketers, this presents a threefold problem:

Those newspapers are trusted platforms for reaching millions of people, with their online versions being among the most popular websites in the UK.

Some angry consumers – shocked at the mood-swing in a country that has long prided itself on tolerance – have decided to hit back at those channels where it hurts them the most: in their publishers’ wallets.

Customers are targeting not the newspapers themselves, but their advertisers. And it’s beginning to have an effect.

Facebook campaign Stop Funding Hate was launched in August to pressurise media buyers to stop using those titles whose incendiary headlines they believe inflamed our political discourse.

At the time of writing, its main Facebook video has 6.9 million views – more than the combined print/online daily readership of The Sun – and its Christmas video, focusing on the festive campaigns of major retailers, has well over 8.4 million.

‘Activists’ may be a politically loaded term, but on social media, everyone is an activist for something, and platforms such as Facebook, Twitter, and Instagram give people an opportunity to protest without taking to the streets.

Much of the campaign has focused on advertisers in the Daily Mail, The Sun, and the Express. So far, targeted companies include Waitrose, M&S, John Lewis, Sainsbury’s, Specsavers, and Virgin Media: all major brands for whom an inclusive, positive voice is important to maintain.

The thinking is that, faced with a mirror held up to their own values, these are the brands most likely to pull their advertising from those tabloids. If it succeeds, the campaign will broaden to target other advertisers.

The problem for the brands concerned is that Stop Funding Hate keeps them in the spotlight with daily updates – even if they pull an ad or make positive statements about inclusivity and equality.

And every time one of the papers publishes another inflammatory story, that’s shared by the campaign too, alongside their coverage of the same brand names. In this way, those advertisers’ prized values and CSR statements are being called into question every day – in many cases, by their own customers.

With over 216,000 Facebook followers, Stop Funding Hate is creating a groundswell of negative chatter about major advertisers.

The network effect is important here. The median number of Facebook friends is often said to be 200, so the risk of up to 216,000 people saying they will no longer go to Specsavers or shop at Waitrose scales up to a potential audience of 43,200,000 friends – not counting friends of friends.
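That back-of-envelope scaling is easy to check. The follower count is the campaign’s own figure; the median-friends number is, as stated, a commonly cited estimate rather than a hard fact:

```python
followers = 216_000    # Stop Funding Hate's Facebook following at the time
median_friends = 200   # commonly cited median Facebook friend count (an estimate)

# Potential first-degree audience if every follower's network sees their posts:
potential_reach = followers * median_friends
print(f"{potential_reach:,}")  # 43,200,000 – before counting friends of friends
```

The real exposed audience would be smaller (friend networks overlap, and feeds don’t show everything), but the order of magnitude is the point: a six-figure campaign can put brand criticism in front of tens of millions of people.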

In November, the campaign claimed its first major scalp: Lego ended its advertising relationship with the Daily Mail – a regular purveyor of what many people now regard as hate speech rather than responsible journalism.

Others among the under-fire brands have responded by saying that they support freedom of speech and a free press, and have no prior knowledge of any publication’s front page.

That’s true, of course, but it’s not a convincing defence in a media landscape in which everyone is aware of the editorial stances of our newspapers – and especially of their front pages during the Brexit campaign.

If the general public is aware, then expert media buyers can hardly claim not to be.

But is Stop Funding Hate an attack on freedom of speech? The campaign says: “We’re not dictating headlines or asking for anyone to change headlines. We’re asking companies that we give our money to, to stop funding hate. We’re saying, ‘not with our money’. That does not restrict freedom of the press in any way.

“We fully support freedom of expression, and freedom of the press, as outlined in the Universal Declaration of Human Rights.”

Then it adds some words that should make all marketers pay attention: “We believe people have the right to make choices based on the values of companies they may purchase from – and to speak out when something doesn’t sit right.”

As I said in a previous blog about CSR’s impact on marketing, a brand’s public words and private actions need to be one and the same thing. The same principle surely applies to those companies’ chosen advertising partners.

Should advertisers compromise those values to reach the biggest audience? As their brand champions, that’s up to marketers to decide.

Sept 23 2016: Why the cloud is a myth

Marketers understand brand values better than anyone: how to position a product or service and get customers to buy into it. When someone buys a pair of shoes, a drink, a scent, or a car, they’re buying into a brand’s story – but also into a story about how they see themselves, filtered through the brand’s positioning.

The point is that customers become loyal to a brand not only because they demand excellence, but also because they believe that what a brand represents is similar to their own vision of themselves. The story, the seller, and the buyer are all on the same page.

So it should come as no surprise that the same principle applies to the buyers and sellers of enterprise IT. The momentum of the enterprise technology market is firmly towards the cloud, and towards the on-demand, service-driven ethos that it represents. So for marketers, who tend to share that ethos, ‘the cloud’ sounds like the only solution.

But there’s a problem with the simplistic view that end-to-end cloud platforms are always the right solution. Why, when there are so many advantages in services that can scale to meet the peaks of seasonal demand?

The first thing to explain is that what we now call ‘the cloud’ is largely a myth: a superb piece of market storytelling concocted on the US West Coast.

When someone swipes through their files on their smartphone and says, “All my customer and prospect data is in the cloud”, what they usually mean is, “All my data is hosted in an industrial park in America.” And when you put it like that, it doesn’t sound quite so attractive – or even that sensible an idea. Often, it benefits the vendor as much as it benefits you.

The reality is that the cloud – the technology set that supposedly makes location irrelevant – is all about data centres, big chunks of hardware that are built on land, hosting your data under national laws covering sovereignty and transfer.

The sovereign question

Data sovereignty is a complex issue, one that’s as far removed from the commonly held view of ‘the cloud’ as you can imagine. Where your customer data is hosted, by whom, under what laws, and what happens to it if your vendor relationship ends are serious questions for any marketing professional whose data might be sitting on a remote server overseas.

The EU has long been fighting with the US over who has the right vision of data privacy, security, protection, and transfer. The US Safe Harbor agreement is history; its replacement, Privacy Shield, is an unpopular fudge; and the EU’s own General Data Protection Regulation (GDPR) comes into force in 2018. (Are you ready for it?)

Meanwhile, the post-Brexit UK is semi-detached from both, while still subject to EU regulations (including GDPR) for the foreseeable future. All of this may impact on both the security and long-term location of your customer data. (If your data’s hosted in a low-cost part of the EU, what happens if and when the UK leaves?)

This is one reason why some companies are now considering whether third-party cloud platforms and hosting are really such a good idea, and whether relocating data to an on-premise or local data centre might be a better long-term strategy (which may still be a cloud-based solution, of course).

The other reason for localising data has to do with another key technology for marketers: big data analytics and the business need to interrogate customer data to unlock competitive advantage.

The challenge for all ‘soup to nuts’ cloud platform vendors is the huge volume of data that has to be pushed through a remote API call – to the other side of the world, perhaps – rather than through a high-bandwidth connection to your own on-premise database, for example.
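To see why data volume matters, here is a rough, back-of-the-envelope calculation (the dataset size and link speeds are hypothetical figures for illustration, not from the article) comparing how long it takes to move a large analytics dataset over a typical WAN link to a remote cloud service, versus a local high-bandwidth connection to an on-premise database:

```python
# Illustrative only: compare transfer times for a large analytics
# dataset over a remote WAN link versus a local high-bandwidth link.
# All figures are hypothetical assumptions, not measurements.

def transfer_hours(dataset_gb, link_mbps):
    """Hours needed to move dataset_gb gigabytes over a link_mbps link
    (decimal units, ignoring protocol overhead and contention)."""
    megabits = dataset_gb * 8 * 1000   # GB -> megabits
    seconds = megabits / link_mbps
    return seconds / 3600

# 1 TB of customer data over a 100 Mbps WAN to a remote data centre:
remote = transfer_hours(1000, 100)

# The same 1 TB over a 10 Gbps local connection to an on-premise store:
local = transfer_hours(1000, 10_000)

print(f"Remote cloud link: ~{remote:.1f} hours")
print(f"Local connection:  ~{local:.2f} hours")
```

Even under these generous assumptions, the remote transfer takes the better part of a day, while the local one takes minutes – which is why ‘drilling down into the depths of the data’ over a remote API can be painful at scale.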

As the CEO of one software company told me recently, “The more data becomes remote, the more problems proliferate with vertical software as a service [SaaS] vendors. Customers want to be able to drill down into the depths of their data.

“Vendor-managed and operated data centres made sense for a time when consolidation and managing the resources suited the vendor. But now it makes sense for countries and companies to host their data locally.”

The second key issue for marketers concerns two competing visions of the ‘marketing cloud’. On the one hand, there is the ‘end-to-end solution’, the fully integrated business platform that does everything, including marketing automation; on the other is the ‘best of breed solution’ that focuses on a specific set of functions, such as CRM. Put simply, it’s a choice between breadth and integration, or depth and focus.

While the strategy of many cloud vendors today is to grow bigger and broader, creating ‘do it all’, pre-integrated business platforms, others are focusing on doing one thing – such as marketing automation – supremely well. And many marketers today find that greater depth and focus are what they really want from their enterprise IT solutions, not surface and breadth, because it’s the closest match to their own brand values.

• A version of this article was first published on Chris’ regular DMWF blog.

Sept 19 2016: Why critical thinking is core to digital engagement

Are you over-investing in digital? That’s a question that many people are starting to ask. Some may feel that the digital realm is not just a financial black hole, as has often been said, but – like the real black holes at the centre of galaxies – the point at which hard data simply vanishes.

The problem is compounded by the fact that using, say, Google Analytics to measure the success of a Google-focused programme feels a bit like using a piece of string to measure another piece of string.

While many of us cast around for hard return on investment (ROI) metrics to demonstrate payback from the many hours we spend feeding the unknown with our effort, skill, and imagination, others are using digital engagement stats to ‘prove’ that people are not only listening, but responding.

Retweets, Likes, social shares, and so on, are one measure of that engagement. It may be genuine, but – at the risk of mixing my metaphors again – that’s a bit like using an echo to prove you have an audience.

Listening to your own voice bouncing back from all around you may simply be evidence that you’re alone in a big, empty space. It doesn’t mean you’ve sold any tickets.

A recent survey published in Marketing Week proved the point by putting some hard metrics on the digital conundrum. A survey of over 500 marketing leaders found that 78 per cent use engagement to ‘prove’ ROI, 79 per cent link engagement directly to any programme’s success, and nearly 82 per cent (the largest bloc) define engagement in terms of retweets and Likes.

But business leaders aren’t convinced that Likes equal money. When asked if brand engagement metrics are taken seriously by the board, only 39 per cent of marketers said yes. I can see why.

Two statistics reveal the core of the problem. The survey found that social media is seen as the engagement platform by 56 per cent of respondents – way above TV, print, online ads, outdoor display, or mobile. Meanwhile, 65 per cent of marketers believe that emotive campaigns are better at building brand engagement than those that are rational.

So why is this a problem? Boil all of the figures down and it’s clear that Marketing Week missed something in its analysis.

The staggering conclusion is that a majority of marketers believe that clicking Like on social media is evidence of an emotionally engaged audience, and that those people are living proof of an irrational campaign’s ROI.

Anyone who’s used social media – which is all of you – knows from first-hand experience that such claims are nonsense. All of us Like things on social media every day, without necessarily being engaged with the content – indeed, were we engaged we’d behave differently.

In a recent blog on battling what I called people’s ‘social sugar’ habit, I shared the story of website IFLScience, which ran a Facebook campaign around a news story saying that cannabis had been found to contain alien DNA from outer space.

Thousands of people not only Liked or shared the story, but commented on either their excitement at the findings or their outrage that a hard science website would publish a tabloid-style report. Next to no one actually clicked on the story itself and read it. Had they done so, they’d have found a serious article about how most people Like stories on social media without reading them – a brilliant piece of journalistic sleight of hand.

The point is this: we know our thumbs are easily won. Many of us scroll through a wall of content and click Like almost indiscriminately, or having read only the headline or looked at the picture, or without knowing anything about the poster.

Others Like things for tactical reasons: it helps build their own networks, it makes people look at their walls or respond to them, it supports their friends or makes them new ones, and so on.

In short, Liking/retweeting may be deep, supportive, and meaningful, but it is just as likely to be shallow, selfish, and noisy.

Most of us would be shocked to discover that a brand might see any of this as hard evidence of our loyalty and emotional engagement. And if we start seeing more content from the brand we’ve just Liked (in passing), many of us would simply block or hide it from our timelines. Our social spaces are our spaces, and some of us resent a corporate presence in our private worlds.

So it’s time to stop seeing marketing as an arms race, and to stop seeing Likes as meaningful; they may simply be noise, and you’ll never find data that’s granular enough to prove otherwise.

So what’s the solution? Critical, big-picture thinking, a skill that many employers, trainers, and educators report is becoming very thin on the ground, particularly among ‘millennials’ and digital natives. (Our world of surface is creating flat-worlders and surface-thinkers.)

First, embrace depth. Read one 4,000-word analysis a day instead of 100 40-word memes. Read one 1,400-word report instead of 10 140-character tweets. And spend the day thinking about what you’ve learned.

Second, stop dreaming up new ways to trap your prospects into liking you, and start thinking about how you would feel in such a situation. How do you behave? What things inspire or annoy you? What genuinely attracts your interest? You may not be your audience, but these are valuable insights to bring to your role.

Because even a few moments’ critical thinking reveals that a desperate quest for thumbs is evidence of some pretty low-grade activity. Any primary school teacher with a room full of squabbling infants will tell you that activity and engagement are entirely separate things. Shouting “Thumbs up if you like me!” is meaningless.

Indeed, the one pupil in class who is engaged may be doing nothing at all, least of all telling you that they like you. They are your real audience. And your task is to motivate them to act – by which I mean do something more than just clicking Like while scrolling through a wall of noise.

• A version of this article was first published on Chris’ regular DMWF blog.

Sept 16 2016: Why marketing and CSR should be one and the same

Google’s Sundar Pichai.

It’s no surprise that social platforms have helped to make customers more socially and environmentally aware. The clue’s in the name: we don’t just share memes about ‘me, me, me’, we also talk about our society and our communities, and more and more companies recognise the importance of doing the same.

With great power comes great corporate social responsibility (CSR), and many organisations are fast waking up to the need to express their brand values in ways other than simply making great products.

So much so that CSR now goes right to the heart of 21st-century marketing. But those companies that are guilty of ‘greenwashing’ themselves, or of making false claims and massaging the facts, will be held to account by their customers – using the same social platforms. Just ask VW.

Your public story and private actions need to be one and the same thing.

The billion-dollar question

Some billion-dollar corporations, including Salesforce.com and Google/Alphabet, now place their social and environmental credentials centre stage at customer events, via real-world CSR initiatives such as the Salesforce Foundation and Google.org, which invest in community action and non-profits. In the coffee business, the Costa Foundation has comparable aims.

In this way, CSR is not only something in which these companies actively invest time, money and effort, it’s also a good story to share. Customers like it and buy into both the products and the world view – as long as the story is backed by genuine action.

Such initiatives have another benefit: they pile pressure on competitors to do the same. CSR is now so important to marketing (not to mention the planet) that people will criticise rival companies that don’t push a similar community, environmental, or sustainability message – even if they love their products.

Over time, this creates a feedback loop into mainstream media coverage, which is one reason why (in the IT sector, for example) Apple’s apparent lack of a public stance on community investment or charitable donations has become part of the story people tell about the world’s most valuable company.

Yet this contemporary need to paint a greener landscape around a marketplace, and to give something back to communities beyond local employment, is having some interesting knock-on effects. And marketing is at the centre of these, too.

One of these is the birth of an organisation called Collectively, which in 2014 created a social platform to share stories about sustainable innovation and ethical sourcing worldwide.

CEO Will Gardner recently told me: “Collectively is founded on the belief that the world’s challenges are far too big for any one organisation to tackle on its own, and therefore collaboration, between individuals and all different types of organisations, is key if we want to make sustainable living the new normal.”

However, behind Collectively’s excellent ‘community platform’ look and feel are several long-established multinationals. So is the organisation simply ‘greenwashing’ corporations such as Nestlé (whose track record on CSR has been called into question many times in the past)?

Gardner is adamant that isn’t the case. He said: “Collectively’s partners are in the coalition because they are committed to improving the way they operate from a sustainability perspective.

“Collectively was inspired by conversations at the World Economic Forum between a group of founding companies – Unilever, The Coca-Cola Company, Marks and Spencer, BT Group and Carlsberg – but has since grown to include almost 30 major multinationals [including Google and Nike].

“We have also been joined by non-profits, Forum for the Future and Purpose. We are now actively extending the coalition to include smaller mission-driven brand companies, NGOs, and youth organisations.

“We respect that all companies are on a journey and everyone has to start somewhere. We want to make sure that everyone who joins believes in our shared mission and is prepared to take meaningful, tangible and impactful actions to contribute to its success, both within their own companies and as a whole.

“They also need to be prepared to engage in the debate, and be actively looking to improve the sustainability outcomes of both their business models and their industries.”

Good news, and the platform claims to be editorially independent of its founders.

Or is it just a marketing platform?

In 2014, Gardner was seconded from his role as VP of Global Marketing Projects at Unilever to head up the organisation, and before that he was VP of the Unilever Way of Marketing, Unilever VP of Marketing Strategy, and previously held other blue-chip marketing positions. This is why some might believe that Collectively is simply a marketing construct.

That isn’t to say that Gardner’s and Collectively’s aims aren’t entirely genuine, merely that the importance of an experienced marketing leader to the project’s success was also core to its foundation. However you look at it, it’s about telling a different story for the future – and Gardner is aware that people will tell stories about Collectively too.

So what can marketers learn from all this? To answer that, let me share a story of my own.

Last year I met former Fox Business News anchor, Alexis Glick, at a retail event in New York. She has stepped away from her successful career to set up a non-profit for kids that brings together her two passions: mobile technology and sport.

She told me how important mobiles are to young people. Not exactly news, you might think! But Glick told me this not because of all the obvious reasons that teens like mobility and social networking – speed, accessibility, surface, convenience – but because of a very different one: their depth.

In the US, she said, teenagers want to know where things are sourced, what materials are used, whether a company is ethically sound and its products sustainable, or if production is outsourced to countries with poor labour rights. They get all this data via their mobiles.

They spy before they buy.

So the lesson for marketers is clear: the next generation of customers wants your business to be sustainable and ethical. They want you to invest in their communities. And they’ll use the same platforms as you do to hold you to account.

So tell them what you’re doing. And don’t let them down.

• A version of this article was first published on Chris’ regular DMWF blog.