In the NHS's 70th year, and as part of the push for digitisation, the introduction of an appointment-booking app has been praised, while a GP chatbot has been given the thumbs-down by The Royal College of General Practitioners (RCGP).

Book Appointments With A Free NHS App

A free app, due to be launched at the end of this year, will enable NHS patients to make GP appointments, order repeat prescriptions, and access the 111 helpline for urgent medical needs.

The app, which is being jointly developed by NHS Digital and NHS England as part of NHS England's wider strategy to digitise the health service, will be made available through the App Store and Google Play.

Other Options

As well as booking appointments and ordering prescriptions, the app will give patients other options, such as allowing them to opt out of sharing their personal information for research and planning purposes across the health service, mark their preferences on organ donation, and register their choices for end-of-life care.

Helpful

Many commentators have praised the idea of the app as something that could provide extra convenience to patients, e.g. by reducing the 8am scramble for GP appointments, and could take some of the increasing load off parts of the NHS.

Security Caution

Some commentators have stressed the need to ensure that the security, reliability, and identity verification processes of the app meet the highest international standards in order to protect the personal details and medical history of patients.

Big No for Doc App

While the NHS appointment-booking app has been receiving cautious praise, the new Babylon AI chatbot that can diagnose medical conditions (and offer health advice based on what users tell it) got the thumbs-down at an event held by The Royal College of General Practitioners (RCGP).

Accuracy?

One of the main claims that upset physicians was Babylon's assertion that the bot has achieved medical exam scores equal to or higher than a human doctor's. The company says that, according to its robust testing programme, which includes relevant sections of the MRCGP exam (the final test for a trainee GP), the AI bot's average pass mark was 81%. This is higher than the 72% average pass mark achieved by real doctors over the past five years.

These claims have been disputed by RCGP, which has stressed the point that no app or algorithm is able to do what a GP does.

What Does This Mean For Your Business?

Apps are being used in useful and value-adding ways in so many other sectors that it is no surprise they are being developed for healthcare, with the purpose of taking some of the burden off the NHS. For most people, the NHS is a trusted organisation, and an app that can essentially perform administrative functions, such as booking appointments, sounds as though it could be very useful. The trust that many have in the NHS may also be enough to minimise security concerns. One criticism, however, is that the app may exclude the older members of society, many of whom are regular users of NHS services.

Even though an AI app such as Babylon's may be able to pass theoretical exams, getting people to trust it to make a diagnosis and then offer health suggestions, particularly when it has been criticised by real doctors, may be a step too far for now. Babylon has, however, faced criticism before over its 'GP at Hand' app for the NHS, which allows patients at five London clinics to consult with their GP via a video call. The RCGP criticised it for cherry-picking patients and leaving GPs to deal with the most complex cases without sufficient resources.

Either way, the NHS is committed to digitising some aspects of its services, and in introducing technology, a balance needs to be struck between adding real value fairly for all and avoiding detriment to any NHS users or practitioners.

The ‘Deceived By Design’ report by the government-funded Norwegian Consumer Council has accused tech giants Microsoft, Facebook and Google of being unethical by leading users into selecting settings that do not benefit their privacy.

Illusion of Control

The report alleges that, far from actually giving users more control over their personal data (as laid out by GDPR), the tech giants may simply be giving users the illusion that this is happening. The report points to the possible presence of practices such as:

– Facebook and Google making users who want the privacy-friendly option go through a significantly longer process (privacy intrusive defaults).

– Facebook, Google and Windows 10 using pop-ups that direct users away from the privacy-friendly choices.

– Google presenting users with a hard-to-use dashboard with a maze of options for their privacy and security settings, and Facebook requiring 13 clicks to opt out of authorising data collection (while opting in can take just one).

– Making it difficult to delete data that’s already been collected. For example, deleting location-history data can require clicking through 30 to 40 pages.

– Google not warning users about the downside of personalisation e.g. telling users they would simply see less useful ads, rather than mentioning the potential to be opted in to receive unbalanced political ad messages.

– Facebook and Google pushing consumers to accept data collection e.g. with Facebook stating how, if users keep face recognition turned off, Facebook won’t be able to stop a stranger from using the user’s photo to impersonate them, while not stating how Facebook will use the information collected.

Dark Patterns

In general, the report criticised how the use of “dark patterns”, such as misleading wording, privacy-intrusive default settings, settings that give users an illusion of control, hidden privacy-friendly options, and “take-it-or-leave-it” choices, could be leading users to make choices that stop them from exercising all of their privacy rights.

Big Accept Button

The report, by Norway’s consumer protection watchdog, also notes how the GDPR-related notifications have a large button for consumers to accept the company's current practices, which could appear to many users to be far more convenient than searching for the detail to read through.

Response

Google, Facebook and Microsoft are all reported to have responded to the report’s findings by issuing statements focusing on the progress and improvements they’ve made towards meeting the requirements of the GDPR to date.

What Does This Mean For Your Business?

GDPR was supposed to give EU citizens much more control over their data, and the perhaps naive expectation was that companies with a lot to lose (in fines for non-compliance and in reputation), such as the big tech and social media companies, would simply fall into line and afford us all of those new rights straight away.

The report by the Norwegian consumer watchdog appears to be more of a reality check, showing how valuable our personal data is to the big tech companies and, according to the report, how willing those companies are to manipulate users and give the illusion of following the rules without actually doing so. The report suggests that these large corporations would rather force consumers to fight for rights that GDPR has already granted them.

The non-profit, global trade group, the Wi-Fi Alliance, has announced the commencement of the rollout of the new Wi-Fi Protected Access (WPA) protocol WPA3 which should bring improvements in authentication and data protection.

What’s Been The Problem?

There are estimated to be around 9 billion Wi-Fi devices in use in the world, but the current security protocol, WPA2, dates back to 2004. The rapidly changing security landscape has, therefore, left many Wi-Fi devices vulnerable to new methods of attack, fuelling the calls for the fast introduction of a new, more secure standard.

WPA2 Vulnerabilities

For example, WPA2, which is mandatory for Wi-Fi Certified devices, is known to be vulnerable to offline dictionary attacks to guess passwords. This is where an attacker can have as many attempts as they like at guessing Wi-Fi credentials without being on the same network. Offline attacks allow the perpetrator either to passively capture an exchange, or to interact with a user just once, before working out the password at their leisure. Using Wi-Fi on public networks under the current protocol has also left people vulnerable to 'man-in-the-middle' attacks and 'traffic sniffing'.
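To see why this matters, it helps to know that WPA2-PSK derives its master key from the passphrase using PBKDF2-HMAC-SHA1 with the network name (SSID) as the salt. The toy Python sketch below illustrates the offline dictionary attack in principle; the SSID, passphrase and wordlist are invented for illustration, and a real attacker would verify guesses against a captured four-way handshake rather than against the key directly:

```python
import hashlib

def derive_pmk(passphrase: str, ssid: str) -> bytes:
    # WPA2-PSK derives the Pairwise Master Key with PBKDF2-HMAC-SHA1,
    # 4096 iterations, 32-byte output, using the SSID as the salt.
    return hashlib.pbkdf2_hmac("sha1", passphrase.encode(), ssid.encode(), 4096, 32)

# Hypothetical network and passphrase, standing in for material an
# attacker could recover from a captured handshake.
ssid = "CoffeeShopWiFi"
captured_pmk = derive_pmk("sunshine1", ssid)

# The attack: test each guess offline, at full speed, with no further
# interaction with the network. A weak passphrase falls to a wordlist.
wordlist = ["password", "letmein", "sunshine1", "qwerty123"]
recovered = next((w for w in wordlist if derive_pmk(w, ssid) == captured_pmk), None)
print(recovered)  # prints: sunshine1
```

Because nothing rate-limits the guessing, the only real defence under WPA2 is a long, unguessable passphrase, which is exactly the weakness WPA3's SAE handshake is designed to remove.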

One key contributor to the vulnerability of Wi-Fi under the WPA2 standard is homes and businesses using obvious or simple passwords.

What’s So Good About The New Standard?

The new WPA3 standard has several advantages. These include:

It has been designed with the security challenges of businesses in mind, and has two modes of operation: Personal and Enterprise.

An optional mode (in WPA3-Enterprise) with the equivalent of 192-bit cryptographic strength, thereby offering a higher level of security than WPA2.

The addition of Easy Connect, which allows a user to add any device to a Wi-Fi network using a secondary device already on the network via a QR code. This makes the connection more secure and helps simplify IoT device protection.

WPA3-Personal mode offers enhanced protection against offline dictionary attacks and password-guessing attempts through the introduction of a feature called Simultaneous Authentication of Equals (SAE). Some commentators have suggested that it ‘saves users from themselves’ by offering improved security even if a user chooses a simpler password. It also offers ‘forward secrecy’ to protect past communications even if a password is later compromised.
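The ‘forward secrecy’ idea can be illustrated with a toy ephemeral Diffie-Hellman exchange. To be clear, this is not the SAE (‘Dragonfly’) handshake itself, just a minimal sketch of the underlying principle: each session uses freshly generated ephemeral values, so a later compromise of the long-term password cannot recover past session keys. The prime and generator below are toy values chosen for readability, not real-world parameters:

```python
import secrets

P = 2**127 - 1   # a Mersenne prime; toy size, NOT for real use
G = 5            # toy generator

def ephemeral_keypair():
    # A fresh private value is generated per handshake; it is never
    # reused, so nothing stored long-term can reconstruct it later.
    priv = secrets.randbelow(P - 2) + 2
    return priv, pow(G, priv, P)

a_priv, a_pub = ephemeral_keypair()   # the station (client)
b_priv, b_pub = ephemeral_keypair()   # the access point

# Each side combines its own private value with the other's public
# value; both arrive at the same session key, which an eavesdropper
# who saw only a_pub and b_pub cannot compute.
session_key_a = pow(b_pub, a_priv, P)
session_key_b = pow(a_pub, b_priv, P)
assert session_key_a == session_key_b
```

Discarding the ephemeral private values after the session is what makes recorded traffic unreadable even to an attacker who later learns the Wi-Fi password.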

In Tandem For The Time Being

The current WPA2 standard will run in tandem with WPA3 until the new standard becomes more widely adopted.

Protection Against Passive Eavesdropping

In June, the Wi-Fi Alliance also announced the rollout of Wi-Fi Enhanced Open, a certification program. This protects users of unauthenticated networks, e.g. in coffee shops, hotels and airports, against passive eavesdropping without needing a password, by providing each user with an individual encryption key that secures traffic between their device and the Wi-Fi network.

What Does This Mean For Your Business?

Wi-Fi security and the security of a growing number of IoT devices has long been a source of worry to individuals and businesses, particularly as the nature and variety of attack methods have evolved while the current security standard is 14 years old.

The introduction of an up-to-date standard / protocol that offers greater security, has been designed with businesses in mind, offers more features, and protects users from their own slack approach to security is very welcome. WPA3 will be particularly welcomed by those who use networks to send and receive very sensitive data, such as the public sector or financial industry.

The Samsung Galaxy S9, Galaxy S9+ and Note 8 are all reported to have been recently affected by a bug in the Samsung Messages app that sends out photos from the user’s gallery without their permission … to random contacts.

What Happens?

According to Samsung phone users on social media and the company’s forum, some users have been affected by a bug in the default texting app on Galaxy devices, Samsung Messages. Reports indicate that the bug causes Samsung Messages to text photos stored in a user’s gallery to a random person listed as a contact. The user is not informed that the pictures have been sent, or to whom, and there has even been one reported complaint that a person’s whole gallery was sent to a contact in the middle of the night!

Why?

Although there is no conclusive evidence concerning the cause, online speculation has centred on the bug being related to the interaction between Samsung Messages and recent RCS (Rich Communication Services) profile updates that have rolled out on carriers including T-Mobile. These updates have been rolled out to add updated and new features to the outdated SMS protocol e.g. better media sharing and typing indicators.

Acknowledged

Samsung is reported to have acknowledged the reports of problems and is said to be looking into them. Samsung has also reportedly urged concerned customers to contact it directly on 1-800-SAMSUNG, and the company is said to have been in contact with T-Mobile about the issue. T-Mobile is reported as saying that it is not a T-Mobile issue.

What Can You Do?

As well as contacting Samsung, and in the absence of any definitive news of a fix as yet, there are two main possible fixes that Samsung owners can pursue. These are:

To go into the phone’s app settings and revoke Samsung Messages’ ability to access storage. This should stop Messages from sending photos or anything else stored on the device.

Switch to a different texting app e.g. Android Messages or Textra. There are no (known) reports of these being affected by the same bug.

What Does This Mean For Your Business?

People pay a lot of money to get the latest phones and to get the right contracts to allow for the high volume of communications associated with business use. It is (at the very least) annoying, but more generally scary and potentially damaging that personal, private image files can be randomly sent. These photos could, for example, contain commercially sensitive information that could put a company’s competitive advantage at risk if sent to the wrong person. Also, some photos could cause embarrassment for the user and / or the subject of the photo, and could damage business and personal relationships if they fell into the wrong hands. Some photos sent to the wrong person, as well as compromising privacy, could pose serious security risks.

At a time when we acknowledge that photos of ourselves / our faces stored by e.g. CCTV cameras are our personal data, Samsung could find itself on the wrong end of GDPR-related and other lawsuits if found to be directly responsible for the bug and its results.

A new website has been launched by manufacturer Nextbase allowing drivers to upload their dash-cam footage of dangerous drivers they’ve filmed, thereby making it easy for drivers to submit their footage to the police.

Initiative

The initiative, which has already received widespread praise, allows owners of any brand of dash-cam, action camera, mobile phone or any other type of camera from any manufacturer to upload footage to the National Dash-Cam Safety Portal (NDSP), and then to send it on to the appropriate local England or Wales police force.

As well as uploading footage, drivers can use the free portal to submit witness statements, all of which are securely stored, and only viewable by the police force to which they are submitted.

How Does It Work?

Part of the Nextbase website, the portal at https://www.nextbase.co.uk/national-dash-cam-safety-portal/ shows a clickable map of England and Wales divided into regions. Drivers with footage to submit are asked to click on the region where the recorded incident took place, which reveals the police force for that region. Clicking on the relevant police force should then, provided that force has chosen to use the portal, send you to its website, where you can submit your statement and footage.

Drivers submitting footage are also prompted to contact their local force by email or by calling 101, and to email their witness statement to a given police email address in order to help speed up the process of reporting the incident.

Since the initiative is still in its early stages, many of the relevant police forces are not yet fully participating in the video-submitting system.

Dash-Cam Footage Can Be Used In Court

Dash-cam footage can provide useful information and evidence in court cases, and the first jail sentence for dangerous driving that used dash-cam footage as its main evidence was handed down in 2015.

Things To Remember

Drivers submitting footage and statements via the portal should be aware that by doing so they are filing an official police report, that the process can require them to take time to answer many questions, and that they may be required to appear in court. Also, if the footage shows the submitting driver breaking the law, e.g. speeding to catch up with and film the perpetrator, they may themselves be prosecuted.

The NDSP web page provides FAQs to answer questions about the type / quality of footage and the process.

What Does This Mean For Your Business?

Anyone who drives on UK roads, particularly as part of their job and / or their daily commute, is likely to have witnessed dangerous or irresponsible driving. Dash-cams have given drivers some fall-back protection against the reckless and / or criminal actions of others, and against potentially costly insurance implications. Footage provides something more than just testimony and conjecture.

The big advantage of the NDSP portal is that, for the first time, it provides a central point for drivers to go to submit footage, and it simplifies the process of submitting footage and statements to the correct police force.

Critics could argue, however, that this initiative could be promoting a trade-off between road safety and privacy, and could be encouraging a culture of citizen surveillance and suspicion.

For Nextbase, the portal (and the media reports about it) will provide some positive publicity if the system works properly and securely, and since it is part of their product website, could even lead to some more sales of dash-cams.

California-based vehicle tech corporation ‘Tesla’ is suing a former employee, whom some saw simply as a whistleblower, over alleged acts of industrial espionage.

Named

The former Tesla technician who stands accused by Tesla boss Elon Musk of industrial espionage has been named as Martin Tripp. The allegations made against Mr Tripp include that he was hacking and stealing company secrets, and that he wrote software that was designed to aid in the theft of photos and videos.

Tesla has also alleged that Mr Tripp was partly motivated to commit malicious acts against the company after he failed to get a promotion. Tesla has filed a federal lawsuit against him.

Tesla is also reported as saying that 40-year-old forces veteran Mr Tripp made false claims to the media about the information he (allegedly) stole, particularly concerning punctured battery cells, excess scrap material and manufacturing delays.

Whistleblower?

Far from being a criminal who meant the company harm, Mr Tripp claims that he is simply a whistleblower whom the company is trying to get rid of in order to cover up details about products / components that could damage its reputation if they became known.

For example, Mr Tripp claims that he has simply been trying to expose "some really scary things" at Tesla, including punctured batteries being used in vehicles. Mr Tripp has also alleged that he became disillusioned with Tesla when (as he alleges) he saw how Elon Musk was lying to investors about how many cars they were making.

Mr Tripp has also been reported as saying that he didn’t write any software to aid the theft of photos and videos because he has no patience for coding, and that he didn’t care about failing to get a promotion.

Tripp is looking for legal protection as a whistleblower.

Silencing a Scapegoat?

Mr Tripp has been reported as saying that he is being made a scapegoat because he provided information that was true, that Tesla are doing everything they can to silence him, and that he feels that he had no rights as a whistleblower.

The local Sheriff’s office is reported as announcing that there is no credible threat to Tesla’s lithium-ion battery factory, known as the Gigafactory.

Mr Tripp has been reported as saying that he turned whistleblower after his concerns were not taken seriously by anyone in the company.

What Does This Mean For Your Business?

It would certainly not be unheard-of for a disgruntled employee / former employee to pose a security risk or commit acts of sabotage. For example, back in 2014, Andrew Skelton, who was an auditor at the head office of Morrisons (supermarket chain) in Bradford, leaked the personal details of almost 100,000 staff. Mr Skelton is believed to have deliberately stolen and leaked the data in a move to get back at the company after he was accused of dealing in legal highs at work.

We are also familiar with how difficult companies / organisations and other interested parties can make life for whistleblowers, e.g. media reports about Dr Hayley Dare, who, after raising concerns over a patient’s safety with her employer (an NHS Trust), received poison-pen letters and was dismissed from a 20-year unblemished career via a three-line email.

In the case of Tesla, it is currently not possible to say whether Mr Tripp is a whistleblower or a disgruntled former employee with malicious intent. What it does remind us, though, is that company culture should be such that employees feel able to express their concerns and are listened to, and that this is viewed as a positive way to find improvements and modifications that could actually help a company in the long run.

The Tesla story should also remind companies to plug basic security loopholes in IT systems when employees leave or are dismissed. This includes simply changing passwords and access rights, and monitoring systems to ensure that nothing untoward is happening.

After numerous complaints over the last two years and even an online petition by a customer, Apple has decided to offer free repairs or replacements for the butterfly keyboard on its MacBook and MacBook Pro laptops.

What Happened?

For quite some time now, some MacBook and MacBook Pro laptop users have been complaining about problems they have experienced with the ‘Butterfly keyboard’. These problems have included letters or characters repeating unexpectedly, letters or characters not appearing, and keys feeling "sticky" or not responding in a consistent manner.

Petition and Lawsuit

The problems have been so bad that one user set up a Change.org online petition asking Apple to recall every MacBook Pro released since late 2016, and two fed-up Apple customers have filed a lawsuit against the company (both back in May) in a San Jose, California, federal court.

The petition, which attracted over 21,000 signatures, was set up by someone listed as Matthew Taylor, who claimed that every one of Apple's current-generation MacBook Pro models, 13in and 15in, is sold with a keyboard that can become defective at any moment because of a design failure. Mr Taylor is reported as saying that he believes that the problems are widespread and consistent, and can be infuriating for users.

The lawsuit has been brought by Zixuan Rao, of San Diego, California, and Kyle Barbaro, of Melrose, Massachusetts, who allege that Apple's model year 2015 or later MacBooks and model year 2016 or later MacBook Pros are defective.

Hands Up … Maybe

Apple has now held its hands up and acknowledged in a statement online, that the problems of characters repeating unexpectedly, letters or characters not appearing, and keys feeling "sticky" or not responding in a consistent manner “may” exist in a “small percentage” of its Butterfly keyboards.

Program

Apple has, therefore, launched a program which will mean that Apple or an Apple Authorised Service Provider will service eligible MacBook and MacBook Pro keyboards, free of charge. The type of service that Apple / the Apple Authorized Service Provider can offer will be determined after the keyboard has been examined, and Apple says that this may involve the replacement of one or more keys or the whole keyboard.

Eligible Models

Apple has released a list of models that are eligible for the repair / replacement program, available on the Apple website.

What Does This Mean For Your Business?

On the one hand, it is good news that Apple is prepared to repair / replace keyboards free of charge. On the other hand, some would say that it’s a shame that it has taken two years, thousands of complaints, a petition with tens of thousands of signatures, bad publicity, and even a lawsuit to bring Apple to the point of admitting that there “may” be a problem with the keyboards that warrants a free repair / replacement program, at some cost to Apple.

It is all-too-common in the technology industry for products (usually software) to be distributed before all the bugs have been discovered and ironed-out or patched. In this case, many Apple customers were clearly saying that their keyboards didn’t work as they should, and it is this kind of thing that can turn happy customers into very vocal critics of a company. For businesses that have been affected by the problem, the repair / replacement program is likely to be welcome … but long overdue.

If you / your business has been affected by the problem, the advice from Apple is to first back up your data, then find an Apple authorised service provider and make an appointment at an Apple Retail Store (or send your device by mail to the Apple Repair Centre). Apple says that your MacBook or MacBook Pro will be examined prior to any service to verify that it is eligible for this program, and examination will determine the type of service, or whether a replacement will be needed. It is estimated that the service could take a few days, and Apple says that the program covers eligible MacBook and MacBook Pro models for 4 years after the first retail sale of the unit.

It has been reported that financial market regulators from the US, the UK and Asia are pressing for an exemption from GDPR.

Growing Calls For Exemption

Even though GDPR only came into force a little over a month ago (May 25th), financial regulators from several countries, most notably the US, have for several years been pressing for an exemption to be built in, and have hosted multiple meetings on the matter on both sides of the Atlantic.

What’s The Problem?

Before GDPR, financial regulators could rely on an exemption to share vital information, e.g. bank and trading account data, to advance misconduct probes. Now that GDPR is in force, regulators argue that the absence of an exemption means international probes and enforcement actions in cases involving market manipulation and fraud could be hampered.

Regulators say they are particularly concerned that U.S. investigations into cryptocurrency fraud and market manipulation (in which many of the actors are based overseas) could be at risk. Without an exemption, regulators say, cross-border information sharing could be challenged because some countries’ privacy safeguards now fall short of those offered by the EU under GDPR.

Seeking An “Administrative Arrangement”

The form of exemption that regulators are reported to be seeking is a formal “administrative arrangement” with the Brussels-based European Data Protection Board (EDPB), headed by Andrea Jelinek. The written arrangement would clarify if and how the public interest exemption can be applied to their cross-border information sharing.

Which Regulators?

Reports indicate that the regulators involved in discussions about getting an exemption include the EU’s European Securities and Markets Authority (ESMA), the U.S. Commodity Futures Trading Commission (CFTC), the Securities and Exchange Commission (SEC), the Ontario Securities Commission (OSC), the Japan Financial Services Agency (FSA), Britain’s Financial Conduct Authority (FCA), and the Hong Kong Securities and Futures Commission (SFC).

Why Not?

The worry for the EDPB is that granting exemptions could lead to the illegitimate circumventing and watering down of the new GDPR privacy safeguards, now among the toughest in the world. This, in turn, could harm EU citizens, which is exactly the opposite of the reason GDPR was introduced.

The matter is, however, complicated by the fact that regulators’ slow response to the 2007-2009 global financial crisis was partly blamed on poor cross-border coordination, which has since improved, and better information sharing after the crisis is reported to have led to billions of dollars in fines for banks, e.g. for trying to rig Libor interest rate benchmarks.

What Does This Mean For Your Business?

A financial crisis (e.g. involving bad behaviour by banks) can create serious knock-on costs and problems for businesses worldwide, and it is, therefore, possible to see why financial regulators feel they need an exemption so that they can continue to share information which will ultimately be in the interest of business and the public. It is likely, therefore, that discussions will continue for some time yet to try to find a way to grant exemptions in certain circumstances.

The contrary view is that granting exemptions will water down legislation that was designed to offer stronger protection to us all, potentially putting EU citizens at risk and allowing organisations that we cannot effectively monitor to simply circumvent the new law and behave as they like.

Privacy groups have led calls to halt the blanket collection and storing of communications data in the EU area, and the creation and storing of the “audio signatures” of 5.1 million people by HM Revenue and Customs (HMRC).

Collection of Communications Data

The privacy groups Privacy International, Liberty, and Open Rights Group, have filed complaints to the European Commission which call for EU governments to stop making companies collect and store all communications data. Their complaints have also been echoed by dozens of community groups, non-governmental organisations (NGOs), and academics.

What’s The Problem?

The main complaint is that communications companies in EU states indiscriminately collect and retain all of our communications data. This includes the details of all calls, texts and so forth (i.e. who with, dates, times etc).

The privacy groups and their supporters argue that not only does this amount to a form of intrusive surveillance, but that the practice was actually ruled unlawful by the Court of Justice of the European Union (CJEU) in two judgments in 2014 and 2016.

Privacy groups have expressed concern that some companies in some EU states have tried to circumvent the CJEU judgments, and the CJEU has clearly stated that general and indiscriminate retention of communications data is disproportionate and cannot be justified.

In the UK, for example, the intelligence agencies collect details of thousands of calls daily, which, under the CJEU judgments, amounts to breaking the law.

HMRC Collecting Recordings of Voices

Perhaps even more shocking is the news this week that, according to privacy group Big Brother Watch, the UK HM Revenue and Customs (HMRC) has a Voice ID system that has collected 5.1 million audio signatures.

The accusation is that HMRC is creating biometric ID cards or voiceprints by the back door. These voiceprints could conceivably be used by government agencies to identify UK citizens across other areas of their private lives.

Big Brother Watch has also expressed concern that customers are not given the choice to opt out of the use of this system.

Helpful and Secure

HMRC, which launched the Voice ID scheme last year, asks callers to repeat the phrase "my voice is my password" to register and access their tax details, and says that the system has been very popular with customers. HMRC has also said that the 5 million+ voice recordings that it already has are stored securely.

Privacy campaigners are calling for the deletion of the voiceprints that are currently stored, and for a different system to be implemented, or to at least allow customers to opt out of Voice ID and to be able to use an alternative method.

What Does This Mean For Your Business?

Businesses may be very aware, after having to adjust their own systems to comply with the recently introduced GDPR, that all EU citizens should now have more rights over what happens to their personal data. The term 'personal data' in the GDPR sense now covers things like our images on CCTV footage, and should, therefore, cover recordings of our personal conversations and biometric data such as our voiceprints / audio signatures.

While we may accept that there are arguments for monitoring our communications data e.g. fighting terrorism, many people clearly feel that the blanket collection of all communications data, not just that of suspects, is a step too far, is an invasion of privacy, and has echoes of ‘big brother’.

Biometrics, e.g. using a fingerprint or face-print to access a phone or as part of security to access a bank account, is now becoming more commonplace, and can be a helpful, more secure way of validating / authenticating access. Again, images of our faces, our fingerprints, and our audio signatures (in the case of HMRC) are our personal data, and it is right that we should want them to be secure and, as with GDPR, used only for the one purpose we have consented to, not passed secretly among states and unknown agencies. Also, the idea that we can opt in or out of such systems, and are given a choice of which system we use (i.e. not being forced to submit a voice recording), is an important issue, and one that many thought GDPR would address.

As more and more biometric systems come into use in the future, legislation will, no doubt, need to be updated again to take account of the changes.