Category Archives: Privacy


This is our first blog of 2015 and we’d like to wish all the readers of SecurityWatch a very Happy New Year!

So what are the predictions for cybersecurity this year? More open source software bugs, vulnerabilities in mobile payment systems, IoT attacks, and so on. Apart from these issues, there is one global concern that is ongoing and undoubtedly growing – PRIVACY.

Surveillance issues are at the forefront due to rising terrorist activity. Such activity, which could pose a threat to a nation or its people, compels governments (or so they claim) to keep a close eye on everything that crosses the wire within their remit.

Not long ago, such operations were conducted covertly. But the NSA and GCHQ revelations by Edward Snowden, starting in June 2013, were an eye-opener for many. An international survey on Internet security and trust, covering 23,376 Internet users in 24 countries, reported that 60% of respondents had heard of Edward Snowden, and that 39% of those ‘have taken steps to protect their online privacy and security as a result of his revelations’ – a considerable number.

Recently the UK’s prime minister announced that, if re-elected, he would block chat messengers that support end-to-end encryption (such as WhatsApp, iMessage, Telegram, Cyberdust, etc.) as part of the new surveillance powers he proposed in the wake of the Charlie Hebdo shootings in Paris. It seems the onus is now on citizens to assist governments by sacrificing their privacy, as opposed to governments putting more resources into tackling terrorist threats.

And it isn’t just governments that are ready to get their hands on any personal information available over the wire; other actors are involved as well. Cyber theft is escalating, and stolen information is being sold on the deep web and darknet for financial gain. Moreover, companies track users and their activities more closely than ever before in order to boost sales.

Such growing interest in personal information for malicious purposes compels us to think more and more about protecting our privacy online in the internet era. This Hindi proverb, in my view, explains it well –

Which means – marriage is like a delicious, tempting sweet: the one who consumes it suffers, as does the one who doesn’t (unless you absolutely hate sweets)! The same is entirely true if we substitute the Internet for marriage. Anyone using the internet needs to be cautious and must take proactive measures to protect their privacy if they want to have a good relationship with it!

There are already complaints being lodged and measures being taken to strengthen privacy regulation in Europe. Among them is the “Right to be Forgotten” ruling (C-131/12), which states that a search engine must delete information, along with the links to it, when it receives a specific request from the person affected.

Some internet users, especially the younger generation, might think of privacy as nothing more than changing their Twitter or Facebook settings to restrict feeds and pictures to contacts.

Privacy is a fundamental human right. This is acknowledged by Article 8 of the European Convention on Human Rights, which provides a right to respect for one’s “private and family life, his home and his correspondence”. The Charter of Fundamental Rights of the European Union and Universal Declaration of Human Rights have similar sections on privacy protection.

However, not every fundamental right that a citizen possesses is set out in a country’s constitution. For example, in Ireland, the Constitution does not specifically state a right to privacy but the courts recognize that the personal rights in the constitution imply the right to privacy.

Privacy is an integral element of democratic societies, and this applies to the digital world as well. Digital technologies may be designed to protect privacy, and technologies with embedded privacy features have been proposed since the 1980s. At the time, deploying Privacy Enhancing Technologies (PETs) – e.g. encryption, protocols for anonymous communications, attribute-based credentials and private search of databases – was seen as the solution, as opposed to embedding privacy into the design of the technology itself. However, apart from a few exceptions such as encryption, PETs haven’t really become a standard or widely used component in system design.

Most of us may have heard of the relatively newer concept of Privacy by Design (PbD), which has been around for a few years now. It was developed by the former Information and Privacy Commissioner of Ontario, Dr. Ann Cavoukian, back in the 1990s. Dr. Cavoukian argued that “the future of privacy cannot be assured solely by compliance with legislation and regulatory frameworks; rather, privacy assurance must ideally become an organization’s default mode of operation.”

Privacy by Design is believed to be accomplished by practicing its 7 Foundational Principles, which have been translated into over 30 languages:

Proactive not Reactive; Preventative not Remedial

Privacy as the Default Setting

Privacy Embedded into Design

Full Functionality – Positive-Sum, not Zero-Sum

End-to-End Security – Full Lifecycle Protection

Visibility and Transparency – Keep it Open

Respect for User Privacy – Keep it User-Centric
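As a small illustration of the second principle, “Privacy as the Default Setting”, a system can ship its settings object with the most protective values, so that sharing is strictly opt-in rather than opt-out. This is only a sketch; the class and field names are invented for illustration and don’t correspond to any real product.

```python
from dataclasses import dataclass

@dataclass
class ProfileSettings:
    """Hypothetical account settings: every field defaults to the
    most privacy-protective value, making sharing strictly opt-in."""
    profile_public: bool = False       # profile hidden by default
    share_location: bool = False       # no location sharing unless enabled
    analytics_opt_in: bool = False     # no usage tracking unless enabled
    searchable_by_email: bool = False  # not discoverable by default

# A new user gets the safest configuration without doing anything:
settings = ProfileSettings()
assert not any(vars(settings).values())

# Sharing anything requires an explicit, deliberate action:
settings.share_location = True
```

The point is that no action (or inaction) on the user’s part should ever weaken their privacy – exactly what the principle demands.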

Privacy is a challenging subject that covers a number of domains, including law, policy and technology. Some believe that the concept of Privacy by Design is too vague and, since it focuses on the role of the system designer rather than that of the actual data holder, that it is not applicable in privacy law.

Despite the criticism, Privacy by Design has been globally recognized and adopted. The U.S. Federal Trade Commission recognized Privacy by Design in 2012 as one of its three recommended practices for protecting online privacy. In addition, a variation of the concept, known as ‘Data protection by Design’ has been incorporated into the European Commission plans to unify data protection within the European Union with a single law – the General Data Protection Regulation. The variation apparently goes beyond mere technical solutions and addresses organisational procedures and business models as well. However, since the proposal does not explicitly define or give references for definitions of either data protection by design or privacy by design, the precise meaning of these concepts is nebulous.

In an effort to encourage the adoption and implementation of privacy by design, and to provide guidance on privacy engineering practices, several bodies have taken initiatives.

European Commission

In January 2012 the European Commission proposed a regulation on data protection to replace the existing Data Protection Directive. The proposal generally associates the requirements for data protection by design and by default with data security, and contains specific provisions relevant to Privacy by Design and by Default.

European Union Agency for Network and Information Security (ENISA)

In December 2014, the European Union Agency for Network and Information Security (ENISA) published a report elaborating on how privacy by design can be implemented with the help of engineering methods. According to the ENISA report:

“The principle “Privacy/Data Protection by design” is based on the insight that building in privacy features from the beginning of the design process is preferable over the attempt to adapt a product or service at a later stage. The involvement in the design process supports the consideration of the full life-cycle of the data and its usage.”

The report is intended for data protection authorities, policy makers, regulators, engineers and researchers. It discusses the notion of a privacy design strategy, and how it differs from both a design pattern and a PET. Moreover, the report briefly summarizes the eight privacy design strategies as derived by Hoepman from the legal principles underlying data protection legislation for both data and processes. It also provides a list of privacy implementation techniques.

The report also identifies and highlights some limitations of privacy by design. The predominant ones are: the fragility of privacy properties when two systems are combined or one is embedded in the other; the absence of a general, intuitive metric for comparing two systems of the same or similar functionality with respect to a set of privacy properties; the increased complexity and reduced utility of the resulting system; and differing interpretations of privacy by design.

National Institute of Standards and Technology (NIST)

A similar initiative is underway at NIST, called the Privacy Engineering initiative. It focuses on providing standards-based tools and privacy engineering practices to help evaluate the privacy posture of existing systems, enable the creation of new systems that mitigate the risk of privacy harm, and address privacy risks in a measurable way within an organization’s overall risk management process. The organization published a draft last April – NIST Privacy Engineering Objectives and Risk Model Discussion – in which a definition of privacy engineering was proposed:

“..a collection of methods to support the mitigation of risks to individuals of loss of self-determination, loss of trust, discrimination and economic loss by providing predictability, manageability, and confidentiality of personal information within information systems.”

However, to our knowledge this is not the final accepted definition, and a meeting to update the draft will be held in February 2015.

Although such initiatives were long overdue, these standards, regulations and guidelines can only take us so far when it comes to protecting our privacy amid today’s technological transformations and rising cyber security threats. Nevertheless, using the right means with the right technology, and embedding privacy and data protection in the way we design and build solutions, could certainly help protect our identities in this crazy world of the internet.

‘Ello guv’nor, I heard you sold your kid for WiFi. Perhaps I could interest you in another good deal? It’s called tech for privacy and I know you’re gonna luv it.

Having decided to pass on the logical numbering of the next rendition of Windows, Microsoft’s new operating system will be called Windows 10.

In a move many see as an attempt to put the memory of the not-so-popular Windows 8 behind it, the company is all steam ahead as it marches toward the inevitable retail release of its replacement.

In the meantime, however, early adopters can grab a technical preview to see how Redmond has accommodated Start button-loving fans of its arguably much better Windows 7.

Being one of the first people to get your hands on a new operating system may sound pretty cool but that will only be the case if you read the privacy policy first (something you should always do before installing new software).

Why?

Because Microsoft sharing the tech preview with you is a reciprocal agreement which sees your data travel back in the opposite direction.

Specifically, the Windows Insider Programme policy says,

“Microsoft collects information about you, your devices, applications and networks, and your use of those devices, applications and networks. Examples of data we collect include your name, email address, preferences and interests; browsing, search and file history; phone call and SMS data; device configuration and sensor data; and application usage.”

While the sheer volume of collectible data is staggering – far beyond what I, for one, would be happy to give up if I had a choice – it is standard fare these days, more’s the pity.

More disconcerting, though, are the two following entries:

“We may collect information about your device and applications and use it for purposes such as determining or improving compatibility” and “[when you] use voice input features like speech-to-text, we may collect voice information and use it for purposes such as improving speech processing.”

and

“If you open a file, we may collect information about the file, the application used to open the file, and how long it takes, and use [it] for purposes such as improving performance, or [if you] enter text, we may collect typed characters and use them for purposes such as improving autocomplete and spellcheck features.”

Did that sink in?

If not, read it again and you will see that signing up for the Windows 10 preview means giving Microsoft permission both to record your voice – specifically, what you say – and to collect everything you type on your keyboard.

In other words, you will be voluntarily installing voice and keystroke loggers onto any system running this version of Windows.

Ouch!

There is no word on whether the privacy policy will be similarly worded when bundled with the final version and I suspect, and hope, that it won’t – I’d like to think that Microsoft is merely gathering so much data to help it make improvements to the new operating system before its retail release.

But there are no guarantees of anything these days, especially where technology is concerned and, likewise it seems, in the realm of data gathering.

So my advice is to research Windows 10 thoroughly upon its general release, and to check out its privacy policy in its entirety, before letting it anywhere near any of your devices.

Alas, most people will not do so. After all, the latest tech is often so enticing that people will do the craziest things to get on the bandwagon.

From glassholes (not you Neira, you’re cool) to joggers with glorified digital watches, people everywhere are getting excited about the next big thing in what I would describe as self-eroding privacy.

Whilst Google Glass owners may be in short supply, possibly put off by the cost, the number of people owning health and fitness gizmos seems to be on the rise, aided and abetted by other cool-to-have devices such as the newly released iPhone chunky that can help tap into all that data.

In some ways I can see why the ability to monitor fitness metrics could be quite enticing, allowing users to set their own goals and motivate themselves by stretching their targets or competing with others.

That said, some performance measurements can lead to disappointment if you get into e-competition with other people who may have published their own results online, either intentionally or inadvertently (yes lads, two minutes of moderate exertion is pretty lame, or at least that’s what she said).

And that’s the problem you see – some health, wellness and fitness data should remain private from your family and even the lads or ladies down the pub. And I’m not just talking about the obvious faux pas linked to above either – other data really shouldn’t be common knowledge in my opinion, or at least not so common that it appears on the web.

Comparing heartbeats and other metrics at the gym could be a good thing but sharing such data with a mechanism that is easily scoured and mined by who knows who is not so good is it? I mean, would you want your insurance company to know that you are a 30-year-old with the fitness level of a pensioner? It’s ok, I know it’s not your fault, it’s all that sitting at a desk and the pizzas, well, they’re just too nice. But what would an underwriter think? Higher premiums perhaps? I don’t see why not.

After all, who are you sharing that data with? Do you even know? Has the app developer made it clear during the signup and installation routine? Did you even bother reading all that gumpf when you downloaded it?

Does the app developer have a social networking aspect where you can share and compare data? Who has access to what? Is the data made public such as in the example above where ‘performance’ data appeared in Google search results? Are data-storing websites secure? Does your smartwatch company sell your data to third parties or share it with them?

So many questions, all of which could have a huge impact on your privacy.

And just what benefits are you getting anyway?

Is your health improving? Will a wearable make you fitter? Surely self-motivation is key, not technology.

And what does your doctor make of all this data you are producing about your health? Not much, to be honest. In fact a new survey of physicians here in the UK highlights a potential problem with the new army of high-tech health buffs – many are self-diagnosing but they’re not very good at it.

In fact, less than 5% of doctors thought that health apps and websites offered any kind of value as patients start taking it upon themselves to figure out their own health and fitness routines or even research their own perceived medical conditions.

Heaven forbid that someone would take the advice of a watch over their GP, but I guess it’s happening already and will only become more commonplace in the future.

In case you haven’t guessed already, I don’t like wearable tech. It’s too invasive by nature, and the data it produces is arguably not secure or private enough by default, never mind if someone ever decides to target it. And its usefulness? For some people such devices could be invaluable in enhancing their training routines, but then I would guess those people would probably do OK without them anyway. For everyone else? What do you think?

Well, ok, maybe not 5 seconds. But your latest Facebook post could soon be gone in a timescale chosen by you (well, ok, anywhere between 1 hour and a week).

A small number of users spotted the new feature in a Facebook iOS app earlier this week which allows users to set a deletion date at the time they create a new post.

A Facebook spokesperson confirmed the existence of the trial feature, saying:

“We’re running a small pilot of a feature on Facebook for iOS that lets people schedule deletion of their posts in advance.”
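Mechanically, a feature like this only needs each post to carry an optional expiry timestamp, plus a periodic purge pass over the store. Here is a minimal sketch; the class, method names and in-memory storage are invented for illustration and say nothing about Facebook’s actual implementation:

```python
import time

class Feed:
    """Toy post store where each post may carry a delete-at timestamp."""

    def __init__(self):
        self._posts = {}   # post_id -> (text, expires_at or None)
        self._next_id = 0

    def publish(self, text, ttl_seconds=None):
        """Create a post; if ttl_seconds is given, mark when it expires."""
        expires = time.time() + ttl_seconds if ttl_seconds else None
        self._next_id += 1
        self._posts[self._next_id] = (text, expires)
        return self._next_id

    def purge_expired(self, now=None):
        """Remove every post whose expiry time has passed."""
        now = time.time() if now is None else now
        for pid in [p for p, (_, exp) in self._posts.items()
                    if exp is not None and exp <= now]:
            del self._posts[pid]

    def visible(self):
        return [text for text, _ in self._posts.values()]

feed = Feed()
feed.publish("permanent post")
feed.publish("gone in an hour", ttl_seconds=3600)
feed.purge_expired(now=time.time() + 7200)  # pretend two hours passed
# feed.visible() now contains only "permanent post"
```

Of course, as the rest of this post argues, deleting a post from a user-facing store says nothing about copies kept elsewhere – in backups, logs or screenshots.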

Small-scale trials of new features are nothing new for the social networking giant, which is constantly looking to evolve. Facebook users will be grateful, however, that this one is not as secret as, say, testing how users react to positive and negative news – the secret emotion experiment that recently surfaced and did little to enhance the reputation of a company many fail to associate with privacy protection.

That said, Facebook may be learning what its users want, as evidenced by the recent addition of the ‘privacy dinosaur’ aka the new privacy checkup tool.

So, that means all users will be able to self-destruct all of their postings in the future, wiping them off Facebook’s servers for ever more, right?

Well, before you start seeing Facebook as a place for posting questionable or sensitive content, consider that the answer to that question is not clear – it looks like the removal from a user’s timeline will be permanent, but I’d be very surprised if Facebook let anything fall off its own servers (we know, for example, that it keeps a record of anything typed into the status box, regardless of whether the user subsequently decides to publish it).

Then of course there is the fact that virtually nothing shared on the web is private ever again anyway – the kids of today ain’t half bright you know and they can take screenshots and everything.

So before you even start contemplating using Facebook’s potential new service, or the Slingshot app, or Snapchat to post something you otherwise may have kept to yourself (or should have) remember that nothing that is published can be unpublished and privacy can sometimes be an illusion. Or, as my boyhood hero Michael Praed would say, “Nothing is forgotten. Nothing is ever forgotten.”

Aaargggh, no thanks, that sounds like a mighty stressful and bank balance-busting exercise in futility to me.

But then again, I’m not Microsoft so perhaps I’ve got good reason to not want to end up in front of a judge. Not that I’ve done anything wrong of course. Honest. Just ask GCHQ – its minority report division already knows I’m a saint now and will continue to be so in the future too.

Microsoft, however, is so keen to have its say in court that it has invited proceedings upon itself. Kind of.

After US authorities made demands over emails stored on a Microsoft server in Dublin, Ireland, the software giant said no dice and has now taken the unusual step of asking the US government to hold it in contempt of court so that it can accelerate the privacy-based case onto the appeals stage.

The case centres on a series of emails said to be relevant to an investigation into drug trafficking. Despite the potential gravity of that case, Microsoft disagrees with the government’s view that data held overseas is there to be grabbed, instead arguing that US jurisdiction should end at the country’s physical borders.

An outstanding warrant, about which almost nothing is known publicly, has caused Microsoft much consternation with the company promising to appeal any adverse ruling “promptly.” The company objected to the search on many levels, including the fact that it believes an existing precedent applies:

“The U.S. has entered into many bilateral agreements establishing specific procedures for obtaining physical evidence in another country including a recently-updated agreement with Ireland. We think the same procedures should apply in the online world.”

In a blog post, the company also highlights how it is taking the moral high ground in making a stand for privacy and also cites backers such as Apple, Cisco and the EFF.

None of this is to say that Microsoft feels it is above the law though, merely that it believes that government should play by the rules and follow established processes:

“We appreciate the vital importance of public safety, and we believe the government should be able to obtain evidence necessary to investigate a possible crime. We just believe the government should follow the processes it has established for obtaining physical evidence outside the United States.”

Now, after some procedural confusion, US District Judge Loretta Preska has found Microsoft in contempt, allowing the company to proceed with its appeal immediately. Meanwhile Microsoft has come to an agreement with the Department of Justice that allows it to escape punishment for that ruling, though the government said it retains the right to seek sanctions at a later date if it feels it necessary to do so, with the full stipulation saying:

Microsoft has not fully complied with the warrant, and Microsoft does not intend to comply while it in good faith seeks further review of this Court’s July 31 decision rejecting Microsoft’s challenge to the Warrant.

While Microsoft continues to believe that a contempt order is not required to perfect an appeal, it agrees that the entry of an order of contempt would eliminate any jurisdictional issues on appeal. Thus, while reserving its rights to appeal any contempt order and the underlying July 31 ruling, Microsoft concurs with the Government that entry of such an order will avoid delays and facilitate a prompt appeal in this case.

The parties further agree that contempt sanctions need not be imposed at this time. The Government, however, reserves its right to seek sanctions, in addition to the contempt order, in the case of (a) materially changed circumstances in the underlying investigation, or (b) the Second Circuit’s issuance of the mandate in the appeal, if this Court’s order is affirmed and Microsoft continues not to comply with it.

Personal data from Facebook, Twitter and other social media sites will be monitored more by employers over the next decade, according to a new report from PwC, which says that one third of young people would happily trade in their privacy in return for a little job security.

The Future of Work: A Journey to 2022 report surveyed 10,000 workers around the world, as well as 500 human resources professionals, in order to gauge their attitudes towards their social media use being monitored by their employers.

The report suggests that data available through Facebook, Twitter and other social channels could be used by employers to gain an insight into what motivates their workforce, along with other information, including why staff change jobs and what could be done to improve their wellbeing within the organisation.

“Just as advertisers and retailers are using data from customers’ online and social media activity to tailor their shopping experience, organisations could soon start using workers’ personal data (with their permission) to measure and anticipate performance and retention issues.

This sort of data profiling could also extend to real-time monitoring of employees’ health, with proactive health guidance to help reduce sick leave. Key to the success of organisations being able to use employee data will be developing measurable benefits for those who hand over their data and building trust through clear rules about how data is acquired, used and shared.”

According to the research, half of the global workforce will be aged 32 or under by 2020, bringing a shift in attitudes towards the use of technology and personal data. The PwC report says these younger workers are far more relaxed about sharing data than previous generations, with 36% saying their employer is welcome to their personal data.

Whilst I can see why an employer would love to gain access to an employee’s social postings, either by viewing what is publicly available or via explicit consent, I struggle to see how the staff member gains from such an agreement.

By giving an employer permission to access their social media accounts, an individual would be giving up their privacy for very little return. The employer would gain all sorts of insight into how their staff think and what they do with their time away from the workplace, but I fail to see how that could be used to motivate them further or increase their sense of wellbeing. From the employee’s point of view I can see nothing to gain whatsoever; how giving up access to their social media accounts would lead to the claimed increase in job security, I do not know.

This just seems to be another case of the general populace giving up their rights for very little in return. Or, as Benjamin Franklin might have said, “Those who surrender their social media accounts for job security will not have, nor do they deserve, either one.”

Considering the laid back attitude many youngsters have towards the sharing of their personal data these days I do wonder if, in the future, that approach will come back to bite them where it hurts.

If you have some old tech you want to sell, eBay may be your first port of call. As much as I dislike the site and some of its practices, it still puts unwanted goods in front of a huge number of eyeballs. But the problem is that it has created a marketplace that appeals to a massive number of people, many of whom are not as security conscious as perhaps they could be.

I myself bought a second-hand laptop some years ago, only to discover that the previous owner had made absolutely no attempt to clear his private data from the machine. I discovered his favourite websites (I hope he visited THAT site when his wife wasn’t around), I know who he banked with, I wasn’t partial to his taste in music, but I did agree strongly with the Liverpool FC background he left on it.

Ultimately, what I learned is that some people lack the security awareness, or are too lazy, to wipe their personal data from computers and other devices before disposing of them via an auction site or the local tip. Based upon hard drives I’ve been given by friends, it is a widespread problem which we can only hope to eradicate by raising the issue and educating people.

But sometimes education isn’t enough.

Take the Hudl tablet for example. Ken Munro of Pen Test Partners recently conducted an experiment, in conjunction with the BBC, in which he examined the data deletion systems on Android devices.

Purchasing second-hand Hudls from eBay, Munro discovered that even those previous owners who had wiped the device before shipping were at risk of having their confidential data accessed.

Munro found that the device retained information even after a factory reset due to a flaw in the Rockchip processor’s firmware. The known bug allowed him to read and write to the device using freely available software. Extracting information only took minutes but the analysis of the data typically took a couple of hours per machine. Once done, however, Munro was able to determine PIN codes, wi-fi keys, cookies and other browsing data that would have allowed him to spoof the original owner.

A Tesco spokesperson told the BBC that:

“Customers should always ensure all personal information is removed prior to giving away or selling any mobile device. To guarantee this, customers should use a data wipe program.”

The spokesperson went on to say that any Hudls returned to Tesco would be securely wiped by the company, but urged users to visit the Get Safe Online website if they have any further privacy-related concerns.

Marc Rogers, principal researcher at Lookout, explained further, saying that a secure wipe should be used before disposing of any data-storing device. Such a wipe will overwrite all onboard memory with ones and zeroes, rendering it useless to any third party that later tried to access it. Unfortunately though, most manufacturers have adopted a different approach to factory resets he said:

“There’s an Android function to wipe data and most manufacturers are using that. But all that does is remove the index of where data is and does not delete data at all.”
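Rogers’ distinction – a normal delete only drops the index entry, while a secure wipe overwrites the bytes themselves – can be illustrated at the file level. This is a simplified sketch, not a substitute for a proper wiping tool: on wear-levelled flash storage (like a phone’s), overwriting a file does not guarantee stale copies are gone, which is why full-disk encryption followed by a key wipe is generally the more robust approach.

```python
import os
import secrets

def secure_wipe(path, passes=2):
    """Overwrite a file's contents in place before unlinking it, so the
    old bytes can no longer be read back from that file's blocks.
    (Wear levelling on flash may still retain stale copies elsewhere.)"""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(secrets.token_bytes(size))  # random-data pass
            f.flush()
            os.fsync(f.fileno())
        f.seek(0)
        f.write(b"\x00" * size)                 # final zero pass
        f.flush()
        os.fsync(f.fileno())
    os.remove(path)

# Demo: write a "secret", then wipe it.
with open("secret.txt", "wb") as f:
    f.write(b"PIN 1234, wifi key hunter2")
secure_wipe("secret.txt")
```

Contrast this with a bare `os.remove(path)`, which – like the Android factory reset described above – only removes the reference to the data, leaving the underlying bytes recoverable until they happen to be overwritten.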

Lookout also noted that, according to police, the average underground price for a second-hand smartphone with personal data on it is around £600, which just goes to show the potential value of that data to the crook who ends up buying it.

As sales of smartphones and tablets increase, due in part to their convenience and portability, owners are likely to entrust more and more data to them. When those devices are sold on, the selfies left in memory may give the new owner a few chuckles, but the banking data, credit card numbers and less-than-safe-for-work snaps may leave the original owner with something far more tangible than the thought of a stranger laughing at them.

So, if you are selling a Hudl, or any other device that has previously held your personal data, ensure that you wipe it securely before placing that listing.

In the same week that Google announced that it will give a search ranking boost to security-conscious websites, Yahoo has now revealed that it too will take a proactive stance on encryption.

The company announced at Black Hat that it will apply end-to-end encryption to its email services before the end of 2015.

The move is likely in response to the Edward Snowden revelations about government surveillance that have prompted many tech firms to assess their stance on privacy and encryption.

Thus far, Google has taken the biggest strides, with the aforementioned ranking change following previous announcements of support for end-to-end encryption in its Mail, Drive and Search products.

The change will likely be welcomed by Yahoo’s 273 million email account holders who had previously been left behind as other email providers adopted encryption.

Yahoo’s encryption will not hide details such as who has emailed who, or the contents of the subject line, but the contents of the message will be covered by a version of PGP encryption which has so far not been cracked.

“We have to make it clear to people it is not secret that you’re emailing your priest. But the content of what you’re emailing him is secret.”

PGP relies upon both the sender and receiver of an email having their own encryption keys, which could potentially lead to problems similar to those experienced at Lavabit, the secure email service that closed down after being forced to hand its keys over to the authorities.

Yahoo and Google, however, both claim that they will not hand keys over, not least because they are massive companies with the funds required to finance a large number of lawyers, with Stamos saying:

“That’s very different from a publicly traded multibillion dollar company with an army of lawyers who would love to take this argument all the way to the Supreme Court.”

Mark James, security specialist at ESET, welcomed the news but pointed out that the average man in the street may not understand how to take advantage of the change:

“It’s great that two of the largest internet email providers will be offering us the ability to send end-to-end encrypted emails to each other. After Google announcing it was doing the same thing a few months ago it is good to see another leading email provider following suit.

It won’t mean a lot to the average user but anyone who wants to protect their emails when using these providers will be able to do so by using these browser extensions.

So what does it actually mean? Well once the browser extension is added and configured you will be able to send an email with the contents completely scrambled to anyone except the sender and receiver. No one will be able to read the content. There are many encryption tools available for those that want to install and use them but for the average user they are often scary to set up. I for one welcome any type of “easy” security.”

I personally hope that Yahoo and Google do make their email encryption easily understandable by less savvy web users, though, because we seemingly live in a society where having nothing to hide doesn’t mean no one will go looking anyway.

Are you whiling the time away until you get your first smartwatch or preparing to run to the local store to buy the latest fitness tracker?

If so, you may wish to know that snoops can track such devices, and at a fraction of the price you will be paying for the latest in wearable tech.

New research from Symantec has shown that it is possible to track individuals, even in crowded places, via cheap and readily accessible hardware.

The security firm took a Raspberry Pi and added components including a Bluetooth 4.0 adapter, an SD card and a battery pack. All in, the home-made tracker cost around $75 (about £44 or €56).

The company took a number of such devices to busy public locations in both Switzerland and Ireland, as well as a major sporting event, and ran them in passive mode. By simply scanning the airwaves for signals broadcast by wearables, the RasPis were able to successfully track each and every one of them via their serial numbers or a combination of other factors, prompting the researchers to say:

“In our testing, we found that all the devices we encountered can be easily tracked using the unique hardware address that they transmit. Some devices (depending on configuration) may allow for remote querying, through which information such as the serial number or a combination of characteristics of the device can be discovered by a third party from a short distance away without making any physical contact with the device.”
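What makes a device trackable largely comes down to the kind of address it broadcasts. The Bluetooth Core specification uses the two most significant bits of a random device address to distinguish a fixed static address from a rotating private one; a device that always advertises the same static address can be logged and followed exactly as the researchers describe. The classifier below is an illustrative sketch of that bit check, not Symantec’s actual tooling, and the addresses are made up.

```python
def classify_random_ble_address(addr: str) -> str:
    """Classify a Bluetooth LE *random* address by its two most
    significant bits, per the Bluetooth Core specification:

      0b11 -> static random          (fixed: easy to track)
      0b01 -> resolvable private     (rotates; resolvable with a key)
      0b00 -> non-resolvable private (rotates; not resolvable)
    """
    msb = int(addr.split(":")[0], 16)  # first octet of the address
    top_bits = msb >> 6
    return {
        0b11: "static (trackable)",
        0b01: "resolvable private (rotates)",
        0b00: "non-resolvable private (rotates)",
    }.get(top_bits, "reserved")

# A wearable advertising a static random address can be followed
# around simply by logging this value at different locations.
print(classify_random_ble_address("F3:21:47:0A:9B:C2"))
```

Devices that rotate a resolvable private address are much harder to follow passively, which is why the fixed addresses Symantec observed are the real privacy problem.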

The researchers also delved further into wearable tech and the associated apps, looking for other potential security and privacy concerns, and found several.

Symantec discovered that 52% of the self-tracking apps it examined did not have a privacy policy which, it says, may suggest that the developers do not take security and privacy as seriously as they perhaps could.

Researchers also discovered a large amount of unintentional data leakage with the average app contacting 5 domains (one even contacted 14 domains) in a short period of time. Whilst there may be legitimate reasons for a fitness or other tracking app to contact a number of domains for the transmission of data or to serve ads, for instance, Symantec said that the number of domains being contacted increased the risks of data leakage through human error, social engineering or careless or malicious handling of data.
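The kind of audit behind those numbers can be approximated by counting the distinct hosts an app contacts over a short capture window. The sketch below is illustrative only; the app, URLs and domains are all invented, and a real audit would work from an actual traffic capture.

```python
from urllib.parse import urlparse

# Hypothetical requests captured from a fitness app in one session.
captured_urls = [
    "https://api.fit-example.com/v1/steps",
    "https://api.fit-example.com/v1/sleep",
    "https://ads.tracker-example.net/imp?uid=123",
    "https://stats.analytics-example.io/beacon",
    "https://cdn.fit-example.com/assets/logo.png",
]

# Each distinct hostname is another party the data passes through.
domains = {urlparse(u).hostname for u in captured_urls}
print(f"{len(domains)} distinct domains contacted:")
for d in sorted(domains):
    print(" -", d)
```

Even in this toy capture, only two of the four domains belong to the app’s own service, which is the leakage surface Symantec is warning about.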

The researchers also uncovered other concerns, such as weak session management, which could open the door to session hijacking and, in turn, to further problems.

Symantec’s blog post ends with the company pointing out that self-tracking apps and devices are not synonymous with privacy and suggesting that those who value their privacy will not get involved in self-tracking in the first place (I agree).

However, knowing that many users will continue to use fitness trackers, smartwatches, etc., regardless, the company offers up the following tips which I would describe as being little more than damage limitation rather than a security solution:

Use a screen lock or password to prevent unauthorized access to your device

Do not reuse the same user name and password between different sites

Use strong passwords

Turn off Bluetooth when not required

Be wary of sites and services asking for unnecessary or excessive information

Be careful when using social sharing features

Avoid sharing location details on social media

Avoid apps and services that do not prominently display a privacy policy

Read and understand the privacy policy of apps and services

Install app and operating system updates when available

Use a device-based security solution if available

Use full device encryption if available

If you would like more information on Symantec’s research a whitepaper can be found here.

The Information Commissioner’s Office (ICO) has today released a new report that considers how big data will operate within existing data protection laws which ensure that personal information is:

Fairly and lawfully processed

Processed for limited purposes

Adequate, relevant and not excessive

Accurate and up to date

Not kept for longer than is necessary

Processed in line with your rights

Secure

Not transferred to other countries without adequate protection

The Big data and data protection report accepts that the use of big data can bring benefits to companies and doesn’t wish to stifle innovation. That said, the ICO is keen to point out that organisations still have an obligation to keep information both private and secure, offering the following practical advice for dealing with personal information used in big data analytics:

Personal data – Does your big data project need to use personal data at all? If you are using personal data, can it be anonymised? If you are processing personal data you have to comply with the Data Protection Act.

Privacy impact assessments – Carry out a privacy impact assessment to understand how the processing will affect the people concerned. Are you using personal data to identify general trends or to make decisions that affect individuals?

Repurposing data – If you are repurposing data, consider whether the new purpose is incompatible with the original purpose in data protection terms, and whether you need to get consent. If you are buying in personal data from elsewhere, you need to practise due diligence and ensure that you have a data protection condition for your processing.

Data minimisation – Big data analytics is not an excuse for stockpiling data or keeping it longer than you need for your business purposes, just in case it might be useful. Long-term uses must be articulated or justifiable, even if all the detail of the future use is not known.

Transparency – Be as transparent and open as possible about what you are doing. Explain the purposes, implications and benefits of the analytics. Think of innovative and effective ways to convey this to the people concerned.

Subject access – People have a right to see the data you are processing about them. Design systems that make it easy for you to collate this information. Think about enabling people to access their data online in a re-usable format.
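The ICO’s first question, whether personal data can be anonymised, is often answered in practice with pseudonymisation before analytics. The sketch below shows one common approach under stated assumptions: the secret key and record fields are invented for illustration, and a keyed hash alone does not amount to full anonymisation under the Data Protection Act.

```python
import hashlib
import hmac

# Illustrative only: in practice this key lives in a secrets store,
# away from the analytics team that receives the pseudonymised data.
SECRET_KEY = b"keep-this-away-from-the-analysts"

def pseudonymise(record: dict) -> dict:
    """Replace direct identifiers with a keyed hash before analysis.

    Using HMAC rather than a bare hash means someone without the key
    cannot re-identify people by hashing guessed names or emails.
    """
    out = dict(record)
    for field in ("name", "email"):
        token = hmac.new(SECRET_KEY, out[field].encode(), hashlib.sha256)
        out[field] = token.hexdigest()[:16]
    return out

raw = {"name": "Jane Doe", "email": "jane@example.com", "steps": 9341}
safe = pseudonymise(raw)
print(safe)  # the analytic value survives; the identifiers do not
```

The design choice here mirrors the ICO’s advice: the trend data an analyst needs is kept, while the fields that identify an individual are removed before the data leaves the controller’s hands.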

The ICO’s head of policy delivery, Steve Wood, says that there is a buzz around how big data can be used for social benefits as well as the more obvious economic advantages it can provide. He did, however, highlight how organisations are struggling to understand how they can put big data to innovative new uses without falling foul of the law, and explained that individuals too are expressing concern over how their personal data is used in big data scenarios.

The answer, he says, begins with organisations being more transparent about how they are using big data:

“What we’re saying in this report is that many of the challenges of compliance can be overcome by being open about what you’re doing. Organisations need to think of innovative ways to tell customers what they want to do and what they’re hoping to achieve.

Not only does that go a long way toward complying with the law, but there are benefits from being seen as responsible custodians of data.”

The ICO report says that openness is a key factor, pointing out how organisations need to ensure that personal information is only used in ways previously communicated to users. The complexity of big data, it says, should not be used as an excuse to use data without consent.

Responding to concerns that existing data protection law is insufficient in the face of big data, Wood added that:

“Big data can work within the established data protection principles. The basic data protection principles already established in UK and EU law are flexible enough to cover big data. Applying those principles involves asking all the questions that anyone undertaking big data ought to be asking. Big data is not a game that is played by different rules. The principles are still fit for purpose but organisations need to innovate when applying them.”

The organisation notes how the area of big data is fast-evolving, leading it to conclude that its guidance will likely change over time. In light of that, the ICO positively encourages feedback which can be sent to [email protected] up until September 12 of this year.