.@NakedSecurity: New bill would give parents an ‘Eraser Button’ to delete kids’ data – The COPPA overhaul would ban targeting ads at kids under 13 and ad targeting based on race, socioeconomics or geolocation on kids under 15.

Two US senators on Tuesday proposed a major overhaul of the Children’s Online Privacy Protection Act (COPPA) that would give parents and kids an “Eraser Button” to wipe out personal information scooped up online on kids.

The bipartisan bill, put forward by Senators Edward J. Markey (D-Mass.) and Josh Hawley (R-Mo.), would also expand COPPA protection beyond its current coverage of children under 13 in order to protect kids up until the age of 15.

The COPPA update also packs an outright ban on targeting ads at children under 13 without parental consent, and at anyone aged 13 to 15 without the user’s consent. The bill also includes a “Digital Marketing Bill of Rights for Minors” that limits the collection of personal information on minors.

The proposed bill would also establish a first-of-its-kind Youth Privacy and Marketing Division at the Federal Trade Commission (FTC) that would be responsible for addressing the privacy of children and minors and marketing directed at them.

“Rampant and nonstop” marketing at kids

Markey said in a press release that COPPA will remain the “constitution for kids’ privacy online,” and that the senators’ proposed changes would introduce “an accompanying bill of rights.”

As it is, Markey said, marketing at kids nowadays is rampant and nonstop:

In 2019, children and adolescents’ every move is monitored online, and even the youngest are bombarded with advertising when they go online to do their homework, talk to friends, and play games. In the 21st century, we need to pass bipartisan and bicameral COPPA 2.0 legislation that puts children’s well-being at the top of Congress’s priority list. If we can agree on anything, it should be that children deserve strong and effective protections online.

The right of kids to be forgotten

The proposed law has the flavor of the EU General Data Protection Regulation (GDPR), what with the greater control it grants citizens over how their personal data is obtained, processed, and shared, as well as visibility into how and where that data is used.

The citizens, in this case, would be children and their parents, who would be entitled to get their hands on any personal information of the child or minor that’s been collected, “within a reasonable time” after making a request, without having to pay through the nose to get it, and in a form that a child or minor would find intelligible.

The bill also requires that online operators provide a “clear and prominent means” to correct, complete, amend, or erase any personal information about a child or minor that’s inaccurate: in other words, what the senators are calling an Eraser Button.

What would change?

These are the specific privacy protections that the bill would strengthen:

Prohibiting internet companies from collecting personal and location information from anyone under 13 without parental consent, and from anyone 13 to 15 years old without the user’s consent.

Requiring manufacturers of connected devices targeted to children and minors to prominently display on their packaging a privacy dashboard detailing how sensitive information is collected, transmitted, retained, used, and protected.

Recently, the FTC has been flexing its COPPA bicep like never before. Last week, video-streaming app TikTok agreed to pay a record $5.7 million fine for allegedly collecting names, email addresses, pictures and locations of children younger than 13 – all illegal under COPPA.

These tech companies know too much about our kids, and we don’t know what they’re doing with that data, Senator Hawley was quoted as saying in Markey’s press release:

Big tech companies know too much about our kids, and even as parents, we know too little about what they are doing with our kids’ personal data. It’s time to hold them accountable. Congress needs to get serious about keeping our children’s information safe, and it begins with safeguarding their digital footprint online.

The Markey-Hawley bill rightly recognizes that the internet’s prevailing business model is harmful to young people. The bill’s strict limits on how kids’ data can be collected, stored, and used – and its all-out ban on targeted ads for children under 13 – would give kids a chance to develop a healthy relationship with media without being ensnared by Big Tech’s surveillance and marketing apparatuses. We commend Senators Markey and Hawley for introducing this landmark legislation and urge Congress to act quickly to put children’s needs ahead of commercial interests.

.@NakedSecurity: Citrix admits attackers breached its network – what we know – On Friday, software giant Citrix issued a short statement admitting that hackers recently managed to get inside its internal network. According to a statement by chief information security officer Stan Black, the company was told of the attack by the FBI on 6 March, and has since established that attackers had taken “business documents” […]

The FTC says that its Consumer Sentinel Network has noticed a “striking” increase in the median dollar amount that people 70 and older report losing to fraud. When they started to peel back the layers, the Commission found a number of stories that involve people of that age group having mailed “huge” amounts of cash to people who pretended to be their grandchildren.

People from all age groups report having fallen for phoney family and friends: the reported median loss for individuals is about $2,000, which is more than four times the median loss of $462 reported for all fraud types.

But that’s nothing compared with how much money is being bled out of the elderly: those who send cash reported median losses of a whopping $9,000. About one in four of the ripped-off elderly who report that they lost money to a family or friend imposter say that they sent cash: a far higher rate than the 1 in 25 of people who sent cash for all other frauds.

CBS News talked to one man who got scammed in a way that the FTC says is a common ploy.

Slick scripts

It started with a phone call one morning in April, Franc Stratton told the station. The caller pretended to be a public defender from Austin, Texas, who was calling to tell Stratton that his grandson had been in a car wreck, had been driving under the influence, and was now in jail.

Don’t be afraid, the imposter told Stratton: you can bail out your grandson by sending $8,500 in cash via FedEx. It didn’t raise flags for a good reason: Stratton had done exactly that for another family member in the past.

The cherry on top: the “attorney” briefly put Stratton’s “grandson” on the phone. The fake kid sounded injured, so Stratton drove to the bank to get the cash.

Stratton went so far as to go to a local FedEx to overnight the money to an Austin address. But later that night, he said, he and his wife looked at each other and said, Scam!

Fortunately, they came to their senses in time to call FedEx to have the package returned. He got his money back, but Stratton is still frustrated. Of all people, he should know better, he says: he’s retired now, after a career spent working in intelligence, first for the Air Force and later as a cybersecurity programmer.

That’s how slick the scammers are, with their meticulously prepared scripts, and it shows that they know exactly how to put people into a panicked state, where they’re likely to make bad decisions. Stratton said he fell for it “because of the way that they scripted it.”

Self-defense for grandparents

These scams are growing more sophisticated as fraudsters do their homework, looking you and/or your grandkids up on social media to lace their scripts with personal details that make them all the more convincing.

Grandparents, no matter how savvy you are, you’ve got an Achilles heel: your love for your grandchildren. The fakers know exactly how to milk that for all it’s worth.

The FTC warns that they’ll pressure you into sending money before you’ve had time to think it through. The Commission offers this advice to keep the shysters from wringing your heart and your wallet:

Stop. Breathe. Check it out before you send a dime. Look up your grandkid’s phone number yourself, or call another family member.

Don’t overshare. Whatever you share publicly on social media becomes a weapon in the arsenals of scammers. The more personal details they know about you, the more convincing they can sound. It’s one of many reasons to be careful about what you share on social media.

Pass the information on to a friend. Even if you haven’t been targeted yourself, you probably know somebody who’s either already gotten a call like this or who will.

Cybersecurity and other data-related issues top the list of risks for heads of audit in 2019; here are key actions audit must take.

The number of cyberattacks continues to increase significantly as threat actors become more sophisticated and diversify their methods. It’s hardly surprising, then, that cybersecurity preparedness tops the list of internal audit priorities for 2019.

Other data and IT issues are also on the radar for internal audit, according to the Gartner Audit Plan Hot Spots. Cybersecurity topped the list of 2019’s 12 emerging risks, followed by data governance, third parties and data privacy.

“These risks, or hot spots, are the top-of-mind issues for business leaders who expect heads of internal audit to assess and mitigate them, as well as communicate their impact to organizations and stakeholders,” said Malcolm Murray, VP, Team Manager at Gartner.

What audit can do on cyberpreparedness

The Gartner 2019 Audit Key Risks and Priorities Survey shows that 77% of audit departments definitely plan to cover cybersecurity detection and prevention in audit activities during the next 12-18 months. Only 5% have no such activities planned. And yet, only 53% of audit departments are highly confident in their ability to provide assurance over cybersecurity detection and prevention risks.

Here are some steps audit can take to tackle cybersecurity preparedness:

Review device encryption on all devices, including mobile phones and laptops. Assess password strength and the use of multifactor authentication.

Review access management policies and controls, and set user access and privileges by defined business needs. Swiftly amend access when roles change.

Review patch management policies, evaluating the average time from patch release to implementation and the frequency of updates. Make sure patches cover IoT devices.

Evaluate employee security training to ensure that the breadth, frequency and content is effective. Don’t forget to build awareness of common security threats such as phishing.

Participate in cyber working groups and committees to develop cybersecurity strategy and policies. Help determine how the organization identifies, assesses and mitigates cyberrisk and strengthens current cybersecurity controls.
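For the patch-management review above, the average time from patch release to implementation is a simple calculation once the dates are exported from a patch-management system. A minimal sketch, using invented dates for illustration:

```python
from datetime import date

# Hypothetical patch records: (release date, date the patch was applied).
# Real data would come from a patch-management or CMDB export.
patches = [
    (date(2019, 1, 8), date(2019, 1, 21)),
    (date(2019, 2, 12), date(2019, 2, 20)),
    (date(2019, 3, 12), date(2019, 4, 2)),
]

# Days elapsed between release and implementation for each patch.
latencies = [(applied - released).days for released, applied in patches]
average_latency = sum(latencies) / len(latencies)
print(f"Average patch latency: {average_latency:.1f} days")
```

Tracking this number over time, and comparing it against the organization’s own patching SLA, gives audit a concrete metric rather than an impression.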

Data governance

Big data increases the strategic importance of effective mechanisms to collect, use, store and manage organizational data, but many organizations still lack formal data governance frameworks and struggle to establish consistency across the organization. Few scale their programs effectively to meet the growing volume of data. Left unsolved, these governance challenges can lead to operational drag, delayed decision making and unnecessary duplication of efforts.

What audit can do:

Review the data assets inventory, which must include, at a minimum, the highest-value data assets of the organization. Assess the extent of both structured and unstructured data assets.

Review the classification of data and associated process and policies. Analyze how data will be retained and destroyed, encryption requirements and whether relevant categories of use have been established.

Participate in relevant working groups and committees to stay abreast of governance efforts and provide advisory input when frameworks are built.

Review the analytics tools inventory across the organization. Determine if IT has an approved vendor list for analytics tools and what efforts are being made to educate the business on the use of approved tools.

Third parties

Efforts to digitalize systems and processes add new, complex dimensions to third-party challenges that have been a perennial concern for organizations. Nearly 70% of chief audit executives reported third-party risk as one of their top concerns, but organizations still struggle to manage this risk. What audit can do:

Evaluate scenario analysis for strategic initiatives to analyze potential risks and outcomes associated with interdependent partners in the organization’s business ecosystem. Consider enterprise risk appetite and identify trigger events that would cause the organization to take corrective action.

Investigate third-party regulatory requirements, assess how effectively senior management communicates regulatory updates across the business and how clearly it articulates requirements for third parties.

Evaluate the classification of third-party risk and confirm that the business conducts random checks of third parties to ensure classifications properly account for actual risk levels.

Data privacy

Companies today collect an unprecedented amount of personal information, and the costs of managing and protecting that data are rising. Seventy-seven percent of audit departments say data privacy will definitely be covered in audit activities in the next 12–18 months. What audit can do:

Review data protection training and ensure that employees at all levels complete the training. Include elements such as how to report a data breach and protocols for data sharing.

Assess current level of GDPR compliance and identify compliance gaps. Review data privacy policies to make sure the language is clear and customer consent is clearly stated.

Assess data access and storage. Make sure access to sensitive information is role-based and privileges are properly set and monitored.

Review data breach response plans. Evaluate how quickly the company identifies a breach and the mechanisms for notifying impacted consumers and regulators.

A new phishing report has been released that keeps track of the top 25 brands targeted by bad actors. Of these brands, Microsoft, PayPal, and Netflix are the ones most impersonated by phishing attacks.

Email security provider Vade Secure tracks the 25 most spoofed brands in North America that are impersonated in phishing attacks. Its Q3 2018 report covers a total of 86 brands, which account for 95% of all attacks detected by the company.

Overall, Vade Secure has stated that phishing attacks increased by 20.4% in the 3rd quarter with the most targeted being Microsoft, followed by PayPal, Netflix, Bank of America, and Wells Fargo.

Cloud-based services and financial companies remain the two most targeted industries, with Microsoft being the top targeted brand as attackers try to gain access to Office 365, OneDrive, and Azure credentials.

“The primary goal of Microsoft phishing attacks is to harvest Office 365 credentials,” stated Vade Secure’s report. “With a single set of credentials, hackers can gain access to a treasure trove of confidential files, data, and contacts stored in Office 365 apps, such as SharePoint, OneDrive, Skype, Excel, CRM, etc. Moreover, hackers can use these compromised Office 365 accounts to launch additional attacks, including spear phishing, malware, and, increasingly, insider attacks targeting other users within the same organization.”

Office 365 phishing emails typically indicate that the recipient’s account has been suspended or disabled and then prompt them to log in to resolve the issue. These phishing forms are almost identical to a legitimate Office 365 login page, and by creating a sense of urgency, the attackers hope the victims will be less vigilant as they enter their credentials.

After Microsoft come PayPal phishing schemes, where attackers try to gain access to victims’ money, and Netflix scams, which are used to steal credit card information.

Of particular interest is that attackers tend to follow a pattern in the days on which they send the highest volume of phishing emails. According to the report, most work-related attacks occur during the week, with Tuesday and Thursday being the highest-volume days. For Netflix, the most targeted day is Sunday, when people are taking a break to watch some TV.

Phishing attacks become more targeted

Vade Secure has also noticed that attackers are starting to decrease the number of times a particular URL is used in a phishing campaign. Instead, attackers are using unique URLs in each phishing email in order to bypass mail filters.

“What should be more concerning to security professionals is that phishing attacks are becoming more targeted,” continued Vade Secure’s report. “When we correlated the number of phishing URLs against the number of phishing emails blocked by our filter engine, we found that the number of emails sent per URL dropped more than 64% in Q3. This suggests that hackers are using each URL in fewer emails in order to avoid detection by reputation-based security defenses. In fact, we’ve seen sophisticated phishing attacks where each email contains a unique URL, essentially guaranteeing that they will bypass traditional email security tools.”
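The emails-per-URL metric the report describes is straightforward to compute from a mail filter’s detection logs. A minimal sketch with made-up data:

```python
from collections import Counter

# Hypothetical phishing log: the URL found in each blocked email.
# Real data would come from a mail filter's detection logs.
blocked_urls = [
    "http://evil.example/a", "http://evil.example/a",
    "http://evil.example/b",
    "http://evil.example/c",
]

url_counts = Counter(blocked_urls)
emails_per_url = len(blocked_urls) / len(url_counts)
print(f"{len(url_counts)} unique URLs, {emails_per_url:.2f} emails per URL")

# A ratio approaching 1.0 means near-unique URLs per email -- the
# evasion pattern described in the report.
```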

Protecting yourself from phishing attacks

As phishing attacks become more sophisticated, they also become harder to detect. Using cloud services, attackers are now able to secure their phishing forms with SSL certificates from well-known and trusted companies such as Microsoft and Cloudflare. This allows the forms to look authentic to victims.

In one such phishing attack, the login form looks legitimate, the site is on a Microsoft-owned domain, and the page is secured. To many, this would appear to be a legitimate Microsoft form. In reality, the attacker is hosting the form on a Microsoft cloud service in order to create this sense of legitimacy.

Therefore, it is always important to scrutinize a site before entering any login credentials. If the URL looks strange, the spelling or grammar is off, or something otherwise does not feel right, do not enter any account credentials. Instead, contact your administrator or the company itself if you are concerned your account has problems. And if you don’t know the sender, don’t open the email.
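Part of that scrutiny can be automated. The check below is a rough sketch, not a real defense: the trusted-domain list and brand names are invented for the example, and, as noted above, attackers can host forms on legitimate cloud domains that would pass a check like this.

```python
from urllib.parse import urlparse

# Illustrative allow-list: domains we actually expect login pages on.
TRUSTED_LOGIN_DOMAINS = {"login.microsoftonline.com", "www.paypal.com"}

def looks_suspicious(url: str) -> bool:
    """Crude first-pass check to run before entering credentials."""
    if not url.lower().startswith("https://"):
        return True                      # no TLS at all
    host = urlparse(url).hostname or ""
    if host in TRUSTED_LOGIN_DOMAINS:
        return False
    # Look-alike trick: a brand name buried in an unrelated domain.
    return any(brand in host for brand in ("microsoft", "paypal", "netflix"))

print(looks_suspicious("http://microsoft-login.example.com/"))
print(looks_suspicious("https://login.microsoftonline.com/"))
```

A `False` result here is no guarantee of safety; the point is that an unexpected hostname or missing HTTPS should immediately stop you.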

Proprietary data — including information that would enable systematic hacking of company servers for sabotage, industrial espionage and worse — is protected from legal exposure by a complex set of well-understood laws and norms in the United States. But that same data is accessible from company phones.

Can the police simply take that information? Until recently, most professionals would have said no.

Why? Because business and IT professionals tend to believe that smartphones are covered by the Fourth Amendment’s strictures against “unreasonable searches and seizures,” a protection recently reaffirmed by the Supreme Court. And smartphones are also protected by the Fifth Amendment, many would say, because divulging a passcode is akin to being “compelled” to be a “witness” against yourself.

Unfortunately, these beliefs are wrong.

The trouble with passcodes

Apple last year quietly added a new feature to iPhones designed to protect smartphone data from police searches. When you quickly press the on/off button on an iPhone five times, it turns off Touch ID and Face ID.

The thinking behind the so-called cop button is that, because police can compel you to use biometrics, but not a passcode, to unlock your phone, the feature makes it impossible for the legal system to force you to hand over information.

Unfortunately, this belief has now been undermined.

We learned this week that a Florida man named William John Montanez was jailed for six months after claiming that he forgot the passcodes for his two phones.

Montanez was pulled over for a minor traffic infraction. Police wanted to search his car. He refused. The police brought in dogs, which found some marijuana and a gun. (Montanez said the gun was his mother’s.) During the arrest, his phone got a text that said, “OMG, did they find it,” prompting police to get a warrant to search his phones. That’s when Montanez claimed he didn’t remember the passcodes, and the judge sentenced him to up to six months in jail for civil contempt.

As a precedent, this cascading series of events changes what we thought we knew about the security of the data on our phones. What started as an illegal turn ended up with jail time over the inability or unwillingness to divulge what we thought was a constitutionally protected bit of information.

We’ve also learned a lot recently about the vulnerability of location data on a smartphone.

The solution for individual users who want to keep location and other data private is to simply switch off the feature, such as the Location History feature in Google’s Android operating system. Right?

Wrong, as a recent fiasco over Android location tracking showed. The confusion was based on false information that used to exist on Google’s site. Turning off Location History, the site said, meant that “the places you go are no longer stored.” In fact, they were stored, just not in the user-accessible Location History area.

Google corrected the false language, adding, “Some location data may be saved as part of your activity on other services, like Search and Maps.”

Stored data matters.

The FBI recently demanded from Google the data about all people using location services within a 100-acre area in Portland, Maine, as part of an investigation into a series of robberies. The request included the names, addresses, phone numbers, “session” times and duration, log-in IP addresses, email addresses, log files and payment information.

The order also said that Google could not inform users of the FBI’s demand.

Google did not comply with the request. But that didn’t keep the FBI from pushing for it.

In fact, police are evolving their methods, intentions and technologies for searching smartphones.

Police data-harvesting machines

A device called GrayKey, from a company called GrayShift, can unlock any iPhone or iPad.

GrayShift licenses the devices for $15,000 per year and up to 300 phone cracks.

It’s a turnkey system. Each GrayKey has two Lightning cables. Police need only plug in a phone, and eventually the phone’s passcode appears on the phone’s screen, giving full access.

That may be why Apple introduced in the fall a new “USB Restricted Mode” for iPhones. That mode makes it harder for police (or criminals) to crack a phone via the Lightning port.

The mode is activated by default, which is to say that the “switch” in settings for USB Accessories is turned off. With that switch off, the Lightning port won’t connect to anything after an hour of the phone being locked.

And the U.S. isn’t the only country with police data-harvesting machines.

A world of trouble for smartphone data

Chinese authorities have their own technology for harvesting the data from phones, and that technology is now being deployed by police in the field. Police anywhere in the country can demand that anyone hand over a phone, which is then scanned by a device, the use of which is reportedly spreading across China.

Chinese authorities have both desktop and handheld scanner devices, which automatically extract and process emails, social posts, videos, photos, call histories, text messages and contact lists to aid them in looking for transgressions.

Some reports suggest that the devices, which are made by both Israeli and Chinese companies, are unable to crack newer iPhones but can access nearly every other kind of phone.

Another factor to be considered is that the protections of the U.S. Constitution end at the border — literally at the border.

And once abroad, all bets are off. Even in friendly, pro-privacy nations such as Australia.

The Australian government on Tuesday proposed a law called the Assistance and Access Bill 2018. If it becomes law, the act would require people to unlock their phones for police or face up to ten years in prison (the current maximum is two years).

It would empower police to legally bug or hack phones and computers.

The bill would force carriers, as well as companies such as Apple, Google, Microsoft and Facebook, to give police access to the private encrypted data of their customers if technically possible.

Failure to comply would result in fines of up to $7.3 million and prison time.

Police would need a warrant to crack, bug or hack a phone.

The bill may never become law. But Australia is just one of many nations affected by a new political will to end smartphone privacy when it comes to law enforcement.

If you take anything away from this column, please remember this: The landscape for what’s possible in the realm of police searches of smartphones is changing every day.

In general, smartphones are becoming less protected from police searches, not more protected.

That’s why the assumption of every IT department, every enterprise and every business professional — especially those of us who travel internationally on business — must be that the data on a smartphone is not safe from official scrutiny.

It’s time to rethink company policies, training, procedures and permissions around smartphones.

In today’s world, cyber incidents such as data theft, insider threats and malware attacks pose significant security risks, and many are caused by a company’s own employees, whether intentionally or unknowingly. Around 95% of these threats involve activity with access to corporate endpoints, data, and applications.

Among the most alarming discoveries from these security assessments: 95 percent of them revealed employees actively researching, installing or executing security or vulnerability testing tools in attempts to bypass corporate security.

They frequently use anonymity tools such as Tor and VPNs to hide who is trying to break corporate security.

Christy Wyatt, CEO at Dtex Systems, said: “Some of the year’s largest reported breaches are a direct result of malicious insiders or insider negligence.”

People are the weakest security link

A survey last year by Dtex Systems reported that 60 percent of all attacks are carried out by insiders: 68 percent of all insider breaches are due to negligence, 22 percent are from malicious insiders and 10 percent are related to credential theft. The current trend also shows that the first and last two weeks of employment are critical, as 56 percent of organizations saw potential data theft from joining or departing employees during those times.

Increased use of cloud services puts data at risk

64 percent of enterprises assessed found corporate information on the web that was publicly accessible, due in part to the increase in cloud applications and services.

To make matters worse, 87 percent of employees were using personal, web-based email on company devices. By completely removing data and activity from the control of corporate security teams, insiders are giving attackers direct access to corporate assets.

Inappropriate internet usage is driving risk

59 percent of organizations analyzed experienced instances of employees accessing pornographic websites during the work day.

43 percent had users who were engaged in online gambling activities over corporate networks, which included playing the lottery and using Bitcoin to bet on sporting events.

This type of user behavior is indicative of overall negligence and high-risk activities taking place.

Dtex Systems analyzed and prepared these risk assessments from 60 enterprises across North America, Europe and Asia, in industries including IT, finance, the public sector, manufacturing, pharmaceuticals, and media & entertainment.

Please consider your cybersecurity posture when it comes to your employees: once again, people are the leading source of risk.

Pen Test Partners’ Ken Munro and his colleagues – some of whom are former ship crew members who really understand bridge and propulsion systems – have been probing the security of ships’ IT systems for a while now and the results are depressing: satcom terminals exposed on the Internet, admin interfaces accessible via insecure protocols, no firmware signing, easy-to-guess default credentials, and so on.

“Ship security is in its infancy – most of these types of issues were fixed years ago in mainstream IT systems,” Pen Test Partners’ Ken Munro says, and points out that the advent of always-on satellite connections has exposed shipping to hacking attacks.

A lack of security hygiene

Potential attackers can take advantage of poor security hygiene on board, but also of the poor security of protocols and systems provided by maritime product vendors.

For example, the operational technology (OT) systems that are used to control the steering gear, engines, ballast pumps and so on, communicate using NMEA 0183 messages. But there is no message authentication, encryption or validation of these messages, and they are in plain text.

“All we need to do is man in the middle and modify the data. This isn’t GPS spoofing, which is well known and easy to detect, this is injecting small errors to slowly and insidiously force a ship off course,” Munro says.
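NMEA 0183’s only built-in integrity check is a one-byte XOR checksum, which guards against transmission noise, not tampering: anyone who modifies a sentence can simply recompute it. A short sketch (the heading sentence is fabricated for illustration):

```python
def nmea_checksum(sentence_body: str) -> str:
    """XOR of all characters between '$' and '*', as two hex digits."""
    csum = 0
    for ch in sentence_body:
        csum ^= ord(ch)
    return f"{csum:02X}"

# An illustrative NMEA 0183 heading sentence: plain text, no authentication.
body = "HEHDT,271.0,T"
print(f"${body}*{nmea_checksum(body)}")

# A man-in-the-middle can nudge the heading and recompute the checksum;
# the receiver has no way to detect the tampering.
tampered_body = "HEHDT,273.5,T"
print(f"${tampered_body}*{nmea_checksum(tampered_body)}")
```

Both sentences validate perfectly at the receiver, which is exactly the “small errors, slowly and insidiously” attack Munro describes.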

They found other examples of poor security practices in a satellite communication terminal by Cobham SATCOM: things like admin interfaces accessible over telnet and HTTP, a lack of firmware signing and no rollback protection for the firmware, admin interface passwords embedded in the configuration (and hashed with unsalted MD5!), and the possibility to edit the entire web application running on the terminal.
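The unsalted-MD5 detail is worth dwelling on: without a salt, every terminal that uses the same password produces the same digest, so a precomputed lookup table cracks it instantly. A minimal illustration (the password list and the “leaked” digest are invented):

```python
import hashlib

def md5_hex(password: str) -> str:
    """Unsalted MD5, as found in the terminal's configuration."""
    return hashlib.md5(password.encode()).hexdigest()

# With no salt, identical passwords always hash to identical digests,
# so an attacker can precompute a table of likely candidates.
lookup = {md5_hex(pw): pw for pw in ("admin", "password", "1234567890")}

# A made-up digest extracted from a leaked configuration file:
leaked_hash = md5_hex("admin")
print(lookup.get(leaked_hash))  # recovered with a single dict lookup
```

Real attackers use precomputed tables with billions of entries, which is why unsalted fast hashes are considered broken for password storage.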

They shared these flaws with the public because all of them can be mitigated by setting a strong admin password; they also found other issues that have to be fixed by the vendor, and disclosed those privately.

Electronic chart systems are full of flaws

ECDIS – electronic chart systems that are used for navigation – are also full of security flaws. They tested over 20 different ECDIS units and found things like old operating systems and poorly protected configuration interfaces. Attackers could ‘jump’ the boat by spoofing the position of the GPS receiver on the ship, or reconfigure the ECDIS to make the ship appear to be wider and longer than it is.

“This doesn’t sound bad, until you appreciate that the ECDIS often feeds the AIS [Automatic Identification System] transceiver – that’s the system that ships use to avoid colliding with each other,” Munro noted.

“It would be a brave captain indeed to continue down a busy, narrow shipping lane whilst the collision alarms are sounding. Block the English Channel and you may start to affect our supply chain.”

Tracking vulnerable ships

Pen Test Partners also created a vulnerable ship tracker by combining Shodan’s ship tracker, which uses publicly available AIS data, and satcom terminal version details.

The tracker does not show other details except the ship’s name and real-time position because they don’t want to help hackers, but it shows just how many vulnerable ships are out there.

Hacking incidents in the shipping industry

Hacking incidents affecting firms in the shipping industry are more frequent than the general public could guess by perusing the news. Understandably, the companies are eager to keep them on the down-low, if they can, as they could negatively affect their business competitiveness, Munro recently told me.

Some attacks can’t be concealed, though. For example, when A.P. Møller-Mærsk fell victim to the NotPetya malware, operations got disrupted and estimated losses reached several hundred million dollars.

That particular attack thankfully did not cause the company to lose control of its vessels, but future attacks might trigger shipping security incidents and prove more disruptive to that side of the business.

“Vessel owners and operators need to address these issues quickly, or more shipping security incidents will occur,” he concluded.

MyHeritage breach exposes data on 92 million users

Consumer genealogy website MyHeritage said that email addresses and password information linked to more than 92 million user accounts were compromised in an apparent hacking incident.

MyHeritage said its security officer had received a message from a researcher who unearthed a file named “myheritage” – containing the email addresses and hashed passwords of 92,283,889 of its users – on a private server outside the company.

“There has been no evidence that the data in the file was ever used by the perpetrators,” the company said in a statement late Monday.

MyHeritage lets users build family trees, search historical records and hunt for potential relatives. Founded in Israel in 2003, the site launched a service called MyHeritage DNA in 2016 that, like competitors Ancestry.com and 23andMe, lets users send in a saliva sample for genetic analysis. The website currently has 96 million users; 1.4 million users have taken the DNA test.

According to MyHeritage, the breach took place on Oct. 26, 2017, and affects users who signed up for an account through that date. The company said it doesn’t store actual user passwords, only values derived from them with a one-way hash, using a different hash key for each customer. Which raises the question: why did it take so long to discover and declare the breach?
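MyHeritage has not disclosed which hash function it uses, but the general approach it describes – a one-way hash with a per-customer key, in effect a salt – can be sketched as below. The function choice (PBKDF2-SHA256) and iteration count are illustrative assumptions, not the company’s actual scheme.

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Derive a one-way hash of the password with a unique per-user salt."""
    salt = salt if salt is not None else os.urandom(16)  # per-user "hash key"
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest  # store both; the password itself is never stored

def verify_password(password, salt, stored):
    """Re-derive the hash and compare in constant time."""
    _, digest = hash_password(password, salt)
    return hmac.compare_digest(digest, stored)

salt, stored = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, stored))  # True
print(verify_password("wrong guess", salt, stored))                   # False
```

Because the salt differs per user, identical passwords produce different stored hashes, so an attacker can’t crack one hash and reuse the result across accounts.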

In some past breaches, however, weakly hashed passwords have been successfully cracked and recovered. An attacker who cracked the hashed passwords exposed in this breach could log in and see the personal information in someone’s account, such as the identities of family members. But even if hackers were able to get into a customer’s account, it’s unlikely they could easily access raw genetic information, since the download process includes an email confirmation step.
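To see why “one-way” hashes still get cracked: nothing is decrypted. If a site used a fast, unsalted hash (a hypothetical here, not MyHeritage’s disclosed scheme), an attacker simply hashes candidate passwords from a wordlist and compares the results against the leaked hashes.

```python
import hashlib

# A hash as it might appear in a leaked dump (SHA-1 of a weak password)
leaked_hash = hashlib.sha1(b"password123").hexdigest()

# A tiny stand-in for the multi-gigabyte wordlists attackers actually use
wordlist = ["letmein", "123456", "password123", "qwerty"]

def crack(target_hash, candidates):
    """Hash each candidate and compare; return the match, or None."""
    for guess in candidates:
        if hashlib.sha1(guess.encode()).hexdigest() == target_hash:
            return guess
    return None

print(crack(leaked_hash, wordlist))  # password123
```

Slow, salted schemes like the one sketched earlier defeat exactly this attack: each guess becomes expensive, and precomputed tables are useless because every user’s salt differs.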

In its statement, the company emphasized that DNA data is stored “on segregated systems and are separate from those that store the email addresses, and they include added layers of security.”

MyHeritage has set up a 24/7 support team to assist customers affected by the breach. It plans to hire an independent cybersecurity firm to investigate the incident and potentially beef up security. In the meantime, users are advised to change their passwords.

Why would criminals want to steal DNA data and then sell it back for ransom? They could threaten to revoke access or post the sensitive information online if not paid. The data could also be valuable to medical and life insurers and to mortgage lenders: in a world where such data ends up online, it could be used to genetically discriminate against people, for example by denying mortgages or raising insurance premiums. (It doesn’t help that interpreting genetics is complicated and many people misunderstand the probabilities anyway.)

Sold quietly to insurers, the consequences are easy to imagine: one day I might apply for a long-term loan and get rejected because, deep in some corporate system, there is data suggesting I am very likely to develop Alzheimer’s and die before repaying it. And if genetic data becomes commonplace enough, people might one day pay a fee to access someone’s genetic profile, the way we can now access someone’s criminal background.

Case in point: Sacramento investigators tracked down East Area Rapist suspect Joseph James DeAngelo using genealogical websites that contained genetic information from a relative, the Sacramento County District Attorney’s Office confirmed Thursday.

The effort was part of a painstaking process that began by using DNA from one of the crime scenes from years ago and comparing it to genetic profiles available online through various websites that cater to individuals wanting to know more about their family backgrounds by accepting DNA samples, said Chief Deputy District Attorney Steve Grippi.

Alexa sends a private conversation to a contact

This really should not be big news; I’ve been saying it since Alexa came out. The mic is open all the time unless you mute it, and data is saved and transmitted to Amazon. Make sure you understand the technology before adding these kinds of IoT devices to your home; I call them the “Internet of Threats.”

The call that started it all: “Unplug your Alexa devices right now.”
Amazon confirmed an Echo owner’s privacy-sensitive allegation on Thursday, after Seattle CBS affiliate KIRO-7 reported that an Echo device in Oregon sent private audio to someone on a user’s contact list without permission.

“Unplug your Alexa devices right now,” the user, Danielle (no last name given), was told by her husband’s colleague in Seattle after he received full audio recordings between her and her husband, according to the KIRO-7 report. The disturbed owner, who is shown in the report juggling four unplugged Echo Dot devices, said that the colleague then sent the offending audio to Danielle and her husband to confirm the paranoid-sounding allegation. (Before sending the audio, the colleague confirmed that the couple had been talking about hardwood floors.)

After calling Amazon customer service, Danielle said she received the following explanation and response: “‘Our engineers went through all of your logs. They saw exactly what you told us, exactly what you said happened, and we’re sorry.’ He apologized like 15 times in a matter of 30 minutes. ‘This is something we need to fix.'”

Danielle next asked exactly why the device sent recorded audio to a contact: “He said the device guessed what we were saying.” Danielle didn’t explain exactly how much time passed between the incident, which happened “two weeks ago,” and this customer service response.

When contacted by KIRO-7, Amazon confirmed the report and added in a statement that the company “determined this was an extremely rare occurrence.” Amazon didn’t clarify whether that meant such automatic audio-forwarding features had been built into all Echo devices up until that point, but the company added that “we are taking steps to avoid this from happening in the future.”

Amazon did not immediately respond to Ars Technica’s questions about how this user’s audio-share was triggered.

Update, 5:06pm ET: Amazon forwarded an updated statement about KIRO-7’s report to Ars Technica, which includes an apparent explanation for how this audio may have been sent:

Echo woke up due to a word in background conversation sounding like “Alexa.” Then, the subsequent conversation was heard as a “send message” request. At which point, Alexa said out loud “To whom?” At which point, the background conversation was interpreted as a name in the customer’s contact list. Alexa then asked out loud, “[contact name], right?” Alexa then interpreted background conversation as “right.” As unlikely as this string of events is, we are evaluating options to make this case even less likely.
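Amazon’s explanation amounts to a chain of four consecutive misrecognitions, each of which had to succeed for the message to be sent. A toy model makes the structure clear; the matching logic and the overheard phrases below are entirely illustrative, not Alexa’s actual pipeline.

```python
def interpret(transcript, steps):
    """Walk a list of (expected_phrase, next_state) steps against overheard
    speech; the chain stops at the first utterance that fails to match."""
    state = "idle"
    for heard, (expected, next_state) in zip(transcript, steps):
        if expected not in heard.lower():
            return state  # chain breaks here; no message is sent
        state = next_state
    return state

# Background conversation misheard at every step, per Amazon's account
overheard = [
    "... ask Alexa about the flooring guy ...",   # wake word triggered
    "... we should send message to him ...",      # heard as a command
    "... Bob said the oak looked fine ...",       # matched a contact name
    "... right, the hardwood ...",                # heard as confirmation
]

steps = [
    ("alexa", "awake"),
    ("send message", "composing"),
    ("bob", "contact_selected"),
    ("right", "message_sent"),
]

print(interpret(overheard, steps))  # message_sent
```

The point of the model: any single failed match leaves the device short of the final state, which is why Amazon called the full sequence “extremely rare” – and why it is nonetheless possible.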

Amazon did not explain how so many spoken Alexa prompts could have gone unnoticed by the Echo owner in question. Second update: The company did confirm to Ars that the above explanation was sourced from device logs.