Sherman's Security Blog
I am Sherman Hand (also known as Policysup). I have created this blog and will use part of my day to write about what is going on in the world. I hope to discuss things in a down-to-earth and practical way, and I hope to hear back from you with your thoughts. I do not in any way intend to speak for my employer. The content of this blog will be either opinions that are strictly mine, general observations, reposts, or information that is already in the public domain.

Monthly Archives: November 2017

A series of recently disclosed critical Bluetooth flaws that affect billions of Android, iOS, Windows and Linux devices have now been discovered in millions of AI-based voice-activated personal assistants, including Google Home and Amazon Echo.

As estimated when this devastating threat was first disclosed, several IoT and smart devices, whose operating systems are updated less frequently than smartphones and desktops, are also vulnerable to BlueBorne.

BlueBorne is the name given to the sophisticated attack exploiting a total of eight Bluetooth implementation vulnerabilities that allow attackers within the range of the targeted devices to run malicious code, steal sensitive information, take complete control, and launch man-in-the-middle attacks.

What’s worse? Triggering the BlueBorne exploit requires no user interaction: victims don’t have to click a link or open a file. Most security products would also likely fail to detect the attack.

What’s even scarier is that once an attacker gains control of one Bluetooth-enabled device, he/she can infect any or all devices on the same network.

These Bluetooth vulnerabilities were patched by Google for Android in September, by Microsoft for Windows in July, by Apple for iOS a year before disclosure, and by Linux distributions shortly after disclosure.

However, many of these 5 billion devices are still unpatched and open to attacks via these flaws.
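As a rough illustration of the triage problem, an admin could flag Linux hosts whose kernel predates the fix for CVE-2017-1000251. This is only a sketch: the 4.13.1 threshold below is illustrative, since distributions backport the fix to older kernel lines, so a real check should use the version from your distro's advisory.

```python
def parse_kernel_version(release: str) -> tuple:
    """Extract (major, minor, patch) from a release string like '4.9.30-generic'."""
    base = release.split("-")[0]
    parts = (base.split(".") + ["0", "0"])[:3]
    return tuple(int(p) for p in parts)

def needs_blueborne_patch(release: str, patched=(4, 13, 1)) -> bool:
    """Return True if the kernel is older than the patched release.

    The `patched` threshold is illustrative only; consult your
    distribution's advisory for CVE-2017-1000251 for the real
    backported version on your kernel line.
    """
    return parse_kernel_version(release) < patched

print(needs_blueborne_patch("4.9.30-ti-r38"))   # an older, likely unpatched kernel
print(needs_blueborne_patch("4.14.0-generic"))  # a kernel newer than the fix
```

Comparing version tuples this way is crude (it ignores backports entirely), but it's a cheap first pass across a fleet before deeper scanning.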

IoT security firm Armis, which initially discovered this issue, has now disclosed that an estimated 20 million Amazon Echo and Google Home devices are also vulnerable to attacks leveraging the BlueBorne vulnerabilities.

Broken down, that is roughly 15 million Amazon Echo and 5 million Google Home devices sold across the world that are potentially at risk from BlueBorne.

Amazon Echo is affected by the following two vulnerabilities:

A remote code execution vulnerability in the Linux kernel (CVE-2017-1000251)

An information disclosure flaw in the SDP server (CVE-2017-1000250)

Since different Echo variants run different operating systems, other Echo devices are affected by either the Linux or the Android vulnerabilities.

Google Home, for its part, is affected by an information disclosure flaw in Android; this Android flaw can also be exploited to cause a denial-of-service (DoS) condition.

Since Bluetooth cannot be disabled on either of the voice-activated personal assistants, attackers within the range of the affected device can easily launch an attack.

Armis has also published a proof-of-concept (PoC) video showing how they were able to hack and manipulate an Amazon Echo device.

The security firm notified both Amazon and Google about its findings, and both companies have released patches and issued automatic updates for the Amazon Echo and Google Home that fix the BlueBorne vulnerabilities.

Amazon Echo customers should confirm that their device is running v591448720 or later, while Google has not yet released version information for its patch.

Enterprise networks regularly see change in their devices, software installations, and file content. These modifications can create risk for the organization. Fortunately, companies can mitigate such risk by implementing foundational security controls.

For example, enterprises can monitor their important files for change using file integrity monitoring (FIM). This security measure enables IT security teams to determine when files change, how they change, who changed them, and what can be done to restore them if those modifications are unauthorized. Organizations can also use foundational controls to monitor for vulnerabilities potentially introduced by the addition of new physical and virtual devices. FIM won’t do the job, however. To obtain an accurate assessment of risk, minimize security threats, and maintain compliance, companies should turn to vulnerability management.

There are four stages to any effective vulnerability management program. These are as follows:

Vulnerability Scanning Process: Companies cannot adequately manage risk without first determining which of their IT assets need protecting. Towards that end, organizations should leverage factors such as physical or logical connection to higher-classified assets, user access, and system availability to develop an asset’s risk factor. They should then identify the owners for each of those assets, set a scan frequency (the Center for Internet Security recommends a frequency of at least weekly), and establish timelines and thresholds for remediation.

Asset Discovery and Inventory: Once they have developed the vulnerability scanning process, enterprises must decide which assets they will subject to that procedure. They must therefore engage in asset discovery, another foundational control, and develop an inventory of all hardware and software installed on the corporate network. That inventory should include both authorized and unauthorized devices/software so that security teams can approve access and installation/execution for approved devices/software only. It should also record more granular details including possible connections with other assets, configuration, maintenance and replacement schedule, software installations, and usage.

Vulnerability Detection: The next step in a vulnerability management program is to apply the vulnerability scanning process to those assets recorded in the company’s inventory. This procedure generally takes the form of automated vulnerability scans. Upon completion, it might reveal weaknesses on certain discovered assets.

Reporting and Remediation: In the event a scan detects vulnerabilities, it’s up to the organization to report and remediate those weaknesses. Effective reporting and remediation usually involves prioritizing all discovered vulnerabilities and creating a patching schedule based upon those rankings. If a complete fix isn’t available, security teams should investigate if there are any workarounds available that they can use to mitigate the risk posed by an unpatched vulnerability.
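The reporting-and-remediation stage above can be sketched as a simple prioritization pass: rank findings by severity and attach a remediation deadline per tier. The severity thresholds below follow the CVSS v3 qualitative ranges, but the SLA windows are made up for illustration; real timelines come from your own remediation policy.

```python
from datetime import date, timedelta

# Illustrative remediation windows per severity tier (not a standard).
SLA_DAYS = {"critical": 7, "high": 30, "medium": 90, "low": 180}

def severity(cvss: float) -> str:
    """Map a CVSS base score to a severity tier (CVSS v3 qualitative ranges)."""
    if cvss >= 9.0:
        return "critical"
    if cvss >= 7.0:
        return "high"
    if cvss >= 4.0:
        return "medium"
    return "low"

def patch_schedule(findings, today=None):
    """Sort (asset, cve, score) findings by score and attach a due date per tier."""
    today = today or date.today()
    plan = []
    for asset, cve, score in sorted(findings, key=lambda f: -f[2]):
        tier = severity(score)
        plan.append((asset, cve, tier, today + timedelta(days=SLA_DAYS[tier])))
    return plan
```

Even a toy schedule like this makes the workaround question concrete: anything whose due date passes without an available fix is a candidate for compensating controls.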

You can learn more about the four stages of a vulnerability management program by reading this three-part guide.

Companies don’t need to stop there, however. They can augment the effectiveness of their vulnerability management program by investing in a tool that comes equipped with additional capabilities. Some add-on features to consider include the following:

Risk Scoring: Rather than just relying on quantitative vulnerability scoring systems like CVSS, businesses should be able to weigh vulnerabilities discovered on their networks based on their own unique requirements or specifications of their industry. Towards that end, they should choose a tool that uses risk scoring to customize vulnerability management data so that they can better protect themselves against digital threats.

Credentialed Assessment: Organizations should invest in a vulnerability management tool that uses administrative credentials to scan the file system, registry, and configuration files. These types of assessments aren’t always needed. However, they do provide a level of depth that non-credentialed assessments lack and can thereby yield more accurate vulnerability scanning results.

Identity and Access Management: Companies should integrate their vulnerability management system with their discovery service by investing in a tool that allows them to segregate vulnerability management data and partition user access. That way, only those who need access to such information can get it.

IT and Security Integrations: In addition to integrating their vulnerability management program with the discovery service, companies should configure their platform to work with IT operations and security teams. This makes it possible for organizations to optimize their resources in the pursuit of specific business goals.

Reporting: Once a vulnerability is discovered, authorized personnel should be able to use the vulnerability management tool to generate reports with an appropriate level of data for auditors, business executives, and a variety of audiences. They should also be able to customize those reports using filters and then distribute those analyses to users based on their roles. Those reports, in turn, can help organizations manage their security budgets and maintain compliance with relevant data security standards frameworks.
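The risk-scoring idea above can be sketched as weighting a raw CVSS score by factors the business defines itself. The factor names and weights below are invented for illustration; a real tool would expose these as configurable policy.

```python
def business_risk(cvss: float, asset_criticality: float, internet_facing: bool) -> float:
    """Adjust a raw CVSS score by business context.

    asset_criticality: an illustrative weight, e.g. 0.5 for a lab box
    up to 2.0 for a revenue-critical system. Internet-facing assets
    get a further 1.5x bump. The result is capped at 10.0 to stay on
    a CVSS-like scale.
    """
    score = cvss * asset_criticality
    if internet_facing:
        score *= 1.5
    return round(min(score, 10.0), 1)

# The same CVE can rank very differently across assets:
print(business_risk(6.5, 0.5, False))  # internal lab machine
print(business_risk(6.5, 2.0, True))   # public payment server
```

The point is not the particular formula but that the ranking reflects your environment rather than the raw score alone.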

Apple has released a brand new ad for the iPad Pro. It features a young girl and a rose gold iPad Pro running iOS 11. And Apple’s pitch is quite clear here — the iPad is the future of computers. The company even thinks there will be a time when a young person doesn’t know what “computer” means.

There’s this meme that keeps coming back on Twitter. A young person discovers a floppy disk and calls it the save icon. Apple is using the same idea with this ad. When the mum asks her daughter what she is doing on her computer, she answers “what’s a computer?”

The character is never sitting at a desk. She’s always on the move, always using her iPad. It can be outside in the garden, on the bus, at a coffee shop or at the top of a tree.

She’s using FaceTime and drawing on a screenshot using an Apple Pencil. She browses her photos, then drags and drops a photo into an iMessage conversation while FaceTiming. This is a good example of iOS 11’s new multitasking capabilities.

She types some text in Word, takes a photo, draws, illustrates handwritten notes with photos and reads a comic book. It’s clear that Apple wants you to realize that you can create stuff from an iPad.

It isn’t just a media consumption device. Apple isn’t just selling a device. The company is selling a new spontaneous lifestyle.

Germany’s Federal Network Agency (Bundesnetzagentur) issued a blanket ban on smartwatches aimed at children this week — and asked parents who’d already purchased such a device to destroy them, for good measure. The aggressive move is a response to growing privacy concerns surrounding devices aimed at minors.

“Via an app, parents can use such children’s watches to listen unnoticed to the child’s environment and they are to be regarded as an unauthorized transmitting system,” the agency’s president Jochen Homann said in a statement provided to the BBC. The FNA also urged educators to pay closer attention to students’ watches, as, “according to our research, parents’ watches are also used to listen to teachers in the classroom.”

Such concerns have been growing in recent years, as kid-targeted wearables have become more popular, along with their adult counterparts. Just last month, European watchdog group the Norwegian Consumer Council issued a strongly worded report warning of safety concerns over GPS-enabled devices. That report went beyond tracking by parents, outlining the potential for simple hacking by outside parties.

“Any consumer looking for ways to keep their children safe and secure might want to think twice before purchasing a smartwatch as long as the faults outlined in these reports have not been fixed,” the NCC wrote.

That report specifically highlighted four kids’ smartwatch brands — Gator 2, Tinitell, Viksfjord and Xplora. The Federal Network Agency’s new rules, meanwhile, take things much further, banning the category at large. The decision follows a similar move last February, when the agency banned and ordered the destruction of the My Friend Cayla doll, after concerns were raised over the toy’s built-in microphone and Bluetooth connectivity.

Like that doll, the smartwatches have been classified as illegal spying devices by the agency.

A new proof-of-concept exploit, called AVGater, abuses antivirus quarantines to attack systems and gain full control.

Security researchers described a proof-of-concept exploit that affects multiple antivirus products and can lead to a full system takeover.

Florian Bogner, a security researcher based in Vienna, disclosed the issue and named it AVGater, because, as Bogner wrote in his blog post, “every new vulnerability needs its own name and logo.”

Bogner said AVGater works by “manipulating the restore process from the virus quarantine.”

“By abusing NTFS directory junctions, the AV quarantine restore process can be manipulated, so that previously quarantined files can be written to arbitrary file system locations,” Bogner wrote in his blog post. “By restoring the previously quarantined file, the SYSTEM permissions of the AV Windows user mode service are misused, and the malicious library is placed in a folder where the currently signed in user is unable to write to under normal conditions.”

According to Bogner, he disclosed the AVGater vulnerability to Trend Micro, Emsisoft, Kaspersky Lab, Malwarebytes, Check Point and Ikarus Security Software, and all of those vendors have released patches for affected products.

Bogner did not specifically mention Symantec or McAfee in his post, and neither company had responded to questions at the time of this article.

Bogner suggested that keeping software up-to-date is a good way to mitigate the risk of AVGater, but also noted there are limitations to the exploit.

“As AVGater can only be exploited if the user is allowed to restore previously quarantined files, I recommend everyone within a corporate environment to block normal users from restoring identified threats,” Bogner wrote. “This is wise in any way.”

Satya Gupta, founder and CTO at Virsec Systems, an application threat software company based in San Jose, Calif., said AVGater is yet another way an attacker could manipulate “legitimate processes to launch malicious code or scripts.”

“It’s also another nail in the coffin for conventional signature-based antivirus solutions. We’ve known for a while that fileless and memory-based exploits fly under the radar of most AV systems, but now hackers can use AV tools to essentially disable themselves,” Gupta told SearchSecurity. “Hackers are relentless and will inevitably find clever ways to bypass perimeter security. The battle has to move to protecting the integrity of applications for process and memory exploits.”

In its alert, DHS’s Industrial Control Systems Cyber Emergency Response Team describes how physicians use the portable cardiac rhythm management systems – or programmers – for implanted pacemakers and defibrillators. The vulnerabilities spotlighted in the alert involve the Boston Scientific device using “a hard-coded cryptographic key to encrypt protected health information prior to having data transferred to removable media.” Use of such a key significantly increases the possibility that encrypted data may be recovered. The alert also notes that the “device does not encrypt PHI at rest.”
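To see why a hard-coded key is so dangerous: extracting it from any one unit decrypts the data on every unit. A common mitigation is deriving a unique key per device from a master secret and a device identifier, so one compromised device key reveals nothing about the others. A minimal sketch using Python's standard library (the master secret and serial numbers here are made up, and a real design would keep the master secret in an HSM):

```python
import hmac
import hashlib

MASTER_SECRET = b"stand-in master secret"  # illustrative; never hard-code in practice

def device_key(device_serial: str) -> bytes:
    """Derive a per-device key as HMAC-SHA256(master_secret, serial).

    Unlike a single hard-coded key shared by all units, a derived key
    recovered from one device cannot decrypt any other device's data.
    """
    return hmac.new(MASTER_SECRET, device_serial.encode(), hashlib.sha256).digest()

k1 = device_key("PRM-0001")
k2 = device_key("PRM-0002")
assert k1 != k2          # every unit encrypts under its own key
assert len(k1) == 32     # a 256-bit key
```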

Although that Boston Scientific cardiac programming device is not network accessible and the identified vulnerabilities are not remotely exploitable, the problems found by Rios and Butts could enable a potential attacker with physical access to the device to obtain patient data, the alert says.

More to Come?

The specific Boston Scientific PRM model that is the subject of the ICS-CERT alert is among a variety of vendors’ programming devices that Rios and Butts purchased from online auction sites for their security research, Rios explains in an interview with Information Security Media Group.

“The [vulnerable] PHI [cryptographic] key that we saw on the [Boston Scientific] programmer, that’s just the first of others to come,” Rios warns, adding that various vulnerabilities the researchers found on other vendors’ programming devices could also potentially result in additional government alerts.

“For some of the [resold] programmers, we actually found real patient data on them. So, when you look at the ICS-CERT advisory for the Boston Scientific programmer, you see that we basically have the key to decrypt the different pieces of data on [that] programmer.”

Breach Risk

The researchers’ finding of actual patients’ PHI – including names and Social Security numbers – on some of the examined resold devices suggests that there are not only weaknesses in the products’ design and features but also sloppy practices by some healthcare entities that neglect to erase patient data before getting rid of the products, Rios says.

“That means anyone could have literally purchased these [used] devices and gotten this patient data off of these devices,” he says.

“So if you’re a hospital or a health delivery organization … when you go to the end of your device life cycle, when you turn the device in or dispose of it, you need to be sure your hospital’s or patients’ data is not on those devices,” he says. “If those devices end up on an auction website … or given to someone who’s not supposed to have it, and your hospital’s data is on there, that can put you at a lot of risk.”

Boston Scientific Responds

In a statement provided to ISMG, Boston Scientific says the company “rigorously” evaluates the security of its rhythm management devices through a comprehensive security risk assessment process, aligned with the Food and Drug Administration’s guidance.

“The ICS-CERT advisory highlights the importance of physical security in mitigating the risk of unauthorized users accessing patient data stored on a medical device – much like a laptop left in an open space is at risk of a security breach,” Boston Scientific says.

“The findings of the advisory do not impact patient safety, and in order to reduce risk of exploitation of protected health information, programmers and any related data storage drives should be physically secured and patient data should be removed from the device before it is retired.”

In the interview, Rios also addresses:

Whether the issues the two researchers identified are common to other types of medical devices;

The prospect of additional security or safety alerts from government agencies resulting from the research.

Rios is the founder of information security research firm WhiteScope, based in Half Moon Bay, Calif. His previous roles included director of vulnerability research and threat intelligence for Qualys, global managing director of professional services for Cylance, and “security ninja” for Google. He’s also served as an officer in the U.S. Marines and worked as an information assurance analyst for the U.S. Defense Information Systems Agency.

Some cloud providers reserve the right to scan your data for various violations, but few enterprises know if they or their employees have agreed to such terms of service.

On Halloween, Google told its Google G Suite users that “this morning, we made a code push that incorrectly flagged a small percentage of Google Docs as abusive, which caused those documents to be automatically blocked. A fix is in place and all users should have full access to their docs.”

That misfire reminded everyone that cloud providers have access to all your data. Many people worried that Google was scanning users’ documents in real time to determine if they’re being mean or somehow bad. You actually agree to such oversight in Google G Suite’s terms of service.

Those terms include personal conduct stipulations and copyright protection, as well as adherence to “program policies.” Who knows what caused the program that checks for abuse and other violations of the G Suite terms of service to go awry. But something did.

And it’s not just Google that has such terms. Chances are you or your employees have signed similar terms in the many agreements that people accept without reading.

The big concern from enterprises this week was not being locked out of Google Docs for a time but the fact that Google was scanning documents and other files. Even though this is spelled out in the terms of service, it’s uncomfortably Big Brother-ish, and raises anew questions about how confidential and secure corporate information really is in the cloud.

So, do SaaS, IaaS, and PaaS providers make it their business to go through your data? If you read their privacy policies (as I have), the good news is that most don’t seem to. But have you actually read through them to know who, like Google, does have the right to scan and act on your data? Most enterprises do a good legal review for enterprise-level agreements, but much of the use of cloud services is by individuals or departments who don’t get such IT or legal review.

Enterprises need to be proactive about reading the terms of service for cloud services used in their company, including those set up directly by individuals and departments. It’s still your data, after all, and you should know how it is being used and could be used.

Typically, these terms are not negotiable, so you have to be prepared to block cloud providers whose terms are unacceptable and provide users an alternative. But cloud providers might be willing to rewrite portions of their terms of service over privacy concerns if your enterprise is large enough, so ask!

Perhaps the scariest part is that you typically have no way of auditing a public cloud provider to determine whether it is examining your data, regardless of what its terms of service allow. At the end of the day, this comes down to trust. But you should at least be aware of what your providers can do, so you can decide whom to trust.

Nearly nine in 10 respondents said they are confident about their cybersecurity posture and in a position to protect their organization from an impending threat. Another 85 percent said they have changed or plan to change their security policies and procedures in the wake of widespread cyberattacks. That’s good, because nearly half believe their company will experience a major security incident within the next year.

However, you have to wonder if they are truly that confident or if they are exaggerating their security posture and their internal security skills. The report also said this:

Attackers that successfully get onto a network can move laterally if access to information is available. Yet surprisingly only 66 percent of U.S. organizations and 51 percent of EU organizations fully restrict access to sensitive information on a “need-to-know” basis. . . . As shown with the DNC and Equifax breaches, attackers can get onto a network and spend weeks or even months stealing sensitive information before anyone knows they’ve been compromised. Despite these dangers, 8 out of 10 respondents in the EU and the U.S. are confident or very confident that hackers are not currently on their network.

Unfortunately, we don’t know what they base that confidence on, and that could spell disaster if it is falsely placed.

Michael Patterson, CEO of Plixer, told me in an email comment that he sees the results of this survey as good news/bad news:

The good news from this is that these executives are asking their security teams questions relating to preparedness. The bad news from this is IT teams are often fearful to expose weakness. Unless there is a culture of openness and a willingness to invest more time, people, and money, nobody really wants to respond with anything other than “we are prepared.” IT teams are fearful that exposing vulnerabilities will reflect poorly on them. There must be a shift of attitude from the boardroom all the way to the security operations teams acknowledging that prevention is impossible.

To be truly prepared, Patterson added, organizations need to have a well-defined incident response process and access to forensic data from network traffic analytics so that when an incident does occur, organizations are able to quickly understand all of the logistics of the breach and return the company to normal functions as soon as possible.

So, to answer my opening question: was the Equifax breach the wake-up call we needed? I think the answer is mixed. Yes, security decision-makers are being forced to look more closely at their security posture, but I think there is still a long way to go in understanding how best to protect the network and data.

AT&T CFO John Stephens acknowledged at a conference on Wednesday that the timing of the deal is “now uncertain.” It was originally expected to close by the end of the year.

In a press release following the conference, AT&T said its “discussions with the U.S. Department of Justice regarding the company’s acquisition of Time Warner are continuing. Stephens said he couldn’t comment on those discussions but that there is now uncertainty as to when the deal will close.”

It’s typical for large deals to undergo antitrust review to avoid unfair competition or monopolies. It’s not typical for the president to weigh in on a deal due to a personal grudge against a company.

If CNN does not end up going to AT&T as part of the deal, a long-time rumor is that it could be sold to CBS. Earlier this year, CBS CEO Les Moonves said that he believes CNN could “enhance” CBS, “but I don’t think that’s on the table right now.”

LinkedIn, the social network for professionals that was acquired by Microsoft for $26.2 billion, is today rolling out the latest product in its deepening relationship with its owner. The two are unveiling Resume Assistant, a resume builder in Microsoft Word that will be powered by data from LinkedIn — letting you import information about yourself and the companies that you have worked for into your Word document, tapping into some algorithms and artificial intelligence to help suggest wording and other items to help fill out your experience.

The feature will start to go live Thursday, first to Office 365 subscribers on PC (part of the Office Insiders program) and then more widely to other Word users in future months.

The move follows several other products that have come out over the last couple of months that have seen the two companies finally working more closely together.

These have included LinkedIn integrations in Outlook that enhance contact info in your email inbox, which was the first step in a bigger strategy, announced in September of this year, to integrate more LinkedIn data into Office 365 products.

There are a number of areas where we have not seen collaboration, but that could be ripe areas for it — for example Cortana integration into LinkedIn’s new “smart replies” feature that suggests replies and wording to people sending messages to each other; or Skype integration into the same messaging service to allow for voice and video calling.

What’s interesting with this latest development is that it taps into pre-existing strategies for both Microsoft and LinkedIn.

On the side of Microsoft, the company has been offering templates to Word users for years already, giving them prompts to help them create prettier and more useful documents in a program that — let’s be honest — has over the ages become weighted down with so many features, that no number of Clippy iterations or help windows will help you out quickly.

This will be one of the first instances of Microsoft not only giving you help with the format of a document, but with the content that goes into it, and a resume is a pretty important and often foxing document at that.

On the part of LinkedIn, it has a long history of working on ways to essentially mimic or even replace the function of a resume for its members.

This has included trying to forge closer ties with universities and other places of learning to help users capture the earliest stages of their career development, as well as letting people apply for jobs using their LinkedIn profiles as resume proxies.

Although Microsoft and LinkedIn are not talking about this explicitly as an exercise in artificial intelligence, there will be some assistant-like features incorporated into Resume Assistant. They will include suggestions for how to word items in your resume.

For example, once you begin to enter information, the assistant will suggest “insights from millions of member profiles so you can see diverse examples of how professionals in that role describe their work,” notes Kylan Nieh, a product manager at LinkedIn who worked on the integration. The same will apply for what kinds of skills you can describe yourself as having: you’ll get suggestions for these based on skills “other successful professionals in your desired role and industry have, so you can add them if applicable.”

The other area where the Resume Assistant will be proactive is in giving you an idea of where to target your resume in the first place. The feature reads what you have in your profile to suggest job listings that are relevant to you. “Along with job openings, you’ll see details of what the job requires, helping you to tailor your resume to a specific role,” Nieh notes.

You will also be able to turn on Open Candidates, the feature that lets you signal, to recruiters only, that you’re open to being approached about a job, yet another way the two companies are coming closer together.