Archive for May, 2017


Security experts have been saying for more than a decade that it is “not if, but when” an organization will be hacked. So, the more relevant question, posed in the title of a panel discussion at May 24’s MIT Sloan CIO Symposium is: “You Were Hacked: Now What?”

Indeed, given that there is no sure way to prevent every intrusion by so-called “determined adversaries,” much of the defense playbook has shifted to incident response (IR). And that, said panelists, if done quickly and correctly, can mitigate the damage attackers can cause, even if they make it inside a network.

“Hacking is an action,” said Andrew Stanley, CISO of Philips. “A breach is the outcome. So we spend more time on the hack than the breach. We want to know how, why – what was the intent – when and where. That’s what the C-suite wants to know more than the nature of the breach.” Answering those questions is what helps make the response, and therefore containing the damage, more effective, he added.

James Lugabihl, director, execution assurance at ADP, agreed that the key to limiting the damage of a breach is, “how quickly can you respond and stop it.” He said it is also crucial not to react without complete information. “It’s almost like a disaster scenario you see on the news,” he said. “It takes a lot of patience not to react too quickly. A lot of my information may be incomplete, and it’s important to get everybody staged. It isn’t a sprint, it’s a marathon. You need time to recognize data so you’re not reacting to information that’s incomplete.” With the right information, he said, it is possible to “track and eradicate” malicious intruders, plus see what their intentions were.

Both panelists said legal notification requirements can vary by country, or even by state, and if it is not a mandate, notifying law enforcement is something they will sometimes try to avoid. “Executives don’t like it, because it becomes a matter of public record,” Stanley said. “But it also can affect people’s privacy, and you don’t want to become an arm of the government.”

Aside from who needs to know and who legally must know, Stanley said collecting information that can help with the response is the most important thing to do. “It’s about intent,” he said. “If all (phishing) emails are going to one location, that’s an attack. So we need to ask: What do we do there? What’s the target?”

Both also said they conduct tabletop exercises, pen testing and simulated crises to practice their IR for when the real thing happens. But, as Lugabihl noted, “it takes perfect practice to make a perfect response. Bad practice makes bad response.”

To a question from moderator Keri Pearlson, executive director of the MIT Interdisciplinary Consortium for Improving Critical Infrastructure Cybersecurity, about how to cope with the reality that “people are the weakest link” in the security chain, Lugabihl said workers are not entirely at fault. “We haven’t fostered an environment that lets them do their jobs,” he said. “I’ve seen security professionals fall for phishing – those are getting more sophisticated. We just need to encourage them to report it. We need to help make things easier and more transparent.”

In my previous diary, I gave a very brief introduction to the ACH method [1], so that all readers, including those who had never seen it before, have a common basic understanding of it. One thing I have not mentioned yet is how the scores are calculated. There are three different algorithms: an Inconsistency Counting algorithm, a Weighted Inconsistency Counting algorithm, and a Normalized algorithm [2]. The Weighted Inconsistency Counting algorithm, the one used in today’s examples, builds on the Inconsistency Counting algorithm, but also factors in the credibility and relevance weights assigned to each item of evidence: every “Inconsistent” (“I”) or “Very Inconsistent” (“II”) entry subtracts from a hypothesis’s score in proportion to those weights.
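To make the scoring concrete, here is a minimal Python sketch of a Weighted Inconsistency Counting score. It assumes the convention used by the common ACH tooling (credibility and relevance multipliers of √2/2 for low, 1 for medium, and √2 for high, with “I” counting once and “II” twice); the evidence ratings below are purely illustrative.

```python
import math

# Assumed weight multipliers: low = sqrt(2)/2, medium = 1, high = sqrt(2)
WEIGHT = {"low": math.sqrt(2) / 2, "medium": 1.0, "high": math.sqrt(2)}

# Base inconsistency counts: "II" (very inconsistent) = 2, "I" = 1,
# all other ratings (consistent, neutral, N/A) contribute nothing.
INCONSISTENCY = {"II": 2.0, "I": 1.0}

def weighted_inconsistency_score(ratings):
    """ratings: list of (rating, credibility, relevance) for one hypothesis.
    Returns the (negative) weighted inconsistency score."""
    score = 0.0
    for rating, credibility, relevance in ratings:
        base = INCONSISTENCY.get(rating, 0.0)
        score -= base * WEIGHT[credibility] * WEIGHT[relevance]
    return score

# One "I" at high credibility / medium relevance contributes -sqrt(2) ~= -1.414,
# a second "I" at medium/medium contributes -1; consistent entries add nothing.
evidence = [("I", "high", "medium"), ("C", "high", "high"), ("I", "medium", "medium")]
print(round(weighted_inconsistency_score(evidence), 3))  # -2.414
```

Note how a single inconsistent entry with one high weight yields exactly -1.414 (-√2), the kind of value seen in the scores discussed below.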

Today, I will apply ACH to a recent, well-known case: WCry attribution. There have been many analyses and much speculation around it. Lately, several sources in the InfoSec community have tied WCry strongly to the Lazarus Group [3][4][5][6], while others have given reasons to be skeptical of that attribution [7]. It is therefore a perfect case to show the use of ACH: several different hypotheses, facts, pieces of evidence, and assumptions.

Digital Shadows WCry ACH Analysis

About two weeks ago, Digital Shadows published a very well done post on ACH applied to WCry attribution [8]. Regarding possible attribution to Lazarus, though, as stated in their post, “At the time of writing, however, we assessed there to be insufficient evidence to corroborate this claim of attribution to this group, and alternative hypotheses should be considered.” Accordingly, their set of hypotheses lacks one specifically for Lazarus, in place of a more generic nation-state or state-affiliated actor. The following are the four hypotheses considered by Digital Shadows:

A sophisticated financially-motivated cybercriminal actor – H1

An unsophisticated financially-motivated cybercriminal actor – H2

A nation state or state-affiliated actor conducting a disruptive operation – H3

A nation state or state-affiliated actor aiming to discredit the National Security Agency (NSA) – H4

Given the final scores computed, they assessed that, “though by no means definitive, a WannaCry campaign launched by an unsophisticated cybercriminal actor was the most plausible scenario based on the information that is currently available.” Just one note on my side: from my calculation, it seems they made a mistake, and the H2 score should be -2.121 rather than -1.414. This does not change the final result, but it brings H2 and H4 much closer.

My WCry ACH Analysis

Although the Digital Shadows analysis was very good, I felt something was missing, both on the hypotheses side and on the evidence side. In particular, I would add three more hypotheses.

When thinking about the NSA being the final target of this, other than “a nation state or state-affiliated actor aiming to discredit the NSA,” I think one should also consider a (generic/unattributed) threat actor aiming at unveiling/exposing the extent of the possible NSA network of compromised machines (H5). This is something one might expect from a hacktivist, although it seems far more sophisticated than what hacktivists have gotten us used to. One difference from H4 could be the lack of a supporting media narrative: while someone who wants to discredit the NSA would be ready with a supporting media narrative, if the goal was simply to unveil and show everyone the potential extent of NSA-infected machines, the infection as it was would have been sufficient, given the abundant media coverage it got. Although this may still be seen as too close to H4 to be a separate hypothesis, I still see a case for it.

The other hypothesis I’m considering is the Shadow Brokers being behind it (H6). They had racked up some big failures in their previous attempts at monetizing their dumps, as apparently not much credit was given to them or to the quality of their claims; the WCry incident proved the high quality of their leak. As one of the arguments for this: in a timely coincidence, as soon as the first Lazarus attributions started to come up, the Shadow Brokers announced their data-dump-of-the-month service [9]. How many people will now think harder about buying their offer?

Finally, I believe a specific hypothesis for Lazarus, distinct from the generic nation-state actor, is needed given the number of reports and the amount of evidence attributing WCry to it (H7). If I consider Lazarus, I consider financial gain as the motivation, since historically this has been its focus and ransomware is indeed a lucrative market. However, H7 would be inconsistent with the failure to decrypt files after the ransom was paid: this does not serve as good advertisement, and fewer victims would pay once the rumor spread that files were not decrypted even after payment.

Conclusions

While from the results above there seems to be a clear winner in H5, a (generic/unattributed) threat actor aiming at unveiling/exposing the extent of the possible NSA network of compromised machines, what I see in cases like this are three clear losers: H1, a sophisticated financially motivated attacker; H3, a nation state or state-affiliated actor conducting a disruptive operation; and H7, Lazarus Group. I would then focus on looking for other elements regarding the hypotheses that are left in the refinement phase.

Given that ACH works better when multiple analysts contribute their views, please share your feedback. As the folks at Digital Shadows also stated, my analysis is by no means definitive.

“Data.” Ask senior management at any major organization to name their most critical business asset and they’ll likely respond with that one word.

As such, developing a disaster recovery strategy – both for data backup and restoration – is a central part of planning for business continuity management at any organization. It is essential that your company and the vendors you work with can protect against data loss and ensure data integrity in the event of catastrophic failure – whether from an external event or human error.

Think about this: What would you do if one of your trusted database administrators made a mistake that wiped out all of your databases in one fell swoop? Could your business recover?

Backing up data at an off-site data center has long been a best practice, and that strategy relates more to the disaster recovery (DR) component of business continuity management (BCM). DR and BCM go hand in hand, but there is a difference: BCM is about making sure the enterprise can resume business quickly after a disaster, while DR falls within the continuity plan and specifically addresses protecting the IT infrastructure – including systems and databases – that organizations need to operate.

While replicating data off-site is smart, it doesn’t fully address human error, which can be an even greater risk for businesses than a major external catastrophe. The human error factor is why a two-pronged approach to disaster recovery makes sense. Backing up customer data off-site means it is protected from a major uncontrollable event like a natural disaster. But a local strategy is also essential to ensure there are well-trained people, defined processes and the right technology in place to reduce the risk of human error.

Think automation and consider the cloud

Automation of backups (making a copy of the data), replication (copying and then moving data to another location), off-site verification and restoration processes are the most effective ways to address the risk of human error.

Storage replication mirrors your most important data sets between your primary and DR site or service. Most, if not all, mainstream storage vendors provide this functionality out of the box or for a license fee. The replication solution should support scheduling replication events, mirroring data sets against your recovery point objective (RPO), and archival services that allow systems administrators to set up policies that match your business continuity objectives (e.g., six months of off-site monthly archives). And, for added protection, consider FIPS-certified encryption solutions at the disk or controller level, which protect your most critical and sensitive data against accidental exposure by encrypting your data at rest.
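Checking replication recency against an RPO is easy to automate. Here is a minimal Python sketch that flags data sets whose newest off-site copy is older than the RPO; the data-set names and timestamps are hypothetical, standing in for whatever your replication tooling reports.

```python
from datetime import datetime, timedelta

# Hypothetical replication log: when each data set last reached the DR site
last_replications = {
    "customer-db": datetime(2017, 5, 30, 2, 0),
    "orders-db": datetime(2017, 5, 29, 2, 0),
}

RPO = timedelta(hours=24)  # business requirement: lose at most 24h of data

def rpo_violations(replications, now, rpo):
    """Return the data sets whose newest off-site copy is older than the RPO."""
    return [name for name, ts in replications.items() if now - ts > rpo]

now = datetime(2017, 5, 30, 12, 0)
print(rpo_violations(last_replications, now, RPO))  # ['orders-db']
```

A check like this, run on a schedule, turns the RPO from a paper target into something continuously verified.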

You can also leverage WAN acceleration technologies to speed up your off-site replication and/or backups by maximizing the efficiency of your data replication or backup streams, saving costs in both bandwidth and time to replicate your changes off-site. Used in combination with storage replication, this makes for a very secure and resilient architectural approach to data protection and, in some cases, can help lower recurring expenses.

Another choice, if storage replication is unavailable, is to leverage your persistent storage solutions (RDBMS or NoSQL) to replicate changes in real time, as most best-of-breed technologies come with data replication and backup services by default. Spending the time up front to understand which solution is most effective from a cost and execution standpoint is advisable, as there are bound to be differences driven by compliance requirements.

In addition, investing in automation tools and services can greatly improve your response to an unplanned disaster, but does require a solid foundation of configuration management standards to successfully deploy and validate your configuration items (network hardware, storage appliances, server technology, etc.). A dedicated team of DevOps resources can be most effective in this area as Infrastructure as Code continues to gain widespread adoption. Imagine for a moment that instead of troubleshooting failures, you can simply re-provision to a previously certified configuration. Not only are you proving your ability to respond in the face of a disaster, but you may even benefit from automating your infrastructure builds, where applicable, by re-purposing valuable time and resources for other important work.

If you have the right automation in place, with an expected input and an expected output verified through repeatable processes, you mitigate the risk that an engineer or a database administrator will inadvertently push the wrong button and create a data disaster.
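That "expected input, expected output" verification step can be as simple as hashing both sides of a copy. The following Python sketch (file names and paths are illustrative) backs up a file and refuses to report success unless the copy's digest matches the source's.

```python
import hashlib
import os
import shutil
import tempfile

def sha256(path):
    """Hash a file in chunks so large backups don't need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def backup_and_verify(src_path, dst_dir):
    """Copy a file to a backup location, then verify the copy by comparing
    digests: an expected input and an expected output, checked every time."""
    dst_path = os.path.join(dst_dir, os.path.basename(src_path))
    shutil.copy2(src_path, dst_path)
    if sha256(src_path) != sha256(dst_path):
        raise RuntimeError("backup verification failed for %s" % dst_path)
    return dst_path

# Demo with a throwaway file
src = os.path.join(tempfile.mkdtemp(), "data.txt")
with open(src, "w") as f:
    f.write("critical business data")
print(os.path.basename(backup_and_verify(src, tempfile.mkdtemp())))  # data.txt
```

The same pattern scales up: whatever the copy mechanism, verification is a repeatable, automated step rather than a human eyeballing a directory listing.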

The traditional approach is to invest in server, network and storage hardware, and co-location. But you should also consider the major cloud services – Amazon’s AWS, Microsoft Azure or Google Cloud Platform – that allow you to back up your most important data straight to the cloud. It’s another way of investing in disaster recovery without necessarily incurring the cost of buying data centers or hardware.

Companies of all sizes face pressure from investors and customers who want assurance that sensitive data will be protected correctly no matter what happens – from credit card numbers to personally identifiable information (PII). As a cloud-based software provider, I know how important it is that customers have confidence that their data are protected at all times.

Following are some top questions to ask cloud vendors:

Are they investing in automation? Your vendor should be investing in automation to support its own DR plan. A vendor’s own fortified technology foundation and strong security framework can help you meet your own stringent data requirements.

Are they seeking third-party assessments? Ask about their verification processes. They should be engaging an independent assessor twice yearly to verify the efficacy of BCM and DR processes for both U.S. and non-U.S. operations. Testing them twice a year is important because the software space is always changing, and these assessments help to ensure that BCM and DR plans stay fresh.

Are they making assessor reports available to you? Any vendor should make the independent assessor’s reports available to customers – ask to see them. Documentation of specific security certifications can provide additional evidence that their BCM and DR processes are effective.

Are they focused on recovery time? A recovery point objective (RPO) is the maximum targeted period in which data might be lost due to a major incident. Ask where your vendor falls in its industry segment. Similarly, ask about their recovery time objective (RTO) – the targeted duration of time within which they can restore your service after a disaster. Many providers guarantee a two- to three-day average restoration time frame.
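The RPO/RTO distinction is worth encoding when comparing vendors: RPO bounds how much data you can lose, RTO bounds how long you can be down. A small Python sketch (the vendor names and figures are made up for illustration) shows how stated objectives can be screened against your own targets.

```python
def meets_continuity_targets(vendor, required_rpo_hours, required_rto_hours):
    """True if a vendor's stated RPO and RTO are both within your targets."""
    return (vendor["rpo_hours"] <= required_rpo_hours
            and vendor["rto_hours"] <= required_rto_hours)

# Hypothetical vendor figures, e.g. taken from assessor reports
vendors = {
    "vendor_a": {"rpo_hours": 4, "rto_hours": 48},   # 4h data loss, 2-day restore
    "vendor_b": {"rpo_hours": 24, "rto_hours": 72},  # 24h data loss, 3-day restore
}

# Requirement: lose at most 8h of data, restore within 72h
print([name for name, stated in vendors.items()
       if meets_continuity_targets(stated, 8, 72)])  # ['vendor_a']
```

Here vendor_b fails not on restore time but on the amount of data at risk, which is exactly the kind of gap that only surfaces when the two objectives are checked separately.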

Just as you are concerned about data loss and integrity for your own business, you should seek the same from any vendor. Test and refine your own processes, and make sure your vendors do too.

Kids have been locking their diaries and hiding top-secret shoe boxes since long before Sandy Olsson had a crush on Danny Zuko. The need for more and more privacy as they mature is a natural part of growing up. Today, however, some kids hide their private lives behind locked decoy apps, catapulting those harmless secret crushes to a whole new level.

A decoy app is what it sounds like: a mobile app designed to hide something. Decoy apps are also called vault, secret, or ghost apps, and they make it tough for parents to know whether their kids are taking and sharing risky photos with peers, since the apps are disguised as everyday apps.

A decoy app may look like a calculator, a game, or even a utilities icon, but it’s actually a place to tuck away content a phone user doesn’t want anyone to find. Kids use decoy apps to store screenshots of racy conversations, nude photos, pornographic videos, and party photos that are simply too risky to keep in a regular photo folder that mom or dad may find. One case in Pennsylvania documents vault apps at the center of a sexting and cyberbullying case in a middle school.

Adults and Decoy Apps

Many adults are also well acquainted with decoy apps. It’s no surprise adults use these stealth apps to store private business activity, passwords to secret accounts, inappropriate photos, and content related to extramarital affairs. An app such as Vaulty Stocks looks like a Wall Street stock market tracker, but in reality it’s designed to keep private photos and videos hidden from nosy spouses.

How to Spot a Decoy App

If you want to get an idea of how many of these decoy apps exist, go to your iOS or Android app store and search “secret apps” or “decoy apps,” and you will get your fill of the many icons that are in place to hide someone’s private digital life.

Once you know to look for these apps designed to look like a calculator, a safe, a game, a note or even a shopping list app, you are well on your way.

A decoy app can’t be opened without a code or password specified by the original user. Some of these decoy apps, such as Keep Safe Private Photo Vault, actually have two layers of security (two passwords), designed to throw off a parent who can open the first level and find harmless content. According to the app description on the Google Play store, “Keepsafe secures personal photos and videos by locking them down with PIN protection, fingerprint authentication, and military-grade encryption. It’s the best place for hiding personal pictures and videos.” Further privacy is detailed with the promise of a face-down auto lock feature: “In a tight situation? Have Keepsafe lock itself when your device faces downward.” Another app, The Secret Calculator, states in its description: “Don’t worry about the icon. It will become a standard calculator icon. No one will ever notice.”

Connection first. Communication and a strong relationship with your child are the most cyber savvy tools you have to keep your child from making unwise choices online. So, take time each day to connect with your child. Understand what makes them tick, how they use technology, and what’s going on in their lives and hearts.

Monitoring. Weekly phone monitoring and using parental controls is always a good idea depending on the age of your child, your trust level, and the expectations that exist within your family. Know what apps your kids download.

Ask to Buy. Both Apple and Android have parental app purchase approval options on their websites you can set up to examine an app before it’s downloaded.

Get real. Talk candidly about the risks of sending, sharing, and even archiving risky photos on digital devices. Under the law, child pornography is considered to be any nude photograph or video of someone under the age of 18. It usually does not matter if the person possessing or distributing it is under the age of 18. Any offender can face fines and time behind bars. New laws that address juveniles caught possessing or distributing explicit photos are emerging every day and vary state by state.

Reality check. Nothing is private. Kids can share content directly from a decoy app, which means that their passcode is useless. Shared content is out of your hands forever. Sharing risky photos is never, ever a good idea.

It’s worth stressing to your kids that it’s not just about the technology you use, but how you use it that can create issues. None of the decoy apps we mentioned in this post are inherently “dangerous” apps, it’s the way the apps are used that make them unsafe for kids. The same mantra applies to social networks. And remember — give yourself grace as a parent. You can’t police your child’s online activity 24/7. It’s impossible. What you can do is educate yourself and know what these mobile apps do so you can address precarious situations that may come up.