Security @ Adobe
Working to keep your digital experiences secure

Many of my recent talks on automation strategies have referred to a tool called “Marinus.” The tool is designed to help solve the challenges large organizations face in having an accurate view of their external-facing infrastructure. “Marinus” can be a useful component of your broader security risk strategy and toolkit by helping you more quickly uncover potential problem areas. Today, Adobe is releasing an open-source version of Marinus, enabling those in the security community to leverage it within their organizations.

Shadow IT, legacy systems, acquisitions, and other aspects of running a large infrastructure can make it difficult to keep track of your Internet-facing footprint. However, these are often the very systems that attackers will target as their first step to encroaching on your critical systems. For an organization to have an accurate risk assessment of their exposure, they first need to identify their “unknown unknowns.”

The name “Marinus” refers to Marinus of Tyre, an early pioneer in geography. Similarly, the goal of the Marinus project is to assist organizations in creating complete maps of their networks. Marinus collects a myriad of data such as DNS records, reverse DNS records, TLS certificates, open ports, HTTP headers, and several other types of information that are publicly viewable from the Internet. Once collected, Marinus data can be used to identify risks such as sub-domain takeovers, internal services that are unintentionally exposed, VirusTotal detections, and much more. It can also provide a visual summary of the network. A very simple example for the domain adobesign.com is shown below:

In the picture above, the “www” and “trust” sub-domains share the same coloring as their parent domain “adobesign.com.” Domains that are unrelated to adobesign.com are given different colors and called out in the legend. In this case, both “www.adobesign.com” and “trust.adobesign.com” are CNAME references to “orgin.acrobat.adobe.com.” The AWS IP addresses used by adobesign.com are grouped together by color, as well. If a traditional data center IP had been identified, then it would have been colored differently and grouped by its Class C. These visual summaries can be a great help when dealing with more complex domains. The legend will show all of the Class C’s and third-party domains that are in use and how they relate to the core domain. The map will show you all the sub-domains that are viewable from the Internet and how they are related to IP addresses and third-party domains.

Marinus is capable of collecting data on any given network from a wide variety of free, commercial, and internal sources. Marinus uses numerous sources in order to create as complete a view of the network as possible. If Marinus were to rely solely on internal services that IT regularly uses, then it would be blind to issues such as shadow IT or forgotten legacy systems. By leveraging third-party data sources, Marinus is able to provide an unbiased view of the network. Marinus can also gain perspective on new acquisitions that haven’t yet been integrated into corporate management solutions.

Projects such as Rapid7’s Open Data project, VirusTotal, Certificate Transparency logs, and Common Crawl can provide free data on the hosts and services that are currently exposed. Marinus provides support for several commercial services, such as PassiveTotal and Censys. In addition, Marinus can collect data from internal DNS tracking services such as InfoBlox, UltraDNS, AWS Route53, and Azure DNS. By mixing internal and external data, Marinus can provide perspective on how much is known vs. unknown within the organization. Internal data can also be used to help identify which teams own the systems that require a follow-on internal investigation. Because it uses data from public sources, Marinus does not need to conduct its own network scans, although there is an option to have Marinus collect data using tools from the ZMap project if that would help your organization.

The value of combining multiple data sources is not limited to gaining a complete set of DNS records for the entire organization. These sources often contain additional information such as TLS certificates, HTTP headers, and handshake information from open ports. This allows a Marinus user to quickly search their entire organization for best-practice adoption, out-of-date environments, policy violations, and much more. For a central security team in a large organization, being able to quickly search a database of information that covers the entire company saves a lot of phone calls and emails. In addition, since this data was collected by external third parties, there is no confusion about whether the data you are searching would be visible to the outside world.

The Marinus suite requires three components: a Mongo database for storing collected data; one or more servers to run the individual Python scripts that collect data from the remote sources and store it in the Mongo database; and a server to run the interactive NodeJS web site used for viewing the data. The NodeJS web server also provides REST APIs so that organizations can extract data from Marinus for their own custom reports or internal automation. The GitHub README files contain the details on the specifics of what is required in these environments.

Marinus has been in development for over two years. As with any project, there is always more that can be done. For instance, I am currently investigating possible integrations for Shodan and Google Compute. That said, Marinus is not meant to be all things to all people. The UI for the Marinus web site is designed to allow quick one-off searches that would otherwise be too onerous to script. There are also a few sample reports to inspire people on how the data can be turned into actionable information. However, the real power of Marinus is that it is a database of useful information on every host in the organization, made available via REST APIs. Users can access the data via the REST APIs to power their existing automation, perform company-wide searches, and create their own reporting.
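For instance, a script consuming those REST APIs for a custom report might look like the following sketch. The endpoint path, parameter names, and API-key header here are assumptions made for illustration; the GitHub README documents the actual routes:

```python
import requests

MARINUS = "https://marinus.example.com/api/v1.0"  # hypothetical deployment URL

def find_hosts_with_header(header, value, api_key):
    """Search collected HTTP header data across the whole organization."""
    resp = requests.get(
        f"{MARINUS}/hosts/headers",                # assumed route
        params={"header": header, "value": value},
        headers={"x-api-key": api_key},            # assumed auth scheme
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

# e.g., find every externally visible host still advertising an old server banner:
# hosts = find_hosts_with_header("server", "Apache/2.2", API_KEY)
```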

I would like to thank fellow Adobe team members Mayank Goyal and Bhumika Singhal for their contributions to the project during development. The Marinus project is now available on the Adobe GitHub portal.

Peleus Uhley
Principal Scientist

October was Cybersecurity Awareness Month. Across the globe, OWASP holds various events, supported by Adobe, to further awareness and education in web application security. In Bucharest, Romania, the 6th edition of the annual Bucharest OWASP AppSec conference was organized at Hotel Caro with over 350 participants. It was a three-day conference with training days, Capture the Flag (CTF) contests, and educational presentations and panels where industry practitioners shared their experiences, knowledge, and projections.

This year’s Capture the Flag prizes and Speaker Dinner were sponsored by Adobe and our security team from Romania got actively involved as organizers and speakers at the event. Members of our team organized a “Women in AppSec” panel to discuss issues facing women in the field and possible activities to encourage more women to become cyber security professionals.

Adobe Security Team @ OWASP Bucharest 2018

There were two tracks at this year’s conference, both packed with industry-focused subjects. One track included talks on browser security controls, access control, REST APIs and sessions, use of cookies versus tokens, AV evasion, incident response, and threat intelligence. Adobe’s own Isabella Minca also presented a talk in this track on “Automating Security Operations using Splunk’s Phantom.” The other track included a separate Adobe-led session on DevSecOps, along with talks on Android and iOS security, penetration testing, data exfiltration, GDPR, and general cloud security topics.

OWASP AppSec Bucharest 2018 Organizing Team

If you are in Romania next year for the OWASP conference, be ready to grab a coffee at the lunch break with the Adobe team to discuss our mutual security passions. We will also continue to highlight the efforts of our Romanian and other global security teams here on the Security@Adobe blog.

Daniel Barbu
Manager of Security, TechOps

We often say that software security is a process, not a task that gets completed. The threat landscape is always evolving. Accordingly, Adobe works not only on providing updates to the software that our customers use, but also on updating the guidance that we provide to customers on how to configure their on-premises Adobe software.

With that in mind, I’d like to draw the attention of AEM customers toward a document we’ve just published that includes information on important updates that should be applied to your AEM installations, as well as configuration guidance that is included in our AEM Security Checklist. One area we want to highlight is the importance of using the latest Dispatcher version, and of applying the recommended configuration and security rules.

As always, it’s important to keep installations current with the latest and greatest AEM Security Updates. And if you have any questions, please contact your Customer Success Manager, or Adobe Customer Care.

Cybersecurity is often a game of cat and mouse: attackers are constantly trying to outsmart defenders. Attackers are keen to bypass security mechanisms, evade detection, and leverage the latest vulnerabilities.

We face the same issues and concerns as most organizations. We constantly ask ourselves these questions: How do we help ensure that all assets are protected? How do we help ensure that our employees are as secure as possible from outside threats? How can we help mitigate future emerging threats?

Monitoring and reporting are important factors to success in any incident response program. However, we cannot simply rely upon counts of visits/visitors, endpoints, services, logins/authentications, etc. Malicious activities often come in subtle forms designed to evade both technology and skilled professionals. Add to this the large number of monitored events, and you have a potential situation in which white noise could drown out genuine threats. Moreover, today’s monitoring and reporting tools often rely on a static set of rules that trigger alerts when specific conditions are matched. Those rules are based on threat intelligence from multiple sources, but also on the experience, ingenuity, and maturity of the people behind the tools.

However, returning to the cat and mouse game – attackers will always try to find the next unconventional attack that could bypass security systems and mindset. In this case, how do we better protect ourselves from the unknown? This is where machine learning techniques can help. Machine learning can be applied to assist with a common attack vector – changes and insertions at the command line. Command-line interfaces are frequently used by system administrators, users and applications. Many software products launch console scripts to perform certain tasks such as checking system details or resources (‘net’, ‘wmic’), managing firewall rules, registering services, and so on. However, not all script patterns are common for all applications. Malware writers and more advanced attackers also like to leverage those native system capabilities.

Command lines are an interesting attack vector because they closely resemble human speech: (a) they make use of a defined syntax and (b) they enforce “semantics” through the ordering of and dependencies between consecutive tokens and their role in the final outcome. Given this, depending on the tokens and their order, similar command lines can have very different behaviors.
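Consider, for instance, these two hypothetical Windows command lines, identical except for one trailing token (the account name and password are made up for illustration):

```python
# Two near-identical command lines with very different outcomes (illustrative):
benign  = "net user backup_svc Str0ngPass!"       # resets an existing account's password
suspect = "net user backup_svc Str0ngPass! /add"  # silently creates a new local account
```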

Since we are dealing with “language,” the first thing that comes to mind is applying natural language processing (NLP) techniques to this use case. Unfortunately, that will not work. While the basic principles are sound (document clustering/anomaly detection, etc.), it is fairly far-fetched to expect NLP techniques to work on such statements out of the box. Command lines resemble natural language, but they are definitely not natural language (at least not for humans). The syntax is far more constrained than that of natural language and there are not many open classes.

Clustering based on TF-IDF (term frequency-inverse document frequency) would also not be optimal. The issue is that the “bag of words” neglects the semantics (role) of the command line and its arguments. Consider the “rm” and “cp” commands. If you use TF-IDF and measure the distance between each pair of commands, you will end up with really similar scores. However, there is a clear difference between `rm` and `cp`. The same holds for cases where replacing just one parameter or token generates totally different behavior.
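A minimal sketch of the problem (the command lines are illustrative):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# "rm" deletes the release; "cp" copies it. The bag of words barely notices.
cmds = ["rm -rf /var/www/app/releases/current",
        "cp -rf /var/www/app/releases/current"]
vectors = TfidfVectorizer(token_pattern=r"\S+").fit_transform(cmds)

# A high similarity score for two opposite operations: only one token differs.
print(cosine_similarity(vectors[0], vectors[1]))
```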

Unlike the naive implementation of TF-IDF and clustering, BLEU (Bilingual Evaluation Understudy) is able to capture and model the dependencies between adjacent tokens in a command line. BLEU is a measure used in machine translation to establish how well an automated translation system performs against gold standard data. It works by counting occurrences of not just isolated tokens (“bag of words”) but also of bi-grams, tri-grams, four-grams, and so on. The similarity score between two sequences is a weighted and smoothed interpolation between multiple fractions of correctly “translated” n-grams. This score better reflects the syntax restrictions found in command lines.
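As a rough sketch of how this could be applied to command lines, using NLTK’s smoothed sentence-level BLEU (whitespace tokenization here is a simplification of real command-line parsing):

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

smooth = SmoothingFunction().method1

def cmd_similarity(a, b):
    """BLEU-style n-gram overlap between two command lines."""
    return sentence_bleu([a.split()], b.split(), smoothing_function=smooth)

# A one-token change in a flag still shifts the bigram/trigram counts:
print(cmd_similarity("net user guest /active:no", "net user guest /active:yes"))
print(cmd_similarity("net user guest /active:no", "wmic process list brief"))
```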

Next, instead of treating each command line separately, we cluster them together using a different clustering technique than K-means. We prefer threshold-based clustering since it has a complexity of O(n). This enables us to process large amounts of data with less computational power and fewer resources. What can we do with this? Two things: (a) we can keep track of past clusters and manually curate newly created clusters to check for malicious activity, and (b) we can do outlier detection on composite vectors designed to help capture dependencies between the user, the processes, and the command-line clusters.
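A minimal single-pass sketch of threshold-based clustering (the threshold value is an illustrative assumption; in practice it would be tuned on real data):

```python
THRESHOLD = 0.8  # assumed cutoff

def cluster(command_lines, similarity):
    """Each line joins the first cluster whose representative is close enough,
    otherwise it seeds a new cluster: one pass, no iterative re-centering."""
    clusters = []  # list of (representative, members) pairs
    for cmd in command_lines:
        for rep, members in clusters:
            if similarity(rep, cmd) >= THRESHOLD:
                members.append(cmd)
                break
        else:
            clusters.append((cmd, [cmd]))
    return clusters

# clusters = cluster(events, cmd_similarity)  # e.g., reusing the BLEU measure above
```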

Outlier detection can be handled by standard outlier algorithms, like LOF (Local Outlier Factor), or by using an “autoencoder” approach. An autoencoder neural network is an unsupervised learning algorithm. The network is trained to reconstruct its inputs, which forces the hidden layer to try to learn good representations of the inputs. We use the reconstruction error of the autoencoder to pinpoint events that have a low statistical distribution in our data set/stream. In our implementation we train the autoencoder with information regarding the command-line cluster, the parent name, and the process name that generated the event itself. There are additional details about this project which, for the sake of clarity and simplicity, we have omitted in this post. This process was submitted for patent review under P8075-US and is currently being evaluated. You can also learn more about this project in our recent webcast with the Cloud Security Alliance. We hope this process will help us better detect possible sophisticated threats before they become a more serious issue.
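A hedged sketch of the autoencoder piece (layer sizes and the feature encoding are illustrative choices, not the production configuration):

```python
import numpy as np
from tensorflow import keras

def build_autoencoder(n_features, bottleneck=8):
    """Train to reconstruct (cluster id, parent name, process name) feature vectors."""
    model = keras.Sequential([
        keras.layers.Input(shape=(n_features,)),
        keras.layers.Dense(32, activation="relu"),
        keras.layers.Dense(bottleneck, activation="relu"),  # compressed representation
        keras.layers.Dense(32, activation="relu"),
        keras.layers.Dense(n_features, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="mse")
    return model

def flag_outliers(model, X, percentile=99.5):
    """Events the model reconstructs poorly are statistically rare: flag them."""
    errors = np.mean((model.predict(X) - X) ** 2, axis=1)  # reconstruction error
    return np.where(errors > np.percentile(errors, percentile))[0]
```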

Andrei Cotaie
Senior Security Engineer

Tiberiu Boros
Computer Scientist

In the spirit of community involvement and open communication, members of our Incident Response team recently attended hack.lu 2018 in Luxembourg. The conference included a single presentation track and hands-on workshops across three days. The talks offered at the conference ranged from the technical, to the practical, to the purely theoretical. The presenters also represented a diverse background of researchers, practitioners (both attack and defense), and enthusiasts. Here are a few highlights from the conference:

Hypervisor-level debugger: benefits and challenges (Mathieu Tarral) – Mr. Tarral presented on the benefits of debugging at the hypervisor level and some of the issues and pitfalls of prior attempts at this. He also presented his current research into developing a better process and set of tools for debugging. His tools and research could prove to be very useful for our own efforts here at Adobe.

WHAT THE FAX?! (Eyal Itkin, Yaniv Balmas) – Mr. Itkin and Mr. Balmas gave a dynamic and entertaining presentation on the continuing vulnerability presented by FAX (facsimile) technology, which is often deployed in an unsecured and forgotten manner. Their presentation certainly raised awareness that FAX technology is still a viable attack surface in many infrastructures and one that needs to be accounted for.

Breaking Parser Logic: Take Your Path Normalization off and Pop 0days Out! (Orange Tsai) – Mr. Tsai presented his research into exploitation of modern web frameworks through a long-standing set of vulnerabilities that numerous developers have ignored. Unfortunately, the talk painted a rather disturbing picture of how many platforms could still be vulnerable to this particular set of exploits. This knowledge helps our teams further reduce risk by making them aware of the inherent issues still present in many web application frameworks and tools.

I think hack.lu gave the team good exposure to thoughts and ideas from around the world. The presentations were interesting, informative, and helpful, and the conference was, overall, a valuable experience.

Nick Pachis
Security Engineer

The Common Control Framework (CCF) by Adobe represents the foundation of our company-wide compliance strategy. The CCF is a comprehensive set of simple control requirements, aggregated, correlated, and rationalized from the vast array of industry information security and privacy standards. Adoption of the CCF has enabled Adobe’s cloud products, services, platforms, and operations to achieve compliance with a host of security certifications, standards, and regulations, such as SOC 2, ISO, PCI, FedRAMP, and others.

As part of our ongoing efforts in knowledge sharing with the broader security community, a generic version of the CCF was open-sourced in 2017. The 2017 release contained a baseline set of control activities meant to assist organizations in meeting the requirements of ISO/IEC 27001, AICPA SOC Common Criteria, AICPA SOC Availability, and the security requirements of GLBA and FERPA.

The Technology Governance, Risk and Compliance (Tech GRC) group at Adobe continually works on updating and improving the CCF, and is happy to announce the release of CCF v2.0 under Creative Commons licensing. CCF v2.0 is open-sourced and includes additional mapping of the control activities to FedRAMP Tailored and PCI DSS v3.2.1 requirements. These activities were determined by common industry requirements. They have been adopted by Adobe product operations and engineering teams to achieve compliance with the standards set forth by these regulatory bodies. The CCF is an illustrative example of common security controls that can be tailored to your organization’s specific security objectives.

Adobe is enthusiastic about sharing the updated CCF with the broader security and compliance community. Potential users should note that the CCF is more than simply a unified compliance framework. The aim is to help the industry realize significant additional value by adopting more collaborative implementation strategies within their organizations. Integrating the CCF into their compliance workflows will help enable more scalable security strategies, resulting in higher levels of compliance in engineering and operations processes that ensure continuing success.

We invite you to download the CCF today and see how you can best utilize it in your organization. We welcome feedback and questions about the framework. You can contact us directly at opensourceccf@adobe.com.

Most companies are lured to the cloud by the promise of near-instant resource availability, which can mean teams sign up for cloud accounts before controls are in place to audit and monitor these environments. Combined with this new world of app deployment on third-party cloud infrastructures, there can be limited processes for network perimeter review, credential rotation, and access logging. Due to poor logging configurations, security analysts could be limited in their ability to perform forensics in accounts following notification by cloud infrastructure providers of suspicious activity.

To help alleviate these potential issues, Adobe developed a tool called MAVLink. MAVLink enables us to audit public cloud accounts against established best practices and to collect information useful in identifying potential security incidents. MAVLink provides a more robust monitoring solution that can be set up in a new environment quickly, providing consistent data availability and audit information to help improve the security posture of our cloud infrastructure accounts. We’ll walk through some of the steps an organization can take to make a transition to monitored, audited environments and help avoid potential cloud chaos.

The first step is to attempt to control account creation. This was accomplished using a Microsoft Exchange filter targeting registration emails. Registration emails can then be redirected to a responsible group or individual to help ensure that all new accounts are properly provisioned and inventoried. When the cloud account provisioning team provisions a new account on behalf of another team, they generate and store the MFA token and password for the root user. We use APIs available from our identity management system to automate necessary actions.

Configuration snapshots can mostly be a one-stop shop for information about created resources in an account. Do you need to know what public IPs are used, check ELB cipher suites, or get a list of users within an account? Configuration snapshots can help you answer all those questions. The account trust relationship mentioned earlier also allows our security team to query other services that may not necessarily be represented in a configuration snapshot, such as IAM (identity & access management) credential reports. All this information flows into our MAVLink tool to help us better secure our cloud services.
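As an illustration of a query that falls outside a configuration snapshot, here is a sketch of pulling the IAM credential report with boto3 (these are real AWS API calls; error handling is omitted for brevity):

```python
import csv
import io
import time

import boto3

def credential_report(session):
    """Generate and fetch the account's IAM credential report as dict rows."""
    iam = session.client("iam")
    while iam.generate_credential_report()["State"] != "COMPLETE":
        time.sleep(2)  # report generation is asynchronous
    body = iam.get_credential_report()["Content"].decode("utf-8")
    return list(csv.DictReader(io.StringIO(body)))

# for row in credential_report(boto3.Session()):
#     if row["password_enabled"] == "true" and row["mfa_active"] == "false":
#         print("console user without MFA:", row["user"])
```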

If you decide to pursue the approach we are discussing here, the level of permissions you allot to your security or audit team depends on your organization’s requirements. Understand, however, that too many permissions could introduce operational risks and too few could require costly updates later to deployed services.

If you have a large number of accounts, it makes sense to templatize and automate the setup of the cross-account role. Tools are provided by cloud infrastructure providers to help automate this and we made use of them to set up MAVLink.

Figure 2 – Example Lambda Modules, Enabling Config Snapshots

Once the necessary account trust relationships are in place, MAVLink’s data collection and configuration enforcement modules can be used. MAVLink integrates with AWS Lambda for our Amazon Web Services (AWS) accounts. This allows for quick updates to the codebase and helps reduce the administrative overhead of maintaining instances. Lambda functions trigger on a regular basis, iterate over each registered account, assume the role into the account, and then perform the configuration check or data retrieval. That action might be reviewing the AWS CloudTrail configuration to make sure logging is still enabled, delivered to the correct bucket, and covered by the appropriate global trail. Whenever data is collected, it flows into our Security Incident and Event Management (SIEM) system and logging tools for analysis by MAVLink. MAVLink then helps enable us to monitor our cloud service accounts in one place.
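A simplified sketch of one such check is below. The cross-account role name is an assumption about the setup described above; the boto3 calls themselves are real AWS APIs:

```python
import boto3

def cloudtrail_healthy(account_id, role_name="SecurityAudit"):
    """Assume the cross-account role, then verify an enabled multi-region trail."""
    creds = boto3.client("sts").assume_role(
        RoleArn=f"arn:aws:iam::{account_id}:role/{role_name}",
        RoleSessionName="mavlink-audit",
    )["Credentials"]
    ct = boto3.client(
        "cloudtrail",
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )
    return any(
        trail.get("IsMultiRegionTrail")
        and ct.get_trail_status(Name=trail["TrailARN"])["IsLogging"]
        for trail in ct.describe_trails()["trailList"]
    )
```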

MAVLink is a very useful tool that is helping improve our security hygiene across cloud services. You can learn more about MAVLink in our recent webcast.

Scott Pack
Lead Cloud Security Engineer – Adobe Experience Cloud

A significant portion of cloud services are coupled with a domain name service (DNS) component. Unfortunately, the lifecycle of these cloud services can sometimes lack enforced decommissioning checks when the services are no longer in use. DNS records pointing to deleted or expired cloud artifacts – ones not yet purged from name servers – create dangling DNS records. Subdomain takeover vulnerabilities may occur when these artifacts can be reclaimed and controlled by nefarious actors through manipulation of DNS entries.

Takeover Scenario

subdomain.example.com CNAME reference.example.com

In this scenario, reference.example.com was deleted or expired. Because the DNS entry for subdomain.example.com was unfortunately not updated as part of a decommissioning check, reference.example.com can now be re-registered by anyone. subdomain.example.com can then be controlled by a nefarious actor who takes over reference.example.com. There are three actors in this particular scenario: the domain owner, the reference owner, and the DNS record manager.
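The dangling condition itself is straightforward to test for. A minimal sketch using the dnspython library:

```python
import dns.resolver

def dangling_cname(subdomain):
    """Return the CNAME target if it no longer resolves, else None."""
    try:
        target = dns.resolver.resolve(subdomain, "CNAME")[0].target.to_text()
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        return None  # no CNAME record to worry about
    try:
        dns.resolver.resolve(target, "A")
        return None  # target still resolves; not dangling
    except dns.resolver.NXDOMAIN:
        return target  # takeover candidate if the name can be re-registered

# print(dangling_cname("subdomain.example.com"))
```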

A subdomain takeover attack may go undetected by a domain owner if the reference owner simply claims the domain and decides to launch a phishing campaign mimicking previously hosted content. Other risks include cookie stealing and public resolver cache poisoning – diverting Internet traffic away from legitimate servers and towards fake ones. Cache poisoning can have a dangerous cascading effect as DNS servers often depend on each other for reconciliation of an entry. It is difficult to detect this attack after it happens unless the organization or domain owner has a monitoring system that checks for abnormalities, including content.

Project Hijack

Adobe’s “Project Hijack” is an internal, proactive effort to continuously monitor and identify expired domains to help avoid a subdomain takeover vulnerability. As part of this, we developed a tool codenamed “Jaeger”. Jaeger is able to leverage recognition patterns that were built by studying various publicly disclosed bounty reports. It also provides an efficient watchdog process and helps create feedback loops with teams who are trying to avoid such takeovers.

The tool, written in Python 3, implements a monitoring scheme that uses:

An enumeration of domains and subdomains owned by a system/organization, e.g., <org>.<com|org|in|…>. The tool makes use of the open-source “subjack” and “Sublist3r” libraries for discovery in generic scans with a top-level domain (TLD) as input.

Evaluation of DNS query results on these domains and subdomains: CNAME, A, MX, and NS records. While CNAME references account for an industry-observed high volume of subdomain takeover attempts, incomplete CloudOps practices may make NS, A, and MX records equally susceptible. This step is also helpful in identifying any dead DNS entries that need to be removed from system records.

Checks against reference query results for expired cloud artifacts. Cloud providers generally have distinct “not available” signatures associated with their cloud services, which makes it easier to identify expired assets. The tool makes use of the open-source library “Can I take over XYZ” for up-to-date signatures. For example, a query to an expired S3 bucket will return “NoSuchBucket” with a “404 Not Found” HTTP response (see the sketch after this list).

Establishment of a follow-up process to sanitize dangling records. A scan can be scheduled according to our cloud service lifecycle, and reports can be generated in JSON, routed to Splunk, sent as an email, or sent as an alert in Slack.
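A minimal sketch of the signature check from the third step above (the fingerprint list here is an illustrative subset; the real tool pulls current signatures from “Can I take over XYZ”):

```python
import requests

FINGERPRINTS = {
    "NoSuchBucket": "expired or deleted S3 bucket",
    "There isn't a GitHub Pages site here": "unclaimed GitHub Pages site",
}

def takeover_signature(hostname):
    """Fetch the host and look for known 'not available' markers in the body."""
    try:
        body = requests.get(f"http://{hostname}", timeout=10).text
    except requests.RequestException:
        return None  # unreachable; needs separate investigation
    for marker, meaning in FINGERPRINTS.items():
        if marker in body:
            return meaning
    return None
```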

At Adobe, Jaeger runs constantly using one of our established internal data sources called “Marinus” for domains and generates reports in Splunk.

Figure 1 – Jaeger Architecture

Mitigation, prevention, and avoidance of a subdomain takeover attack need to be coordinated between the domain owner and the DNS record manager. DNS entries are known to be particularly fragile, so these steps must be taken manually by an experienced engineer. Once a vulnerable domain or subdomain is discovered, a subdomain takeover may be mitigated by deleting or updating the DNS entry after confirming the state of any associated cloud services.

Security awareness is a process for educating, training, informing, and socializing information systems security in the workplace and at home. How can a group of professionals talk about security awareness for two days straight? Is there really that much content to cover? Well, yes and yes: there is a growing group of professionals who need guidance in this relatively new area of cybersecurity. This group came together at the fifth annual SANS Security Awareness Summit in Charleston, South Carolina. Overall, the field of security awareness is in its infancy. Speaking with professionals who had attended the summit in previous years, it appears attendance grows dramatically from year to year; in fact, in 2017 tickets sold out.

The cyber threat to organizations keeps growing; the industry has spent years hardening computer operating systems (OS) but needs to do more to ‘harden the human OS.’ Since technical hacking is getting harder, malicious attackers are starting to target the more vulnerable ‘human element.’ This comes in the form of social engineering, phishing attempts, or giving away free USB drives in the hopes one will get plugged in. Therefore, educating and training the workforce is critical.

So what works? Fun security awareness videos? You bet! A range of videos were on display, from low-budget productions to much more polished ones. These videos delivered a dose of security messaging in a fun and memorable way. We found that the most effective videos were ones that recognized the human side. Instead of drilling facts, figures, and fear into the workforce, they focused on getting a positive message across in an entertaining and engaging way. A big takeaway here is the power of video over slides full of text and statistical details.

One talk highlighted an example of a larger campaign that played on some hacker stereotypes to create an edgy campaign. Even though use of stereotypes or humor can be risky, for this organization the bet paid off. They created a gameshow format, with security questions and hired actors to be the host and one of the participants to make it fun. The end result was that employees wanted to play, and probably learned about security while playing!

Keeping the message simple for the general workforce was another big takeaway. For example, trying to teach the general public every reason why they must not click on a suspicious link or open unsolicited attachments is not effective. Instead, keep the messaging simple: for example, a short poster placed in digital signage or hung on a wall.

Such a message is brief, memorable, maybe humorous, and creates greater awareness than before. Exhaustive slides and explanations can lead to a lack of awareness because the workforce doesn’t want to take the time out of their work day.

Throughout the summit, live surveys were periodically held. One such survey asked, “How often does your organization conduct a phishing campaign?” A phishing campaign is when an organization sends fake phishing attempts to its own employees to gauge how at risk it is to real phishing attempts. Below are the results:

40% phish monthly

34% phish quarterly

5% phish every six months

7% phish annually

14% never phish

On this same note, some of the professionals in attendance commented that they saw a huge decrease in users failing the test after adding a “Report Phishing” button to the company’s email application. Users tend to click the button because it is easier than manually reporting a suspicious email, which can require forwarding it to a certain group within the organization. The key takeaway here is to make security easy and reduce the friction for employees to adhere to security best practices.

Another key takeaway is how important it is to consider and research what’s at risk for an organization. Phishing attempts may not be an issue for one organization, but sending unencrypted emails with sensitive data might be. Resources need to be spent on reducing the risks that are unique to an organization; don’t just follow the crowd. Even better, work to get metrics from the security incident response team to identify the trends. For example, if you have data from before and after an awareness campaign, those metrics can be used to determine whether the campaign was effective.

The Summit also had an extended workshop on how to build, maintain, and measure a security awareness program, which was a great chance to do some knowledge sharing with people facing similar (and different!) challenges. We were impressed by the range of creative awareness materials on display: everything from comic books, rocks, security blankets, and other swag to elaborate game setups. Adobe’s own Security Awareness Training and Champions program held its own! We also came back with a ton of ideas.

As far as security summits go, this event was more engaging, entertaining, humorous, and human than most others. If this is a space you work in, we recommend attending future security awareness summits or conferences, whether they are hosted by SANS or not. Much learning can be obtained!

Isaac Painter
Security Governance Lead

Serena Zhao
Sr. Program Manager – Security

If you were to ask us what exactly DEFCON is, we would say it’s a ‘perspective.’ For some, it is a playground with Capture the Flag (CTF) contests running 24x7; for others, it’s about the talks and learning new attacks; and for most, it’s about meeting people whose affiliations and interests vary wildly, all in one place. With over 20,000 attendees, we met code breakers, security developers, government agents, and legal advisors, to name a few.

Unlike the previous year, this year’s official DEFCON badge was electronic. The badge had a lot of hidden puzzles put together as a game. The game was interactive: the badge could be connected to a computer where a shell prompt accepted commands, or badges could be connected together. Upon connecting to a laptop, you got to play what felt like an early 2000s text game. You could track your progress on the map that makes up the face of the badge – brilliant. Making sense of the blinking LEDs challenged us to move through the game.

Moving through the hallways (while dodging the slow-moving streams of people) we were able to get to some interesting talks.

An Attacker Looks at Docker – Approaching Multi-Container Applications: This talk was intended for attackers, pen testers, and red teams. It explored the container world and helped attendees become more familiar with containerized environments. The talk showed how a Linux container exposes various services, describing and analyzing the exploitation of these services in order to gain a foothold in the container. The key takeaway was that an attacker unfamiliar with containerized platforms would still be able to identify and effectively attack them in real-world offensive engagements if the components within the container are not properly protected.

Last mile authentication problem – Exploiting the missing link in end-to-end secure communication: This session discussed how vulnerabilities inside the computer itself can render the security of communication over networks useless. The focus was on the security of various inter-process communication (IPC) mechanisms. The vulnerabilities associated with IPC could allow a non-privileged process to gain access to privileged information such as passwords. The key takeaway from this presentation is for network administrators and defenders to be more aware of the existence of these issues and to protect their environments through additional hardening and patching measures.

With an increase in the use of and interest in cryptocurrencies, there has been significant growth in the number of people trying their luck with investing in them. Thus, it should be no surprise that crypto exchanges appeal to attackers. The talk on Protecting Crypto Exchanges aimed to highlight the major issues with crypto exchanges from a Man-in-the-Browser (MITB) perspective. The talk illustrated many incidents where known malware families can attack popular crypto exchange websites. It also discussed currently available defenses, such as multi-factor authentication and strong SSL encryption, and recommended additional measures that may be needed to limit these attacks.

Automating DFIR: The Counter Future: This talk successfully planted the idea of automation as the future of DFIR (Digital Forensics and Incident Response) in all of our minds. It was interesting to hear about the speaker’s experience with DFIR in the cloud: how to do it in a robust way using some handy automations and open-source tools.

Cloud Security Myths: This talk highlighted the enterprise’s progress toward multi-cloud/hybrid-cloud and the security challenges that come with it, including security in what is technically a “serverless” world. The presentation debunked common myths, starting with the basic shared security responsibility model and going all the way to Cloud Access Security Brokers (CASB) and modern incident response.

Common myths include:

The Cloud is not secure

The Cloud is perfectly secure

Cloud security is too complex to maintain

All Cloud Service Providers are the Same

On-premise systems are so much safer

However, reality is more like this:

Perimeter Based Security doesn’t apply

Distributed Threat Surface

You will need new tools

You will probably need new policies, procedures, etc.

Lots of “Cloud” options means lots of “Cloud Security” options

Elsewhere at the conference:

One of the villages that attracted a lot of attention was the one dedicated to hacking machines used as part of the election process in the US. The same scenarios and equipment were made available to participants during DEFCON and r00tz, the DEFCON event for kids. The results of these hacking exercises were shared with interested government professionals to provide knowledge about how to better protect our election systems.

There were also ongoing contests where teams participated in multiple hacking “games,” including Capture the Flag and Hacker Jeopardy.

The AI village, in particular, had an interesting angle towards security – bringing together the use as well as potential misuse of artificial intelligence in traditional security. The talk Machine Learning for Network Security focused on how to create your own machine learning (ML) model and then test it against other models created by talk participants. It was interesting to take a piece of malware, deduce the features of that malware, and then create a working Python ML model to detect the malware.
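In that spirit, here is a toy sketch of the exercise (the features, data, and labels are synthetic placeholders, not real malware telemetry):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Rows of hypothetical features: [file entropy, section count, import count, packer flag]
X = rng.random((500, 4))
y = (X[:, 0] > 0.7).astype(int)  # placeholder "malicious" label

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)
print("holdout accuracy:", clf.score(X_test, y_test))
```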

Even getting into the parties was a hacking quest. You had to make sense of various clues and solve riddles to learn some party locations. Others required you to use social engineering techniques to get your name onto invitee lists.

Overall, DEFCON was a great opportunity to mingle with and learn from the broader security community. Events like this help keep our knowledge and skills up to date, and we in turn share our discoveries and best practices with the community. The event also provides invaluable tools and insight to help us better mitigate threats and continue to evolve our own Adobe SPLC (Secure Product Lifecycle) process.

“Containers do not always contain.” Without proper hygiene, containers may not always keep their contents constrained and secure. This is sometimes the unfortunate but fundamental truth of containers. Recently there has been a significant increase in the adoption of containers by our industry. With the advent of Docker and orchestration tools such as Kubernetes, many of us have shifted code deployment platforms to containers. The ‘easy to build, package, and promote’ promise of containers is one primary reason for their rapid adoption. However, as is true with any nascent technology, using containers at scale can come with its own set of security concerns.

Recently, a number of CVEs have been filed against container infrastructure. It is essential to be aware of these to help improve security posture. The need is even more vital now that platform providers have started to allow third-party code to be hosted using containers. If code is not effectively sanitized, an attacker can more easily plant malicious code in a container and attack other containers or the host kernel itself. This summer I worked as a Cloud Operations Security Researcher Intern at Adobe, and one of my primary projects was working to help strengthen security around running untrusted code in containers.

Linux provides various features to help sandbox Docker containers, such as namespaces, cgroups, capabilities, SELinux/AppArmor, and seccomp. All of these security features address different aspects of container behavior, such as access control, resource consumption, and kernel interaction. When applied efficiently, they can help provide defense in depth. However, effective utilization of these features requires awareness of the best practices around their use. Developers need to be aware of the detailed nuances of their applications to come up with rules and profiles, yet it is difficult to expect all developers to know and recognize these nuances. Thus, there is a need for tools that can examine containers and help employ more detailed defense mechanisms.

During my time here at Adobe I developed a tool which generates customized seccomp profiles for containers to try and address these problems. Seccomp determines which syscalls a container is allowed to make. In Docker, a seccomp policy is expressed as a JSON file provided via the ‘--security-opt’ runtime argument. If the host kernel is built with seccomp, then all Docker containers are spun up with a default seccomp profile. This default profile blocks 44 system calls and allows more than 300. In most cases, containers are used for specific tasks such as a web server, a database, etc., and do not require all 300 syscalls. Hence, allowing the container to make these syscalls even when they are not required just adds unnecessary attack surface. The tool I developed generates slimmer seccomp profiles customized to the container’s use case. This reduces the attack surface to a great extent without affecting the container’s functioning.

Figure 1 – SecComp Profile Tool Flow

The tool generates these profiles by monitoring the syscalls made by the container using the diagnostic tools strace and sysdig. These open-source tools, unfortunately, do not always capture certain calls which are necessary for a container to start up. Therefore, a custom list of these required syscalls is added to the profiles by default to help increase accuracy and avoid hindering the functioning of the containers. This custom list was generated by monitoring auditd logs.
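A hedged sketch of the generation step (the baseline set below is a placeholder for the auditd-derived startup list, and a real profile would also carry an "architectures" field):

```python
import json
import re

STARTUP_BASELINE = {"execve", "brk", "mmap", "close", "read", "write"}  # placeholder

def syscalls_from_strace(log_path):
    """Collect syscall names from strace output lines like 'openat(...) = 3'."""
    pattern = re.compile(r"^(\w+)\(")
    with open(log_path) as f:
        return {m.group(1) for line in f if (m := pattern.match(line))}

def write_profile(observed, out_path):
    """Emit a Docker seccomp profile: allow observed calls, deny everything else."""
    profile = {
        "defaultAction": "SCMP_ACT_ERRNO",
        "syscalls": [{
            "names": sorted(observed | STARTUP_BASELINE),
            "action": "SCMP_ACT_ALLOW",
        }],
    }
    with open(out_path, "w") as f:
        json.dump(profile, f, indent=2)

# write_profile(syscalls_from_strace("container-strace.log"), "custom-seccomp.json")
# then: docker run --security-opt seccomp=custom-seccomp.json <image>
```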

The tool can be implemented across platforms which use Docker containers and can be extended to include capabilities and other Linux-based security features. Another possible area of extension is IDS/IPS functionality which alerts on or blocks a container that is using a vulnerable syscall. The tool also currently requires the container to be run once before generating the custom profile and is currently limited to a Linux host OS – all areas being investigated for future improvement.

Furthering my research efforts in container security, I also did a study on gVisor – a new container sandboxing solution recently open-sourced by Google. It is a user-space kernel written in Go which implements a large portion of the Linux API. It includes an OCI runtime called runsc.

Figure 2 – gVisor Components

gVisor uses paravirtualization and acts as an intermediary kernel to isolate the containerized application from the host system. The Go kernel intercepts all the syscalls made by the container application and, instead of redirecting them to the host kernel, executes most of these calls within userspace. Runsc is split into two processes: 1) Sentry, which includes the kernel and is responsible for running user code and handling syscalls, and 2) Gofer, the file system proxy that passes host files to Sentry through a 9P connection.

gVisor tries to hit a middle ground between machine-level virtualization (KVM, Xen) and rule-based isolation (seccomp, SELinux, etc.). Each container has its own kernel and network stack. Containers do not have direct access to the host file system, and the kernel is written in Go, which makes it type- and memory-safe. In terms of usability, gVisor requires zero modification to the application and integrates with Docker and Kubernetes. However, it is not suitable for syscall-heavy workloads and does not yet have complete syscall coverage. Applications which require hardware access are not supported by gVisor. In Docker, gVisor also does not yet support all options, such as --network, --volume, etc. In terms of container security, gVisor seems to be a promising technology which could provide strong isolation with a lower resource footprint.

In my time here at Adobe, I have gained useful technical knowledge by working on these interesting projects. However, my internship was much more than just work. I was lucky to be part of a super fun and welcoming team where every lunch was filled with interesting conversations. I also got the opportunity to be part of meaningful interactions with senior leaders, including the CEO, CTO, and CSO. I attended DefCon, which I think is one of the best security conferences in the world, enjoyed a day at the Santa Cruz boardwalk, and even went to a Giants game with the other interns! All this together made for an amazing summer, and I think that is what #AdobeLife truly stands for!

This year, members of our security team will again travel to Las Vegas to attend two of the largest security conferences: BlackHat 2018 and DefCon 2018. The talks can range from “cool bugs” to “conceptual issues that require long-term solutions” – all information valuable to our teams in attendance for keeping up with the latest trends in threat intelligence and defense. These events also give our security team an opportunity to network with peers and customers to share the latest knowledge and best practices.

A big part of this knowledge-sharing effort will come from one of our security researchers, Vaibhav Gupta, who will be leading a workshop at DefCon 2018 with Sandeep Singh from NotSoSecure on understanding attacks and defense in Amazon Web Services (AWS) environments. Attendees at this workshop will learn about the delta in attack surface when moving to the cloud and how to better architect and build defenses to handle this delta. If you are attending DefCon this year, please find your way into this workshop – the information will be helpful to those looking to build cloud apps.

Our team looks forward to meeting you at either BlackHat or DefCon this week.

Adobe recently held a Security Champions Summit in Bucharest. This was a multi-day event with the goal of building up the skills of our current security champions and encouraging new ones to join the program. Part of the effort during the summit was to look at ways to consolidate processes, improve visibility for the work our champions do, and provide training sessions and activities. Additionally, the summit provided a dedicated event for champions to meet each other face-to-face, trigger new ideas by stepping out of the ‘comfort zone’ of their own product teams, and actively be involved in hands-on trainings that will help build better partnerships with engineers and managers.

In addition to our global Security Champions, site reliability engineers, computer scientists, security analysts, cloud security engineers, and security managers from various Adobe teams were present and actively participated in discussions and brainstorming sessions. These sessions covered technologies including our own recently open-sourced tools like HubbleStack and key topics such as threat intelligence, forensics, disaster recovery, and compliance. There were also exercises around log analysis and container security. The Summit brought together the voices of security from within the product teams to share Adobe and industry best practices, as well as the latest guidance on maintaining a strong security posture across all our products.

Events like these are part of Adobe’s ongoing commitment to constantly help spark new ideas that challenge the status quo and build the broadest “security aware” community we can across the company. This effort is also part of our continued evolution around “SecDevOps” processes, where security is built into the DevOps process and infrastructure from the get-go. Day-to-day activities by our development teams not only contribute to the achievement of operational and development goals but also maintain high levels of security, data integrity and privacy, and availability. This improved set of processes is also much easier to audit for adherence to established compliance controls. With product teams engaging with security champions and researchers as early as possible, we have been able to begin shifting from reactive to proactive approaches, better integrating defensive practices throughout the product lifecycle. Activities like the Summit help to better evangelize and cement these “security first” practices to the benefit of teams across the company.

The Security Champions Summit in numbers:

2 days

11 speakers

2 workshops

1 panel

9 talks

Attendees from 4 countries

Daniel Barbu
Manager, Security, Adobe Bucharest

]]>Peleus Uhleyhttp://blogs.adobe.com/security/?p=28892018-06-27T17:11:25Z2018-06-27T17:10:25ZWhile Adobe announced its plans to stop updating and distributing Flash Player at the end of 2020, we continue to work hard to find and implement tools to help protect users until that time. Over the last few product releases we have deployed several new security controls for our customers to help ward off potential attacks.

When Adobe issues a zero-day (0-day) security update, we always issue security updates for all browsers out of an abundance of caution. We do not assume that the initial exploit sample we see is the only variant that may exist, and we also know that new variants may be introduced as details of the exploit become more widely known. However, when reviewing the recent history of Flash Player 0-day vulnerabilities, the initial 0-day samples we have received consistently targeted either Internet Explorer or Office.

To help provide enterprises using these applications with options, Adobe introduced a control in Flash Player version 27 in November 2017 that allows administrators to make Flash Player click-to-play in Internet Explorer via the mms.cfg file. Administrators can apply this setting either globally or selectively for specific domains. This allows enterprises with legacy software that requires Internet Explorer to help limit their exposure to Flash Player attacks. Details of this setting can be found in the Flash Player Administrators Guide. Modern browsers, such as Microsoft Edge, have a click-to-run experience built-in and can therefore better protect you from these types of attacks.

In addition, Adobe has made Flash Player click-to-play across all versions of Office with the latest Flash Player release (version 30). This will help prevent Flash Player exploits from automatically running when a document is opened and offers an opportunity for the viewer to realize that a document may be malicious. For users of Office 365, Microsoft plans to begin blocking Flash Player and other plugins altogether. This change will help protect users going forward, and Adobe’s change will help to protect people using older versions of Office.

In addition to the changes discussed above, the Flash Player team has been closely watching the research around the recent vulnerabilities in modern processors. Adobe has made several changes in an effort to help reduce the risk of Flash Player being used as a potential platform for these attacks. One of the changes made is disabling SharedByteArray functionality. We have made additional changes which mirror what the browser community has done to reduce the accuracy of timers within our platform.

These protections aim to reduce the attractiveness of Flash Player as a target for attackers. These efforts are all part of our ongoing commitment to keep our customers as safe as possible as we wind down use of Flash Player.