Healthcare is outpacing many other vertical market segments when it comes to cloud adoption, and for good reason – namely, to reduce IT complexity, slash costs and stay ahead of increased regulatory scrutiny.

That said, many small and mid-sized enterprises – healthcare included – are struggling to find people with the skill sets, and the security tool sets, needed to secure cloud systems while continuing to manage on-premises security. And it’s even more of a challenge for healthcare organizations where security isn’t centrally managed by anyone but is instead split across the CIO, operations, development and remote office teams.

Under such pressures, public cloud computing provides a way to meet these objectives while also improving the security of IT infrastructure. Security improvements are always relative, of course, to an organization’s ability to execute. That said, healthcare organizations with significant constraints on resources and no dedicated security expertise on staff have a better chance of improving security in the cloud than of securing their own on-premises systems.

Let’s put things into perspective.

According to the HIPAA Journal, “Between 2009 and 2018 there have been 2,546 healthcare data breaches involving more than 500 records. Those breaches have resulted in the theft/exposure of 189,945,874 healthcare records. That equates to more than 59% of the population of the United States. Healthcare data breaches are now being reported at a rate of more than one per day.”

Significant fines come along with those breaches, too: “2018 was a record breaking year for HIPAA fines and settlements, beating the previous record of $23,505,300 set in 2016 by 22%. OCR [Office for Civil Rights] received payments totaling $28,683,400 in 2018 from HIPAA covered entities and business associates who had violated HIPAA Rules.”

Cloud security is a shared responsibility. No excuses.

Being tight on staff and resources is certainly a reason for rising data breaches and system availability problems – but it’s not an acceptable excuse. This is especially true for healthcare providers. Guidance from the Department of Health and Human Services Office for Civil Rights made it clear – healthcare providers and business associates are the ones responsible for making certain that their cloud environments and cloud service providers are secure and compliant with security and privacy mandates.

There’s no one way for healthcare providers to succeed at managing and securing cloud environments, but there are certainly tactics that don’t work. Those tactics include doing what too many businesses have focused on for too long: ad hoc security and reviews, attempting to secure systems based on checklists, and building “security” programs that focus on compliance rather than mitigating real risks.

Don’t worry – there’s good news.

The good news here is that the cloud can be used to help simplify these efforts through automation and continuous monitoring – both for new systems as they come online and for systems that fall out of compliance with regulatory and security policies or otherwise become vulnerable. Cloud systems exist in a constant state of flux, where misconfigurations and vulnerabilities can creep in at any time. Continuous monitoring identifies these anomalies so they can be remediated automatically, and automation is especially beneficial for any enterprise with tight limits on resources. You can learn more in our new eBook, Continuous Monitoring and Compliance in the Cloud. I’d also encourage you to check out the recent automation webinar we hosted with SANS, Delivering Infrastructure, Security & Operations as Code.
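
To make the pattern concrete, here is a minimal sketch – assuming Python with boto3 and AWS credentials already configured – of the check-and-remediate loop described above: it flags S3 buckets whose ACLs grant public access and applies a public access block. It illustrates the general technique, not any particular product.

```python
# Minimal sketch of continuous monitoring with automated remediation:
# find S3 buckets granting public access and apply a public access block.
# Assumes boto3 and AWS credentials are configured; illustrative only.
import boto3

PUBLIC_GRANTEES = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

def remediate_public_buckets() -> None:
    s3 = boto3.client("s3")
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        acl = s3.get_bucket_acl(Bucket=name)
        exposed = any(
            grant["Grantee"].get("URI") in PUBLIC_GRANTEES
            for grant in acl["Grants"]
        )
        if exposed:
            print(f"Remediating publicly accessible bucket: {name}")
            s3.put_public_access_block(
                Bucket=name,
                PublicAccessBlockConfiguration={
                    "BlockPublicAcls": True,
                    "IgnorePublicAcls": True,
                    "BlockPublicPolicy": True,
                    "RestrictPublicBuckets": True,
                },
            )

if __name__ == "__main__":
    remediate_public_buckets()
```

Run on a schedule (for example, from a Lambda function or a cron job), a check like this catches drift continuously rather than at audit time.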

How to Stay Secure in a Multi-Cloud Environment

“Products provide some protection, but the only way to effectively do business in an insecure world is to put processes in place that recognize the inherent insecurity in the products. The trick is to reduce your risk of exposure regardless of the products or patches.”

– Bruce Schneier

Bruce Schneier penned these insightful words in April of 2000. Scroll forward 19 years and we now find ourselves in a world where disruption and innovation are a daily occurrence due to the low barriers to entry created by public cloud. How do security leaders design a strategy that effectively addresses the processes and tools required to manage the new risks and threats cloud presents?

First, let’s start with a definition of what we mean by multi-cloud. When we say “multi-cloud,” we simply mean the parallel usage of two or more cloud service provider (CSP) platforms. And “cloud” generally describes a computing platform that falls into one of three categories: IaaS, PaaS and SaaS. While each of these presents its own security challenges, we’ll stay laser-focused on IaaS and PaaS, where there are currently three ruling titans: Amazon Web Services (AWS), Microsoft Azure and Google Cloud Platform.

Challenges of a Multi-Cloud Environment

In our conversations with clients there is almost always one universal thread no matter where they are in their cloud journey: how do we enable the business to operate with freedom in the cloud but also put the proper guardrails in place to prevent them from taking unnecessary risks?

We believe a fundamental understanding of the shared responsibility model is key as this is the main differentiator when compared to legacy on-prem environments. Once this model is understood and clearly documented and agreed to in an organizational RACI, we recommend customers conduct a risk assessment informed by a thorough understanding of security in the cloud.

Critical to the cloud risk assessment is understanding your current security processes and how the tools you’ve already invested in help manage risks today. Unfortunately, we see a lot of clients skip this step and move directly to design and build phases, which is a fatal mistake. Why? Because it inevitably leads to security teams rebuilding their on-prem security model in the cloud and completely misses the opportunity to transform their security program and “shift left” their security, aka DevSecOps.

When companies are planning an all-in approach to cloud, they typically focus on one of the three major players: Google, AWS or Azure. Each of these providers offers rock-solid services, with every major security and compliance certification to boot.

Invariably, several months into the cloud migration, a business unit will surface (or security teams will discover) a new cloud requirement: “Provider X just launched a new feature which directly addresses our business requirement – can we get access this week?” The IT and security teams then scramble to figure out how anything they’ve purpose-built for their primary cloud can be used with the new provider.

For security teams who are relying on legacy tools or only native security features of their primary cloud platform, this is a major challenge. How does AWS GuardDuty or AWS Config help you to secure Google or Azure clouds? Simple answer? They don’t. So how should a security team proactively address the multi-cloud security challenge while not getting caught up in the morass of ever-changing individual cloud provider offerings?

Standards are the Precursor to Automation

Staying secure in a multi-cloud environment can be challenging given the radically divergent APIs between cloud providers. The best place to start is with a trusted security standard. Rather than trying to design a standard from scratch, we highly recommend starting with the Center for Internet Security’s Benchmarks. The AWS benchmark has been around for several years, and the Azure and Google Cloud benchmarks were both released in 2018. While standards may not be the most exciting part of security, they have the added benefit of being the precursor to automation. Put simply, you cannot automate what you have not standardized. Once you’ve agreed upon a standard, you can measure yourself against it over time and work to automate as your cloud security program matures.
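
As an illustration of measuring yourself against a standard, here is a hedged sketch – assuming Python with boto3 – of an automated check in the spirit of the CIS AWS Benchmark’s password-policy controls. The thresholds below mirror common CIS recommendations but should be taken from the benchmark version you actually adopt.

```python
# Sketch: check the AWS account password policy against CIS-style thresholds.
# Thresholds are illustrative; take authoritative values from your benchmark.
import boto3
from botocore.exceptions import ClientError

def check_password_policy() -> list[str]:
    iam = boto3.client("iam")
    findings = []
    try:
        policy = iam.get_account_password_policy()["PasswordPolicy"]
    except ClientError:
        return ["No account password policy is set"]
    if policy.get("MinimumPasswordLength", 0) < 14:
        findings.append("Minimum password length is below 14")
    if not policy.get("RequireSymbols"):
        findings.append("Symbols are not required")
    max_age = policy.get("MaxPasswordAge", 0)
    if max_age == 0 or max_age > 90:
        findings.append("Passwords do not expire within 90 days")
    return findings

if __name__ == "__main__":
    for finding in check_password_policy():
        print("FINDING:", finding)
```

Once a check like this exists for each control, the set can run continuously and feed the measure-over-time loop described above.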

Moving from Theory to Execution

Security leaders can design a strategy that effectively addresses new risks and threats presented by public cloud. This can only be done with a deep understanding of the shared responsibility model and a sharp focus on dissecting the process by which development and business teams are utilizing public cloud. In my next post we’ll dig deeper into how this can be done as well as how simplicity is key to your multi-cloud security strategy.

RSA 2019: Watch Nikesh Arora’s Keynote

Palo Alto Networks CEO Nikesh Arora appeared on the RSA Conference keynote stage Wednesday, sharing his thoughts on cybersecurity, including why it’s an amazing time to be in the industry. Arista Networks CEO Jayshree Ullal joined him for a chat on issues including disruption, embracing change, and inclusion and diversity.

“Products are the most important thing in what we do for a living. If you have a winning product you will find the customers,” said Arora. But a great product is just the beginning. As an industry, there is a significant opportunity to make life easier for our customers by focusing on integration.

Watch the keynote highlights below to learn more about where the industry is headed and why integration is more important than ever.

Security Teams Deserve a Better Approach to Detection and Response

For many organizations, security teams are the first line of defense against all known and unknown threats. The core function of these teams is to identify, investigate and mitigate threats across their entire digital domain. As adversaries become more automated and complex, security teams are relying on a layered approach to prevention.

This approach involves deploying technologies such as Endpoint Detection and Response (EDR), User and Entity Behavioral Analytics (UEBA) and Network Traffic Analysis (NTA) to gain visibility across the environment. In addition, security teams typically use alert and log aggregation technologies like Security Information and Event Management (SIEM) tools to set policies, correlate events and prioritize issues. Finally, teams need a way to link the alerts generated with the underlying data so they can investigate and mitigate threats faster.

This layered prevention approach comes at a cost in time and expertise. Many security teams simply cannot keep up with the volume of alerts: the average SOC might see 174,000 alerts per week – roughly 25,000 per day – and with a finite security team, the math doesn’t work. Security teams may depend on 40+ narrowly focused tools to investigate and mitigate attacks, which can produce 200+ cases per day. This creates a swivel-chair effect that leaves security teams piecing together data and reacting to alerts, rather than being proactive and preventing attacks in the first place.

Security teams deserve a better approach – one that bypasses the complexity and limitations of siloed tools like EDR, UEBA and NTA, breaks down those silos, and aids the security team at every stage: anomaly detection, alert triage, incident investigation and threat hunting.

That approach exists. It’s called XDR, and it’s a dramatic departure from the traditional detection and response category. The “X” stands for any data source – network, endpoint or cloud – with a focus on force-multiplying the productivity of every member of the security operations team through automation. The ultimate goal is to ensure products in this category reduce mean time to detect and respond to threats without increasing effort somewhere else in the team.

To learn more about this groundbreaking approach, download this whitepaper and get details on how XDR can help you redefine your security operations.

8 AWS Security Best Practices to Mitigate Risk

There are a lot of benefits that come with having Amazon Web Services (AWS) as your cloud platform, alone or as part of a hybrid or multi-cloud environment. The agility and flexibility of AWS’s platform as a service (PaaS) and infrastructure as a service (IaaS) make it possible for your organization’s network to be responsive, innovative, and ready for change. But there are security considerations. Outlined below are these considerations, along with security best practices to help keep your AWS environment properly configured and secure.

1. Visibility

Cloud resources are ephemeral, which makes it difficult to keep track of assets. According to our research, the average lifespan of a cloud resource is two hours and seven minutes. And many companies have environments that involve multiple cloud accounts and regions. This leads to decentralized visibility, and since you can’t secure what you can’t see, this makes it difficult to detect risks.

Best practice: Use a cloud security solution that provides visibility into the volume and types of resources (virtual machines, load balancers, security groups, users, etc.) across multiple cloud accounts and regions in a single pane of glass. Having visibility and an understanding of your environment enables you to implement more granular policies and reduce risk.

2. Exposed root accounts

Your root accounts can do the most harm when unauthorized parties acquire access to them. Administrators often forget to disable root API access.

Best practice: Protect root accounts with multi-factor authentication and use them sparingly. Not even your top admins should use the AWS root account for day-to-day work, and root credentials should never be shared across users or applications.

3. IAM access keys

IAM access keys are often not rotated. This weakens IAM’s ability to secure your user accounts and groups, giving cyber attackers a longer time window to acquire them.

Best practice: Rotate or change your access keys at least once every 90 days. If users have been given the necessary permissions, they can rotate their own access keys. Rotation also ensures that old keys aren’t being used to access critical services.
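
As one way to operationalize this, here is a small sketch – assuming Python with boto3 and IAM read permissions – that lists access keys older than 90 days so they can be rotated.

```python
# Sketch: flag IAM access keys older than 90 days as rotation candidates.
# Assumes boto3 and suitable IAM read permissions; illustrative only.
from datetime import datetime, timezone

import boto3

MAX_AGE_DAYS = 90

def find_stale_access_keys() -> None:
    iam = boto3.client("iam")
    now = datetime.now(timezone.utc)
    for page in iam.get_paginator("list_users").paginate():
        for user in page["Users"]:
            keys = iam.list_access_keys(UserName=user["UserName"])
            for key in keys["AccessKeyMetadata"]:
                age = (now - key["CreateDate"]).days
                if age > MAX_AGE_DAYS:
                    print(f"{user['UserName']}: key {key['AccessKeyId']} "
                          f"is {age} days old – rotate it")

if __name__ == "__main__":
    find_stale_access_keys()
```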

4. Authentication practices

According to Verizon’s annual Data Breach Investigations Report, lost or stolen credentials are a leading cause of cloud security incidents. It is not uncommon to find access credentials to public cloud environments exposed on the internet. Organizations need a way to detect account compromises.

Best practice: Strong password policies and multi-factor authentication (MFA) should be enforced in AWS environments. Amazon recommends enabling MFA for all accounts that have console passwords. First, determine which accounts already have MFA. Then, go into IAM and select “MFA device” for each user. Smartphones and other devices can be used for an extra factor of authentication.
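For the first step – determining which accounts already have MFA – a quick audit along these lines can help. This is a sketch assuming Python with boto3 and IAM read permissions; it lists users who hold console passwords but have no MFA device enrolled.

```python
# Sketch: list IAM users who have console passwords but no MFA device.
# Assumes boto3 and IAM read permissions; illustrative only.
import boto3
from botocore.exceptions import ClientError

def users_missing_mfa() -> list[str]:
    iam = boto3.client("iam")
    missing = []
    for page in iam.get_paginator("list_users").paginate():
        for user in page["Users"]:
            name = user["UserName"]
            try:
                iam.get_login_profile(UserName=name)  # raises if no console password
            except ClientError:
                continue  # no console access; MFA is less urgent here
            if not iam.list_mfa_devices(UserName=name)["MFADevices"]:
                missing.append(name)
    return missing

if __name__ == "__main__":
    for name in users_missing_mfa():
        print("No MFA:", name)
```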

5. Access privileges

AWS IAM can be deployed to manage all of your user accounts and groups, with policies and detailed permission options. Unfortunately, admins often assign overly permissive access to AWS resources. Not only does that enable users to make changes and have access they shouldn’t be allowed to have, but if a cyber attacker acquires their account, more harm can be done.

Best practice: Your configuration of IAM, like any user permission system, should comply with the principle of “least privilege.” That means any user or group should only have the permissions required to perform their job, and no more.
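To make “least privilege” concrete, here is an illustrative sketch – the bucket and policy names are hypothetical – that grants read-only access to a single S3 bucket and nothing more, using Python with boto3.

```python
# Sketch: create a least-privilege IAM policy granting read-only access
# to one (hypothetical) S3 bucket. Assumes boto3; illustrative only.
import json

import boto3

POLICY = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": [
            "arn:aws:s3:::example-reports-bucket",    # hypothetical bucket
            "arn:aws:s3:::example-reports-bucket/*",
        ],
    }],
}

if __name__ == "__main__":
    iam = boto3.client("iam")
    iam.create_policy(
        PolicyName="ReportsReadOnly",  # hypothetical policy name
        PolicyDocument=json.dumps(POLICY),
    )
```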

6. Security groups

Security groups act as a firewall controlling traffic to and from resources in the AWS environment. Unfortunately, admins often assign security groups IP ranges that are broader than necessary. Unit 42’s cloud research team found that 85% of resources associated with security groups don’t restrict outbound traffic at all.

Adding to the concern, a growing number of organizations were not following network security best practices and had misconfigurations or risky configurations. Industry best practices call for restricting outbound access to prevent accidental data loss or data exfiltration in the event of a breach.

Best practice: Limit the IP ranges you assign to each security group so that everything still communicates properly, but nothing is left open beyond what’s needed.
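
As a hedged example of auditing for the 85% problem above, this sketch (Python with boto3) finds security groups with unrestricted outbound rules; the revocation call is shown but commented out so nothing changes without review.

```python
# Sketch: find security groups whose egress rules allow all traffic
# to 0.0.0.0/0, the pattern described above. Assumes boto3; illustrative.
import boto3

def find_open_egress() -> None:
    ec2 = boto3.client("ec2")
    for sg in ec2.describe_security_groups()["SecurityGroups"]:
        for perm in sg.get("IpPermissionsEgress", []):
            open_ranges = [r for r in perm.get("IpRanges", [])
                           if r.get("CidrIp") == "0.0.0.0/0"]
            if open_ranges:
                print(f"{sg['GroupId']} ({sg['GroupName']}) allows "
                      f"unrestricted outbound traffic")
                # After review, the rule could be revoked, e.g.:
                # ec2.revoke_security_group_egress(
                #     GroupId=sg["GroupId"], IpPermissions=[perm])

if __name__ == "__main__":
    find_open_egress()
```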

7. Audit history

Organizations need oversight of user activities to reveal account compromises, insider threats and other risks. The virtualization at the backbone of cloud networks, together with the ability to use the infrastructure of a very large and experienced third-party vendor, affords agility: privileged users can make changes to the environment as needed. The downside is the potential for insufficient security oversight. To avoid this risk, user activities must be tracked to identify account compromises and insider threats, and to assure that a malicious outsider hasn’t hijacked those accounts. Fortunately, businesses can effectively monitor users when the right technologies are deployed.

Best Practice: AWS CloudTrail is a web service that records the event history of your AWS account activity, including actions taken through the AWS Management Console, AWS SDKs, command line tools and other AWS services. Use it. Enabling CloudTrail simplifies security analysis, resource change tracking and troubleshooting.
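
For reference, turning on a multi-region trail can be as small as the following sketch (Python with boto3). The trail and bucket names are hypothetical, and the S3 bucket must already exist with a policy that lets CloudTrail write to it.

```python
# Sketch: create and start a multi-region CloudTrail trail.
# Trail and bucket names are hypothetical; the bucket must already exist
# with a bucket policy permitting CloudTrail writes. Assumes boto3.
import boto3

if __name__ == "__main__":
    cloudtrail = boto3.client("cloudtrail")
    cloudtrail.create_trail(
        Name="org-audit-trail",             # hypothetical trail name
        S3BucketName="example-audit-logs",  # hypothetical, pre-existing bucket
        IsMultiRegionTrail=True,
    )
    cloudtrail.start_logging(Name="org-audit-trail")
```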

8. Unpatched hosts

It is your responsibility to ensure the latest security patches have been applied to hosts within your AWS environment. Unit 42’s research points to a related problem: traditional network vulnerability scanners are most effective on on-premises networks and miss crucial vulnerabilities when used to test cloud networks.

Best practice: Make sure hosts are patched frequently, and apply any necessary hotfixes released by your OEM vendors. To do so, you need third-party tools that can map the data from your host vulnerability feeds, such as Amazon Inspector, to gain cloud-specific context.

Amazon has developed some very useful security measures and controls that organizations should take full advantage of, including AWS CloudTrail, IAM, and permissions on cloud resources, which can be configured in a very specific way. However, that’s only the first step. Organizations must be able to quickly prioritize risks and maintain agile development to effectively fulfill their obligations in the Shared Responsibility Model.

The Hole in Your Container Security Strategy

Docker is turning six this year. Yet many security teams lack a distinct strategy for addressing containers, and oftentimes they address container security separately from cloud security. For security teams, it’s critical to understand the intrinsic link between public cloud and containers.

Computing has gone through several evolutions over the past two decades. I’ll save the history lesson for a future post, but suffice it to say, developers grew tired of dealing with operating system and application dependencies. Containers addressed this issue but then created a whole new set of security challenges. Market demand grew quickly, and point security products – from commercial to open source – began to spring up. While these point products narrowly addressed some of the security challenges with containers, they completely missed the bigger picture. The majority of apps developed on containers use a mix of PaaS services like Amazon Redshift, GCP Cloud Datastore and Azure SQL. Without a complete view into your cloud’s API layer, which knows exactly which cloud-native services are in use, how can your teams accurately assess the total risk containers add to your enterprise?

Consider a scenario where a fleet of containers is vulnerable to the latest CVE, but the relevant AWS security group is not open on the port required to attack. Having this full-stack security knowledge dramatically changes the risk equation, yet it would be missed by container security point products. Why? Because they often have no visibility into the cloud provider’s all-important API layer. Full-stack knowledge allows your teams to defer that vulnerability while first remediating others that are publicly exposed. The question then becomes: what are we missing with a strategy that looks at container security in isolation from the rest of the cloud architecture?
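
Here is a deliberately simplified sketch of that prioritization logic in Python. The finding and security-group data structures are hypothetical stand-ins for whatever your scanner and cloud API inventory actually return.

```python
# Sketch: prioritize container CVE findings by whether the cloud network
# path actually exposes the vulnerable port. Data shapes and values are
# hypothetical stand-ins for real scanner output and security group data.
from dataclasses import dataclass

@dataclass
class Finding:
    workload: str
    cve: str
    port: int  # port the exploit requires

# Hypothetical inventory: workload -> ports opened by its security group
OPEN_PORTS = {
    "checkout-api": {443},
    "batch-worker": set(),  # no inbound exposure at all
}

def prioritize(findings: list[Finding]) -> list[Finding]:
    """Exposed findings first; unexposed ones can be scheduled later."""
    return sorted(
        findings,
        key=lambda f: f.port not in OPEN_PORTS.get(f.workload, set()),
    )

if __name__ == "__main__":
    findings = [  # sample data, hypothetical CVE identifiers
        Finding("batch-worker", "CVE-2019-0001", 8080),
        Finding("checkout-api", "CVE-2019-0002", 443),
    ]
    for f in prioritize(findings):
        exposed = f.port in OPEN_PORTS.get(f.workload, set())
        print(f.cve, f.workload, "EXPOSED" if exposed else "not exposed")
```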

Holistically Addressing Containers

The first step in correcting this is setting a clear goal so that your team knows what they are trying to achieve. This could be as simple as “enabling the secure use of containers through a combination of developer education, agreed upon security standards, automated enforcement of best practices and close integration with cloud provider native APIs.” The key here is that you are not limiting teams to buying yet another security tool but rather encouraging them to take a holistic approach that includes people, process and technology.

What does this really mean? Embrace DevOps and shift security left. Until your teams have integrated both security processes and tools into the development lifecycle, you don’t have DevSecOps (the union of DevOps and security, with security shifted as early in the development process as possible). And given the velocity at which containers and cloud operate, there really is no other successful path forward for security teams. Once you’ve set a clear goal, teams can focus their energy on the three key areas of the container lifecycle – build, deploy and run – each of which presents unique challenges and deserves close attention.
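
As a small illustration of what “shift left” can look like in the build stage, this sketch gates a CI pipeline on image scan results. The JSON findings format is a hypothetical stand-in for whatever scanner your team adopts; adapt the field names to your tool.

```python
# Sketch: fail a CI build when an image scan reports critical findings.
# The findings file format is a hypothetical stand-in for a real scanner's
# JSON output; adapt the field names to the tool you actually use.
import json
import sys

def gate_build(report_path: str, blocked_severities=("CRITICAL",)) -> None:
    with open(report_path) as f:
        findings = json.load(f)  # e.g., [{"id": ..., "severity": ...}, ...]
    blockers = [f for f in findings if f.get("severity") in blocked_severities]
    for finding in blockers:
        print(f"BLOCKER: {finding.get('id')} ({finding.get('severity')})")
    if blockers:
        sys.exit(1)  # non-zero exit fails the pipeline stage

if __name__ == "__main__":
    gate_build(sys.argv[1] if len(sys.argv) > 1 else "scan-report.json")
```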

Wrapping It Up

With predictions of container adoption reaching 90% of enterprises in 2019, it’s critical that your team addresses containers as a holistic part of your cloud security strategy. If no one on your team is focused on developing and implementing a cloud security strategy with containers as an integral part, you are likely looking at a skewed picture of the risks containers add to your organization. While it’s always tempting to bolt on yet another point security product, the most mature organizations see cloud and containers as one and the same. Addressing container and cloud security separately will leave you blind to risks that an integrated strategy would address.

What Does It Mean to Be “5G-Ready”?

We keep hearing about products and technologies that are “5G-ready.” But what does that mean? Mobile Service Providers will undoubtedly require 5G equipment that is scalable in terms of capacity and throughput, but does that alone mean the networks will be 5G-ready?

In late February at Mobile World Congress 2019, we can certainly expect to see demos of 5G core networks, network slicing, New Radios (5G-NR), and other 5G-ready network components. But what about security? Mobile networks will not be 5G-ready unless the necessary security capabilities are baked into these networks by design.

Tom Wheeler, former chairman of the Federal Communications Commission, accurately points out in a recent NY Times op-ed: “Leadership in 5G technology is not just about building a network, but also about whether that network will be secure enough for the innovations it promises.” Wheeler goes on to state, “The simple fact is that our wireless networks are not as secure as they could be because they weren’t designed to withstand the kinds of cyberattacks that are now common. This isn’t the fault of the companies that built the networks, but a reflection that when the standards for the current fourth-generation (4G) technology were set years ago, cyberattacks were not a front-and-center concern.”

A New Approach for Security Is Needed

With 5G, everything changes. Critical applications like remote healthcare, remote monitoring and control of our power grids, and self-driving automobiles will all rely on 5G technologies. The networks will become more distributed, and many critical applications will be hosted at the edge of 5G networks and across edge clouds. Opportunities will emerge for threat actors who, left unchecked, will use automation to wage multi-stage attacks and exploit the least secure portions of 5G networks. For mobile networks to be 5G-ready, a new approach for security is required.

Even though standards and network architectures are still being defined, mobile operators not only have the opportunity to build the right set of security capabilities into these network evolutions by design, they have no choice but to do it. Today’s cyberattacks are already capable of evading mobile network defenses, and their continued evolution is indeed a front-and-center concern.

To be 5G-ready, mobile networks will need, at a minimum:

- Complete visibility, inspection, and controls that are applied across all layers of the network – application, signaling, and data planes.

- Cloud-based threat analytics – powered by machine learning (ML) – that are leveraged across the different mobile network locations and environments.

- A cloud-ready platform that ensures consistent security enforcement across all network locations.

With these necessary security capabilities in place, mobile networks will be able to evolve as 5G-ready with a data-driven threat prevention posture that provides contextual security outcomes. Mobile operators will be able to automate processes to proactively identify infected devices and prevent device-initiated attacks. They will be able to capture advanced multi-stage attacks that will naturally look to leverage different signaling and control layers across the 5G networks. They will be able to automatically identify advanced threats, correlate these with specific devices/users, and isolate/remove infected devices from their networks. They will also be able to differentiate themselves as “secure business enablers.”

These 5G networks are set to become the backbone of transformational services that will positively alter our lives for generations to come. Whether it’s autonomous vehicles, remote surgery, smart utilities, or the multitude of other technological advancements that will enable us to benefit from 5G, as Wheeler states: “Innovators, investors and users need confidence in the network’s cybersecurity if its much-heralded promise is to be realized.”

You Want Network Segmentation, But You Need Zero Trust

In this blog series, I’ve been offering commentary on Zero Trust to dispel much of the mythology that has recently grown up around the topic. I talked about the fundamental issues with the failed trust model and how trust is a vulnerability. Then, I provided clarity as to what Zero Trust is (and isn’t). And most recently, I reviewed the concept of a “protect surface.” Now, I want to talk about the concept of a Segmentation Gateway (SG) – the technology that protects the protect surface.

Years ago, before the advent of the Next-Generation Firewall, I wrote about the concept of an SG when I was at Forrester Research. I foresaw the need for a segmentation gateway that collapses all network security technologies into a single gateway for the purpose of segmenting networks based upon users, applications, or data. Today, an SG can be delivered either physically (PSG) or virtually (VSG), and can granularly control what traffic moves in and out of a micro-perimeter in a Zero Trust network.

Network segmentation is top of mind in organizations around the world. I have found that when customers ask for segmentation, what they really mean is that they NEED a Zero Trust network. This isn’t surprising, as a fundamental struggle in the cybersecurity industry is the tendency to think tactically and not strategically. Network segmentation is a tactic and a tool, not a strategy for building secure networks. This is where Zero Trust comes in. Adopting a Zero Trust architecture provides business resonance, defines the business use of segmentation, and provides a methodology for building a segmented network.

How to Segment a Network Properly

As I work with customers, I show them how applying Zero Trust principles provides two very important answers around how to segment a network properly:

Zero Trust answers why you are segmenting. Segmentation should be done from the inside out. You first determine what you are protecting. This is typically data, applications, assets, or services that are sensitive, regulated, or in other ways, important to your company. This defines the protect surface, which is the smallest possible outcome of our mandate to reduce the attack surface.

Zero Trust answers how you are enforcing the segmentation all the way up to Layer 7. Every attacker worth their salt knows how to get past Layer 3 controls. Network segments must be secured at Layer 7. This should be non-negotiable and intuitive.

When I work on Zero Trust network designs, I use a Next-Generation Firewall either in a physical or virtual form factor to function as the SG in a Zero Trust environment. This is imperative as policy must be enforced at Layer 7. Most attackers know how to bypass Layer 3/4 technologies, which is why NGFWs had to be developed in the first place.

By having access to the Layer 7 traffic, we can now create more effective and granular policy controls enforced in real time by the SG. As I noted in my last post, there is a very limited number of users or resources that actually need access to sensitive data or assets in an environment. In an SG, by creating policy statements that are limited, precise, and understandable, you limit the ability of the adversary to execute a successful cyberattack. You just need a Zero Trust architecture as the starting point to achieve this. In my next post, I’ll discuss how Zero Trust enables a new way to create Layer 7 policy.
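
To illustrate what a “limited, precise, and understandable” Layer 7 policy statement might look like in the abstract – this is a conceptual sketch in Python, not the policy language of any particular NGFW, and the field values are hypothetical – consider:

```python
# Conceptual sketch of a Layer 7 Zero Trust policy statement: who may
# access which resource via which application. Not the syntax of any
# particular product; field values are hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class ZeroTrustRule:
    user_group: str   # asserted user identity, not an IP address
    application: str  # Layer 7 application, not a port number
    resource: str     # the protect surface element being accessed
    action: str       # "allow" or "deny"

RULES = [
    # Only the billing team, via the sanctioned app, may reach billing data.
    ZeroTrustRule("billing-team", "billing-webapp", "billing-db", "allow"),
    # Everything not explicitly allowed is denied – the Zero Trust default.
    ZeroTrustRule("any", "any", "billing-db", "deny"),
]

def evaluate(user_group: str, application: str, resource: str) -> str:
    for rule in RULES:
        if (rule.user_group in (user_group, "any")
                and rule.application in (application, "any")
                and rule.resource == resource):
            return rule.action
    return "deny"  # no match: default deny

if __name__ == "__main__":
    print(evaluate("billing-team", "billing-webapp", "billing-db"))  # allow
    print(evaluate("intern", "ssh", "billing-db"))                   # deny
```

The point is that the rule reads in business terms – who, what, and which protect surface – rather than in IPs and ports.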

Differentiating With 5G Security: How Mobile Service Providers Can Become Secure Business Enablers

Mobile Network Operators (MNOs) are set to invest billions of dollars over the next several years to build 5G networks. What differentiates 5G from previous generations of mobile technology evolutions (2G to 3G, 3G to 4G) is the opportunity to enable new and transformative enterprise use cases. Whether it be leveraging 5G for on-site factory automation, self-driving cars, remote surgery, or any number of centrally or remotely deployed massive IoT use cases, 5G creates opportunity to build new enterprise services that simply aren’t possible with today’s technology.

Industry digitization will generate an estimated US$619 billion in new revenue opportunity for telecom operators by 2026. The MNOs that gain the most from 5G will be those that look well beyond the construction of faster networks and differentiate themselves with disruptive business models built around enabling this massive industry transformation. Building differentiated 5G networks will be critical, and enabling network security capabilities for industries will be the fundamental differentiator for these MNOs.

One of the defining features of 5G networks that MNOs will look to leverage for enabling industry transformation is their ability to serve up custom “slices” of their 5G networks to individual enterprise customers. These will be configured end to end across the mobile core, transport, and radio access network (RAN) domains and operate as “virtual 5G enterprise networks” with each slice being uniquely tailored to suit the requirements of specific verticals and/or enterprise applications. Organizations across multiple industries will leverage 5G network slices to roll out transformative business applications and will require multilayered security functions when configuring these slices.

The Opportunity

MNOs will have the opportunity to become “secure business enablers” in this 5G world by delivering real value beyond connectivity. The potential here is that organizations don’t select from a fixed menu of network slicing options handed down to them by the operator. Instead, organizations could have the means to reach into the 5G network themselves and self-provision their own customized and secure network slices. Advanced network security capabilities will differentiate one MNO from the next and also enable the MNO to build value-based 5G business models.

Becoming a premium provider of 5G network slices requires a new security approach for the operator that allows them to build brand equity over time while protecting their own network. Organizations will need to trust that their sensitive data and applications are protected in the 5G operator’s network. Likewise, the operator needs to be able to trust that the rest of its network resources – and its obligations to other customers – are fully protected against misuse by any one enterprise user.

In the initial phase of network slice offerings, organizations will require a universal security framework that ensures the total isolation of network slices from one another. They will also require an initial menu of security features with the slice that they buy. Subsequent evolutions in network slicing will require additional security features and greater flexibility in the way different security features can be attached to a given slice. But they will require a lot more than that, too. Whether it’s in the cloud, on their own premises, or in a telecom operator’s cloud or 5G slice, organizations increasingly want an end-to-end security posture in which global as well as local threat intelligence and behavioral analytics are shared across their estate of information and communication technology (ICT) infrastructure. This requires MNOs to build their networks with a security architecture that is highly automated and software-driven, allowing security instances to be spun up anywhere in the network, with consistent capabilities deployed where needed.

Organizations will focus heavily on the network security that is offered and tied to their 5G applications as a critical factor in their selection of service and/or slice provider. But they will look beyond that, too. In addition to the isolation that’s provided between slices and the security features attached to their slice, organizations will take into account a 5G operator’s broader security posture across the whole of its network. In a 5G environment in which new applications, devices, and cyberthreats are growing exponentially, the confidentiality, integrity, and availability of a network slice will be easier to defend in a 5G network that is subject to the very highest levels of overall cybersecurity hygiene.

The EU’s New Telecom Code Requires Heightened Cybersecurity Efforts

Today, the European Union (EU) took its final step in enacting its new telecommunications legislation, the European Electronic Communications Code (the “Code”), which overhauls the existing EU legislative framework for telecommunications, dating from 2009. The Code was published in the Official Journal of the European Union today, and EU Member States will have two years to transpose relevant clauses of the Directive into national laws, regulations and administrative provisions necessary to comply with it. The deadline for that transposition (and the date from which the national laws should be enforced) is therefore December 2020.

The Code’s overall aim is, as described by the European Commission, to “put the EU at the forefront of internet connectivity by 2025 – to create a Gigabit Society.” The European Council further stressed that this “is the cornerstone of EU efforts to ensure very high quality fixed and mobile connectivity for everyone, which is considered a key factor for a globally competitive economy and a modern inclusive society.” To achieve this goal, the new Code is multifaceted – it includes measures to stimulate investment in and take-up of very high capacity networks, new spectrum rules for mobile connectivity and 5G, as well as changes to governance, the universal service regime, end-user protection rules, and numbering and emergency communication rules.

Notably, it also includes provisions on security: providers of electronic communications networks and services are to put in place mechanisms and technology to minimize and manage security incidents. These rules build upon security requirements in the existing law, with new provisions related to security incident notifications and other areas.

The Code’s security provisions reflect the EU’s understanding that, to truly benefit from 5G, these networks must be secure – particularly to drive the confidence of users (businesses and EU citizens) in the online activities expected to flourish from infrastructure investments. With 5G, more devices and critical services will move onto these networks, and cybersecurity threat actors will likely follow.

Palo Alto Networks has been working closely with our service provider (SP) partners around the world – including in Europe – to understand the unique security challenges 5G will present for them. We know SPs are facing a major mobile infrastructure transition from 4G to 5G technology, and we plan to be there to help them succeed. Recently, we announced the coming availability of a dedicated service provider product series of our next-generation firewalls (NGFWs), which are ideally suited for service provider 4G/5G network evolutions and IoT use case scenarios.

Like many others, Palo Alto Networks will be looking to understand the real-world implications of the Code over the coming months. In addition, further details on its security and other provisions will have to be developed by Member States and the competent telecommunications authorities between now and 2020. We will continue analyzing the law in more depth to assess how we can help our EU SP customers with their security needs and compliance journey. Expect to hear more from us on this topic in the future.