Although ACLs and bucket policies do a fine job of protecting against leaky buckets for those who understand AWS and IAM well, they are not something one grasps in 30 seconds – so I think some people avoid learning them in order to get their jobs done fast. There is also a general lack of security awareness in the wild: people do things quickly without thinking through the implications of what they are doing… like making buckets public. I believe these two reasons sit at dead center of all the bucket leaks you read about in the news.

First, let’s talk about detecting public buckets.

AWS has done a good job of adding methods to detect public buckets. A year or so ago, this little icon appeared in the console next to any public bucket when viewing your buckets in the S3 menu:

Also, to take a quick detour into the inventory aspect of information security, consider leveraging Amazon Macie to detect whether certain data types (PII) are in S3 buckets and how that data is being accessed.

That’s nice, but let’s take some action…

Next, you can integrate Trusted Advisor with CloudWatch so you can act on Trusted Advisor’s checks – but this only fires AFTER Trusted Advisor has run, so it is still not enough to stop bucket leaks that occur between TA runs.

Next level up, and one of my favorites, is using AWS Config to monitor and respond to public buckets – this is powerful because it requires no human interaction… almost. The Lambda script in the linked tutorial only notifies when a bucket permission has been changed. To fully automate, I really recommend that you customize the script and add some teeth to it, so that it remediates the bucket policy. An example script is here.
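To illustrate the kind of "teeth" I mean, here is a minimal sketch of the remediation logic such a Lambda could add. The helper names and wiring are my own; the actual boto3 remediation call is shown commented out since it requires credentials:

```python
# Sketch: detect a public bucket ACL and reset it to private.
# These two group URIs are what S3 uses for AllUsers / AuthenticatedUsers.
PUBLIC_GRANTEES = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

def is_public_acl(grants):
    """Return True if any ACL grant targets the AllUsers or
    AuthenticatedUsers groups (i.e. the bucket is public)."""
    for grant in grants:
        grantee = grant.get("Grantee", {})
        if grantee.get("Type") == "Group" and grantee.get("URI") in PUBLIC_GRANTEES:
            return True
    return False

def remediate(bucket, grants):
    """If the ACL is public, reset it to private and report the action."""
    if is_public_acl(grants):
        # The real "teeth" (not run here, needs credentials):
        # import boto3
        # boto3.client("s3").put_bucket_acl(Bucket=bucket, ACL="private")
        return f"{bucket}: public ACL found, reset to private"
    return f"{bucket}: ACL ok"
```

Wire `remediate` into the Config-triggered Lambda from the tutorial and the bucket is closed automatically, with no human in the loop.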

Problem still not solved, you say – a user should not even be allowed to set their own bucket permissions? Couldn’t agree more… so…

I’d rather just prevent S3 Public Access in the first place.

Now there is Amazon S3 Block Public Access, which gives the account administrator the power to stop users from introducing ACLs with open permissions onto a bucket in the first place, by blocking this at the account level.
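Here is a sketch of the four settings involved. The configuration dict is the real `PublicAccessBlockConfiguration` shape; the account ID in the commented-out call is a placeholder:

```python
# Sketch: account-level S3 Block Public Access settings.
def account_block_config():
    """All four settings on = no ACL or bucket policy can make a bucket public."""
    return {
        "BlockPublicAcls": True,        # reject new public ACLs
        "IgnorePublicAcls": True,       # ignore any existing public ACLs
        "BlockPublicPolicy": True,      # reject new public bucket policies
        "RestrictPublicBuckets": True,  # restrict access to buckets with public policies
    }

# To apply at the account level (not run here, needs credentials):
# import boto3
# boto3.client("s3control").put_public_access_block(
#     AccountId="111122223333",  # placeholder account ID
#     PublicAccessBlockConfiguration=account_block_config(),
# )
```

With all four flags on, a user can still write an open ACL into a request, but S3 simply refuses to honor it.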

And last would be a corporate security policy prohibiting the creation of any kind of public share on any cloud or third-party service without explicit permission from the security team. Yearly education on, and acknowledgement of, this policy by every employee is a must.

I hope this helps! Stay Safe! Stay Secure!

The opinions of this blog do not necessarily reflect that of Amazon. This blog is not an official publication of Amazon or associated with Amazon.com.

Greetings, Programs! I saw this story and had to share some insight. I believe you will find it valuable! Here we have a real-world example of an employee who was fired from his job and then leveraged a co-worker’s credentials to log in to his former employer’s AWS account and delete critical infrastructure. You can prevent this!

1. MFA. The primary mitigation for this type of attack is using Multi-Factor Authentication on all IAM accounts – or at a minimum, any account that has permissions beyond READ. Had the stolen account (which had the ec2:TerminateInstances permission) had an MFA token associated, this most likely would not have occurred. The Sophos guys pointed this out as well, but I had to mention it since it is critical. In addition to what Sophos mentioned, I offer the following thoughts:
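One common way to enforce this is an IAM policy that denies everything when the request was not authenticated with MFA. A minimal sketch of that pattern (the Sid is my own label):

```python
import json

def deny_without_mfa_policy():
    """IAM policy denying all actions unless the caller authenticated
    with MFA. BoolIfExists also catches requests where the MFA context
    key is absent entirely (e.g. long-term access keys)."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "DenyAllWithoutMFA",
            "Effect": "Deny",
            "Action": "*",
            "Resource": "*",
            "Condition": {
                "BoolIfExists": {"aws:MultiFactorAuthPresent": "false"}
            },
        }],
    }

print(json.dumps(deny_without_mfa_policy(), indent=2))
```

Attach a policy like this to a group containing every human IAM user, and a stolen password alone stops being enough to terminate instances.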

2. Password rotation comes to mind here as well. It is not clear how the suspect obtained the credentials, nor how old the credentials were, but a strong rotation policy can mitigate the stolen-credential attack. Leverage rotation for API access keys as well – the CIS 1.0 Benchmarks call for a 90-day maximum API key age.
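The 90-day check is easy to automate. A small sketch of the age test you would run against each key returned by IAM’s list-access-keys call (the function name is my own):

```python
from datetime import datetime, timedelta, timezone

MAX_KEY_AGE_DAYS = 90  # CIS 1.0 Benchmark recommendation

def key_is_stale(create_date, now=None):
    """True if an access key is older than the 90-day CIS threshold.
    create_date is the 'CreateDate' value IAM returns for each key."""
    now = now or datetime.now(timezone.utc)
    return (now - create_date) > timedelta(days=MAX_KEY_AGE_DAYS)
```

Run this in a scheduled Lambda over every user’s keys and notify (or deactivate) on any stale hit.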

3. Policy. I know this is not a popular, shiny security tool, but a good old-fashioned corporate policy strictly prohibiting credential sharing between employees – audited and enforced – can create a culture where employees simply don’t share, period.

4. AWS GuardDuty could have picked this up! How, you ask? The ex-employee used a valid credential, right? Yes, but that account likely never logged in from the IP address used in the attack. There are two GuardDuty findings that could have made Security Operations personnel aware within 5 minutes – or automation could have shut down access within 5 minutes. Add the two findings below to your operations playbook for investigation; or, if you are bolder and sure there are no false positives, create automation that immediately locks the account when these fire.

UnauthorizedAccess:IAMUser/ConsoleLoginSuccess.B – This finding informs you that multiple successful console logins for the same IAM user were observed around the same time in various geographical locations. Such an anomalous and risky access-location pattern indicates potential unauthorized access to your AWS resources.

UnauthorizedAccess:IAMUser/ConsoleLogin – This finding informs you that a specific principal in your AWS environment is exhibiting behavior that is different from the established baseline (a different IP used to log in!). This principal has no prior history of login activity using this client application from this specific location. Your credentials might be compromised.
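A sketch of the CloudWatch Events pattern that would catch exactly these two findings (the finding-type strings are taken from the descriptions above; wiring it to a Lambda or SNS target is up to you):

```python
import json

def console_login_finding_pattern():
    """CloudWatch Events pattern scoped to the two GuardDuty
    console-login finding types discussed above."""
    return {
        "source": ["aws.guardduty"],
        "detail-type": ["GuardDuty Finding"],
        "detail": {
            "type": [
                "UnauthorizedAccess:IAMUser/ConsoleLoginSuccess.B",
                "UnauthorizedAccess:IAMUser/ConsoleLogin",
            ]
        },
    }

print(json.dumps(console_login_finding_pattern(), indent=2))
```

Paste the printed JSON into a CloudWatch Events rule and point the target at your notification or lockout automation.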

5. The ex-employee was the threat that manifested itself. So much security hype focuses on China, Russia, and APTs, but it is important to implement controls that monitor for and mitigate internal threats as well.


Greetings, Programs! Happy 2019! I’ve updated the list I made last year of resources for the AWS Networking Specialty Exam. Besides the FREE resources listed below, which include many great re:Invent videos, there are links to specific Amazon literature for each topic!

UPDATE: March 7, 2019 – The story of this exam continues. I took my first run at it on Jan 30, 2019 and missed by what I believe to be 2 questions, based on the score report I was mailed. I blame 90% of that fail on me for not studying in enough depth, and 10% on some very badly worded questions. 🙂 As such, in addition to the guide below, I will add some links here that I wish I had read, plus critical notes at the bottom of the page. I am taking my second run later this month!

BGP MED influences incoming traffic from peers. Lowest MED is preferred. Use it to influence the way into your AS when multiple entry points exist. MED is not transitive beyond one AS.

BGP Local Preference influences outbound routing. Highest Local Pref is preferred. Use it to prefer an exit point from an AS when multiple exit points exist. This is passed to AWS to prefer a path back to the customer. LOCAL PREF is considered BEFORE AS PATH in BGP route selection. AWS uses BGP communities to set tags for Local Pref:

7224:7100 = low, 7224:7200 = medium, 7224:7300 = high

BGP Route Selection:

1. Longest Prefix
2. Local Preference (Highest Preferred)
3. Shortest AS PATH
4. Lower MED
5. If all of these are the same, equal-cost load sharing

BGP: Route control with communities

Tag your routes -> TO AWS with:

7224:9100 = your routes will stay in local AWS Region

7224:9200 = your routes will propagate to all regions in the continent

Routing with VPC EndPoints

Multiple routes to different services in one route table is OK
Multiple routes to the same service in different route tables is OK
NO multiple routes to the SAME service in the SAME route table!
Route table APIs do not work with VPC endpoints at this time

Direct Connect and AWS VPN

Direct Connect Gateway( DX GW) Rules:

No transit communications, no hub communications (only communication between the Private VIF and VGW); Direct Connect Gateways are ‘account scoped’

When using ICMP, the destination unreachable codes are important:

AWS WorkSpaces necessities. (I don’t understand the emphasis on WorkSpaces in this exam, as it’s not part of core AWS networking – it feels like the exam is being used as a marketing tool for the product. BTW, I fumbled this on my first run through the exam.)

Each Workspaces implementation has two ENIs
ENI(eth0) mgmt and streaming
ENI(eth1) Directory

Workspaces ports

443 Auth session
4172 UDP / TCP Health Checks
53 UDP DNS

Workspaces requires two Private Subnets + One Public Subnet
For access control, Workspaces uses a concept called IP access control Group, limit of 25 IP addresses
For MFA with Workspaces, an on-prem RADIUS server is required

AWS Appstream

Each AppStream 2.0 streaming instance has the following network interfaces:
The customer network interface provides connectivity to the resources within your VPC, as well as the internet, and is used to join the streaming instance to your directory.

The management network interface is connected to a secure AppStream 2.0 management network. It is used for interactive streaming of the streaming instance to a user’s device, and to allow AppStream 2.0 to manage the streaming instance.

The management network interface IP address range is 198.19.0.0/16. The following ports must be open on the management network interface of all streaming instances:
Inbound TCP on port 8300. This is used for establishment of the streaming connection.
Inbound TCP on port 8443. This is used for management of the streaming instance by AppStream 2.0.

AppStream Port 443 is used for HTTPS communication between AppStream 2.0 users’ devices and streaming instances.
Appstream Port 53 is used for communication between AppStream 2.0 users’ devices and your DNS servers.

ClusterPlacementGroups
Single AZ only / 10 Gbps Flow
“for applications that benefit from low network latency and high throughput”
If you receive capacity errors, stop and start all the instances and try the launch again.
Max network speed is limited by the slower of the two instances.
Network traffic to the internet and over an AWS Direct Connect connection to on-premises resources is limited to 5 Gbps.

SpreadPlacementGroups
Can span AZs! MAX of 7 running instances per AZ.
“recommend for apps that have a small # of servers that need to be kept separate”
does not share same underlying hardware.
Spread placement groups are not supported for Dedicated Instances or Dedicated Hosts.

PartitionPlacementGroups
Spread across multiple partitions; limits the failure domain to only one partition
7 partitions per AZ, YES can span AZs. The number of instances that you can launch in a partition placement group is limited only by your account limits.
Partition placement group with Dedicated Instances can have a maximum of two partitions.
Partition placement groups are not supported for Dedicated Hosts.
Partition placement groups are currently only available through the API or AWS CLI.

VPC Networking and Services

VPC EndPoints (PrivateLink)

Interface Endpoints: connect to services powered by PrivateLink – services hosted by AWS partners in their own VPCs.
Instances DO NOT require Public IPs
Interface EndPoints CAN be accessed through Direct Connect
Choose one subnet per AZ
Up to 16 GB per AZ
Supports only TCP
Regionally scoped
No tags, IPv4 only

Gateway EndPoints ( S3/DynamoDB)
EndPoints are supported in same Region
Gateway EndPoints cannot be extended out of VPC, NOT reachable via Direct Connect, VPN or Peering!
Must use DNS / must be enabled in VPC
You cannot use a prefix list in an outbound ACL to allow/deny traffic to the endpoint; you use the CIDR in the ACL.
The default endpoint policy allows any access to any S3 resource; to lock down:
Use routetables and Bucket Policies to restrict access to a specific VPC EndPoint (not SRC IP). Bucket Policies for EndPoints cannot use private IPs
Gateway Endpoint must be in subnet route table

Route 53

You need these enabled to enable DNS in your VPC:

enableDnsHostnames – Indicates whether instances launched in the VPC get public DNS hostnames. If this attribute is true, instances in the VPC get public DNS hostnames, but only if the enableDnsSupport attribute is also set to true.

enableDnsSupport – If this attribute is false, the Amazon-provided DNS server in the VPC that resolves public DNS hostnames to IP addresses is not enabled.

You can create a reusable delegation set of four name servers and use it across multiple hosted zones. You create a delegation set with the AWS CLI or API only.

DNS Hybrid VPC to on-prem
Create a DHCP options set that includes Directory Services and attach it to the VPC the directory is in; any instances that resolve DNS in that VPC then point to the directory’s domain and resolve names. The directory should also have a conditional forwarder to the on-prem DNS server for domains that are on prem, and forward to Route 53 for any non-authoritative answers.

DNS Hybrid on-prem to VPC
Configure a DNS forwarder on-prem to forward requests to Simple AD (over DX or VPN)
Simple AD receives the request and (if need-be) points to Route53 for address.
Route 53 responds to Simple AD
Simple AD replies back to on-prem
“Simple AD is one of the Easiest ways for on prem devices to access private hosted zones”

Ec2 DNS and VPC Peering

Resolution of Public DNS hostname to private IP when queried from the peered VPC:
Modify the peering connection and enable “allow DNS resolution from accepter VPC (vpc-id) to private IP”; enableDnsSupport and enableDnsHostnames must be on in both VPCs

Route 53 HealthChecks

HealthChecks of other health checks
HealthChecks that monitor an endpoint
HealthChecks that monitor CloudWatch Alarms

HTTP, HTTPS – status code of 2xx or 3xx within two seconds after connecting.
TCP – Route 53 must be able to establish a TCP connection with the endpoint within ten seconds

HTTP and HTTPS health checks with string matching – As with HTTP and HTTPS health checks, Route 53 must be able to establish a TCP connection with the endpoint within four seconds, and the endpoint must respond with an HTTP status code of 2xx or 3xx within two seconds after connecting.

After a Route 53 health checker receives the HTTP status code, it must receive the response body from the endpoint within the next two seconds

Transit VPC / Hybrid VPC Connectivity Scenarios / Rules

VPC Peering
No A -> B -> C routing (no transitive routing)
No overlapping CIDRs (you can peer to part of a VPC CIDR though, if it does not overlap)
Inter-region peering is “secure communication”

Transit VPC reasons / properties
Reduce the number of Tunnels required
Use a Security Layer
Address overlap of IP between on-prem and VPC
Highly Available
Scale Globally

Transit VPC Design 1
Typically done with a pair of SoftwareVPNs ( Cisco CSR 1000v ) in transit VPC
Tunnel between CSR 1000v and on-prem
OR Direct Connect can be used with Transit VPC using a Detached VGW.
Tunnels between CSR 1000v and each VPC’s VPG to which you need to connect

Why use a detached VGW? You can leverage a detached virtual private gateway (VGW) to conceptually attach a VGW to a data center. In this approach, a customer creates a VGW, then adds a spoke VPC tag (default tag key transitvpc:spoke, default tag value true) without attaching the VGW to a specific VPC. This causes the VGW to be automatically connected to the transit VPC CSR instances, which will start broadcasting any routes they have learned to the new VGW.

Transit VPC Design 2
For inter-spoke ( just the spoke VPCs) VPC peering can be used to communicate between just the spokes instead of sending traffic back to the transit VPC … IF spokes are trusted.

Accessing AWS Public Services in a remote Region:

Direct Connect gateway in any public Region. Use it to connect your AWS Direct Connect connection over a private virtual interface to VPCs in your account that are located in different Regions. For more information, see Direct Connect Gateways.

Alternatively, you can create a public virtual interface for your AWS Direct Connect connection and then establish a VPN connection to your VPC in the remote Region

CloudFormation

Collection of resources in AWS is a single unit called a stack
‘StackSets’ extend this functionality, allowing you to create, update, or delete stacks across multiple accounts and Regions.
For templates, the only required top-level object is the Resources object, which must declare at least one resource.
In a template’s Parameters object, a parameter contains a list of attributes that define its value and constraints against its value. The only required attribute is Type, which can be String, Number, or an AWS-specific type.
Nested stacks are stacks that create other stacks. To create nested stacks, use the AWS::CloudFormation::Stack resource in your template to reference other templates.

Change sets allow you to see how proposed changes to a stack might impact your running resources before you implement them.
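A minimal template illustrating the rules above – Resources is the only required top-level object, and a parameter needs only a Type (the parameter and resource names are my own examples):

```yaml
AWSTemplateFormatVersion: "2010-09-09"
Parameters:
  Env:
    Type: String        # Type is the only required parameter attribute
    Default: dev
Resources:              # the only required top-level object
  LogBucket:            # must declare at least one resource
    Type: AWS::S3::Bucket
```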

As I ramp up for my second run at that AWS Network Specialty Exam, I want to re-iterate this major difference between VPC Interface EndPoints and VPC GatewayEndPoints:

An interface endpoint can be accessed through AWS VPN connections or AWS Direct Connect connections. Interface endpoints can be accessed through intra-region VPC peering connections from Nitro instances. Interface endpoints can be accessed through inter-region VPC peering connections from any type of instance.

(Gateway)Endpoint connections cannot be extended out of a VPC. Resources on the other side of a VPN connection, VPC peering connection, AWS Direct Connect connection, or ClassicLink connection in your VPC cannot use the endpoint to communicate with resources in the endpoint service

This makes sense because interface endpoints are exactly that – a virtual interface (NIC) in your VPC – whereas gateway endpoints are more of a routing device to get you to a public service [S3, DynamoDB].

To access S3 from Direct Connect, use a Public VIF, as recommended here:


As a huge advocate of the AWS GuardDuty intrusion detection system, I cannot recommend it enough. But as with all security products, no single solution or vendor should ever be considered a ‘silver bullet’.

To close that gap, I recommend using multiple endpoint security solutions as part of your cloud defense-in-depth strategy. There are various players in this space, but the key is to pick your endpoint solutions based on your architecture and your environment.

For example, if you run a lot of Linux workloads, I recommend deploying the OSSEC client for real-time intrusion detection at the host level (OSSEC has a Windows client as well). On Linux, OSSEC can also be configured for ‘active response’ – running custom commands in response to an event, or automatically adding a rule to the local iptables to block attacks. Again, this is all real-time event processing. OSSEC also provides File Integrity Monitoring (FIM) that can be customized.

For Windows, I have been successful using Carbon Black Defense (use whatever solution works best for your environment; I am not selling Carbon Black here). With Carbon Black Defense, you can do very fine-grained tuning of what types of applications run on your Windows systems: allowing only signed code to run, custom application paths, checking trusted apps and shutting down any other process, plus many more buttons and triggers. Carbon Black is also an ‘active defense’ product in that it stops malicious code in REAL TIME. I have also used the MalwareBytes enterprise desktop endpoint client for Windows – and although it is not as cloud-friendly as Carbon Black from an API perspective, the MalwareBytes detection and response engine works well in REAL TIME.

Last, for a more advanced type of endpoint security solution, check out Guardicore. In addition to standard threat detection and FIM, their Centra product provides application flow visibility and container security (in the build process, too). The real-time threat response re-routes attacks to a ‘deception engine’ (honeypot) and attempts to gather intel. Guardicore also does reputation analysis for IPs and domains.

The list goes on… The key premise here is defense in depth. You can use all of these endpoint solutions together. Even as continuous development improves GuardDuty (if AWS makes it REAL TIME), I would still recommend multiple IDS/IPS solutions whose core functions overlap if drawn out into a nice Venn diagram – that’s what you want: the failure of one system should never negatively affect your security posture.


Hello, fellow GuardDuty enthusiasts. As promised in my earlier post, I wanted to share CloudWatch Events triggers that parse GuardDuty alerts by finding type. I have been using GuardDuty since it was announced in late 2017, and for the bulk of that time I was parsing GuardDuty alerts in CloudWatch Events using severity to invoke my security automation Lambda functions.

I now recommend NOT using GuardDuty severity in the CloudWatch Events trigger for any automation you want to invoke. Why? Simply put, you don’t control the finding-type-to-severity mapping, and thus you could invoke automation when you really don’t want to.

I found that what I believe to be low-level/low-threat GuardDuty alerts were coming in with higher severities. An example: as of 02/20/2019, the GuardDuty finding UnauthorizedAccess:EC2/TorIPCaller is still mapped to severity 5. If you have automation that stops an EC2 instance based on severity 5, then every Tor visitor would invoke it.

What else comes in as severity 5? Backdoor, Trojan, etc. – stuff you actually may want to act on. If someone hits my website from a Tor IP, I am going to treat that differently than if my web server resolves the IP of a command-and-control server.

Amazon does not publish a definitive mapping of finding type to severity that I have been able to find. The only way I have found to get the mapping for each alert is to generate each finding type with the API:
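I believe the API in question is GuardDuty’s CreateSampleFindings. A hedged sketch of the call (the detector ID below is a placeholder; the helper is my own):

```python
# Sketch: generate a sample GuardDuty finding to see its severity.
def sample_finding_params(detector_id, finding_type):
    """Build the arguments for guardduty create_sample_findings."""
    return {
        "DetectorId": detector_id,
        "FindingTypes": [finding_type],
    }

# To run for real (not run here, needs credentials):
# import boto3
# boto3.client("guardduty").create_sample_findings(
#     **sample_finding_params("12abc34d567e8f4912ab3456cd789012",  # placeholder
#                             "Backdoor:EC2/C&CActivity.B!DNS"))
```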

For the example above, simply replace “Backdoor…” with the FindingType of your choice.

For each unique CloudWatch Events rule you create based on an individual GuardDuty finding type, select a different target (a security automation Lambda) to invoke the specific remediation action that is appropriate!
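A sketch of this pattern-per-finding-type approach. The finding-type strings are real; the Lambda ARNs and mapping are placeholders of my own:

```python
# Sketch: one CloudWatch Events rule per finding type, each with its own
# Lambda target. The ARNs below are placeholders.
TYPE_TO_LAMBDA = {
    "Backdoor:EC2/C&CActivity.B!DNS":
        "arn:aws:lambda:us-east-1:111122223333:function:isolate-instance",
    "UnauthorizedAccess:EC2/TorIPCaller":
        "arn:aws:lambda:us-east-1:111122223333:function:just-notify",
}

def pattern_for(finding_type):
    """CloudWatch Events pattern scoped to a single GuardDuty finding type."""
    return {
        "source": ["aws.guardduty"],
        "detail-type": ["GuardDuty Finding"],
        "detail": {"type": [finding_type]},
    }
```

A Tor visitor then only triggers a notification, while a command-and-control hit gets the instance isolated – severity never enters into it.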

One last bit of advice… ALWAYS test your CloudWatch triggers when introducing a new alert or new syntax into an existing alert. AWS has a GitHub repo for testing GuardDuty alerts based on real events, but when I spun it up I found that my GuardDuty did not actually alert on any of the DNS queries included in their script – and if no alert is generated, no CloudWatch rule can be triggered. For 100% certainty in triggering the CloudWatch rules you write, just use the GuardDuty API:

Hi Friends, it’s been a little while since I’ve written. It’s been a busy 2019! I started my new gig as an AWS TAM, and I’ve STILL been studying diligently for the AWS Networking Specialty Exam.

Security automation can do a lot of good on your AWS account. The bad guys are automating their stuff, so you need to as well. It’s nice to know you have automation working for you during the day, and at night while you sleep. One of my chief aims is to continue my own professional development in cloud security automation… so stay tuned for more posts like this!

For now, here are some links to great AWSLabs scripts on GitHub for automating certain aspects of your AWS security infrastructure; and below that, I’ve added links to my own repo where I have done work in this area.