A simple flat folder with no hierarchy
Note: For your convenience, the Amazon S3 console and the Prefix and Delimiter feature allow you to navigate within an Amazon S3 bucket as if there were a folder hierarchy.
However, remember that a bucket is a single flat namespace of keys with no structure.

Immediately after an update, a read may return stale data. This is applicable to:

PUTs to existing Objects

Object Deletes

Updates are Atomic - Partial updates cannot occur

Access Control

S3 is secure by default. Initially, only the creator has access.

Coarse-grained access control:

S3 ACLs

READ, WRITE, or FULL_CONTROL

Bucket or Object level

Best Use Cases

Enabling Bucket Logging

Hosting a static website

Fine-grained access controls:

S3 Bucket Policies

Recommended access control mechanism

Similar to IAM policies

Access Control over who, from where, and when

AWS IAM

Query String Authentication
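Bucket policies use the same JSON policy language as IAM policies. Below is a sketch of a common policy, granting public read access to every object in a bucket; the bucket name "example-bucket" is hypothetical.

```python
import json

# A sample S3 bucket policy granting anonymous (public) read access to
# every object in a hypothetical bucket named "example-bucket".
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::example-bucket/*",
        }
    ],
}

policy_json = json.dumps(policy, indent=2)
print(policy_json)
```

Because bucket policies can match on the principal, source IP, and time of day, they support the "who, from where, and when" controls described above.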

Static Website Hosting

Very common use case

Every S3 Object has a URL

Configure the bucket

Create a bucket with the same name as the desired website hostname.

Upload the static files to the bucket.

Make all the files public (world readable).

Enable static website hosting for the bucket.
This includes specifying an Index document and an Error document.

The website will now be available at the S3 website URL: <bucket-name>.s3-website-<AWS-region>.amazonaws.com

Create a friendly DNS name in your own domain for the website using a DNS CNAME, or an Amazon Route 53 alias that resolves to the Amazon S3 website URL.

The website will now be available at your website domain name.
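The website endpoint above can be assembled mechanically from the bucket name and region. A small sketch, using hypothetical bucket and region values (note that some regions use a dot-style endpoint, s3-website.<region>.amazonaws.com, instead of the dash-style form shown here):

```python
def s3_website_url(bucket_name: str, region: str) -> str:
    # Dash-style S3 static website endpoint, as shown in the steps above.
    return f"http://{bucket_name}.s3-website-{region}.amazonaws.com"

# Hypothetical values, purely for illustration. Naming the bucket after
# the desired website hostname makes the CNAME/alias mapping work.
url = s3_website_url("www.example.com", "us-east-1")
print(url)  # http://www.example.com.s3-website-us-east-1.amazonaws.com
```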


Advanced Features

Prefixes and Delimiters

While Amazon S3 uses a flat structure in a bucket, it supports the use of prefix and delimiter parameters when listing key names. This emulates a file and folder hierarchy within the flat object key namespace of a bucket. For example:
logs/2016/January/server42.log
logs/2016/February/server42.log
logs/2016/March/server42.log
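The folder-like behavior can be sketched locally. The function below mimics, in simplified form, how a listing with a prefix and delimiter splits flat keys into direct "contents" and rolled-up "common prefixes" (the apparent folders); real listings come from the S3 API.

```python
def list_keys(keys, prefix="", delimiter="/"):
    # Simplified imitation of S3's prefix/delimiter listing behavior.
    contents, common_prefixes = [], set()
    for key in keys:
        if not key.startswith(prefix):
            continue
        rest = key[len(prefix):]
        if delimiter in rest:
            # Everything up to the first delimiter becomes a "folder".
            common_prefixes.add(prefix + rest.split(delimiter, 1)[0] + delimiter)
        else:
            contents.append(key)
    return sorted(contents), sorted(common_prefixes)

keys = [
    "logs/2016/January/server42.log",
    "logs/2016/February/server42.log",
    "logs/2016/March/server42.log",
]
contents, folders = list_keys(keys, prefix="logs/2016/")
print(folders)  # the three month "folders" under logs/2016/
```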

Supporting products

REST API

Wrapper SDKs

AWS CLI

AWS Management Console

Amazon S3 is not really a file system.

Storage Classes

Amazon S3 offers a range of storage classes suitable for various use cases.

Object storage differs from traditional block and file storage. Block storage manages data at a device level as addressable blocks, while file storage manages data at the operating system level as files and folders. Object storage manages data as objects that contain both data and metadata, manipulated by an API.

Amazon S3 buckets are containers for objects stored in Amazon S3. Bucket names must be globally unique. Each bucket is created in a specific region, and data does not leave the region unless explicitly copied by the user.

Amazon S3 objects are files stored in buckets. Objects can be up to 5 TB and can contain any kind of data. Objects contain both data and metadata and are identified by keys. Each Amazon S3 object can be addressed by a unique URL formed by the web services endpoint, the bucket name, and the object key.
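The endpoint-plus-bucket-plus-key structure can be sketched as below. This shows the path-style URL form; virtual-hosted-style URLs put the bucket in the hostname instead. The region, bucket, and key are hypothetical.

```python
def object_url(region: str, bucket: str, key: str) -> str:
    # Path-style object URL: endpoint, then bucket name, then key.
    return f"https://s3-{region}.amazonaws.com/{bucket}/{key}"

url = object_url("us-west-2", "my-bucket", "photos/cat.jpg")
print(url)  # https://s3-us-west-2.amazonaws.com/my-bucket/photos/cat.jpg
```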

Amazon S3 has a minimalistic API—create/delete a bucket, read/write/delete objects, list keys in a bucket—and uses a REST interface based on standard HTTP verbs—GET, PUT, POST, and DELETE. You can also use SDK wrapper libraries, the AWS CLI, and the AWS Management Console to work with Amazon S3.

Amazon S3 is highly durable and highly available, designed for eleven nines (99.999999999%) of durability of objects in a given year and four nines (99.99%) of availability.

Amazon S3 is eventually consistent, but offers read-after-write consistency for new object PUTs.

Amazon S3 objects are private by default, accessible only to the owner. Objects can be marked public readable to make them accessible on the web. Controlled access may be provided to others using ACLs and AWS IAM and Amazon S3 bucket policies.

Static websites can be hosted in an Amazon S3 bucket.

Prefixes and delimiters may be used in key names to organize and navigate data hierarchically much like a traditional file system.

Amazon S3 offers several storage classes suited to different use cases: Standard is designed for general-purpose data needing high performance and low latency. Standard-IA is for less frequently accessed data. RRS offers lower redundancy at lower cost for easily reproduced data. Amazon Glacier offers low-cost durable storage for archive and long-term backups that are rarely accessed and can accept a three- to five-hour retrieval time.

Object lifecycle management policies can be used to automatically move data between storage classes based on time.
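A lifecycle policy ties the storage classes above together. The example below sketches one such configuration: objects under a "logs/" prefix transition to Standard-IA after 30 days, to Glacier after 90, and expire after a year. The field names follow the S3 lifecycle JSON shape; the prefix and day counts are illustrative, not a recommendation.

```python
import json

# Illustrative lifecycle configuration: time-based transitions between
# storage classes, followed by expiration.
lifecycle = {
    "Rules": [
        {
            "ID": "archive-logs",
            "Filter": {"Prefix": "logs/"},
            "Status": "Enabled",
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},
                {"Days": 90, "StorageClass": "GLACIER"},
            ],
            "Expiration": {"Days": 365},
        }
    ]
}
print(json.dumps(lifecycle, indent=2))
```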

Amazon S3 data can be encrypted using server-side or client-side encryption, and encryption keys can be managed with AWS KMS.

Versioning and MFA Delete can be used to protect against accidental deletion.

Cross-region replication can be used to automatically copy new objects from a source bucket in one region to a target bucket in another region.

Pre-signed URLs grant time-limited permission to download objects and can be used to protect media and other web content from unauthorized "web scraping."
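The idea behind time-limited URLs can be sketched with an expiry timestamp and an HMAC signature in the query string, so the link stops validating after the deadline. This toy version only illustrates the concept; real S3 pre-signed URLs are produced with AWS Signature Version 4 and your AWS credentials, and the secret, path, and parameter names below are invented for illustration.

```python
import hashlib
import hmac
import time

SECRET = b"demo-secret-key"  # stands in for an AWS secret access key

def presign(path: str, expires_at: int) -> str:
    # Sign the path together with its expiry so neither can be tampered with.
    msg = f"{path}:{expires_at}".encode()
    sig = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return f"{path}?Expires={expires_at}&Signature={sig}"

def verify(path: str, expires_at: int, sig: str, now: int) -> bool:
    # Reject if the deadline has passed or the signature does not match.
    expected = hmac.new(SECRET, f"{path}:{expires_at}".encode(),
                        hashlib.sha256).hexdigest()
    return now < expires_at and hmac.compare_digest(sig, expected)

expires = int(time.time()) + 3600  # link valid for one hour
url = presign("/my-bucket/video.mp4", expires)
print(url)
```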

Multipart upload can be used to upload large objects, and Range GETs can be used to download portions of an Amazon S3 object or Amazon Glacier archive.
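Range GETs work by requesting byte spans of an object; splitting a large object into parts for parallel download is just arithmetic over those spans. A sketch, where each tuple maps to an HTTP header of the form "Range: bytes=start-end" (inclusive), and the sizes are illustrative:

```python
def byte_ranges(object_size: int, part_size: int):
    # Split [0, object_size) into inclusive (start, end) byte ranges.
    ranges = []
    for start in range(0, object_size, part_size):
        end = min(start + part_size, object_size) - 1
        ranges.append((start, end))
    return ranges

# A 25 MB object fetched in 10 MB parts yields two full ranges and one
# final partial range.
MB = 1024 * 1024
parts = byte_ranges(25 * MB, 10 * MB)
print(parts)
```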

Server access logs can be enabled on a bucket to track requestor, object, action, and response.

Amazon S3 event notifications can be used to send an Amazon SQS or Amazon SNS message or to trigger an AWS Lambda function when an object is created or deleted.
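An event notification configuration names the events to match and the destination to invoke. The sketch below follows the S3 notification JSON shape for a Lambda destination; the function ARN and account ID are placeholders.

```python
import json

# Illustrative notification configuration: invoke a Lambda function
# whenever any object-created event occurs in the bucket.
notification = {
    "LambdaFunctionConfigurations": [
        {
            "Id": "on-object-created",
            "LambdaFunctionArn": (
                "arn:aws:lambda:us-east-1:123456789012:function:process-upload"
            ),
            "Events": ["s3:ObjectCreated:*"],
        }
    ]
}
print(json.dumps(notification, indent=2))
```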

Amazon Glacier can be used as a standalone service or as a storage class in Amazon S3.

Amazon Glacier stores data in archives, which are contained in vaults. You can have up to 1,000 vaults, and each vault can store an unlimited number of archives.