Amazon Simple Storage Service (S3) is the distributed storage component of the AWS platform. It can read, write, and delete objects representing data ranging from 1 byte to 5 gigabytes. You can use S3 to store, replicate, and persist an unlimited number of objects in the cloud. However, you should not think of S3 as a local disk and attempt to run your database from it. S3 simply stores "objects" (files) in "buckets" (analogous to folders). There is no real directory hierarchy in S3; instead, each bucket is given a globally unique name, and objects within it are addressed by keys. You can also have multiple buckets under one account. Many customers serve static files such as images or video directly from S3 instead of storing them on a local disk, which gives them virtually infinite storage capacity for their files without purchasing any hardware. For more information visit: http://aws.amazon.com/s3.
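The flat, key-based namespace described above can be illustrated with a toy in-memory model. This is only a sketch of the concept, not the real AWS API: bucket names are globally unique, and a key such as "images/logo.png" merely looks like a path — the slash is part of the key string, with no folder object behind it.

```python
# Illustrative sketch only: a toy in-memory model of S3's flat namespace,
# NOT the real AWS API. Buckets have unique names; objects are addressed
# by (bucket, key) pairs with no true directory hierarchy.
class ToyObjectStore:
    def __init__(self):
        self.buckets = {}  # bucket name -> {key: bytes}

    def create_bucket(self, name):
        # Bucket names act as unique identifiers.
        if name in self.buckets:
            raise ValueError("bucket names must be unique: %s" % name)
        self.buckets[name] = {}

    def put(self, bucket, key, data):
        # "images/logo.png" looks like a path, but the slash is just
        # part of the key string -- there is no folder object.
        self.buckets[bucket][key] = data

    def get(self, bucket, key):
        return self.buckets[bucket][key]

    def delete(self, bucket, key):
        del self.buckets[bucket][key]

store = ToyObjectStore()
store.create_bucket("my-unique-bucket")
store.put("my-unique-bucket", "images/logo.png", b"\x89PNG...")
print(store.get("my-unique-bucket", "images/logo.png"))
```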

Software as a Service (SaaS) refers to specialised companies that provide a very specific stack, and support for that stack, so as to remove the associated technical headaches. Just as the cloud removes the need to predict growth rates (i.e. server purchases), SaaS removes the need to maintain, update, and support the specific piece of software you are running. While it does not eliminate the local IT department, it does remove the need to call them every time you want an update done or a bug fixed. In terms of the stack, SaaS offers technical support for all the component layers beneath it. Accordingly, in the case of Fedorazon, we have preconfigured a PaaS stack so that anyone who wanted to provide the human stack component layer atop it could call themselves a repository SaaS provider. Examples of SaaS: Google Docs, WordPress.com, etc.

A high-speed sub-network of shared storage devices. In large enterprises, a SAN connects multiple servers to a centralized pool of disk storage. Compared to managing hundreds of servers, each with its own disks, SANs reduce system administration overhead. By treating all the company's storage as a single resource, disk maintenance and routine backups are easier to schedule and control. In some SANs, the disks themselves can copy data to other disks for backup without any processing overhead at the host computers.

One of the many benefits AWS offers its users. If a user's cloud computing usage is projected to grow, migrating to AWS is worth considering, as it allows users to scale without the upfront cost of purchasing servers and other related expenses.

Scale Out is the term usually applied to scaling an application or service through the use of multiple service component instances, which typically means additional operating system instances and/or servers as well (plus clustering frameworks of various forms). This is synonymous with Horizontal Scaling. A typical example of a service that scales out is the web server tier of a multi-tier service. See also: Horizontal Scaling
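The web-tier example above can be sketched as a simple dispatcher that spreads requests over a pool of identical instances; adding capacity means adding an instance to the pool rather than enlarging one. The instance names and round-robin policy here are illustrative assumptions, not any particular load balancer's behaviour.

```python
# Hypothetical sketch of scaling out a web tier: requests are spread
# round-robin across identical instances, and capacity grows by adding
# instances rather than enlarging one.
import itertools

class WebTier:
    def __init__(self, instances):
        self.instances = list(instances)
        self._cycle = itertools.cycle(self.instances)

    def handle(self, request):
        instance = next(self._cycle)  # simple round-robin dispatch
        return "%s served %s" % (instance, request)

    def scale_out(self, new_instance):
        # Horizontal scaling: add another instance to the pool.
        self.instances.append(new_instance)
        self._cycle = itertools.cycle(self.instances)

tier = WebTier(["web-1", "web-2"])
tier.handle("/index.html")    # handled by web-1
tier.scale_out("web-3")       # pool is now web-1, web-2, web-3
```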

Scale Up is the term usually applied to scaling an application or service by increasing its performance and/or capacity, making more resources available to an instance of a service or service component, typically within a single instance of an operating environment and/or server. This is synonymous with Vertical Scaling. See also: Vertical Scaling

A process that changes the size, configuration, or makeup of an Auto Scaling group by launching or terminating instances. For more information, see Auto Scaling Concepts in the Auto Scaling Developer Guide.

Term used to describe a job scheduler mechanism to which GRAM interfaces. It is a networked system for submitting, controlling, and monitoring the workload of batch jobs in one or more computers. The jobs or tasks are scheduled for execution at a time chosen by the subsystem according to policy and the availability of resources. Popular job schedulers include Portable Batch System (PBS), Platform LSF, and IBM LoadLeveler.
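The idea of choosing an execution time from policy plus resource availability can be sketched as a minimal batch queue. This is an illustrative toy, not PBS, LSF, or LoadLeveler: the policy here is first-come-first-served, the only resource tracked is CPU count, and each job is assumed to fit on the machine.

```python
# Illustrative toy batch scheduler (not PBS/LSF/LoadLeveler): each job
# starts only when enough CPUs are free, under a first-come-first-served
# policy. Assumes every job fits within total_cpus.
from collections import deque

def schedule(jobs, total_cpus):
    """jobs: list of (name, cpus_needed) pairs, in submission order.
    Returns the jobs grouped into successive scheduling rounds."""
    queue = deque(jobs)
    rounds = []
    while queue:
        free = total_cpus
        this_round = []
        # FCFS policy: stop at the first queued job that does not fit now.
        while queue and queue[0][1] <= free:
            name, cpus = queue.popleft()
            free -= cpus
            this_round.append(name)
        rounds.append(this_round)
    return rounds

# Four queued jobs on an 8-CPU machine: "c" must wait for a later round.
print(schedule([("a", 4), ("b", 4), ("c", 6), ("d", 2)], 8))
# → [['a', 'b'], ['c', 'd']]
```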

The Scheduler Event Generator (SEG) is a program which uses scheduler-specific monitoring modules to generate job state change events. Depending on scheduler-specific requirements, the SEG may need to run with privileges that enable it to obtain scheduler event notifications. One SEG runs per scheduler resource; for example, on a host which provides access to both PBS and fork jobs, two SEGs will be running, potentially at different privilege levels. One SEG instance exists for any particular scheduler resource instance (one for all homogeneous PBS queues, one for all fork jobs, etc.). The SEG is implemented in an executable called globus-scheduler-event-generator, located in the Globus Toolkit's libexec directory.

A named set of allowed inbound network connections for an instance. (Security groups in Amazon VPC also include support for outbound connections.) Each security group consists of a list of protocols, ports, and IP address ranges. A security group can apply to multiple instances, and multiple groups can regulate a single instance.

A security group is Amazon's version of a firewall, with some additional features. It allows you to specify security settings on an instance-specific basis. You can filter traffic based on IP addresses (a specific address or a subnet), protocols (TCP, UDP, or ICMP), and ports (or a range of ports). You can also grant access to an entire security group, which allows your trusted machines to access each other without opening ports to the public.
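The filtering described above — match on protocol, port range, and source address range — can be sketched in a few lines. This is a hypothetical illustration of the rule-matching logic, not the EC2 API; the rules and addresses are made up, and security-group-to-security-group grants are omitted.

```python
# Hypothetical sketch of security-group filtering (not the EC2 API):
# an inbound rule admits traffic matching a protocol, a port range,
# and a source CIDR block.
import ipaddress

class Rule:
    def __init__(self, protocol, port_from, port_to, cidr):
        self.protocol = protocol                     # "tcp", "udp", or "icmp"
        self.ports = range(port_from, port_to + 1)   # inclusive port range
        self.network = ipaddress.ip_network(cidr)    # allowed source subnet

    def allows(self, protocol, port, source_ip):
        return (protocol == self.protocol
                and port in self.ports
                and ipaddress.ip_address(source_ip) in self.network)

# Example group: SSH only from one office subnet, HTTPS from anywhere.
group = [Rule("tcp", 22, 22, "203.0.113.0/24"),
         Rule("tcp", 443, 443, "0.0.0.0/0")]

def inbound_allowed(protocol, port, source_ip):
    # Traffic is admitted if ANY rule in the group matches it.
    return any(r.allows(protocol, port, source_ip) for r in group)

print(inbound_allowed("tcp", 22, "203.0.113.9"))   # True: inside the subnet
print(inbound_allowed("tcp", 22, "198.51.100.1"))  # False: outside the subnet
```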


In case of failure, there will be a hot backup instance of the application ready to take over without disruption (known as failover). It also means that if I set a policy that says everything should always have a backup, then when such a failure occurs and my backup becomes the primary, the system launches a new backup, maintaining my reliability policies.
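That policy loop — promote the backup, then immediately launch a replacement so redundancy is never lost — can be sketched as follows. This is an illustrative model of the self-healing idea only; the instance names are invented and no real provisioning API is involved.

```python
# Illustrative sketch of a self-healing policy (not a real AWS API):
# when the primary fails, the hot backup is promoted (failover) and a
# fresh backup is launched, so "always have a backup" keeps holding.
import itertools

class SelfHealingPair:
    def __init__(self):
        self._ids = itertools.count(1)
        self.primary = self._launch()
        self.backup = self._launch()   # hot standby, ready for failover

    def _launch(self):
        # Stand-in for provisioning a new instance.
        return "instance-%d" % next(self._ids)

    def on_primary_failure(self):
        # Failover: the backup takes over without disruption...
        self.primary = self.backup
        # ...and the policy immediately restores redundancy.
        self.backup = self._launch()

pair = SelfHealingPair()
pair.on_primary_failure()
print(pair.primary, pair.backup)  # instance-2 instance-3
```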