The Official Rackspace Blog » API
http://www.rackspace.com/blog

Learn more about the #1 Managed Cloud company. Read recent and most popular posts on subjects like the cloud, our customers and partners, our products and the famous Rackspace culture.

Build Rich Network Services With OpenStack’s Neutron API
Thu, 20 Nov 2014
http://www.rackspace.com/blog/build-rich-network-services-with-openstacks-neutron-api/

Providing users with programmatic control of their infrastructure has always been one of the primary value propositions of the Rackspace Cloud. The ability to deploy and manage a wide array of cloud resources with a few lines of code has brought new levels of automated efficiency to the IT industry and revolutionized how we think about managing our applications and workloads.

Today, we make it easier for Rackspace Cloud users to build rich network services with the availability of the OpenStack Neutron Networking API. This milestone greatly increases your ability to create and manage networking services and capabilities, and it paves the way for a number of immediate and future improvements. The API will be in Limited Availability for our Cloud Networks users only. If you are already a Rackspace Cloud Networks user, you can start taking advantage of this API immediately.

Integrating OpenStack’s Neutron API into the Rackspace Cloud introduces three new top-level resources: /networks, /ports and /subnets – all of which are available today.

The Neutron API also introduces the following new features:

Create and manage Cloud Networks via Neutron API

Assign routes to Cloud Servers at boot-time (Host Routes)

Configure allocation pools for subnets (CIDRs) on Cloud Networks to control the dynamic IP address assignments on your Cloud Servers

Provision an IP address of your choice on isolated network ports

Dual stack your isolated networks so that you can have IPv4 and IPv6 addresses on the same port
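As a sketch of what these features look like on the wire, the request bodies below follow the OpenStack Neutron v2.0 resource shapes; the network ID and addresses are placeholders, not values from a real account.

```python
import json

def network(name):
    """Body for POST /v2.0/networks -- create an isolated network."""
    return {"network": {"name": name}}

def subnet(network_id, cidr, ip_version=4, pool=None):
    """Body for POST /v2.0/subnets -- attach a CIDR, optionally with a
    custom allocation pool to control dynamic IP assignment."""
    body = {"subnet": {"network_id": network_id,
                       "cidr": cidr,
                       "ip_version": ip_version}}
    if pool:
        body["subnet"]["allocation_pools"] = [pool]
    return body

def port(network_id, fixed_ip=None):
    """Body for POST /v2.0/ports -- optionally pin an IP of your choice."""
    body = {"port": {"network_id": network_id}}
    if fixed_ip:
        body["port"]["fixed_ips"] = [{"ip_address": fixed_ip}]
    return body

# Dual stack: an IPv4 and an IPv6 subnet on the same isolated network.
v4 = subnet("<network-id>", "192.168.3.0/24",
            pool={"start": "192.168.3.10", "end": "192.168.3.200"})
v6 = subnet("<network-id>", "2001:db8::/64", ip_version=6)
print(json.dumps(v4))
```

Each body is sent with your usual auth token header; the same resources answer GET, PUT and DELETE for listing, updating and removing networks, subnets and ports.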

The integration of the OpenStack Networking API provides an alternative to the /os-networksv2 Cloud Servers extension, which was previously the only option for provisioning networks in the public cloud.

While the existing networking API based on the /os-networksv2 extension will continue to function, we encourage you to start using the new API in order to take advantage of the new features and services that are only available within Neutron.

If you are a heavy consumer of Rackspace’s networking API, here are a few points to take note of:

You will still need to use the Cloud Networks virtual interface extension (/os-virtual-interfacesv2) to attach and detach networks at this time

The Neutron client is not yet available for use with the new API, although we plan to make it available very soon.

The API is not currently available to RackConnect v3 users

The Neutron API and all of the related improvements are only available via API at this time. We expect to start integrating some new functions into the Cloud Control Panel soon.

The documentation for the new API can be found here and the getting started guide can be found here.

As always, we greatly value your feedback and look forward to more announcements about new features for cloud users.

The Copyrightability Of APIs In The Land Of OpenStack
Fri, 16 May 2014
http://www.rackspace.com/blog/the-copyrightability-of-apis-in-the-land-of-openstack/

Last week, the Federal Circuit overturned the District Court judgment in Oracle v. Google, finding that the Java API is copyrightable. This move overturns the expectations of businesses and developers and is likely to negatively impact how they leverage APIs going forward. We have been thinking a lot about the ruling since it came down and have put together our thoughts.

To start with, we are very disappointed with the ruling. The Federal Circuit very clearly got it wrong. While Rackspace has no stake in the fight over Android, we do have a stake in the legal status of APIs. As developers, we consume APIs of all kinds every day. As a company, almost all of our products are exposed to the world only as APIs. As we wrote in our brief to the court last year, we think that APIs are inherently functional – as the name suggests, they are just “interfaces” between two different pieces of software. Copyrighting APIs makes no more sense than copyrighting the little bumps on the top of Lego bricks.

Second, this decision validates our longstanding position that OpenStack needs its own APIs. For some time there have been elements in the OpenStack community that have tried to build OpenStack interfaces (and businesses) on top of AWS APIs. We have always thought that was a bad idea from an engineering perspective: As a community, we don’t want to cede control over what we do in OpenStack to other cloud vendors. As developers, we don’t want to burden ourselves with having to worry about subtle semantic differences and bug-for-bug compatibility with a platform we don’t control.

But last week’s decision gives us a new reason. As GigaOm has already pointed out, using Amazon APIs is now a legal risk, one which we don’t have to take.

Customize Configuration Settings For Cloud Databases
Tue, 01 Apr 2014
http://www.rackspace.com/blog/customize-configuration-settings-for-cloud-databases/

As promised last year, we are committed to simplifying database management with Rackspace Cloud Databases. Starting today, you can customize configuration settings for Cloud Databases instances using the API or the Trove command line tool. This capability allows you to optimize configuration settings based on the needs of your workload.

With every Cloud Databases instance that you create, we already optimize it for peak MySQL performance. But workloads do differ, and MySQL provides endless ways to configure it for your specific workload. You may want to change ft_min_word_len to optimize full text searches, increase the number of concurrent connections allowed (max_connections), or set lower_case_table_names to control how case sensitivity is handled for table names. Whatever your case may be, this feature simplifies the process and puts the power in your hands.

Let’s take a look at a quick example. You can create and manage configuration groups with custom configuration values, and a configuration group can then be associated with your Cloud Databases instances.

Using max_connections, the concurrent-connections setting mentioned earlier, let’s try customizing it for a 1GB Cloud Databases instance:
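A minimal sketch of the two request bodies involved; the group name, value and IDs are illustrative, and the paths follow the Trove-style Configurations API described in the documentation.

```python
import json

def configuration_group(name, values, description=""):
    """Body for POST /configurations -- a reusable group of MySQL settings."""
    return {"configuration": {"name": name,
                              "description": description,
                              "values": values}}

def attach_configuration(configuration_id):
    """Body for PUT /instances/{instance_id} -- apply the group to an instance."""
    return {"instance": {"configuration": configuration_id}}

# Raise max_connections for the 1GB instance:
group = configuration_group("one-gig-tuning", {"max_connections": 200},
                            description="connection tuning for a 1GB instance")
print(json.dumps(group))
```

Because the group is a separate resource, the same tuning can be attached to many instances, and editing the group's values updates every instance that uses it.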

This is just one small example of how to customize your Cloud Databases instances. For more details, check out the Knowledge Center article describing how you can easily use the Trove command line tool for creating and managing configurations, or head over to the Configurations API documentation for in-depth instructions on how to use the Cloud Databases REST API. For users who prefer a graphical user interface, we will be integrating this capability into the Control Panel soon.

As always, we value your opinion so please let us know if you have any questions or feedback or head over to the community site to share ideas with others.

Cloud Orchestration: Automating Deployments Of Full Stack Configurations
Wed, 12 Mar 2014
http://www.rackspace.com/blog/cloud-orchestration-automating-deployments-of-full-stack-configurations/

Over the past few months we have been working hard to provide more automation capabilities for the Rackspace Cloud. In November we told you about Auto Scale, which allows you to define rules to grow or shrink your cloud. Back in August we told you about Deployments, a Control Panel capability that allows you to deploy common applications with a point-and-click interface, based on our best practices.

A common request we have heard from you is for a programmatic way to create and deploy full stack configurations. Today we are releasing the Cloud Orchestration API to help you configure and deploy your cloud stack topologies and applications.

WHAT IS CLOUD ORCHESTRATION?

Automating your deployment helps you be more efficient, saves you time that you can use more productively, and helps you reduce the possibility of manually introducing configuration errors. Many of you are familiar with the Nova API. There is really no easier way to programmatically create a single server on the Rackspace Cloud. But what about deploying more complex configurations? What if you need to install software or provision load balancers, servers and databases, and wire them all together?

This new Cloud Orchestration API makes automation easy for you. Cloud Orchestration is a service that allows you to create, update and manage groups of cloud resources and their software components as a single unit and then deploy them in an automated, repeatable fashion via a template.

Before today, you had to worry about separate service API call code to get your application up and running. You also had to worry about the order in which you instantiate and connect cloud resources. With the Cloud Orchestration API, now you can simply declare the resources you want, how you want them configured, and what software to install by simply editing a text file (the configuration template) instead of editing API call code. The Cloud Orchestration service implements automation for you. Configuration updates can be done by simply editing the configuration template and making a single API call to Cloud Orchestration, regardless of how many changes you made (say, adding more nodes to your auto scaling policy or adding a new node to your database tier). Even deleting the entire stack can be done with a single API call.
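The single-call shape can be sketched as follows; the tenant ID, stack name and template are placeholders, and the paths follow the standard OpenStack Heat v1 API that backs Cloud Orchestration.

```python
import json

def stack_create_request(tenant_id, name, template, parameters=None):
    """Path and body for POST /v1/{tenant_id}/stacks (OpenStack Heat).
    `template` is the parsed HOT template as a dict."""
    path = "/v1/%s/stacks" % tenant_id
    body = {"stack_name": name,
            "template": template,
            "parameters": parameters or {},
            "timeout_mins": 30}
    return path, body

# Updating the stack is the same body sent as a PUT to
# /v1/{tenant_id}/stacks/{name}/{id}; deleting the whole stack is a single
# DELETE on that same resource.
path, body = stack_create_request("123456", "web-stack",
                                  {"heat_template_version": "2013-05-23"})
print(path)
```

However many resources the template declares, the client-side work stays the same: edit the template, then make this one call.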

WHAT IS SUPPORTED?

Cloud Orchestration today supports declaration and configuration of:

Cloud Servers

Cloud Load Balancers

Cloud Databases

Cloud Block Storage

Cloud DNS

Auto Scaling

Bash scripts

Chef Cookbooks and Berkshelf

We are working on additional features in the open, jointly with other contributors to OpenStack. Here is a list of some future features that are in development here at Rackspace or are in planning within the open community:

Support for additional cloud resources, including Cloud Monitoring and object storage through Cloud Files, among others.

Automatic error handling and retry logic to ensure you always get exactly what you declare despite underlying system errors.

Self-healing, for automatic repair or re-provisioning of new resources when a stack becomes unhealthy or a resource is not performing as expected.

Integration with additional software configuration management tools, such as Ansible.

Multi-region deployment from within a single API call or template.

Catalog functionality to help you deploy Rackspace’s or the community’s best-practice templates.

BENEFITS OF CLOUD ORCHESTRATION

Declarative Syntax and Portability

With Cloud Orchestration, you first declaratively specify the set of cloud resources that needs to be deployed, using the OpenStack Heat Orchestration Template (HOT) format. A declarative template format (as opposed to other imperative approaches) ensures that you don’t have to be concerned with how the provisioning will happen. You just specify what needs to happen. Your declaration is also separated from any input you provide on the how (e.g. scripts and recipes). This principle of Separation of Concerns helps simplify the maintenance of your infrastructure and allows you to more easily port your templates to other OpenStack clouds running the Heat service.
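A minimal template of that sort might look like the following sketch; the resource type and the flavor and image names are illustrative of the Rackspace Heat resources, not fixed values.

```yaml
heat_template_version: 2013-05-23

description: A single 1GB Cloud Server running CentOS 6.4

resources:
  web_server:
    type: "Rackspace::Cloud::Server"
    properties:
      name: web-server
      flavor: 1GB Standard Instance
      image: CentOS 6.4
```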

That simple example shows how we specify that the deployment requires a 1GB server instance with CentOS 6.4.

More productivity with reusable, repeatable and intelligently ordered resource provisioning

Cloud Orchestration takes care of sequencing the provisioning operations (“orderability”) of your stack. For example, imagine that you have to set up three servers and one load balancer. Cloud Orchestration will ensure that the servers are up and working and the appropriate IP addresses get added to the load balancer before completing the stack. Cloud Orchestration has the intelligence to determine what needs to be provisioned first and in which order each task in the provisioning workflow must be executed. You don’t have to worry about ordering tasks.

Cloud Orchestration templates ensure repeatable deployments. New configurations based on the same template are deployed in exactly the same manner, reducing errors by avoiding potential configuration deltas.

Finally, Cloud Orchestration templates are reusable, nestable, and portable, which can help improve your productivity. You can utilize templates that you have previously created or those created by the community. The rich orchestration capabilities will ensure that you always get a full working stack. Cloud Orchestration templates are also portable across private and public cloud deployments of OpenStack.

HOW ARE HEAT ORCHESTRATION TEMPLATES (HOT) DIFFERENT FROM CHEF AND PUPPET?

Cloud Orchestration is not a replacement for server configuration tools such as Puppet and Chef. They are very complementary. You will continue to use Chef and Puppet to “template-ize” your server software configurations, while Cloud Orchestration and its HOT templates will help you create a full stack that includes all the infrastructure resources required for your stack. Cloud Orchestration allows you to quickly bootstrap your preferred software configuration management solution. With Chef, we have gone several steps further and provided direct support to specify the cookbooks and Berksfile you want deployed.

Heat is the future of automation in our cloud

We are making a big commitment to the OpenStack Heat project here at Rackspace. We have been making great strides with Heat as a community over the past few months. Today, we are making the capabilities of Cloud Orchestration available via API, but we are already working to provide this capability directly in the Control Panel. You will hear from us soon. We will also be integrating the current Deployments feature of the Control Panel into Cloud Orchestration.

Best Practices from Rackspace, backed by Fanatical Support

One of the greatest benefits of helping thousands of customers through our support and DevOps services is that the templates we produce here at Rackspace represent best practices that you can take advantage of. We see each template we produce as the implementation of months and years of experience for a specific application or scenario. You can just “borrow” these templates and customize them for your own purposes. With Cloud Orchestration, you get reliable and repeatable deployments every time.

We have created a GitHub organization at http://github.com/rackspace-orchestration-templates. In there you will find a list of our available orchestration templates. These templates contain tried-and-true application and resource topologies to provide an optimized experience for a particular type of application or infrastructure configuration. While these templates may not be suitable for every use case, they serve as a trusted foundation for many common use cases and are an ideal reference from which you can build a custom template or use as-is. To start, you will find templates for WordPress (Single and Multi-node options), Minecraft server, Ghost, and PHP. We will continue to add new templates regularly. Please check back often. If you would like to see something there that you don’t see today, leave a comment below or in the Cloud Orchestration community post.

Finally, in the Rackspace Orchestration Templates organization in GitHub, you will also find the sample-templates repository, which includes examples that you can use when learning how to write your own templates. These templates may not always capture best-practice application or resource topology configurations, but will serve as a frame of reference on how applications and resources can be constructed and deployed with Cloud Orchestration templates.

Big Data At SXSW: The API Is The New Hot ‘App’
Mon, 10 Mar 2014
http://www.rackspace.com/blog/big-data-at-sxsw-the-api-is-the-new-hot-app/

After attending SXSW Interactive for the past two years, I’ve noticed the industry seems to be shifting away from the Age of the Apps to the Dawn of Data. Rather than focusing on that next killer application, it appears that more people are trying to tap into the data that is being gathered to create the next big thing—and companies are actively encouraging developers to do so.

The Mashery, an entire room on the ground floor of the Austin Convention Center, is devoted to linking up developers with companies that have exposed their APIs. Companies such as MapMyFitness, Edmunds and Beats Music are all looking for developers to tap into their data and platforms to create new value for the end user.

“We think that the value proposition now is in the data and the consumers of the data,” David Carr, Senior Product Manager for Platform at MapMyFitness, said. “Being able to share that around with our partners and letting them create value on top of our platform creates value for the consumers and for the partners. Everyone profits from that ecosystem.”

This data revolution is not lost on television ratings giant Nielsen, which announced at a rooftop party last night that it would be opening up its viewer data to developers. Given the treasure trove of this information and the evolution of the connected TV, it will be interesting to watch what happens in this space. You know a revolution is happening when a data collection service with the pedigree of Nielsen opens up its data to developers.

Data Versus Information

Companies are pushing to get programmers to consume their APIs because data alone is not valuable. Rather, the value is derived from what a person does with that data to create actionable intelligence.

“We have to distinguish between data and information,” Ben Alamar, Professor of Sports Management at Menlo College, said in a panel entitled The Future of Sports Can Be Found in the Data. “Raw data by itself is just rows and rows in a spreadsheet, and nobody can get anything useful. But when we take that raw data and visualize it, summarize it and analyze it… then we have information.”

Consider all the data that the National Weather Service collects around the clock. There is most certainly a database with rows and rows of data. If a meteorologist were to show that much data on the screen, viewers wouldn’t know what to do with it. Instead, we get something actionable on our screen: the temperature and chance of rain. Giving data access to developers enables them to find the best way to apply it to create information.

As Steve Haro, Director of Marketing at Boeing, said during the Nano Size Me panel, “If you don’t have clear objectives, and you don’t understand what you are trying to achieve, then it is just data.”

Data as a Profit Center

The relationship between those that provide data APIs and those that consume them is symbiotic. Many companies are sitting on valuable data that results from consumer usage of their services or pure observation. Often, these companies may not have the resources to do anything with this information.

Opening up the firehose of data to developers can result in new applications and products consuming their API, which can in turn make their service more valuable to the end user. The company also stands to make a decent profit by selling access to the data. Developers are often initially given a small level of access—or number of API calls—for free. This helps the developer create and test a minimally viable product without having to pay for data access. However, if the service or application takes off, there is often a charge as demand for that data increases.

Data, Data Everywhere

The Universe of Big Data is expanding at a mind-boggling rate. Furthermore, with the increased access of smartphones and wearable computing, consumers will be generating a significant amount of Little Data in the coming years. This massive quantity of data that is accessible for developers to interpret is resulting in new and exciting information for us to act on. While that information may be presented in the form of an app, make no mistake, the API is now king.

Cloud Databases: New Features, Better Value
Tue, 19 Nov 2013
http://www.rackspace.com/blog/cloud-databases-new-features-better-value/

Since we launched Rackspace Cloud Databases in August 2012, we have experienced great commitment from the customers who have chosen it as their database platform. In this time, we have expanded the availability of Cloud Databases from three to six regions worldwide while keeping the pricing simple, improving reliability and increasing performance.

As the holiday season kicks into high gear, we want to share an early gift with you: we’ve added a host of new capabilities to Cloud Databases and are passing along some of the savings we have achieved as the product has scaled.

First, let’s highlight some of the newest Cloud Databases features:

New Developer Command Line Tools

At Rackspace, we are committed to simplifying the user experience and empowering developers to successfully consume and build on the open cloud. Graphical user interfaces are great, and the Rackspace Control Panel is best in class, but many of our customers are command line focused and prefer the power of a simple command line tool.

In addition to the SDKs we currently provide, we are introducing an open command line tool (CLI) called the “Trove Client,” based on the popular OpenStack initiative. Trove is a fully open source CLI that is being actively developed in the OpenStack community and, on top of that, it is compatible with Cloud Databases.

The CLI provides command line access to Cloud Databases users and is simple to install and use. Refer to our developer documentation for more details on installing the Trove Client and a summary list of all the available commands. This is a great resource for customers who would like to use features that are only available through the Control Panel or for easily managing a large number of instances and databases.

Manual Backups via API or CLI

Since we launched last year, Cloud Databases has relied on a robust storage system that provides increased performance, scalability, availability and manageability. Applications with high I/O demands are automatically performance optimized and your data is protected with multiple levels of redundancy. And starting today, Cloud Databases supports manual backups and restores to and from Cloud Files via the Cloud Databases API and the Trove command line tool (CLI). Control Panel support is coming soon.

This feature greatly simplifies the management of your database backups utilizing a simple and low-cost cloud storage platform. During the backup process, the resulting database files are directly streamed to your Cloud Files account for storage (Cloud Files storage fees will apply). And restoring from those backups is easy! You can restore a backup anytime by simply creating a new database instance and specifying the backup you would like to restore from.
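The two request bodies involved can be sketched as follows; the identifiers and sizes are placeholders, and the shapes follow the Trove-style backup and instance resources behind the Cloud Databases API.

```python
import json

def backup_request(instance_id, name, description=""):
    """Body for POST /backups -- streams the backup to your Cloud Files account."""
    return {"backup": {"instance": instance_id,
                       "name": name,
                       "description": description}}

def restore_request(name, flavor_ref, volume_size_gb, backup_id):
    """Body for POST /instances -- a brand new instance restored from a backup."""
    return {"instance": {"name": name,
                         "flavorRef": flavor_ref,
                         "volume": {"size": volume_size_gb},
                         "restorePoint": {"backupRef": backup_id}}}

print(json.dumps(backup_request("<instance-id>", "nightly")))
```

Note that a restore always produces a new instance; the original instance and its data are left untouched.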

Monitoring Support

Lastly, we have added Rackspace Cloud Monitoring as a fully supported feature in Cloud Databases. Monitoring is available for all cloud database instances via monitoring checks including Load Average, CPU, Memory, Storage (Disk), Network and a number of MySQL metrics. On top of that, all Cloud Databases instances now come preconfigured with an adjustable monitoring alert for storage utilization to help you avoid inadvertently reaching your storage limits. You can leverage the API or Rackspace Cloud Monitoring CLI to get usage statistics. Check out the Rackspace Cloud Monitoring API docs and Knowledge Center articles on how to configure and use Cloud Monitoring through API and CLI. Control Panel support is coming soon.

In addition to the API and CLI tools, we have Cloud Intelligence to engage users who prefer interacting with a monitoring UI. Please keep sending us feedback and watch for future updates.

Whether you are an existing Cloud Database customer or just getting started, we hope that these enhancements allow you to operate even more effectively with our platform. We will check back in periodically as we will be making more announcements over the next several months.

Shared Savings

As more customers use a technology, it becomes less expensive for us to provide, so we’re now pleased to pass along savings of as much as 53 percent.

Below is a simple price comparison for each region and instance type. If you are a current Cloud Databases customer, you will automatically receive the new pricing on November 19, 2013.

We’re excited about delivering more value to Cloud Databases users and we’re committed to bringing more improvements and delivering the most Fanatical database experience!

OpenStack Marconi API
Thu, 29 Aug 2013
http://www.rackspace.com/blog/openstack-marconi-api/

Rackspace Cloud Queues is backed by the open source project Marconi. Below, Oz Akan, Development Manager for Rackspace Cloud Queues and an active contributor to Marconi, walks us through the project. Want to try out Marconi without managing your own environment? Want to explore open source code with the comfort of Fanatical Support behind you? Rackspace Cloud Queues is currently accepting Early Access participants. Sign up here.

What is Marconi?

Marconi is an open source message queue implementation that utilizes a RESTful HTTP interface to provide an asynchronous communications protocol, which is one of the main requirements in today’s scalable applications. Using a queue as a communication layer, the sender and receiver of the message do not need to interact with the message queue at the same time. As a result, these can scale independently and be less prone to individual failures.

Marconi supports publisher-subscriber and producer-consumer patterns. I will focus on the producer-consumer pattern, and under the section “Python Way” I will give an example using the Python requests library. First, let’s look at the terminology and our old friend, curl samples.

Terminology

Queue is a logical entity that groups messages. Ideally a queue is created per work type. For example if you want to compress files, you would create a queue dedicated for this job. Any application that reads from this queue would only compress files.

Message is stored in a queue and exists until it is deleted by a recipient or automatically by the system based on a TTL (time-to-live) value. Message stores meaningful data for the application.

Worker is an application that reads one or many messages from the queue.

Producer is an application that creates messages in a queue.

Claim is a mechanism to mark messages so that other workers will not process the same messages.

Publisher – Subscriber is a pattern where all worker applications have access to all messages in the queue. Workers can’t (and shall not) delete or update messages.

Producer – Consumer is a pattern where each worker application that reads the queue has to claim a message in order to prevent duplicate processing. Later, when the work is done, the worker is responsible for deleting the message. If the message isn’t deleted within a predefined time, it can be claimed by other workers.

Message TTL is time-to-live value and defines how long a message will be accessible.

Claim TTL is time-to-live value and defines how long a message will be in claimed state. A message can be claimed by one worker at a time.

cURL Way

Since curl abstracts nothing away and is available on most Linux servers, I find it a good tool for practicing RESTful interfaces, and I like keeping these commands handy.

Get Authentication Token

If you run the Keystone middleware with Marconi, then you will have to get an authentication token to use with the following calls.

First, I assign username, api key and endpoint to shell variables to make getting a token just a copy-paste.
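A sketch of those variables and the token request; the credentials are placeholders, and the identity endpoint shown is the Rackspace one (the actual curl call is left commented so the sketch runs safely without credentials).

```shell
# Substitute your own username and API key.
USERNAME="<your-username>"
APIKEY="<your-api-key>"
AUTH_URL="https://identity.api.rackspacecloud.com/v2.0/tokens"

# JSON body for the token request, using Rackspace API-key credentials:
BODY='{"auth": {"RAX-KSKEY:apiKeyCredentials": {"username": "'"$USERNAME"'", "apiKey": "'"$APIKEY"'"}}}'

# The token comes back at access.token.id in the JSON response:
#   curl -s -X POST "$AUTH_URL" -H "Content-Type: application/json" -d "$BODY"
echo "$BODY"
```

Export the returned token into a variable (say TOKEN) and pass it as the X-Auth-Token header on every call that follows.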

Get Node Health

Request
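A sketch of the call; the endpoint is a placeholder for your region's Cloud Queues URL, and the curl line is commented so the sketch runs without credentials.

```shell
# Substitute your region's endpoint and your auth token.
ENDPOINT="https://ord.queues.api.rackspacecloud.com/v1"
TOKEN="<auth-token>"

# Health check; a bare GET that returns no body:
#   curl -i "$ENDPOINT/health" -H "X-Auth-Token: $TOKEN"
echo "GET $ENDPOINT/health"
```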

Response

HTTP/1.1 204 No Content

Here we get a 204 response. Even though it may seem like something is wrong with the service, this call simply checks whether the service can reply, so 204 is good. Maybe this is an indication that Marconi is brief, to the point and doesn’t like chatter.

Create a Queue

We will have to create a queue before we can post messages into it. Queues are not created automatically with the first message, so we need to send the request below to create a queue named “samplequeue.”
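A sketch of the queue-creation and batch-post calls; the endpoint, token, Client-ID and message bodies are placeholders, and the curl lines are commented so the sketch runs without credentials. Each posted message carries a TTL in seconds and an arbitrary JSON body, and the 201 response includes an href/id for every message in the batch.

```shell
ENDPOINT="https://ord.queues.api.rackspacecloud.com/v1"
TOKEN="<auth-token>"
CLIENT_ID="3381af92-2b9e-11e3-b191-71861300734c"  # any UUID identifying this client

# Create the queue (expect HTTP/1.1 201 Created):
#   curl -i -X PUT "$ENDPOINT/queues/samplequeue" \
#        -H "X-Auth-Token: $TOKEN" -H "Client-ID: $CLIENT_ID"

# Post two messages in one batch (expect 201 plus one href/id per message):
MESSAGES='[{"ttl": 300, "body": {"file": "one.tar.gz"}}, {"ttl": 300, "body": {"file": "two.tar.gz"}}]'
#   curl -i -X POST "$ENDPOINT/queues/samplequeue/messages" \
#        -H "X-Auth-Token: $TOKEN" -H "Content-Type: application/json" \
#        -H "Client-ID: $CLIENT_ID" -d "$MESSAGES"
echo "$MESSAGES"
```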

Above, if you check the response, you will see that Marconi returned two ids. It is always a good practice to post messages in batches as network latency will be a smaller factor in overall performance compared to sending one message at a time.

Claim Messages

Claiming a message is pretty much like marking it, so that it is invisible when another worker asks to claim messages. By default, 10 messages are claimed. In the sample request below, we will get two messages claimed, as we pass two as the limit.
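A sketch of the claim call; the TTL and grace values are illustrative, and the curl line is commented so the sketch runs without credentials. The ttl is how long the claim lasts; the grace is extra lifetime granted to the claimed messages.

```shell
ENDPOINT="https://ord.queues.api.rackspacecloud.com/v1"
TOKEN="<auth-token>"

CLAIM='{"ttl": 300, "grace": 60}'
#   curl -i -X POST "$ENDPOINT/queues/samplequeue/claims?limit=2" \
#        -H "X-Auth-Token: $TOKEN" -H "Content-Type: application/json" \
#        -d "$CLAIM"
echo "$CLAIM"
```

A successful claim comes back as 201 with the claimed messages; each message href carries a claim_id, which you pass along when you later delete the message.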

Delete a Message

Response

HTTP/1.1 204 No Content

Here, 204 is again a valid response: it validates that there is no longer a message with the given message and claim IDs. It doesn’t necessarily say that the message was deleted by this call, though.

Python Way

Curl provides a convenient way to test the Marconi RESTful interface, but it is likely not the tool you would use to develop an application. Now let’s see how these requests would be used in an application written in Python.

Most applications will have logic similar to this:

One or many producers post messages to a queue

Many consumers read the queue, and claim one (or more) message(s) when available

The consumer processes the message(s)

The consumer deletes the message(s)

Below, I created three classes: Queue_Connection handles the HTTP calls; Producer handles queue creation and posts messages to the queue; Consumer claims messages from the queue and deletes them afterward.
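A minimal sketch of those three classes, assuming the Marconi v1 resource layout (/queues, /messages, /claims); the endpoint, queue name and TTL values are placeholders.

```python
import json
import uuid

import requests  # the example builds on the python-requests library


class Queue_Connection:
    """Thin wrapper around the Marconi v1 HTTP interface."""

    def __init__(self, endpoint, token):
        self.endpoint = endpoint.rstrip("/")
        # Marconi uses the Client-ID header to tell workers apart.
        self.headers = {"X-Auth-Token": token,
                        "Client-ID": str(uuid.uuid4()),
                        "Content-Type": "application/json"}

    def url(self, path):
        return self.endpoint + path

    def put(self, path):
        return requests.put(self.url(path), headers=self.headers)

    def post(self, path, body):
        return requests.post(self.url(path), headers=self.headers,
                             data=json.dumps(body))

    def delete(self, path):
        return requests.delete(self.url(path), headers=self.headers)


class Producer:
    """Creates the queue and posts batches of messages to it."""

    def __init__(self, conn, queue):
        self.conn = conn
        self.queue = queue

    def create_queue(self):
        return self.conn.put("/queues/%s" % self.queue)

    def post_messages(self, bodies, ttl=300):
        batch = [{"ttl": ttl, "body": body} for body in bodies]
        return self.conn.post("/queues/%s/messages" % self.queue, batch)


class Consumer:
    """Claims messages, processes them, then deletes them."""

    def __init__(self, conn, queue):
        self.conn = conn
        self.queue = queue

    def claim_messages(self, limit=2, ttl=300, grace=60):
        path = "/queues/%s/claims?limit=%d" % (self.queue, limit)
        return self.conn.post(path, {"ttl": ttl, "grace": grace})

    def delete_message(self, message_href):
        # message_href comes from the claim response and already carries
        # the claim_id query parameter.
        return self.conn.delete(message_href)
```

The message hrefs returned by claim_messages can be passed straight to delete_message once the work is done.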

This is a very primitive example. A real application would require exception handling, but it still wouldn’t be very far from this.

I believe the effort of starting to use queues in an application is well worth the benefits. With Marconi, we are going to have queuing as a service, which means someone else will manage it for us and we will just enjoy the benefits. In a matter of weeks there is going to be a Python client ready, and it will get even easier to talk to Marconi.

Happy queuing.

Join The Cloud Queues Early Access Program Today
http://www.rackspace.com/blog/join-the-cloud-queues-early-access-program-today/
Mon, 12 Aug 2013 19:39:50 +0000

Starting today, you can begin testing and exploring the Rackspace Cloud Queues service, a new tool designed to address the needs of large, distributed applications. Cloud Queues gives you a simple API to manage both a producer-consumer and a publisher-subscriber queuing and notifications service.

Backed by the OpenStack Marconi project, we built Cloud Queues to be highly scalable and highly available. Messages are sent to queues via a REST API and replicated to two secondary databases for extra security and redundancy. While Cloud Queues can be accessed by clients hosted anywhere, Rackspace Cloud customers get extra benefit by connecting to Cloud Queues via Rackspace’s internal service network, which shields you from the latency and cost associated with the public Internet.

Cloud Queues provides you with an interface to create queues, post messages to those queues, get basic stats on the queues, claim messages and watch messages in subscriber mode.

We expect Cloud Queues to be in General Availability later this year, so we ask Early Access participants to give feedback on performance, API design, documentation and more. At this time, the Cloud Queues service is only available via an API from our ORD (Chicago) or LON (London) datacenters. The product is free to users while it is in Early Access, but outgoing bandwidth fees will be billed later this year. Those of you participating in the Early Access program can use your normal support channels during this time – you can open tickets via your control panel, chat with our online team or call the Support line.

Want to join the Early Access program? Just fill out the application, tell us about your needs and we’ll get you set up.

If you have any questions about this product or the Early Access phase, simply respond to this post, contact our Support team or email me directly at megan.wohlford@rackspace.com.

The Expertise To Help Developers In The Cloud
http://www.rackspace.com/blog/the-expertise-to-help-developers-in-the-cloud/
Wed, 17 Jul 2013 16:30:45 +0000

The dedicated world and the cloud world can perform very differently. Fortunately, developers building on the cloud have an excellent resource in the Cloud Launch Team. Architecting an application that takes full advantage of the scalability and flexibility of the cloud is key. Our Launch Managers are here to help with any questions that you may have, advising not only on architecture but also on how to incorporate the Cloud API, different database solutions and types of storage options. At Rackspace, we have the expertise (and the SLAs) to support developers. Check out this video to find out more.
The Human API Helps Mailgun Stay Nimble
http://www.rackspace.com/blog/the-human-api-helps-mailgun-stay-nimble/
Wed, 19 Jun 2013 20:00:45 +0000

One of the key ideas in computer engineering is encapsulation. In short, it means removing the unneeded dependencies between unrelated components of a software program.

Dependencies (or references) are easy to accumulate. Their growth stems from inertia, lack of planning and sometimes laziness. The fewer dependencies you have, the easier and cheaper it is to evolve parts of the product. Dependency-free components are especially pleasant to work with. Conversely, dependency-tangled components are slow and expensive to improve.

But sometimes a dependency is necessary. Products and components do not exist in isolation. They usually talk to each other. This is done via dependencies of a “good kind” called APIs, or Application Programming Interfaces.

In the ideal case, a software product is a collection of independent components that are easy to modify and that talk to each other via small, elegant APIs.

Mailgun itself was founded to be one of those components, used by developers via an API to control email. We were one of the first companies to embrace the “API as a product” business model; the notable pioneer was Twilio.

But I feel like the power of encapsulation and minimal APIs extends beyond just software.

Large software products can bear an uncanny resemblance to groups of humans – how they grow and evolve and how decisions are made. Minimizing the dependencies between groups of people achieves similar results: more autonomous, independent (and preferably small) teams are usually more productive, can change direction more freely and are able to react to the changing business environments more quickly.

I feel that we could use an API in person-to-person communication, which is why I am a big proponent of having a “human API” to connect different teams. In fact, we have employed this practice since our Mailgun team joined Rackspace.

Our team has remained small and encapsulated so we can tirelessly work to improve our service and add features. At the same time, we are now part of a larger organization and have to interact with a variety of new departments, including human resources, finance, support, marketing and the data centers. To help get each department the information it needs while maintaining a nimble working environment, my co-founder Taylor Wakefield has become the human API.

When a question comes up about Mailgun, it is funneled directly to Taylor. This “call” is then routed to the appropriate person on our team, who provides the information. This keeps our developers out of meetings and from getting bogged down with repeat requests (if a question has been asked before, Taylor can provide the answer himself), allowing them to focus on the code.

This system has worked well so far, but I know that having a single endpoint on our API will eventually fail under increased load. As the needs and the demand continue to grow, it will be important to scale out the API horizontally by adding more people. Ensuring that there is shared state, however, is important to prevent “data fighting.” You want to make sure that each particular human API is empowered to make decisions without stepping on others’ toes. At Rackspace, our human resources department does this incredibly well: Rackers are load balanced across the different parts of the business, providing that API to the HR department.

So how can you identify who would make a good human API? If a person finds enough time to explain and document their point of view (and the point of view of the team) in a consistent manner, that person would be an excellent candidate. When a person does this, it demonstrates not only a passion for the product, but also the willingness to connect with others to advance it further.

This requires some discipline, too. When you send an email with a question, you are performing an API call, almost like sending a message in the “actor” model. When you do this, make sure only “API people” are included in the recipient list. I wish email clients had some kind of “type checking” built in to prevent the all-too-common inclusion of unneeded recipients.

And on the receiving end, it sometimes helps to raise an “Unsupported Interface” exception when you find yourself on a thread about topics unrelated to you. As long as the “human API” idea is shared by everyone in the office, your “exception” will not be misunderstood.