APIs are the linchpin of success for these digital businesses. All applications use APIs to access application services and data. These services can be microservices, cloud workloads, legacy SOAP services, or IoT devices. To ensure that applications and developers can effectively use these services to build partner, consumer, and internal apps, companies need to deliver secure, scalable, easy-to-use modern APIs.

Over the last few years, we’ve participated in hundreds of enterprises’ API-led digital transformation initiatives. The new eBook, "API Design: Managing the API Lifecycle," distills our learnings from these customer engagements and shares best practices about managing APIs across the lifecycle.

Previously, we discussed the key features of developer portals and considerations for the organizations who use them. Here, we'll cover how analytics are key for the success of different stakeholders in an API program.

As an API provider, you need to measure, analyze, and act on metrics associated with your APIs and your API program. Most API programs involve four types of users, each with unique needs when it comes to analyzing API metrics.

API producers

API developers care about building APIs using best practices based on learnings derived from other API developers who are doing similar things (such as applying specific types of policies to their API proxies). In addition, API developers need visibility into the step-by-step behavior of all the APIs they build in order to diagnose latency problems and improve performance of those APIs.

Operations admins

Operations teams care about maintaining peak performance and availability of their APIs. They want to see the throughput, latency, and errors associated with those APIs. In addition, they expect to get alerted in near real-time to quickly identify and resolve any issues that affect the quality of service of those APIs. These teams also care about protecting their APIs against malicious bots that could compromise their data and services.

Product owners

Product managers are responsible for the success of API programs, and thus need to measure the adoption and usage of the published APIs across various dimensions such as products, developers, apps, channels, and locations. Product managers also want to measure the business impact and financial value of those APIs by capturing the transaction or business metrics related to them.

API consumers

App developers want to understand the volume of API traffic and the quality of service (success rate, response times, and response codes, for example) for the APIs they build their apps against. App developers also need to track business metrics (such as money exchanged with the API producer), based on the API product pricing plans.

Analytics solves different problems for each of the user types discussed above and leverages data related to APIs, app developers, applications, and end users.

How API developers optimize APIs

As an API developer, you apply a set of policies to your APIs to ensure seamless and robust app functionality, while protecting your back-end systems. You must ensure that once implemented, your APIs are functioning as expected and performing with minimal latencies. This is enabled by visibility into the step-by-step flow with timing information for each API request as it flows through the API proxy.

Here’s an example of a real-time trace capability that helps API developers diagnose their APIs:
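In lieu of a screenshot, the idea behind such a trace can be sketched in a few lines: a hypothetical helper (not Apigee's actual Trace tool) that times each policy step as a request flows through the proxy, so latency hot spots can be pinpointed.

```javascript
// Sketch of per-step request tracing (hypothetical helper, not Apigee's
// real Trace API): each policy step is timed as the request flows
// through the proxy, producing a timeline of step durations.
function traceRequest(steps, request) {
  const timeline = [];
  let payload = request;
  for (const step of steps) {
    const start = Date.now();
    payload = step.handler(payload); // e.g. verify-api-key, quota, target call
    timeline.push({ step: step.name, ms: Date.now() - start });
  }
  return { response: payload, timeline };
}

// Usage: two toy steps standing in for real proxy policies.
const { timeline } = traceRequest(
  [
    { name: "verify-api-key", handler: (r) => r },
    { name: "target-backend", handler: (r) => ({ ...r, body: "ok" }) },
  ],
  { apiKey: "abc" }
);
```

A real trace also records headers and variables at each step; the timing skeleton above is the part that diagnoses latency.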

Implement the wrong policy, and your API won't be used by app developers. For example, putting an OAuth policy in a product catalog API will force end-users to log in to the mobile app before getting generic information about the company’s products. This adds friction to that API’s adoption. So, by anonymously analyzing APIs across a wide population of customers, the analytics platform can provide insights into best practices on the most common policies implemented across a cross-section of APIs.

How API operations admins monitor APIs and SLAs

Once deployed, APIs become the conduit—and potentially the gating factor—for all user interactions that depend on information exchanged via those APIs. Therefore, your operations teams need the ability to monitor various traffic metrics in near-real time to ensure the desired operation of those APIs.

In addition to keeping track of total traffic volume and throughput for each API, the following additional metrics serve as first-level indicators for the overall health of the published APIs:

Response times for both the API proxy as well as the back-end systems at multiple call distribution levels (median, TP95, and TP99, for example)

Availability measurements based on error rates at each of the various tiers (client tier, API proxy, and the back-end systems)

Cache performance for measuring response times and hit rates for each API enabled with local cache
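As a rough illustration of the latency indicators above, here is a minimal nearest-rank percentile sketch (function name and sample window are hypothetical; production monitoring typically uses streaming estimators such as t-digest rather than sorting raw samples):

```javascript
// Sketch: computing median/TP95/TP99 response-time indicators from a
// window of latency samples using the nearest-rank percentile method.
function percentile(samples, p) {
  const sorted = [...samples].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length); // nearest-rank method
  return sorted[Math.max(rank - 1, 0)];
}

const latenciesMs = [12, 15, 11, 250, 14, 13, 16, 900, 12, 15];
const median = percentile(latenciesMs, 50);
const tp95 = percentile(latenciesMs, 95);
const tp99 = percentile(latenciesMs, 99);
```

Note how a healthy median can coexist with very poor tail percentiles, which is exactly why TP95/TP99 matter as first-level health indicators.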

The diagram below shows the benefit of using a caching policy as part of an API: over 90% of the API calls were served from the cache, resulting in a net improvement of over 3.5x in response time.

Another concern is identifying and blocking malicious users (typically automated bots) from hitting APIs to either steal valuable information or consume resources. Analyzing incoming traffic for patterns associated with API call frequency, location, and sequences can give operations teams the power to optimize operation of their APIs for all their consumers.
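The frequency-based part of that analysis can be sketched as a toy sliding-window counter (the threshold, window, and field names below are illustrative, not Apigee's actual bot-detection logic, which also weighs location and call sequences):

```javascript
// Sketch: flag suspected bot clients whose call rate inside a sliding
// window exceeds a threshold. Purely illustrative parameters.
function flagBots(calls, windowMs, maxCalls) {
  const byClient = new Map();
  for (const { clientId, ts } of calls) {
    // keep only this client's timestamps still inside the window
    const recent = (byClient.get(clientId) || []).filter((t) => ts - t < windowMs);
    recent.push(ts);
    byClient.set(clientId, recent);
  }
  return [...byClient]
    .filter(([, times]) => times.length > maxCalls)
    .map(([id]) => id);
}

const suspects = flagBots(
  [
    { clientId: "bot-1", ts: 0 },
    { clientId: "bot-1", ts: 100 },
    { clientId: "user-7", ts: 150 },
    { clientId: "bot-1", ts: 200 },
    { clientId: "bot-1", ts: 300 },
    { clientId: "bot-1", ts: 400 },
  ],
  1000, // window in ms
  3 // max calls per client per window
);
// suspects: ["bot-1"]
```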

How product owners measure an API program’s success

To measure the success of any API program, product managers must be able to analyze the following types of metrics and reports:

API traffic trends broken down by products, app developers, and apps

Trends in signups of new app developers and apps registered for each product

Revenue or business value delivered for each published API

Revenue generated from app developers for subscribing to published APIs

Most prolific or highest-value developers

Developers who consistently exceed their quotas

Developers who use APIs for free and are candidates for paid offerings

How API consumers see their apps’ API usage

App developers who subscribe to API products through an organization’s developer portal expect visibility into their API usage as well as the quality of service delivered for each of those APIs. Some of the metrics that app developers care about include:

Traffic volume, response times, and errors for each of the APIs called over time

Overall availability for each of the APIs for valid calls that don’t contain client-side errors

The diagram below shows an example of the total availability of the published APIs as seen by consumers, with a breakdown of each of the tiers (API proxy tier, gateway, and back-end systems) and their contribution toward the APIs’ availability.
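The availability figure described above, which excludes calls with client-side errors from the denominator, might be computed like this (a simplified sketch; the `availability` helper and status-code classification are assumptions for illustration):

```javascript
// Sketch: availability over valid calls only. Requests that fail with
// client-side (4xx) errors are excluded from the denominator, since
// they reflect caller mistakes rather than service health.
function availability(statusCodes) {
  const valid = statusCodes.filter((s) => !(s >= 400 && s < 500));
  if (valid.length === 0) return 1; // no valid calls to judge by
  const ok = valid.filter((s) => s < 400).length;
  return ok / valid.length;
}

// Two successes, one client error (ignored), one server error:
// 2 of 3 valid calls succeeded.
const a = availability([200, 200, 404, 500]);
```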

If the app developer has subscribed to specific pricing plans for using those APIs, then it’s necessary to provide some of the following reports for those developers as part of the developer portal:

Revenue shared (if applicable) by the API publisher for calls made by the API subscriber’s apps

Organizations use API management platforms to provide various types of users fine-grained visibility into API usage and performance. As enterprises adopt modern software practices like microservices, multi-cloud, and platform-as-a-service, gaining deep visibility into how their APIs perform and how developers use them is critical to success.

We’re pleased to announce the general availability of Apigee Edge Private Cloud version 4.16.09. This release, which brings our customers the latest innovations in Edge, is centered around the themes of developer productivity, DevOps productivity, and performance.

Geo map dashboard

The geo map dashboard is now available in the Edge Private Cloud management UI. It enables you to track and assess important information about traffic patterns, error patterns, and quality of service across geographical locations. The dashboard can be accessed from the GeoMap item in the analytics menu.

Improved SOAP wizard

The new release also includes an improved SOAP wizard for building SOAP pass-through and SOAP-REST mediation services.

With pass-through SOAP, the proxy simply passes a SOAP request payload through as is. All WSDL operations are now sent to the proxy base path "/" rather than to proxy resources (such as "/cityforecastbyzip"). Operation names are passed through to the target SOAP service. This behavior matches the SOAP specification.

The generated proxy no longer supports JSON in the request—it supports only XML. The proxy ensures SOAP requests have an envelope, body, and a namespace.

With REST to SOAP to REST, the proxy converts an incoming payload, such as JSON, to a SOAP payload and converts the SOAP response back to the format the caller expects. In the latest release, the proxy lets you POST JSON data instead of FormParams. The proxy has better support for CORS (cross-origin resource sharing) and offers better namespace and AbstractType recognition.
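The JSON-to-SOAP conversion step can be sketched as a toy transformation (the element names, namespace URI, and `jsonToSoap` helper are illustrative, not the wizard's generated code):

```javascript
// Sketch: wrap a JSON POST body in a SOAP envelope with a body and a
// namespace, as the REST-to-SOAP mediation step does conceptually.
function jsonToSoap(operation, json, ns = "http://example.com/ws") {
  const fields = Object.entries(json)
    .map(([k, v]) => `<${k}>${v}</${k}>`)
    .join("");
  return (
    `<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">` +
    `<soap:Body><op:${operation} xmlns:op="${ns}">${fields}</op:${operation}>` +
    `</soap:Body></soap:Envelope>`
  );
}

// Usage: a JSON body { zip: "90210" } becomes a namespaced SOAP request.
const soap = jsonToSoap("CityForecastByZIP", { zip: "90210" });
```

The reverse direction (SOAP response back to JSON) is the mirror image: parse the body element and emit its children as JSON fields.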

Monetization enhancements

The new release also includes enhancements to our monetization features. You can create webhooks, which define an HTTP callback handler that is triggered by an event, and configure them to handle event notifications as an alternative to using the monetization notification templates. This makes it easy to notify any custom endpoint (user-defined) for monetization related events. It can be accessed by selecting “webhooks” in the admin menu.

Public rate plans are visible to app developers, who can subscribe to them through the developer portal. This option works great for external rate plans.

Private rate plans are not visible to app developers. You can add app developers to them using the Edge management UI or using monetization APIs. This kind of rate plan can be used for scenarios involving either internal use cases or when a workflow or manual intervention is required before an app developer can subscribe.

A new "adjustable notification with custom attribute" rate plan lets you add to a developer's transaction count using the value of a custom attribute. With the standard adjustable notification rate plan, each successful API call adds one to a developer's transaction count. But with the new rate plan, the value of the custom attribute is added to the developer's transaction count.

For example, if custom attribute "small" has a value of 0.1 in the response, the transaction count is incremented by 0.1; or if custom attribute "addressTotal" has a value of 50, the count is incremented by 50.
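That accounting rule can be sketched in a few lines (the `recordTransaction` helper and response shape are hypothetical, for illustration only):

```javascript
// Sketch: with the "adjustable notification with custom attribute" rate
// plan, the value of a named custom attribute on the response is added
// to the developer's transaction count instead of the standard +1.
function recordTransaction(count, response, attrName) {
  const v = response.customAttributes?.[attrName];
  return count + (typeof v === "number" ? v : 1); // fall back to standard +1
}

let count = 0;
count = recordTransaction(count, { customAttributes: { small: 0.1 } }, "small");
count = recordTransaction(count, { customAttributes: { addressTotal: 50 } }, "addressTotal");
// count is now approximately 50.1
```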

Policies enhancements

With our JSON payload enhancement, no workarounds are needed to ensure proper JSON message formatting, and variables can be specified using curly braces without creating invalid JSON.

Other improvements include:

The ability to configure a policy to treat some XML elements as arrays during conversion, which puts the values in square brackets '[ ]' in the JSON document.

The ability to configure a policy to strip or eliminate levels of the XML document hierarchy in the final JSON document.

The ability to include wildcards in multiple places in a resource path when defining resource paths in API product. For example, /team/*/invoices/** allows API calls with any one value after /team and any resource paths after invoices/. An allowed URI on an API call would be proxyBasePath/team/finance/invoices/company/a.

The ability to configure API proxies to time out after a specified time (with a 504 gateway timeout status). The primary use case is for Private Cloud customers who have API proxies that take longer to execute than the timeout configured on the load balancer, router, and message processor.
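The wildcard matching described above ("*" for exactly one path segment, "**" for any remainder) can be sketched as a small matcher (the `matchResource` helper is an illustration, not Apigee's implementation):

```javascript
// Sketch: match a request path against a product resource path where
// "*" matches exactly one path segment and "**" matches any remainder.
function matchResource(pattern, path) {
  const regex =
    "^" +
    pattern
      .split("/")
      .map((seg) =>
        seg === "**"
          ? ".*"
          : seg === "*"
          ? "[^/]+"
          : seg.replace(/[.*+?^${}()|[\]\\]/g, "\\$&") // escape literals
      )
      .join("/") +
    "$";
  return new RegExp(regex).test(path);
}

// /team/*/invoices/** allows any one segment after /team and anything
// after invoices/, as in the example above.
const ok = matchResource("/team/*/invoices/**", "/team/finance/invoices/company/a");
```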

Monitoring tools

On the DevOps productivity front, we’ve released a beta version of a new monitoring dashboard to provide monitoring for Apigee Edge infrastructure. It helps you understand the health of various components (routers and message processors) as well as HTTP error codes for various orgs and environments in your deployment.

You can also snapshot these details and share them with Apigee to resolve support incidents. This can significantly shorten the time needed to capture important information about your environment during your support case.

Analytics collector utility

We’ve released a beta version of the analytics collector utility as a Node package, which can be deployed on your machines locally. It helps private cloud customers collect useful API traffic metrics and push them into Apigee 360.

Apigee 360 offers one convenient place to open and manage support cases, track traffic and usage metrics, and receive important notifications. It provides customers a 360-degree view of their relationship with Apigee.

Performance

This version supports Postgres 9.4.5, which offers tremendous performance improvements: generalized inverted indexes are now up to 50% smaller and up to 3x faster; materialized views are concurrently updatable for faster, more up-to-date reporting; and parallel writing to the PostgreSQL transaction log is faster.

High-performance crypto JavaScript functions

The new release also includes a set of high-performance crypto JavaScript functions that support MD5, SHA-1, SHA-256, and SHA-512.

How to upgrade

We strongly encourage customers to upgrade to this new release as soon as possible. For customers already on 4.16.01 or 4.16.05, it’s easy to upgrade. If you’re on an older release, you’ll need to migrate to 4.16.01 first and then upgrade to 4.16.09 (check out these upgrade instructions).

Hope you’re as excited as we are about this new release. There’s a lot more to share than what can fit in here; additional details can be found in our official release notes.

When we first started to build out the analytics layer for Apigee, we made a set of important decisions based on expertise that turned out to be pretty good:

Our team was very familiar with Postgres (and so was I—let’s say I am biased :) ).

We were able to do a good multi-tenant design. Customers got their own tables—all on a shared database infrastructure.

We knew how to balance writes and reads, and built what turned out to be a wonderful, index-less model for the key API traffic “fact table.” This also led to very predictable query performance, which was only dependent on the time range requested in the query.

We set up both aggregation jobs and Amazon Redshift to enable fast query responses.

We could not get to sharding; we had a design, but we threw more hardware at the problem and avoided creating a parallel, multi-tenant, sharded Postgres system. This was the core design for our analytics capability over the past few years.

And then came the obvious issues:

As we grew, we needed to scale. Scale requires sharding.

Redshift prevents our on-premises customers from getting the same benefit, and it ties us to Amazon.

We started to have a ton of downstream uses—tying the infrastructure only to analytics would not do. We needed to separate the pipeline (collect data, clean it, lightly aggregate it) from downstream of custom analytics, security analytics, and mobile analytics.

We needed a different architecture in order to grow with our customers’ needs. As we explored, we noticed that Apache Spark has come a long way. We decided to shift to the following architecture:

All our cloud customers were recently onboarded to this architecture. The new platform provides important capabilities for all of our users:

Data scalability: The platform scales fairly naturally to handle more data from API traffic, from any deployment pattern. We can scale to handle petabytes of data.

Analytics scalability: Likewise, the platform will scale naturally as more processing, queries, or analytics are added. New processing capacity can be added independently without affecting the existing workload.

Dynamic scalability: The platform adjusts to load and data variability more gracefully. One-time or regular patterns of variability in data volumes can each be accommodated.

Data and query availability: Robustness is built into the different components of the platform to ensure very high availability.

Most importantly, this new architecture provides the foundation for extending the variety of analytics we can offer to our customers. The diagram above illustrates that we can now provide basic dashboards, custom reports, and a specialized security dashboard using bot detection analytics.

Going forward, we intend to provide: business analytics by combining API traffic data with other relevant information like products and plans; operational analytics by combining event and periodic metrics data to track the availability of resources and endpoints; developer analytics that narrows down objects of interest to developers; and extensions to support customers’ analytics.

These are exciting new capabilities that should provide a rich palette of analytics for your different user roles and help you obtain detailed insights from the data.

Apigee is happy to announce the availability of the Apigee Edge API platform in Amazon Web Services Marketplace (AWS Marketplace). AWS Marketplace provides customers an online store that helps them find, buy, and immediately start using the software and services that run on Amazon Elastic Compute Cloud (Amazon EC2).

Apigee is already trusted by hundreds of enterprises to provide API management for heavily used mobile apps, such as those from Burberry and Walgreens. The AWS Marketplace Mobile Factory page highlights Apigee Edge and other products for mobile initiatives.

Apigee Edge can be used with AWS in multiple ways, such as providing an added layer of security and analytics required for mobile applications. If you build a custom backend or microservice on Amazon EC2 or AWS Lambda without following security best practices, you might inadvertently expose your backends through APIs connecting to your mobile application.

Apigee’s developer portal makes it easy to onboard new mobile developers through intuitive, interactive documentation and a self-service sign-up flow for getting new API access keys. Apigee Edge also operates in the majority of AWS regions—leveraging best-in-class performance and availability.

Apigee Edge can also be used by SaaS and software vendors providing services in AWS Marketplace. An example vendor is Bitfusion, a company that provides machine learning AMIs in AWS Marketplace. Bitfusion’s AMIs expose RESTful APIs to access its machine learning libraries. However, exposing these APIs directly to a mobile app would not be secure. By using Apigee Edge, mobile applications can consume Bitfusion’s REST APIs in a secure way.

As Forrester analyst Randy Heffner wrote in a recent report, APIs are the underpinning of digital business platforms. They help enterprises prepare for an unpredictable future. So what goes into the evaluation of an API management platform?

It's critical to carefully define all the requirements of building an API-powered digital business platform. This is time-consuming, however; there's a lot to consider:

What's the vendor's track record in API management?

What kind of architecture and deployment options does the vendor offer?

How can the platform leverage your existing technology assets?

What kind of analytics, security, and developer portal does the platform offer?

And there's much more. We've reduced the amount of time it takes to create an RFP for API management from hours to minutes with this RFP template. We hope it helps you on the path to building your API-powered digital business platform.

Ticketmaster’s mission is to bring "moments of joy to fans of live entertainment everywhere,” and APIs play a critical role in this.

We sat down with Ismail Elshareef, vice president of open platform and innovation at the ticket sales and distribution company, during I Love APIs 2015 to discuss how APIs remove friction from innovating and fostering partnerships at the company.

Ticketmaster partners like Groupon and Walmart are able to easily provide their customers with access to live entertainment inventory and tickets via Ticketmaster APIs, Elshareef said. APIs are also key to surmounting one of the biggest challenges facing the live entertainment industry: event discovery.

“Event discovery is such a big problem that no one has cracked yet,” Elshareef said. “Offering our APIs for … developers to play around with and create experiences where event discovery can be accomplished with ease is something that drives innovation big time.”

Having recently joined Ticketmaster, Elshareef said he’s looking forward to expanding the company’s use of Apigee technology. One capability that has been a boon for Ticketmaster over the past few years: API analytics.

“I’ve looked at your competitors and no one has an analytics suite as sophisticated and as easy to use as yours,” he said. "I love that about Apigee."

About two years ago, digital mapping company MapQuest decided to become an API business. With that shift, the Denver-based company became “hyperfocused” on creating a much better developer experience, said MapQuest general manager Brian McMahon. A major part of that push involved simplifying the process of using its APIs.

“Our sole purpose in life is to create great APIs,” said McMahon, who sat down with us at I Love APIs 2015. The transformation has been successful so far, with one measure being the Digital Accelerator Award that MapQuest won for “Best Developer Experience.”

The company, which has 40 million monthly users, has come a long way from the days when developers actually had to send MapQuest a fax in order to receive an API key, McMahon said.

Thanks to Apigee, MapQuest also is able to analyze the reams of data about the developers who build on its APIs.

“We had thousands of developers using our platform, but we had no idea who they were, what they were using, or why they were using it,” McMahon said. “Partnering with Apigee, we have a much more detailed understanding."

We're pleased to announce a new version of Apigee Edge Microgateway, a lightweight solution that enables enterprises to manage their APIs in a hybrid deployment.

API traffic flows through a gateway running close to the application while being managed centrally through Apigee Edge, which enables organizations to securely deliver and manage APIs, with agility at scale. Customers use Microgateway (which we released in limited availability back in July) to manage their internal APIs/microservices.

The first release had the following features:

Authentication and authorization using OAuth 2.0 protocols

Analytics

Quota

Spike arrest

Customers appreciated its simplicity and ease of installation, but also said it lacked a few features that would help them really get the most out of it. So we incorporated their feedback and released a new 1.1.0 version, with the following new features:

You can hear how digital is driving growth straight from CEOs at some of today’s digital leaders like Nike.

And top-flight sources are documenting digital’s part in a broader pattern: MIT’s Center for Information Systems Research found that companies that derived 50% or more of their revenues from digital ecosystems had 32% higher revenue growth and 27% higher profit margins than their industry averages.

For most executives, the question is not whether digital matters but rather “how we get there” to seize the opportunity (or stave off a threat).

To that end, the Apigee team has been putting together an agenda for I Love APIs 2015 that includes real-world advice on how to engage corporate boards on thinking big about digital as well as how to re-tool IT to deliver digital experiences.

In this post, we’ll focus on what happens between communicating a great vision and a spring into action by digital-ready teams. We see a risk that executives and managers otherwise committed to digital transformation could be their own worst enemies.

Better at digital, better at business

In our own research with over 1,300 companies, we’ve found that stronger digital capabilities are associated with better business outcomes. The top 50% of companies at deploying apps, operating APIs, and using data analytics (who we call “digital leaders”) are on average two-and-a-half times more likely to strongly outperform their peers than the bottom 50% (the “digital laggards”).

Comparing what executives at digital leaders and digital laggards characterized as “a liability”— something that could hold their company back on digital transformation—shines a spotlight on the importance of how decisions get made (a process influenced by policy and culture) as a critical determinant of success.

In my experience, most big companies feel competitive pressure, but also have some degree of confidence they can handle it (after all, they got big). Most large enterprises (whether they have a chief digital officer or not) have a fairly similar overall organizational structure.

Consistent with these observations, comparable percentages of executives at both leading and lagging companies flag market conditions or company performance and their existing org structure as “a liability that could hold the company back on digital transformation.”

But when it comes to “the way major business decisions are typically made,” digital laggards spike up to a full half (51%) calling it a liability—compared to just more than a third (35%) of digital leaders. This represents a 16-point gap.

Decision-making anti-patterns

This calls to mind anti-patterns I’ve seen of decision-making processes compromising a vision or hamstringing execution. Here are two real-world examples:

The company has an enterprise-wide digital strategy. But the team chartered with implementation has to approach multiple lines of business and convince each to bear part of the cost through chargebacks. Now the vision and roadmap are subject to negotiation with numerous teams, each of which may have its own locally optimal but globally suboptimal conditions for buy-in.

IT has built agile capabilities but funding allocation or re-allocation decisions are made in quarterly (at most) meetings dominated by PowerPoint and debate rather than structured data. The opportunity to deliver minimum viable products to market along with well-structured A/B tests to decide what to “fail or scale” on close to a real-time basis gets squandered.

The blurred boundary between strategy and execution

There are two patterns for successfully evolving the decision-making process that we’ve observed and woven into sessions at I Love APIs 2015 for developers, technologists, and business strategists:

“Test and learn” is more powerful than “present and debate.” A digital experience is based on a strategic hypothesis that a desired customer or partner interaction will have business value, and a delivery hypothesis that the chosen implementation will enable a successful interaction. Connected digital experiences are an opportunity to bring strategy and delivery together to use data at pace and scale to confirm or course correct. Digital leaders are seizing this opportunity: McKinsey & Company notes that “digital strategy also increasingly blurs the boundaries between strategy and execution. In fact, 60 percent of digital leaders run strategy by experimentation through limited releases and prototyping.”

The innovation you need to compete can’t be “bolted on.” Jerry Wolfe, former CIO at McCormick & Co., current CEO of Vivanda, and I Love APIs 2015 speaker, offered his peers some advice at this year’s MIT CIO Symposium: “Carve out some funding and sponsor some innovation. Test and learn your way through it, but protect it. Don’t ask the pieces of your business that are responsible for delivering incremental growth and managing downside risk to step into white space.” In a time of change, the degree of innovation that you need to compete may be wildly out of scale with the capacity of existing lines of business to invest in any given time period. A business unit or units with a P&L optimized for stability and predictability is a risky vehicle on which to bet a transformation strategy about the future of the company that may be as extreme as moving from a product to services business.

Unlike competition, how decisions get made in the enterprise is something executives and managers can control. Take the challenges head-on to avoid digital self-destruction. I Love APIs 2015 can help you do this.