I wanted to send out a quick update on our progress in addressing the Heartbleed vulnerability.

On April 7th, an OpenSSL advisory was published identifying the “Heartbleed” bug (CVE-2014-0160).

As soon as the news was available, the Expressway engineering team began a rapid investigation to determine which versions of Expressway Service Gateway and Expressway Tokenization Broker might be affected.

By 9 PM CDT on April 9th, patches were made available and published, along with a customer notification, for the most widely deployed Expressway Service Gateway and Expressway Tokenization Broker versions: R3.4, R4.5, and R5.1.

Patches for the remaining versions (R5.5 and R6.1) were made available by 2 PM CST on April 10th.

Expressway versions older than R3.4 are not affected by the heartbleed bug. In addition, the Intel® Expressway API Management Portal is also not affected by heartbleed.

If you have further questions or concerns, please feel free to reach out to your Intel Expressway support representative. Additional information is available through our support portal as well.

How to effectively build a hybrid SaaS API management strategy
http://blogs.intel.com/application-security/2014/02/03/effectively-build-hybrid-saas-api-management-strategy/
February 3, 2014 – By Andy Thurai (@AndyThurai) and Blake Dournaee (@Dournaee). This article was originally published on Gigaom.

Summary: Enterprises seeking agility are turning to the cloud while those concerned about security are holding tight to their legacy, on-premise hardware. But what if there’s a middle ground?

If you’re trying to combine a legacy deployment and a cloud deployment without having to do everything twice, a hybrid strategy might offer the best of both worlds. We discussed that in our first post, API Management – Anyway you want it!.

In that post, we discussed the different API deployment models as well as the need to understand the components of API management, your target audience, and your overall corporate IT strategy. The article drew a tremendous readership and positive comments (thanks for that!), but there seemed to be a little confusion about one particular deployment model we discussed – the Hybrid (SaaS) model. We heard from a number of people asking for more clarity on this model. So here it is.

Meet Hybrid SaaS

A good definition of Hybrid SaaS would be: “Deploy the software as a SaaS service and/or as an on-premises solution, make those instances co-exist and communicate securely with each other, and let each be a seamless extension of the other.”

Large enterprises are grappling with multitudes of issues when they try to move from a primarily corporate datacenter to an all-in-cloud approach. Not only is that not feasible, it would also waste millions of dollars in sunk costs already invested in their current datacenters.

Recent NSA revelations have muddied perceptions of public cloud safety, further undermining enterprise confidence in controlling applications and data in the cloud. Yet the pressure to adopt a mobile-first, cloud-first, or API-centric model means enterprises must move some operations to the cloud.

So enterprises are trying a hybrid model to reconcile the seemingly contradictory needs for agility and security. In doing so, most organizations build two different flavors of the same services: the cloud version is geared toward fast, easy provisioning and low cost, while the self-owned datacenter version is geared toward complete integration with the existing ecosystem. Often, this leads to two different silos.

Most software products today don’t support Hybrid SaaS because they are not designed to operate both as a service and as an in-house install. A true Hybrid SaaS model lets you install components that operate in both places with similar (if not identical) functions. In addition, a connector allows continuous integration between the components to make this seamless.

Some savvy organizations, however, have already built the consolidated hybrid API model we describe here.

One API, Expose Anywhere

The ultimate goal is to publish APIs to the right audience with the right enterprise policies, right amount of security, and just the right amount of governance. The motto here is scale when you can, own what you must. What is the right amount for you? It depends on who your developers are, where your APIs are located now, and what sort of security and compliance requirements you have.

The concept of One API is to publish and be available in multiple places, accessed by multiple audiences (internal developers/applications, external developers, and partners) and be available for multiple channels (mobile, social, devices, etc.). All demand a different experience, which is where the hybrid model really excels.

So how does it actually work? In a hybrid API management deployment the API traffic comes directly to the Enterprise and the API metadata is available in two places: on premise and in the cloud. The API metadata available from an on-premise portal is usually targeted to an internal developer.

Here the metadata and API documentation might be slightly different – an internal developer may require a different response format (XML, for instance) for integration with internal systems and use a different access mechanism (API keys or internal credentials) compared to an external or zero-trust developer. In this case, API traffic never goes to the cloud – or to any developer portal, for that matter – which is often a point of confusion in the hybrid model.

Metadata that is available in the cloud would be described differently, using common standards for access such as OAuth and JSON, with rich community features to encourage the adoption of APIs. While the endpoint information is advertised in the cloud, the traffic itself is sent directly to the Enterprise datacenter, with policies enforced by an API gateway. Also, the UX and the registration process are lighter and faster, to attract a wider audience.
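As an illustration, the same API might be described once per portal, with only the audience-facing details changing. The entries below are hypothetical – the field names, URLs, and auth schemes are invented for this sketch, not taken from any product:

```python
# Illustrative only: two hypothetical catalog entries for the same API --
# one on the on-premise portal, one on the cloud portal.
internal_entry = {
    "api": "customer-lookup",
    "endpoint": "https://api.internal.example.com/v1/customers",
    "format": "XML",        # internal systems expect XML
    "auth": "api-key",      # internal credential scheme
    "audience": "internal",
}

external_entry = {
    "api": "customer-lookup",
    "endpoint": "https://api.example.com/v1/customers",
    "format": "JSON",       # common standard for public developers
    "auth": "oauth2",       # zero-trust access via OAuth
    "audience": "public",
}

def shared_fields(a, b):
    """Fields that stay identical across portals -- the 'One API' part."""
    return {k for k in a if k in b and a[k] == b[k]}
```

Only the API identity is shared; everything audience-specific (format, auth, endpoint) diverges per portal.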

Hybrid SaaS API Management

This model offers a number of benefits for the Enterprise: it gains increased control over the API definitions it chooses to advertise to external developers, while zero-trust developers can interact in a shared cloud that provides API metadata for a collection of APIs – public developers can sign in once and get access to a set of useful tools. Further, runtime traffic enforcement is always handled by the Enterprise, providing full visibility into API transactions as well as the API responses themselves.

The hybrid model is implemented through policy retrieval and the pushing of analytics data: API keys, endpoint configuration, and access policies are defined in either developer portal, then pulled down and cached by the API gateway. On the push side, analytics information is sent to both portals. The hybrid design allows Enterprises to take one API and deploy it anywhere with maximum security and control.
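The pull-and-push loop described above can be sketched roughly as follows. All class and method names here are invented for illustration; a real gateway product would also handle scheduling, retries, and transport security:

```python
import time

class GatewaySync:
    """Minimal sketch: pull policies from a portal, cache them locally,
    enforce against the cache, and push analytics records back."""

    def __init__(self, portal):
        self.portal = portal      # object exposing fetch_policies()/ingest()
        self.policy_cache = {}
        self.outbox = []

    def pull_policies(self):
        # Policies (API keys, quotas, endpoint config) are defined in the
        # portal but cached locally, so enforcement never blocks on the cloud.
        self.policy_cache = self.portal.fetch_policies()

    def record_call(self, api_key, api_name):
        # Enforce from the local cache and queue an analytics record.
        policy = self.policy_cache.get(api_key)
        allowed = policy is not None and api_name in policy["apis"]
        self.outbox.append({"key": api_key, "api": api_name,
                            "allowed": allowed, "ts": time.time()})
        return allowed

    def push_analytics(self):
        # Push side: usage data flows back to the portal(s).
        self.portal.ingest(self.outbox)
        self.outbox = []
```

Note the design point: traffic decisions are made entirely from the local cache, so the cloud portal is never in the request path.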

Talk to us to find out how Intel can help you build such solutions. Intel/Mashery has the most mature API solution on the market and has helped more than 100 companies realize their dream.

Snapchat’s Unhappy New Year
http://blogs.intel.com/application-security/2014/01/03/snapchats-unhappy-new-year/
January 3, 2014

Once More Into the Breach… Less than a month after the Target credit card breach, another significant data theft is in the news. This week’s victim is Snapchat, the popular photo sharing social network. Gibson Security announced the weakness, with some solid …

The irony of this situation is that Snapchat’s brand promise was all about security and privacy. Ephemeral photos could capture a moment without permanently tarnishing one’s reputation. Regardless of how well they delivered on that core feature, poor custodianship of other customer data will leave many users wondering how well the app will deliver on privacy where it matters. One indiscretion may well permanently tarnish Snapchat’s reputation.

Why Does This Keep Happening?

One key quote from the New York Times article illustrates a problem common in API implementation:

In an email, one researcher said the data was not being encrypted or “hashed” to make it difficult for hackers to piece together. “They hadn’t even implemented rate limiting,” the researcher said.

Why is this common? I think there are two reasons. First, things like rate limiting aren’t perceived as adding value. They’re not something that gives your company an edge, so they’re not the first thing you’re going to implement, even if they do add value (security) to your application or service. Second, high traffic volume is a good problem to have. And with a solid DevOps team, there may be valid reasons to avoid throttling the overall service – for example, you want to scale elastically to allow for the unfettered growth and popularity of your fledgling service, allowing you and your team to realize a billion-dollar payout. That payout may be at risk, however, if you don’t secure your service, and ultimately that’s why you need rate limiting – to protect against dictionary attacks, DDoS attacks, or other malicious use of your service.
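A token bucket is one common way to implement the rate limiting the researchers found missing. This is a minimal single-process sketch; a production gateway would track a bucket per API key or client IP and share state across instances:

```python
class TokenBucket:
    """Allow short bursts up to `capacity`, then throttle to
    `refill_per_sec` requests per second on average."""

    def __init__(self, capacity, refill_per_sec):
        self.capacity = capacity
        self.tokens = float(capacity)
        self.refill_per_sec = refill_per_sec
        self.last = 0.0  # timestamp of the previous call

    def allow(self, now):
        # Refill based on elapsed time, then spend one token per request.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_per_sec)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

Passing the clock in as `now` keeps the sketch deterministic and easy to test; a real deployment would call `time.monotonic()`.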

The same goes for encrypting or hashing. Implementing these things takes time, and adds complexity to the logic tier. It can also make an app harder to debug, as developers can no longer talk directly to the DB tier of the application to make sense of what’s happening — additional tools are needed. And finally, depending on when the hashing or encryption is implemented, it could also break other pieces of the application — for example if the developers had decided that it was worth their time to do formatting checks on phone numbers to ensure that valid data was being persisted.
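On the hashing side, a salted, iterated hash is the standard way to make stolen records hard to reverse or join against a rainbow table. A minimal sketch using Python’s standard library (phone numbers used as the example, per the Snapchat breach; the iteration count is illustrative):

```python
import hashlib, os

def hash_phone(phone, salt=None, iterations=100_000):
    """Hash a phone number with a random per-record salt using PBKDF2."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", phone.encode(), salt, iterations)
    return salt, digest

def verify_phone(phone, salt, digest, iterations=100_000):
    """Recompute the hash and compare; the plaintext is never stored."""
    candidate = hashlib.pbkdf2_hmac("sha256", phone.encode(), salt, iterations)
    return candidate == digest
```

The per-record salt means two identical phone numbers hash differently, so an attacker cannot precompute a lookup table for the whole dataset at once.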

How to Prevent This?

This sort of thing is going to keep happening if the risk/cost of addressing it is perceived as being less than the cost to fix it. Fortunately there are steps that can be taken to help mitigate the risk without adding significant development cost. One such solution is to utilize a service gateway to handle API security. Rather than reinventing the wheel with DDoS protection, Content Attack Prevention policies, and other security features, a development organization can implement standard, proven tools to deliver the same functionality. As new threats need to be addressed, they can be added and managed centrally, avoiding the need to commit changes to multiple back-end services.

As for the encryption piece I mentioned earlier, Format-Preserving Encryption is a relatively new tool that protects data while allowing it to pass format consistency checks. This allows data at rest to be encrypted which limits the impact should an attacker make it through the first lines of defense, but it avoids the need to recode the logic tier to accommodate new data formats.
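To make the format-preservation idea concrete, here is a toy sketch – emphatically not a real FPE cipher (real deployments use vetted schemes such as NIST’s FF1) – that shifts a digit string by a secret offset so the output still passes a “digits only, same length” consistency check:

```python
# TOY ILLUSTRATION ONLY -- not cryptographically secure. It exists purely
# to show that ciphertext can keep the plaintext's format (n digits in,
# n digits out), so existing format checks still pass.
def toy_fpe_encrypt(digits, key):
    n = len(digits)
    return str((int(digits) + key) % 10 ** n).zfill(n)

def toy_fpe_decrypt(digits, key):
    n = len(digits)
    return str((int(digits) - key) % 10 ** n).zfill(n)
```

Because output and input share a format, the logic tier and any validation code can stay unchanged while the data at rest is no longer the raw value.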

Further Information

Securosis recently released a nice whitepaper that summarizes how API gateways can add security while enabling innovation. I also did a webinar with the authors in October where we discussed this topic. Stay tuned to this blog as well – we’ll continue to cover these events and best practices in API security as the year unfolds.

ATOS API: A zero cash payment processing environment without boundaries
http://blogs.intel.com/application-security/2014/01/03/atos-api-zero-cash-payment-processing-environment-without-boundaries/
January 3, 2014

When ATOS, a big corporate conglomerate (EUR 8.8 billion and 77,100 employees in 52 countries), decided it wanted to become the dominant Digital Service Provider (DSP) for payments, it had a clear mandate: build a payment enterprise without boundaries. [Worldline is an ATOS subsidiary set up to handle the DSP program exclusively.] One of the magic bullets out of that mandate was:

“The growing trust of consumers to make payments for books, games and magazines over mobiles and tablets evolving into a total acceptance of cashless payments in traditional stores and retail outlets bringing the Zero Cash Society ever closer.”

This required them to rethink the way they processed payments. They are one of the largest payment processors in the world, but they had been focused primarily on big enterprises and name-brand shops using their services. Onboarding every customer took a long time, and the integration costs were high. After watching smaller companies such as Dwolla, Square, and others trying to revolutionize the space, they decided it was time for the giant to wake up.

The first decision was to embrace the smaller vendors. To do that, they couldn’t remain a high-touch, time-consuming, slow-to-integrate, high-cost-per-customer onboarding environment. They wanted to build a platform that is low touch, completely API driven, fully self-serviced, and continuously integrating, yet still provides secure payment processing. In addition, they faced a move from swipe-based retail payment systems to supporting ePayment and mobile payments. Essentially, they wanted a payment platform that catered not only to today’s needs but was flexible enough to expand and scale for future needs and demands, offered as a service to their customers.

They also wanted to add value to the payment platform with services such as hotel booking, loyalty systems, review and ratings site integration, and the most dreaded of all – social network integration. All of these new features required them to integrate with best-of-breed providers to offer a complete platform, so that customers don’t have to integrate with multiple vendors themselves. This would become a dream payment platform offered as a business service, as opposed to just a technology service.

This posed an interesting problem. They couldn’t simply discard their existing solution set and rebuild everything from scratch; that would be prohibitively expensive and could take forever to reach the market, leaving them lagging behind their competitors. That meant they had to integrate with existing backend systems and with third-party provider APIs, yet still provide a flexible API set that could be exposed both internally and externally. In the old model, integrating anew for every customer meant high start-up costs for onboarding, so only high-paying customers would consider it. Instead, they wanted a repeatable, self-discovering, smaller and faster infrastructure with faster enablement and customer onboarding – and, most importantly, workflows and processes that are repeatable without major customization for each new customer.

By doing this, they wanted to create a payment platform that would be secure and continue to serve today’s customers without a glitch, yet open up to a newer set of customers: the ones that demand API-based, self-service-enabled, creative front-end platforms on a pay-as-you-go service model.

This would open their payment APIs to a much bigger market. It moves them away from the current engagement model of finding customers and integrating them with the platform in a costly, time-consuming manner; instead, customers find them, use them, and build on them. In other words, they figured out a way to disrupt the payment processing market, moving from a traditional vendor to a nimble, quick-to-market, creative one by letting customers build the innovative pieces themselves using ATOS’s flexible interfaces (APIs). (This is a vision rather than a current capability, but the solution is expected to be available very soon.)

To do this, they needed a partner who could take the journey with them – not only to help enable this process, but also to help build a hybrid enterprise model. You can watch the video by Matthew Headford, Worldline CTO, here, where he talks about their needs and compares Intel’s solution with the leading competitors in this space. Those needs include an API gateway, an API portal (segmented for internal and external users), and API security/protection, combined with legacy integration, an API orchestration layer, integration with the existing security model, and integration with third-party API providers – and, most importantly, he explains why they chose Intel.

When it comes to API-enabling your business, there is no need to disrupt the existing ecosystem and build everything from the ground up. You need someone who can help you surface APIs from existing systems, get your value proposition to market quicker, build the components that are missing, secure them to your industry’s standards or higher, and, most importantly, fit everything into your current ecosystem.

The Grinch Who Stole Christmas for Target’s Brand and Customers
http://blogs.intel.com/application-security/2013/12/23/target-compliance/
December 23, 2013

News broke last week that a major retailer was the victim of a massive theft of customer credit card data, in what is becoming an all too common cadence of data breaches. Thieves made off with not just the credit card numbers, but also the CVV and expiration dates. If you listen closely, you can probably hear the machines printing up counterfeit cards. At this point there has been no precise confirmation of the attack vector used to collect the data – and the gory details may never be known, absent some government action and a FOIA request. But in light of what is likely one of the biggest data breaches in history, it makes sense to reflect on some of the Payment Card Industry’s (PCI) best practices for protecting customer data. PCI is more than just a compliance exercise; it should be viewed as a catalyst to improve overall network defense and data protection. In this first post we cover high-level PCI and PII practices that can make a difference; in subsequent posts we will go into more detail on how the Target breach could have been prevented.

Protecting the Brand and the Business

The fallout for Target will surely run into the millions of dollars, going beyond fines and reimbursements to consumers to the damage done to the Target brand. Shoppers placed their confidence in Target’s information security practices. While Target is doing a good job of notifying customers of the steps they can take to protect their credit ratings, I have seen several shoppers thinking twice about using their credit cards while doing last-minute Christmas shopping – not just at Target but at other retailers and in other industries. Undoubtedly, customers are warier than ever to give out credit card data unless they see that “Good Housekeeping” seal of approval showing a merchant has tested their POS and back-end systems to the best of their ability. This mistrust will also spill over into e-commerce, where consumers were already wary of giving out personal data.

PCI Tools for Credit Card Security

Just because a retailer is PCI compliant, that doesn’t mean they’re 100% secure. But maintaining PCI compliance reduces the likelihood of data theft – make it hard enough for an attacker and they will hopefully move on to a more vulnerable target. Unfortunately, the scale of compliance can be daunting for many organizations. Considering the size of last week’s theft – some 40 million account numbers compromised – it’s clear that high-capacity, high-volume solutions are needed. Many existing solutions haven’t kept pace with the rate of technology change, leaving implementers overwhelmed as they attempt to rip and replace while moving more of their businesses online.

Remediation for legacy technologies can be challenging

There are six milestones in the Prioritized Approach to Pursue PCI DSS Compliance. These are necessary but not sufficient for security. Looking at these milestones, it’s clear that the Gateway Pattern can help a retailer (or anyone processing payment card or any other sensitive data) achieve compliance. Furthermore, a gateway can allow faster remediation of legacy systems and applications because it can be inserted into the application stack without significant modification to these existing applications. When new best practices are identified, they can be implemented quickly as well, without being mired in the bureaucracy and expense of custom implementations. This is essential as companies are moving to the cloud for part or all of their transactions. Here are some pointers on how a service gateway can shorten the path to PCI DSS compliance:

First, remove (i.e. redact) sensitive authentication data and limit data retention. Thieves can’t steal what isn’t there. Card verification numbers, PINs, and magstripe data tracks are not to be stored, as they would enable unauthorized use in more locations via card-not-present transactions and ATM withdrawals. With the increasing dependence on web services for transmitting data across systems and providers, the gateway pattern can help fulfill this requirement by redacting internal fields within a data object (e.g. JSON or XML), ensuring sensitive data isn’t persisted or passed downstream to any application that doesn’t require it.
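A redaction step of this kind can be sketched as a recursive walk over the message. The field names below are illustrative, not a product’s actual policy syntax:

```python
import json

# Fields PCI says must never be stored: CVV, PIN, magstripe track data.
SENSITIVE = {"cvv", "pin", "track1", "track2"}

def redact(obj):
    """Recursively replace sensitive fields before logging or forwarding."""
    if isinstance(obj, dict):
        return {k: ("[REDACTED]" if k.lower() in SENSITIVE else redact(v))
                for k, v in obj.items()}
    if isinstance(obj, list):
        return [redact(v) for v in obj]
    return obj

msg = json.loads('{"pan": "4111111111111111", "cvv": "123", "amount": 42}')
```

Everything downstream of the gateway then sees a message with the forbidden fields already gone.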

Second, protect the perimeter, internal, and wireless networks. This becomes much more challenging in today’s distributed environments. Many payment processing systems use private networks and firewalls to prevent unauthorized access. However, card readers and cash registers are by necessity at the edge, publicly accessible – once a device inside the trusted network is compromised, it can be leveraged to gain additional access. Going a step further, application-specific firewalls or gateways can provide additional network security, which feeds into the next requirement.

Third, secure payment card applications. Application level security creates an additional internal perimeter to protect against larger-scale data breaches. An application-specific firewall or gateway can provide an additional layer of security that protects against both external and internal attacks. A content attack prevention policy, for example, can limit the spread of an attack that comes from a single compromised system. By monitoring inbound traffic — even from trusted systems — a gateway can help to prevent a content attack such as a code injection masquerading as a valid call from a compromised payment system to a back-end web service or API.
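A crude version of such a content check might look like the sketch below. The pattern list is purely illustrative; real gateways rely on much richer policies (schema validation, size limits, encoding checks) rather than a short blacklist:

```python
import re

# Illustrative markers only -- a real content attack prevention policy is
# far broader than a two-pattern blacklist.
INJECTION_PATTERNS = [
    re.compile(r"<script\b", re.IGNORECASE),                         # script injection
    re.compile(r"('|--|;)\s*(drop|union|select)\b", re.IGNORECASE),  # SQL injection
]

def content_attack(payload):
    """Flag payloads -- even from 'trusted' systems -- that carry
    obvious injection markers."""
    return any(p.search(payload) for p in INJECTION_PATTERNS)
```

The key point from the text is where this runs: on inbound traffic from trusted internal systems too, so one compromised register can’t freely attack back-end services.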

Fourth, monitor and control access to your systems. Entire books can (and have) been written on this topic, but I’ll highlight a few key benefits of a gateway pattern. First, a multi-tenant gateway allows an organization to separate responsibilities by job function, meaning that a single person doesn’t need to be granted administrative access to every service. Additional logging capabilities provided by the gateway can aid in early detection of malicious activity, alerting to suspicious traffic or other patterns that warrant attention. The gateway can also securely sign the logs to ensure that they haven’t been tampered with.
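Log signing, for instance, can be as simple as appending an HMAC to each line, with the signing key held by the gateway rather than the logged hosts. A minimal sketch:

```python
import hmac, hashlib

def sign_line(key, line):
    """Append an HMAC-SHA256 tag so later tampering is detectable."""
    tag = hmac.new(key, line.encode(), hashlib.sha256).hexdigest()
    return f"{line}|{tag}"

def verify_line(key, signed):
    """Recompute the tag and compare in constant time."""
    line, tag = signed.rsplit("|", 1)
    expected = hmac.new(key, line.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(tag, expected)
```

An attacker who edits a log entry after the fact cannot produce a matching tag without the key, so altered lines fail verification.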

Fifth, protect stored cardholder data. This goes deeper than simply encrypting the volume where the data is stored. With the move to cloud storage, data is stored in numerous locations – in replicas and application caches in addition to primary storage. Best practices suggest using tokenization or record-level encryption to protect cardholder data. For the credit card numbers themselves, tokenization provides an added benefit of a secure central vault that contains the mapping between card numbers and tokens that can safely be passed between applications. Randomized tokens have no mathematical relationship to the original cardholder data, so systems that only access tokens are effectively removed from audit scope. This means that the addition of the gateway layer actually reduces audit complexity and cost!
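A randomized token vault can be sketched in a few lines. Preserving the last four digits (as below) is a common convention for receipt display, not a requirement, and this in-memory map stands in for what would really be a hardened, access-controlled vault:

```python
import secrets

class TokenVault:
    """Sketch of randomized tokenization: the token has no mathematical
    relationship to the PAN; only the vault can map back."""

    def __init__(self):
        self._forward = {}   # PAN  -> token
        self._reverse = {}   # token -> PAN

    def tokenize(self, pan):
        if pan in self._forward:          # same card always gets same token
            return self._forward[pan]
        token = "".join(secrets.choice("0123456789") for _ in range(12)) + pan[-4:]
        self._forward[pan] = token
        self._reverse[token] = pan
        return token

    def detokenize(self, token):
        return self._reverse[token]
```

Systems that handle only tokens never see cardholder data, which is exactly the property that removes them from audit scope.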

Finally, finalize remaining compliance efforts and ensure that controls are in place, including ongoing verification and maintenance of compliance posture. This is where the gateway pattern (particularly when it includes tokenization) really shines — by attesting that downstream systems and services never touch cardholder data, the retailer can dramatically reduce the scope of work to be done. Audit and verification costs time and money (best practices include using an outside specialist), so reduction in scope means less complexity and therefore much less cost. In addition to the insurance against potential costs from a data breach, scope reduction can save millions of actual dollars in audit.

Summary

Thieves are getting savvy with their attempts to gather cardholder data on all fronts. Attacks on retailers and banks, while difficult to pull off, present potentially enormous return on investment. If successful, they also put a tremendous liability on any organization that doesn’t adequately protect their data. Maintaining customer trust and brand reputation means being a good custodian of data – not just credit cards, but also names, email addresses, and any other personal information. Gaining and maintaining PCI compliance is a good first step in protecting customer data, and with it the corporate brand. Using best practices and tools to do so can accelerate the compliance process and reduce the overall cost of staying compliant. A couple of resources that can help take the next step in using tokenization for PCI compliance: a Tokenization Buyer’s Guide, and a QSA Security Assessor’s guide that explains how using a gateway for tokenization helps to remove systems from audit scope.

Custom API Analytics with Expressway and Splunk
http://blogs.intel.com/application-security/2013/12/17/custom-api-analytics-with-splunk/
December 18, 2013

Splunk – An Ancillary Source of API Analytics: Data analytics solutions seem as varied as the data they analyze. However, Expressway users have found tremendous success extending its built-in API Analytics capabilities with those of Splunk – a recognized …

API Management – Anyway you want it!
http://blogs.intel.com/application-security/2013/12/16/api-management-anyway-you-want-it/
December 17, 2013 – By Andy Thurai (@AndyThurai) and Blake Dournaee (@Dournaee). This article originally appeared on Gigaom.

Enterprises are building an API First strategy to keep up with their customers’ needs and provide resources and services that go beyond the confines of the enterprise. With this shift to using APIs as an extension of enterprise IT, the key challenge remains choosing the right deployment model.

Even with bullet-proof technology from a leading provider, your results could be disastrous if you start off with the wrong deployment model. Consider developer scale, innovation, ongoing costs, the complexity of API platform management, and so on. For example, forcing internal developers to hop out to the cloud to get API metadata when your internal API program is just starting is an exercise that leads to inefficiency and inconsistency.

Components of APIs

But before we get to deployment models, you need to understand the components of API management, your target audience and your overall corporate IT strategy. These certainly will influence your decisions.

Not all Enterprises embark on an API program for the same reasons – enterprise mobility programs, rationalizing existing systems as APIs, or finding new revenue models, to name a few. All of these factors influence your decisions.

API management has two major components: the API traffic and the API metadata. The API traffic is the actual data flow, and the metadata contains the information needed to certify, protect, and understand that data flow. The metadata describes the details of the collection of APIs: interface details, constructs, security, documentation, code samples, error behavior, design patterns, compliance requirements, and the contract (usage limits, terms of service). This is the rough equivalent of the registry and repository from the days of service-oriented architecture, but it contains a lot more and differs in a key way: it’s usable and human readable. Some vendors call this the API portal or API catalog.
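As a concrete (and hypothetical) illustration, a single catalog entry might carry the interface, security, documentation, error-behavior, and contract details side by side; every name and URL below is invented for the sketch:

```python
# Hypothetical example of the metadata an API portal/catalog holds for one
# API -- alongside, but separate from, the traffic itself.
catalog_entry = {
    "name": "payments",
    "version": "v2",
    "interface": {"base_url": "https://api.example.com/payments/v2",
                  "formats": ["JSON"]},
    "security": {"auth": "oauth2", "transport": "TLS"},
    "documentation": "https://developer.example.com/docs/payments",
    "error_behavior": {"429": "quota exceeded", "401": "invalid token"},
    "contract": {"rate_limit_per_hour": 1000, "terms": "/terms"},
}

def within_contract(entry, calls_this_hour):
    """The contract part of the metadata is what the gateway enforces
    against the live traffic."""
    return calls_this_hour <= entry["contract"]["rate_limit_per_hour"]
```

The split matters for the deployment models that follow: the traffic and the enforcement point can live in one place while this catalog lives in another.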

Next you have developer segmentation, which falls into three categories – internal, partner, and public. The last category describes a zero-trust model where anyone could potentially be a developer, whereas the other two categories have varying degrees of trust. In general, internal developers are more trusted than partners or public, but this is not a hard and fast rule.

Armed with this knowledge, let’s explore popular API Management deployment models, in no particular order.

Everything Local

In this model, either software or a gateway that provides API metadata and traffic management are both deployed on-premise. This could either be in your DMZ or inside your firewall. This “everything local” model gives the enterprise the most control with the least amount of risk. This is simply due to the fact that you own and manage the entire API Management platform. The downside to this model can be cost. Owning it outright might cost less in the long run, but the upfront cost of ownership could be higher than other models because your Enterprise needs the requisite servers, software, maintenance, and operational expertise. However, if the API platform drives enough revenue, innovation and cost reductions, the higher total cost of ownership (TCO) can be justified with a quicker return on investment (ROI). This model serves internal developers best and helps large Enterprises that want to start with ownership and complete control of their API management infrastructure that can be eventually pushed out to a SaaS model.

Virtual Private Cloud

In this model, either software or a virtual gateway is deployed in a virtual enterprise network such as an isolated Amazon private cloud or virtual private cloud (VPC). Depending on the configuration, the traffic can either come to the DMZ or go directly to the private cloud. Traffic that comes to the enterprise DMZ can be forwarded to the VPC, and direct communication with the VPC can be enforced based on enterprise governance, risk and security measures. A VPC deployment may be ideal for trusted internal developers and partner developers, and allows the Enterprise to experiment with elasticity. The VPC model with multi-homed infrastructure also allows API metadata to be accessible from the Internet, but with a soft launch rather than a big bang. As partners grow, the infrastructure can scale in the private cloud without the need to advertise the API metadata to every garage developer out there. This option gives the enterprise control similar to the local datacenter deployment, with slightly elevated risk but more elasticity.

Hybrid SaaS

In this model, the API traffic software/gateway is installed on-premise but the developer onboarding and public-facing API catalog (or portal) is deployed in a public SaaS environment. Though the environments are physically separated from each other, they are connected through secure back channels that feed information on a near-real-time basis. Information flows from the API management catalog down to the API traffic enforcement point, including API keys, quota policies and OAuth enforcement. The API traffic manager pushes traffic analytics, statistics, and other pertinent API usage information back to the SaaS public cloud.
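The two back-channel flows described above can be sketched in a few lines of Python. The class and policy shapes here are hypothetical, not Intel’s or Mashery’s actual interfaces:

```python
# Hypothetical sketch of the hybrid SaaS back channels: the SaaS catalog
# pushes keys and quota policies DOWN to the on-premise enforcement point,
# and the gateway pushes usage statistics back UP for analytics.
class OnPremGateway:
    def __init__(self):
        self.quotas = {}  # api_key -> allowed calls (per sync window)
        self.usage = {}   # api_key -> calls observed locally

    def apply_policies(self, policies):
        """Downstream flow: accept key/quota updates from the SaaS catalog."""
        self.quotas.update(policies)

    def handle_call(self, api_key):
        """Enforce policy locally; traffic never leaves the enterprise."""
        if api_key not in self.quotas:
            return 401  # unknown key
        self.usage[api_key] = self.usage.get(api_key, 0) + 1
        if self.usage[api_key] > self.quotas[api_key]:
            return 429  # quota exceeded
        return 200

    def report_usage(self):
        """Upstream flow: analytics pushed back to the SaaS portal."""
        return dict(self.usage)

gw = OnPremGateway()
gw.apply_policies({"key-abc": 2})  # pushed down from the SaaS catalog
statuses = [gw.handle_call("key-abc") for _ in range(3)]
```

Note that only policy and statistics cross the back channel; the API payloads themselves stay inside the enterprise, which is the whole appeal of the model.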

This model provides good developer reach and scale, as developers can interact in a shared cloud instance while traffic keeps flowing through the enterprise components. This model also allows you to have a split cost model: the API metadata is charged as a service (without a heavy initial investment) and the data flow component is a perpetual license, giving the enterprise a mix of both benefits. The API traffic can still come to the enterprise directly without needing to go to the cloud first, which lets the enterprise use existing components, thereby reducing some of the capital expenditure (Capex) costs. This configuration maximizes enterprise control and security and combines that with maximal developer outreach and scale on a utility cost model.

This may seem like the best of both worlds. Why even consider other models? In practice this model may be extended and combined with the others. For example, a developer portal can be added on-premise to better serve internal developers with improved latency and more IT-architect control. It’s not about exclusive choices, but about understanding the benefits of each of the interconnections.

Pure SaaS

This is the full on-demand model. In this configuration, both developers and the API traffic are managed in a multi-tenant SaaS cloud. In the pure SaaS model, API traffic hits the cloud first and is managed against Enterprise policies for quotas, throttling, and authentication/authorization. Analytics are processed in the cloud and the API call is securely routed back down to the Enterprise. The SaaS portal is skinned to conform to the customer’s branding, can integrate web content of the customer’s choosing, and carries a URL of the customer’s choosing, so that as far as developers are aware, the portal is owned and operated by the customer.

Because enterprises use the cloud’s elastic model in this case, both for scaling and for costing, the opex prices can be many times cheaper than the heavy initial investment that might be required in the previous models. In one sense, this is comparing apples and oranges: in the opex model you trade the higher up-front costs of running and maintaining your own servers for a lower monthly fee, but as we mentioned before, there may be reasons for both. A large Enterprise may run a SaaS API program for their marketing department and an internal API management program for their IT department supporting a new mobility strategy. The SaaS API option maximizes developer scale and has the lowest maintenance costs. Plus, enterprises will require fewer resources to run and maintain the deployment. This is the option best suited for instant updates to the API management platform with minimal downtime, and for high performance through CDN caching and managed fail-over and resiliency.
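The capex-versus-opex trade-off lends itself to a simple break-even calculation. The dollar figures below are made up purely for illustration:

```python
# Hypothetical break-even sketch for the capex-vs-opex tradeoff described
# above. All figures are invented for illustration.
import math

def breakeven_months(upfront, monthly_onprem, monthly_saas):
    """Months until owning on-premise (capex) costs less than SaaS (opex)."""
    saved_per_month = monthly_saas - monthly_onprem
    if saved_per_month <= 0:
        return None  # SaaS is never more expensive per month; no break-even
    return math.ceil(upfront / saved_per_month)

# e.g. $120k up front plus $2k/mo to run on-premise, vs $7k/mo for SaaS
months = breakeven_months(upfront=120_000, monthly_onprem=2_000, monthly_saas=7_000)
```

On these invented numbers the on-premise deployment pays for itself after two years, which is exactly the kind of horizon that makes the hybrid and SaaS options attractive for shorter-lived or experimental API programs.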

It is never one size fits all when it comes to API management. Each situation is different based on specific needs. Examine the different deployment options carefully, and see what will work best for you, keeping in mind that these deployment models are NOT mutually exclusive as you can combine them.

When we built our API 2.0 platform by combining Intel and Mashery solutions, we took all of the above into consideration. Not only do we not limit you to a specific deployment model, we will also help you transition between deployment models with ease.

We recently announced the combined API 2.0 platform, which brings our strengths together. Check us out at cloudsecurity.intel.com.

API Management Predictions for 2014
Fri, 13 Dec 2013 | http://blogs.intel.com/application-security/2013/12/13/api-management-predictions-for-2014/

The New Year is fast approaching and it’s time for some wild, speculative predictions on API Management for 2014. As I mentioned in earlier posts, the space has been rapidly maturing over the second half of 2013 with larger vendors such as IBM, Tibco and Intel making big moves. In Q3 2013, Gartner sized the standalone API management market for 2013 at about $100M ($70M in 2012, with 40% expected growth).

Secure your future in 2014 with API Management

The total market size as estimated here may seem small, but this market intersects with the more traditional, and larger ($474M in 2012), SOA governance market because API management products both complement SOA governance and act as substitutes. The growth and success of API management programs, both public and internal, has caused Enterprises to look at how they were handling their existing non-managed SOAP and REST APIs. This second look raises questions about how APIs and interfaces might be better managed in the future, especially when you have to address new demands and channels such as different screen types, developer communities, customers, partners, and devices. So what will 2014 bring? Here are my six API management predictions for 2014.

SOA will reincarnate itself or die a second time – Services are alive and well and service orientation has reached the plateau of productivity, but unless Enterprises find a way to socialize the APIs locked up in their existing SOA governance systems, these incumbent products will be quickly passed over in favor of new ways of sharing APIs – a simple developer portal. This services evangelism race is about marketing interfaces to developers, and at the starting line developer portals are the Tesla Model S while SOA registry/repositories are the U-Haul. One vehicle you want to drive; the other you drive only when you have to. If you don’t believe me, try searching a UDDI directory and compare that experience to this. The prediction here is that traditional SOA governance solutions will evolve to compete on experiences and improved sharing of API metadata, or die trying.

Internal API management will be the silent killer app – We have seen quote after quote this year from Netflix and ProgrammableWeb that internal API management, which translates into a shared services layer for use by internal applications, is one of the largest, albeit hidden, use cases. For newcomers to the space peppered with exciting stories of public and open developer programs, this is a shock, but only because internal API management programs aren’t advertised. The prediction here is that your company is probably already running with hundreds of internal APIs that lack management, are difficult to discover and document, and are hard for internal developers, existing applications and partners to use. API management opportunities are likely under your nose in the form of exposing these APIs outside or helping your organization share data internally more efficiently.

Mobile enablement will lead the way – APIs expose data to mobile devices. Without them, all of the ‘app’ experiences we have on tablets and smartphones would be siloed. Think back to the days of PC computing before modems or the Internet. That is what an app would be like without an API today. Almost every Enterprise we talk to has a mobile strategy that involves moving traditional IT services to an API layer to make new experiences available to their employees, powered by APIs. The prediction here is that, second to internal use, mobile enablement will drive the use of API management and security for external interfaces in 2014.

Security concerns will take center stage – As screen types proliferate, Enterprises will need a strategy and approach for API security that covers more than just enabling “OAuth”. As internal systems participate in API management, a security layer will be needed to decouple perimeter defense, denial of service protection, JSON attack protection, compliance, authentication, authorization and message-level security. Otherwise, the ability of an Enterprise to scale its APIs externally will be limited by the number of security developers and their expertise in the myriad middleware systems and programming languages in use at the Enterprise. The prediction here is that, in order to scale their API management programs, Enterprises will need to implement an API governance and delivery tier, whether on-premise or in the cloud – done with or without a vendor product.

IoT could be a bull in the china shop – IoT promises a vision of wireless sensor networks and low-powered devices becoming part of the Industrial Internet. Once enabled, data from sensors could eventually be exposed through APIs. If sensor data converges on REST (or SOAP) as the final consolidation point, demand for API management could skyrocket beyond what we have seen to date, possibly altering the Gartner market size estimates in a big way.

API management will be a journey – In 2014, API management will evolve to a suite or platform approach rather than point tools, and the vendor or vendors that can most easily fit into a heterogeneous environment with flexible products will be best positioned to compete. Here is what the journey might look like: Enterprises can start anywhere with API management – some may begin with an open developer program, move to internal API management and then eventually deploy API management in a hybrid architecture. Or, some Enterprises may start only with internal API management and devise a business case and marketing plan for long-tail exposure of their products through an open API developer program. Wherever they start, API management will be an enabler of a particular business model, either driving down costs or providing new sources of value.
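The decoupled security tier described in the security prediction can be sketched as a chain of independent checks in front of the backend. The function names and request shape here are hypothetical, meant only to show the layering:

```python
# Hypothetical sketch of a decoupled API security tier: authentication,
# authorization, and payload checks run as separate steps in the gateway
# before traffic ever reaches backend systems.
import json

def authenticate(request, api_keys):
    return request.get("api_key") in api_keys

def authorize(request, grants):
    return request.get("path") in grants.get(request.get("api_key"), ())

def check_payload(request, max_depth=20):
    """Crude JSON attack protection: reject malformed or deeply nested bodies."""
    def depth(node, d=1):
        if isinstance(node, dict):
            return max([depth(v, d + 1) for v in node.values()], default=d)
        if isinstance(node, list):
            return max([depth(v, d + 1) for v in node], default=d)
        return d
    try:
        return depth(json.loads(request.get("body", "null"))) <= max_depth
    except ValueError:
        return False

def gateway(request, api_keys, grants):
    if not authenticate(request, api_keys):
        return 401
    if not authorize(request, grants):
        return 403
    if not check_payload(request):
        return 400
    return 200  # forward to the backend
```

Because each concern is its own step, backend teams never reimplement perimeter logic in their own middleware and languages, which is the scaling point the prediction makes.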

API Management Invasion: SOA At the Gates
Tue, 19 Nov 2013 | http://blogs.intel.com/application-security/2013/11/18/api-management-invasion-soa-at-the-gates/

One of the most surprising moments of my talk at QCon San Francisco last week was when I asked the audience who is ‘doing’ service oriented architecture inside their Enterprise.

API Management Best Practices are being used for Internal API Management

Everyone raised their hand, or nearly everyone. There was no hesitation. The question was clear and the response was swift. Attendees didn’t look around to see if they were the only one riding this ‘dead’ trend. Instinct took over and hands shot up all around. The same question last year at the same conference yielded a positive response from less than half the respondents. Sure, this experiment is anecdotal with a mere slice of the relevant respondents and absolutely no control group, but I think it validates Gartner’s plateau of productivity for services. Productive yes, but maximally productive – no. For internal services to be realized more fully, SOA needs API management.

API Sharing – What’s That?

I talked to attendee after attendee, all with a similar story. The story was how their Enterprise had decomposed its assets into programmable services using SOA and hosted those services on vendor platforms (IBM, Tibco, Microsoft) and/or open source. An informal survey yielded most developers using Spring, Jersey or Ruby on Rails as popular ways to host internal services. While services were plentiful, there was simply no single pane of glass, or single source of truth, where internal developers could go to discover and make use of disparate services.

APIs, which in one sense are the closest thing to any developer’s heart, were also the most elusive. For the day-to-day practitioner, the developer, there is still a significant mental gap between SOAP web services and “APIs.” Many attendees hadn’t heard of solutions for internal SOA governance of the registry/repository ilk, and the distance between SOAP and API management seems like light-years. Public and open API programs didn’t seem to “apply” to the quandary of the day-to-day developer.

Even when valuable functionality is implemented, I heard horror stories of services being implemented two or three times over in different parts of the Enterprise simply because developers didn’t know that the functionality already existed and had no good way to reuse the components. A service hiding behind a WSDL on Microsoft .NET with zero discoverability is like an invisibility cloak on your SOA. The functionality is there, but almost impossible to use unless you are the original developer or an ascetic monk who regularly engages in <wsdl:definitions> tag torture.

It’s time for an API Management invasion. API management has optimized the process for developer on-boarding and fast time to market for services. Developer portals shine in solving this problem. Why? Because they’ve been battle-tested on the open Internet, with hundreds or thousands of “zero-trust” developers. The model is there; it just needs a way to invade the Enterprise. If you are like any of the attendees I talked to last week and already have a SOA that isn’t delivering value, consider how you might apply best practices from the open API movement – an internal developer portal, fast on-boarding processes, interactive documentation, analytics – to evangelize and share your components from within. Invade your SOA with API Management.
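The discoverability gap described above is essentially a missing catalog. Here is a minimal sketch of one, with hypothetical service names, showing the search-before-you-build step that an internal portal makes possible:

```python
# Hypothetical sketch of an internal service catalog: developers search it
# before re-implementing functionality that may already exist elsewhere
# in the Enterprise.
class InternalCatalog:
    def __init__(self):
        self._services = []

    def register(self, name, description, owner):
        """On-board a service so other teams can discover it."""
        self._services.append(
            {"name": name, "description": description, "owner": owner}
        )

    def search(self, term):
        """Case-insensitive match against service names and descriptions."""
        term = term.lower()
        return [
            s["name"]
            for s in self._services
            if term in s["name"].lower() or term in s["description"].lower()
        ]

catalog = InternalCatalog()
catalog.register("customer-lookup", "Fetch customer records by id", "crm-team")
catalog.register("tax-calc", "Compute sales tax for an order", "finance-team")
```

Contrast a keyword search like this with hunting through UDDI entries or raw WSDL files: the catalog is the single pane of glass the attendees were missing.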

API Management or Enablement? You Decide
Fri, 01 Nov 2013 | http://blogs.intel.com/application-security/2013/11/01/api-enablement-is-the-new-api-management/

Last week I was at the HTML5 developer conference and then spent the remainder of the week at the API Strategy Conference in San Francisco. All of the keynotes and presentations were great and I think everyone enjoyed a new, cleaned up Kin Lane, whom we hardly recognize now that he is in a suit. One observation resonating among colleagues and attendees is that the API Management space is maturing. In one sense, Kin’s transition in his outward style from hacker to suit is a metaphor for the space as a whole.

This year’s conference had a great turnout, which shows the space maturing

This was evident at the show as some of the larger companies are entering the space. Enterprises are waking up to the buzzword of API Management and are looking to see how APIs can help them meet business objectives. In other words, companies are looking inward to see how API management applies value to their existing IT systems as well as their external presence.

Convergence comes when Enterprises look at what they have and try to match their architecture and approaches to the “new world” of API management. I recently participated in a webinar with Forrester’s Randy Heffner that tackled the subject of SOA vs API management head on, entitled “Exposing the Beast: Custom API Management for the Enterprise.”

Randy’s position is that SOA and API management are the same but different. I agree, but I will also add that I think the differences are shrinking, day by day. I am seeing trends in both directions. Cross-pollination is happening and I think that next year’s API strategy conference will have even more traditional Enterprise use cases as well as exciting new innovations for public and open APIs.

For instance, we had Daniel Jacobson’s fireside chat where he claimed that “sub-1 percent of our API calls are public.” This is an astounding number, yet Netflix considers what they do API management. Another example is ancestry.com, which described a scenario where internal developers started using the external API just because of the improved developer experience (DX). The external API was so easy to use that internal apps started consuming it. Another example was the excellent presentation given by Keith McFarlane, the CTO of LiveOps, a cloud service provider.

In his talk he showed their strategy of wrapping legacy applications with APIs through a migration strategy that first puts a facade in front of the API and then moves the processing into the facade layer. This type of approach applies the principles of service orientation and data sharing to modernize mature (read: legacy) applications that were created to address a need at the time, but can’t be directly changed without impacting current business operations.

What this means is that API management’s original value of a great developer experience is being transported internal to the organization. It’s APIs everywhere and I have been calling this API Enablement, similar to “SOA Enablement” which used to take center-stage in the old “SOA” days. Internal API Management applies what worked about SOA, but to solve internal data silo problems.

Another trend emerging, especially for public APIs, is the notion of forward integration for developers. In short, why go pick up a pizza when it can be delivered to you? The concept here is the same. Why go to a developer portal to sign up for an API when you can, in principle, be delivered an SDK in your favorite programming language that wraps the API you want to use? Treating the developer as a customer, this provides the developer the most value and can accelerate the adoption and use of APIs. This was a viewpoint expounded by Mehdi Medjaoui of Webshell.io in his presentation, which included a polemic quote: “in a Developer Experience perspective, Developer Portals are a bug, not a feature. But they are here to define a contract between a user/customer and a Provider. It’s a compromise.” This concept goes hand in hand with Pamela Fox’s comment that the best developer experience is no sign-up and no key. It remains to be seen how this concept will play at Enterprises for internal APIs. Faster on-boarding is valuable, but needs to interplay properly with existing entitlements and access control policies.

Overall, the space is growing up fast and I expect to see more Enterprise usage models emerge as we head into 2014, especially around the convergence of SOA and API management into a cohesive solution set for internal, partner or public developers deployed anywhere – on-premise, hybrid, private cloud or in public cloud environments.

API Management for Obamacare and Healthcare.gov
Fri, 25 Oct 2013 | http://blogs.intel.com/application-security/2013/10/25/api-management-for-obamacare-and-healthcare-gov/

It’s not every day that you hear about a software project on public media, but NPR and other public outlets are covering the troubled rollout of the Healthcare.gov website nearly hourly. As a software professional, the problems I was hearing about are common in a large software project, where multiple pieces of the final product are built independently and then integrated together at the end.

We are in the Post-Website Era. APIs Can Help.

The practical problem here is that it is too hard for disparate contractors working on just their piece to understand how the whole will fit together. In fact, the nature of computing and programming relies on this to some extent: treating individual components as modules assumes a certain amount of ignorance about how inputs to one particular module are derived and where outputs are used in other parts of the system. This means developers can focus on making sure their piece meets the appropriate functional and non-functional requirements, which makes them “cogs in the machine.” All of them are performing essential functions, but can’t see the forest for the trees. They can’t step outside of their own cog.

Multiple contractors can be a cog in the machine

This type of result can actually be predicted. In 1968, computer scientist Melvin Conway described an assertion later known as Conway’s Law: “organizations which design systems … are constrained to produce designs which are copies of the communication structures of these organizations.”

This implies that the resulting software system will inherit the communication (or non-communication) properties of the organization that designed it. In this case, with dozens of private contractors communicating inadequately, you end up with a system not properly tested end-to-end, which is exactly what happened here. Further, testing that occurs only ‘at the end’ of a software project is reminiscent of a waterfall software model, which is great for designing nuclear missiles, but extremely bad for designing a dynamic, highly scalable software system with heavy user-interface and usability requirements like Healthcare.gov.

So what happened with Healthcare.gov? Reuters’ technology review suggests that the core design problem with the Healthcare.gov website was not the scalability of the server-side architecture, but the sheer amount of client logic pushed down to the browser, citing 92 separate files and plugins, including over 50 JavaScript files. By design, this means that your experience on Healthcare.gov is not just a function of how the website was designed, but also of client processor power, memory and other client-side factors, not to mention your available network bandwidth and round-trip latency. In short, the current architecture of the website appears to place too much work, and consequently blame, on the client. This also means the website may work better for those with beefier client systems.

Before the public fiasco, I mused that an Obamacare API and an API Management architecture might be a good thing based on lowered expectations of a smooth rollout of Healthcare.gov. Now I think it’s more than a good thing, API Management just might be a savior. How? Rather than build a user interface, the government should have made an API and had the contractors compete to build the best interface. Here, the API could be a RESTful API launched as an open API allowing anyone to take a crack at using it to make the best possible experience for the user. This architecture cleanly separates the concerns – the government runs the server side and manages the API, data and transactional services and someone else writes the client piece.

For the uninitiated, the API here is a programming interface that represents just the server side of the Healthcare.gov functionality. The API would consist of a set of interfaces that provide all of the necessary data and transaction methods to allow a client consumer to purchase healthcare through the exchange. It could use well-established, highly scalable technologies such as an API Management Gateway for handling traffic and an API Catalog and Developer on-boarding portal for on-boarding public and internal developers. For reference, Intel’s API gateway can handle over 18 billion calls per month, per node. Moreover, the current technology offerings for a developer catalog and portal would effectively allow internal developers working at the government to compete with external developers to build the best user interface.
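To make the separation of concerns concrete, here is a minimal sketch of what a slice of such a server-side API could look like. The endpoints and plan data are invented for illustration; a real exchange API would cover eligibility, enrollment and much more:

```python
# Hypothetical sketch of the proposed split: the government runs only the
# server-side resources, and any number of competing clients render them.
PLANS = [
    {"id": "p1", "name": "Bronze", "premium": 250},
    {"id": "p2", "name": "Silver", "premium": 350},
]

def api(method, path):
    """Tiny in-process router standing in for the RESTful server side."""
    if method == "GET" and path == "/plans":
        return 200, PLANS
    if method == "GET" and path.startswith("/plans/"):
        plan_id = path.rsplit("/", 1)[1]
        for plan in PLANS:
            if plan["id"] == plan_id:
                return 200, plan
        return 404, {"error": "no such plan"}
    return 405, {"error": "unsupported"}

# A client -- browser, HTML5 app, or contractor-built UI -- consumes the
# same resources and renders them however it likes:
status, plans = api("GET", "/plans")
listing = [f"{p['name']}: ${p['premium']}/mo" for p in plans]
```

The client half never touches the data layer directly, so a slow or buggy interface can be replaced without touching the exchange itself.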

The best part about this approach is that the government would not have to worry about the user interface and client experience. This could be left up to people who know how to design great user interfaces and would open the way to making the Healthcare.gov application available not just through a browser, but with an HTML5 or native mobile application. This is a true win-win. The government won’t be blamed for a bad website and consumers get the best possible experience.

How APIs Fuel Innovation
Mon, 21 Oct 2013 | http://blogs.intel.com/application-security/2013/10/21/how-apis-fuel-innovation/

There has been so much talk about APIs and how they add additional revenue channels, create brand new partnerships, allow business partners to integrate with ease, and help with promoting your brand. But an important and overlooked aspect, which happens to be a byproduct of this new paradigm shift, is the faster innovation channel they provide. Yes, Mobile First and the API economy are enabled by APIs.

One of the major issues of B2B integration and partner/community-based application development in the past was that we gave developers not only specific, limited building blocks but also a set of very rigid interfaces. Combined with tight governance (GRC), security and unreasonable restrictions, this essentially gave the developer community a steel cage to build things inside. There was no leeway, no room for imagination, and thinking out of the box was certainly verboten. The complex interfaces that were handed over to the partner developers and the fixed data exchange models required someone who understood technology deeply and demanded very sophisticated integration tools to build these B2B data exchanges. It took months, sometimes years, to build them. When the interface, data model, or the backend system changed, we had to do it all over again.

I’m reminded of an old pirate movie I saw decades ago. In it, someone loaded their gun with gun powder and shot, only to miss the target. That person followed up by trying to reload the gun with gun powder (after uttering an expletive, of course) before a pirate charged him with a sword. In a similar fashion, if we missed the target/goal with what we built, we had to reload the work with our “gun powder”. The problem with that model is that in most cases, we rebuilt from end to end which cost us too much money, too many resources, and took too much time while tying up our entire developer force.

Then came the revolution – the API revolution. This was specifically created to address the issues that businesses, or entities, faced when working with developer communities, both internal and external (B2D – Business2Developer, as it is sometimes called). In this model, we are opening up the proverbial steel cage (or shackles). We provide very standardized API interfaces that have well-defined contracts, are version-controlled and have carefully managed life cycles. Yes, the APIs are most times very simple – especially if they are REST-based.

With that, we did not limit this to just our developer partners, but rather opened it up to a society of capable developers who were hungry to make new things happen. By providing a cleaner entry point into our assets (data, process, systems, platforms, etc.) we asked them to dream up and build something unique, showing their differentiation and their value add to us and to our customers. This fueled innovation from the bright minds who wanted to prove their half of the finished product was better than that of the thousands of other developers trying to build something similar from the same baseline. This is where innovation happened. I deal with major corporate customers who are rushing to expose their assets to these bright, out-of-the-box thinkers, and sure enough, those brilliant minds rush to build something unique, adding value to each other in the process.

Let those bright minds innovate, build, test, deploy, and bring you additional revenue while you enforce, control, and protect your assets in the right way. The key is not only to create APIs out of your existing systems but to provide APIs that are enterprise grade. This includes security, governance, and scalability while still providing the usability the users are looking for. Check out how Intel API management solutions can help you with creating, protecting, and providing enterprise-grade APIs here.

As innovation and integration continue to improve, internal developers are now demanding that they participate in the innovation spiral along with the external developers. The data and asset holders are excited about that because the internal developers can take this up a notch. They can create innovative internal solutions using the secret sauce that may not be sharable with external developers, or partners, due to the sensitivity of the assets. Now I see that a lot of companies are creating two different channels, one for external developers and one for internal developers. With this there is a new need to have an external-facing developer API portal and internal-facing API catalog systems that expose a little more than the external portal. Though a lot of companies claim that they will cut off the external portal and developers, I don’t see that happening in the near future. On the contrary, they both need to work in tandem in a harmonized way to get the full potential out of the assets. We at Intel are the very first ones to realize that and have developed a solution that will offer both external and internal API developers a set of rich enablement functions. Check out the Intel API Management solutions portal for more information.

One final point: people often seem to mix up open interfaces and flexible data access with open source. While it is possible to build one on top of the other, care should be exercised when you build all of this on top of an open stack, or you may open yourself up to insecure or non-enterprise-grade solutions.

As my dad used to tell me many times, “Tell them what to do, not how to do it. People will surprise you with their creation.” It is time to open up your locked assets and let the creative developer community put their magic dust to work to create multiple beautiful stories out of it.

I will be speaking on this very topic at Defrag in Broomfield on Nov 4-6. Please stop by my API speaking session if you are attending this event or anywhere in CO on those days. It is worth it.

]]>http://blogs.intel.com/application-security/2013/10/21/how-apis-fuel-innovation/feed/0API Management as a Platformhttp://blogs.intel.com/application-security/2013/10/18/api-management-as-a-platform/
http://blogs.intel.com/application-security/2013/10/18/api-management-as-a-platform/#commentsFri, 18 Oct 2013 22:55:43 +0000http://blogs.intel.com/application-security/?p=2134Why should you think of API management as a platform? Because it’s becoming one of the most prodigious and important aspects of how Enterprises of all sizes participate in the digital economy.Keeping in line with the standard platform technology definition, … Read more >

]]>Why should you think of API management as a platform? Because it’s becoming one of the most prodigious and important aspects of how Enterprises of all sizes participate in the digital economy. Keeping in line with the standard platform technology definition, an API management platform supports the deployment of Enterprise APIs without the introduction and expense of a new process or technology. A platform allows the management of APIs as a first-class citizen for the Enterprise.

In J.K. Rowling’s novel “Harry Potter”, choosing the right platform makes all the difference.

To date, many of the discussions around API management from vendors and analysts alike have been very technology or implementation focused. This is understandable as APIs tend to appeal to a technical audience. The details are great but sometimes it is worthwhile to step back and look at general capabilities.

If we take the wider view, what sort of capabilities or functional modules should an API Management platform have?

Gartner’s Eric Knipp released new research last week that begins to define API management as a complete platform. The research is entitled Run and Evolve a Great Web API with API Management Capabilities. Not everyone will have a Gartner subscription, but I think this research will be one of the most important for Enterprises looking to deploy API management due to the breadth of material it covers.

In this research note, Eric is one of the first analysts to describe a comprehensive set of capabilities for API Management.

API Management Platform Capabilities

He breaks the topic into four categories which he calls (i) enable developers, (ii) manage the API life cycle, (iii) communicate securely, reliably, and flexibly, and (iv) measure and improve business value.

Enabling developers includes all aspects of managing API metadata, the API catalog, and community management, and also covers interesting capabilities such as developer API customization – an advanced concept that really puts the developer in control of the API. Here the developer can morph the interface to their liking, allowing the consumer to effectively participate in the interface design. It really puts the developer at the center of how data is accessed. This category also expands the discussion to include the notion of SDKs and sample code that developers can directly incorporate, moving one step beyond just providing interface definitions.

Managing the API Life cycle includes how APIs are published, how versioning is handled as well as changes and issue tracking. For example, an API management platform needs to have CRM capabilities and ticket tracking, truly treating the developers as customers.

Communicate Securely, Reliably, and Flexibly includes all aspects of surfacing APIs from legacy systems, scaling traffic, handling authentication, SLAs, building service orchestrations, and providing threat defense and data privacy. This is the largest category in terms of the sheer number of capabilities and approximates the “runtime” or “traffic” portions of moving data in and out of interfaces.

Measure and Improve Business Value includes all the capabilities needed to relate APIs to the business as well as measuring uptime, activity, user auditing, contracts and terms of service, and SLA monitoring. This generic set of capabilities answers the questions: Is my API providing value? Is it up and running? How are business relationships maintained?

One of the merits of this article is that it does a great job of outlining precise requirements without diving into specific implementation choices. As with most things that involve software and technology, implementations can have different physical instantiations but still support a consistent set of common capabilities. Talking in capabilities allows decision makers to stay out of technology “rat holes” that can color and bias business decisions.

Long Live APIs

This research note advances the discussion around API management by widening its scope and purpose, moving it from a technology discussion to a capability and platform discussion. Early in the article Eric widens the definition of APIs.

He explicitly covers messaging APIs, SOAP APIs and custom APIs in addition to RESTful APIs. I think this move is absolutely correct. Not only does it more closely approach the original definition of the term, but it matches well with the idea of subsuming the older SOA terminology to rally under a new banner of APIs, similar to a previous article I wrote on the subject, Long Live API Management.

We are only killing the name, not the act of service enablement. Eric’s article seems to represent APIs as a big concept, including the full suite of programmatic access whether realized as REST, JSON, XML/SOAP, XML-RPC, Message-Oriented Middleware (MOM), FTP and file protocols, as well as (correctly) broadening the definition to include software development kits and sample code. One can even go as far as to say any programmatic interface is an API – and voilà, APIs are regaining their original definition as a true application programming interface. The lesson here is to ditch the jargon and apply what works for the Enterprise.

Eric also makes some statements around APIs as a universal tunnel to the Enterprise and correctly describes them as follows: “As a programmatic channel into your enterprise, it is critical that you identify and address any attacks or misuse of your API”.

This critical point highlights the importance of APIs moving forward: if businesses like Expedia are doing 80% of their revenue through APIs, then APIs – and by implication, the apps that send and receive data over this channel – are the front door to your Enterprise, not necessarily the website.

Attackers always look for the weakest link, and APIs are largely wide-open at this point. Many of the existing 30,000+ APIs in the wild have been optimized for rapid adoption and bolstering a developer ecosystem, not for protecting Enterprise assets.

APIs and Data Protection

Eric also mentions encryption under the data privacy category and talks about both transport-level security and message-level security. To expand the discussion here we can also add things like JSON message-level security, format-preserving encryption and even the “ancient” WS-Security/XML Security protection mechanisms. I was also excited to see the inclusion of data masking. Eric describes this as two-way, which I think is the correct approach, though my terminology would be different: we use the term tokenization here, but the concept is the same. The distinctions we use in our product line include redaction (for one-way removal of sensitive information) and tokenization, which indicates a reversible mechanism for replacing plaintext with a surrogate.
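The redaction/tokenization distinction can be sketched in a few lines of Python. The in-memory "vault" and the function names are hypothetical; a real deployment would keep the mapping in a hardened token vault behind the gateway.

```python
import secrets

# Illustrative token vault: maps surrogate tokens back to plaintext.
_VAULT = {}

def redact(value):
    # One-way removal: the original can never be recovered from the output
    return "*" * len(value)

def tokenize(value):
    # Reversible: plaintext is swapped for a random surrogate token
    token = secrets.token_hex(8)
    _VAULT[token] = value
    return token

def detokenize(token):
    # Only a party with access to the vault can reverse the mapping
    return _VAULT[token]
```

The key property is that `redact` destroys information while `tokenize`/`detokenize` merely relocate it to a controlled store.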

I can’t reproduce Eric’s entire article here, but it’s definitely worth a read and matches what we are hearing from Enterprises today – it’s about understanding and supporting the breadth of capabilities.

]]>http://blogs.intel.com/application-security/2013/10/18/api-management-as-a-platform/feed/0API Management; doing APIs now or doing them right?http://blogs.intel.com/application-security/2013/10/17/api-management-surfacing/
http://blogs.intel.com/application-security/2013/10/17/api-management-surfacing/#commentsThu, 17 Oct 2013 17:10:38 +0000http://blogs.intel.com/application-security/?p=2096Intel has recently been gaining some chops in API Management. Expressway API Manager has been out a while now and we acquired Mashery and Aepona this year. Mashery you will (or should) know but Aepona, you may not have heard of. … Read more >

Intel has recently been gaining some chops in API Management. Expressway API Manager has been out a while now, and we acquired Mashery and Aepona this year. Mashery you will (or should) know, but Aepona you may not have heard of. They’re likely behind many of the telco or utility services you use. They build and support the API platforms needed to run, and charge for, the business.

I’m between conferences at the moment. I gave several talks at Nordic APIs on API management and protection a few weeks ago and will be doing similar at Apps World next week. One concern that keeps being raised, particularly by developers within enterprises, is that they have a wealth of internal services, data sources and ideas to make money from exposing data, but no convenient way to expose them as a RESTful service that’s likely to be used by either partners or public developers. As marketing, IT or whichever department kicks off this idea to externalise data, they hit the barrier of what legacy infrastructure they have compared to what they need.

Icebergs, REST on the surface but who cares what lurks underneath.

As I go around companies typically I will see that they have data accessible via SOA, messaging middleware or SQL, all with differing authentication and identity handling but which needs to be brought together to form a coherent API. This is where the concept of surfacing comes in.

sur·fac·ing [sur-fuh-sing] noun

1. the action or process of giving a finished surface to something.
2. the material with which something is surfaced.
3. the act or an instance of rising to the surface of a body of water.

Until recently, surfacing may have meant two dodgy-looking blokes on your doorstep saying they have a truckload of tarmac round the corner they could get on your driveway for a few quid. From the definition above, I’m referring to the first, of course: “The process of giving a finished RESTful surface to your existing infrastructure”. This process happens in an API gateway, usually in your DMZ, which supports data transformation, protocol mediation and identity mediation. This is so that any number of systems under the surface present a uniform set of interactions above the surface. In other words, API Management.
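As a toy illustration of surfacing, here is a made-up legacy XML reply being mediated into the JSON a RESTful caller expects. The element names and field mapping are invented for the example; a gateway would express this as policy rather than code.

```python
import json
import xml.etree.ElementTree as ET

# Hypothetical legacy backend reply (SOAP-style XML body)
LEGACY_REPLY = """
<GetAccountResponse>
  <AccountId>42</AccountId>
  <Status>active</Status>
</GetAccountResponse>
"""

def surface_to_json(xml_reply):
    # Parse the legacy format and map its field names onto the
    # uniform RESTful contract presented "above the surface"
    root = ET.fromstring(xml_reply)
    return json.dumps({
        "account_id": int(root.findtext("AccountId")),
        "status": root.findtext("Status"),
    })
```

Whatever lurks underneath – SOA, messaging middleware or SQL – the caller only ever sees the uniform JSON shape.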

Surfacing with a gateway then becomes a method by which you scale out both the number of APIs and the messaging throughput. Your gateway should talk to the portal which advertises and defines your API, allowing you to grow, monitor and charge for more usage. It also becomes the focus for new backend integrations required for new APIs. What you should be looking to do is eliminate having a spaghetti of code and custom adaptors built out of Apache Camel or your ESB of choice.

You can liken it to an iceberg: there’s a lot lurking underneath that’s brought together at request time, but all the developer or the app user sees are the nice white peaks of RESTful APIs and functional, fast apps that are quick to market. Don’t carry the metaphor too far, though – icebergs have some downsides, so don’t watch this video…

You’ll remember my title, “Doing APIs now or doing them right”. Is there some magic architecture to marry up the services you have with the way they should be? Probably not – but the title poses a false choice: doing APIs now is doing them right. There is an opportunity cost involved in delaying what you want to do, because your competitors are already pressing ahead. You may encounter people in the organisation who have sunk cost into a half-complete SOA architecture or whatever you have. Rolling up your collective sleeves and getting involved in changing the whole IT architecture is the tail wagging the dog when you just want to stand up some APIs for your first few projects. An API gateway gives you the freedom and versatility to bypass architectural difficulties and get to market quickly for once.

To learn more about API management you can visit Intel’s resources, and we can also provide some free advice to kick-start your API project through talking to Kin Lane, the API Evangelist.

]]>http://blogs.intel.com/application-security/2013/10/17/api-management-surfacing/feed/0Flexible Token Authentication Empowering Identity and Access Managementhttp://blogs.intel.com/application-security/2013/10/14/flexible-token-authentication/
http://blogs.intel.com/application-security/2013/10/14/flexible-token-authentication/#commentsTue, 15 Oct 2013 01:29:50 +0000http://blogs.intel.com/application-security/?p=1877POC Requirements – Token Authentication and Mapping Often times in sales engineering I get “tunnel-vision”, focusing so much of my efforts on just meeting the requirements of a proof-of-concept (POC) that I fail to fully appreciate the true value Expressway … Read more >

Oftentimes in sales engineering I get “tunnel vision”, focusing so much of my effort on just meeting the requirements of a proof-of-concept (POC) that I fail to fully appreciate the true value Expressway Service Gateway provides Intel’s customers. Take a POC I recently completed; some of the functional items on my list to demonstrate included:

Although I successfully demonstrated and checked off all the functional items on my list, these four had special significance for my customer. And like the ingredients in my Mom’s famous brownie recipe – each is “sweet” outright, but baking them together yields something truly awesome. Stepping back, I realized the same applied to these key POC functional items: combining them within Expressway created something of immense value to my customer.

Security Token Service

A Security Token Service (STS) is a software-based provider primarily responsible for authenticating clients and issuing security tokens. An STS removes the burden of authenticating clients from applications – simply redirecting to the STS not only performs authentication but also provides a trusted security token, allowing applications to easily consume protected services, likely part of a federated model. Not surprisingly, Expressway Service Gateway was chosen to facilitate this, considering its application security capabilities, including:

Identity Extraction Capabilities
Multiple ways to extract identities, including: Single Sign-On from cookie, name from SAML subject, principal name from Kerberos, name or token from OAuth authorization request, HTTP basic authorization, any HTTP metadata (query, headers, etc.), and WS-Security.
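As a rough sketch of identity extraction, here are two of the credential types listed above handled in plain Python: HTTP Basic authorization and an OAuth bearer token. This is header parsing only; validating the extracted identity against an identity store is deliberately out of scope, and the return shape is invented for the example.

```python
import base64

def extract_identity(headers):
    # Inspect the Authorization header and normalize whichever
    # credential type is present into a common identity record
    auth = headers.get("Authorization", "")
    if auth.startswith("Basic "):
        # Basic <base64(user:password)> -> extract the user name
        decoded = base64.b64decode(auth[6:]).decode()
        user, _, _pwd = decoded.partition(":")
        return {"scheme": "basic", "subject": user}
    if auth.startswith("Bearer "):
        # OAuth bearer token: the token itself identifies the grant
        return {"scheme": "oauth", "subject": auth[7:]}
    return None  # no recognizable credential
```

A gateway does the same normalization across many more schemes (SAML subjects, Kerberos principals, cookies, WS-Security), so downstream policy can reason about one identity record.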

Summary

The POC was a complete success – the customer is now employing Expressway for multiple infrastructure needs, particularly as an STS. Expressway provides them increased flexibility and security while opening up tremendous opportunities for their infrastructure roadmap. Identity and Access Management initiatives can now be broken into phases thanks in part to Expressway’s integration with contemporary and emerging technologies – eliminating the necessity to do too much, too soon.

]]>http://blogs.intel.com/application-security/2013/10/14/flexible-token-authentication/feed/0Exposing the Beast: Customized API Managementhttp://blogs.intel.com/application-security/2013/10/14/exposing-the-beast-customized-api-management/
http://blogs.intel.com/application-security/2013/10/14/exposing-the-beast-customized-api-management/#commentsMon, 14 Oct 2013 15:00:37 +0000http://blogs.intel.com/application-security/?p=2018What do Dr. Henry McCoy and large Enterprises have in common? They can both be brilliant at what they do and be a veritable beast to manage. Enterprise complexity and legacy debt can hamstring an organization trying to move to … Read more >

Dr. Henry McCoy becomes the beast and he is smarter with API Management. Original artwork used by permission and credited to Mike Monroe @ http://blog.mike-monroe.com

What do Dr. Henry McCoy and large Enterprises have in common? They can both be brilliant at what they do and be a veritable beast to manage.

Enterprise complexity and legacy debt can hamstring an organization trying to move to APIs, but the good news is that the more services and applications an Enterprise has, the more potential they can realize through exposing these services both inside and outside the organization.

How this gets done won’t always fit like a glove, but will require some customization, especially when use cases become more complex and involve multiple developer communities, regulated industries, and hybrid data centers. I’m excited to be participating in a webinar with Forrester’s Randy Heffner on October 22nd @10AM entitled, “Exposing the Beast: Custom API Management for the Enterprise.” You can register for the webinar at this link here.

We’ll be touching on a range of subjects, including the distinctions between SOA governance and API Management, the rise and growth of internal API management, the importance of compliance and how it intersects with APIs, as well as API sharing. API Management in this context is not a “one solution fits all” approach – it requires a loose coupling of connected systems and a customized approach to manage APIs at Enterprise scale.

]]>http://blogs.intel.com/application-security/2013/10/14/exposing-the-beast-customized-api-management/feed/0Instant API Management with Intel and Amazonhttp://blogs.intel.com/application-security/2013/10/11/instant-api-management-with-intel-and-amazon/
http://blogs.intel.com/application-security/2013/10/11/instant-api-management-with-intel-and-amazon/#commentsFri, 11 Oct 2013 20:10:23 +0000http://blogs.intel.com/application-security/?p=1908Did you know you can get started with Intel Expressway API Manager on AWS Marketplace today with only a few clicks? You can have instant API Management and enhanced EC2 security for applications and services exposed from public or hybrid … Read more >

The offering is available from Amazon to anyone with a valid Amazon account. If you haven’t tried Amazon’s AWS marketplace, you can have an instance of the gateway up and running in a few minutes.

Are you looking to mobile-enable an Enterprise application and need to expose an API? Are you looking to try a flexible DevOps model for cloud bursting? Are you looking to provide a centralized API governance, security and throttling layer for Enterprise applications? If so, Expressway can help.

Here are four things you can do today with Intel Expressway on Amazon AWS Marketplace:

1. Publish a Mobile Ready API

Here is the scenario – you have data you want to make available to an HTML5 mobile application, but it’s split up among different application services in your Enterprise environment. Each service sends its reply in a different data format – some are XML, some are JSON, and some are text – and because these services weren’t designed with mobile in mind, they send too much information. What you want is a mashed-up, filtered subset. Worse, authentication for this data is also based on different types of credentials, such as a username/password or a Kerberos ticket.

You’ve considered custom development work to bring together these services to expose an API, and you’ve also considered trying to parse, transform and filter the data on the client, but you feel the app experience would be negatively impacted due to client performance. Each alternative is wrought with development costs. What to do? Expose the API through a gateway.
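The mash-up-and-filter step the gateway performs might look like the following sketch, with two hypothetical backend replies (one XML profile service, one JSON order service) and invented field names.

```python
import json
import xml.etree.ElementTree as ET

def mashup(xml_profile, json_orders):
    # Parse two heterogeneous backend replies...
    profile = ET.fromstring(xml_profile)
    orders = json.loads(json_orders)
    # ...and return only the mobile-ready subset, merged into one shape
    return {
        "name": profile.findtext("Name"),
        "open_orders": [o["id"] for o in orders if o["status"] == "open"],
    }
```

The client receives one small, pre-filtered payload instead of parsing and reconciling formats itself, which is exactly the app-experience concern raised above.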

To test it out, you can load sample responses from your services directly into Expressway, and design a proof-of-concept in the cloud. When you are ready to move to production you can either migrate your applications to the public cloud or stand up a gateway in your internal network. To facilitate testing, you can also rely on mock-up services such as mocky.

2. API Data Protection & Compliance

Here is the scenario: Suppose you are using Amazon’s DynamoDB, a standard SQL database or another cloud-hosted Big Data solution. Your application collects personally identifiable information (PII) or PCI information for CRM or lead management and communicates using APIs.

You want to ensure this data remains protected in the cloud and, more importantly, when it is made available to API clients, such as a smartphone or a partner web service, the caller should only be able to view protected information if they have the proper authorization.

You need a way to enforce fine-grained compliance on API content. Worse, you want to enforce security in the same way, no matter what persistence technology you may switch to in the future. It’s SQL today, but it could be NoSQL tomorrow. What to do? Protect API content using a gateway.

API Management and Compliance with DynamoDB

In the previous example, sensitive information such as a social security number, driver’s license, credit card, or bank account information can be encrypted and protected before it is stored in the cloud. Data protection is enabled by a PII protection policy in Expressway and inserted directly into the persistence layer using RESTful APIs or through the Expressway Java extensibility framework.

Further, data can only be decrypted by the gateway layer and access is enforced using strong identity management, OAuth 2, X.509 certificates or 2-factor authentication, adding an additional layer of compliance to API content.
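The authorization-gated release of protected fields can be sketched as follows. The "pii:read" scope name and the field list are hypothetical, and the storage-side encryption step is elided to keep the example short; the point is that the gateway, not the client, decides what is revealed.

```python
# Fields the (hypothetical) PII policy marks as protected
PROTECTED_FIELDS = {"ssn", "credit_card"}

def filter_record(record, scopes):
    # Release protected fields only to callers holding the right scope;
    # everyone else sees a masked value
    out = {}
    for field, value in record.items():
        if field in PROTECTED_FIELDS and "pii:read" not in scopes:
            out[field] = "***"
        else:
            out[field] = value
    return out
```

The same enforcement logic works regardless of whether the persistence layer is SQL or NoSQL, which is the portability property the scenario calls for.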

3. Extend Your ‘API Network’

Here is the scenario: You want to enforce API governance but don’t want to migrate all of your back-end applications to the public cloud. Or perhaps you want to migrate some back-end services, but not all. On top of this, you have to consider an efficient network architecture, which means network latency is an important factor. What to do?

Given Amazon’s ability to handle multi-homed EC2 instances as well as Amazon’s VPC feature, you can create a hybrid API architecture with improved network latency and improved time to market.

In the previous diagram, the network design allows the Enterprise to immediately expose APIs with minimal application changes. The gateway is deployed in a VPC in a multi-homed configuration with a public IP address and an IP address in a private subnet connected through a hardware VPN. This means services can stay where they are.

The gateway acts as the enforcement and API exposure point (see the first use case), receiving API requests from mobile devices on the Internet that eventually route to services in the Enterprise Datacenter or in the Amazon Virtual Private Cloud (VPC).

This network architecture also provides improved client latency compared to a pure on-premise approach, as additional network hops can be skipped compared to a case where the Enterprise has to expose an API through its DMZ.

4. Design an API Governance Layer

Here is the scenario: Suppose you have exposed a few APIs to support your mobile and partner strategy with some success. Perhaps you’ve exposed APIs directly by the smart use of open source frameworks such as Jersey, Microsoft, Node.js, RESTlet, or Ruby on Rails.

You’ve been successful with a handful of developers and have implemented security, throttling, authentication and API design with careful and deliberate project management. Now you need to scale from a handful of APIs to hundreds or even more. You need a consistent way to expose APIs with policies, not code. You need screaming performance and scale with built-in perimeter defense and application level denial of service protection.

Also, you need to ensure your APIs are available to the right developers at the right time to drive value throughout your organization. You also need API sharing. What to do? Design a scalable API Governance layer.
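Throttling of the kind a governance layer applies is commonly implemented as a token bucket per API key. A minimal sketch, with illustrative capacity and refill values; a gateway expresses this as policy, per plan or per developer.

```python
import time

class TokenBucket:
    # Each request spends one token; tokens refill continuously,
    # so short bursts up to `capacity` are allowed while the
    # sustained rate is bounded by `refill_per_sec`.
    def __init__(self, capacity, refill_per_sec):
        self.capacity = capacity
        self.tokens = float(capacity)
        self.refill = refill_per_sec
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # caller is throttled
```

Moving this out of application code and into the gateway is what lets you scale from a handful of APIs to hundreds with policies, not code.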

Here you can use the gateway and the examples described above as a design pattern for an API governance layer, which can be scaled in the Amazon cloud or in a hybrid architecture.

This means the API definitions, associated policies and the sharing of API definitions to developers is all handled at the gateway layer. You design your API interfaces independent of how the APIs are implemented. Today it could be a Microsoft .NET web service and tomorrow it could be Node.js. All of the security policies, API plans, authentication, data mediation and API sharing options are managed at the gateway. The gateway layer becomes the API consolidation point.

Intel provides both on-premise, partner and SaaS based API sharing options depending on your level of scale, control and security. The API sharing layer is available as an add-on component to the gateway, please contact us for more information.

It should be noted that some of the features above are enabled through the use of our visual policy editor, which is available at no cost to marketplace subscribers while others use the built-in policy editor available on the web interface.

Features that require the editor include Websockets, OAuth enablement, advanced protocol and data format mediation, database support & enterprise identity management. Please contact support to request the policy editor which is available at no extra charge to valid subscribers.

]]>http://blogs.intel.com/application-security/2013/10/11/instant-api-management-with-intel-and-amazon/feed/0API Management for the Internet of Things (IoT)http://blogs.intel.com/application-security/2013/10/08/api-management-for-the-internet-of-things-iot/
http://blogs.intel.com/application-security/2013/10/08/api-management-for-the-internet-of-things-iot/#commentsTue, 08 Oct 2013 18:43:18 +0000http://blogs.intel.com/application-security/?p=1838A fundamental premise of the Internet of Things (IoT) is the recognition of a certain human weakness. Humans are poor data collectors. We are poor fact collectors. We are poor sensors. Our senses fail us, we make mistakes, and we … Read more >

A fundamental premise of the Internet of Things (IoT) is the recognition of a certain human weakness. Humans are poor data collectors. We are poor fact collectors. We are poor sensors. Our senses fail us, we make mistakes, and we misremember. Research in psychology shows that our brains fill in fragmented memories with phantom events and facts when we can’t remember what really happened. Far too often facts are colored by our own value judgments and biases. Worse, we have limited time to focus on fact collecting, so what we get in sum total is a weak representation of the information available to us, especially information in digital form on the Web and throughout the Internet. As good as the web is, it’s weak information compared to the potential. Why is it weak? It’s weak because we typed most of it in.

But I’m a smart thing that talks APIs! (Image courtesy of JD Hancock under the Creative Commons license)

Human mediated information is a slow-path recipe for inefficiency, at least compared to the promise of the Internet of Things (IoT) where full-time Internet-linked sensors and devices with perfect accuracy and a tireless work ethic make a far better substitute. Sensing data timely and accurately, however, is only half of the battle. Data needs to make it into existing back-end systems, fused with other data sources, analytics and mobile devices and be made available to partners, customers and employees.

Even more importantly, sensed data needs to arrive with the appropriate contextual information and filtering. If every “thing” out there has a sensor and is providing data with regular frequency, it’s not feasible to process data from all sensors at all times, so we need contextual filtering and a way to direct attention to the relevant data we care about, as well as to behave properly in the face of failures and spontaneous reconfiguration of the sensor node network. In short, we need things-as-a-service (TaaS). How can we get there? API Management.
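Contextual filtering might be as simple as forwarding only the readings that are relevant in context. A sketch with an invented "safe band" threshold for temperature sensors:

```python
# Hypothetical context: temperatures inside this band are routine
# and need not be pushed upstream; only excursions become events.
SAFE_LOW, SAFE_HIGH = 10.0, 35.0

def relevant_events(readings):
    # readings: iterable of (sensor_id, celsius) pairs
    return [(sid, c) for sid, c in readings
            if not (SAFE_LOW <= c <= SAFE_HIGH)]
```

Even this trivial filter turns a constant stream of chatter into a handful of attention-worthy events, which is the difference between raw sensing and things-as-a-service.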

API Management Holds the Key

This is precisely where Web APIs, API Management and a RESTful architecture provide dramatic value. As APIs have become ubiquitous, IoT deployments in a wide range of market segments can benefit from this proven architecture. APIs lower the barrier to entry for connectedness and enable secure communication from sensor nodes to applications living just about anywhere – in any cloud, any datacenter or accessible from API-enabled devices. Moreover, RESTful communication has well-defined security patterns for bulletproof API management, including authentication, authorization, leak protection, compliance and data security. If you can get your “thing” to talk APIs you’ve got a back-stage pass to the party, so to speak.

API Devices or Sensors

In the context of APIs and IoT, it makes sense to talk about the distinction between sensors and devices. First off, everything is a thing, but some of the members of the Internet of Things are already API-enabled and some are not.

Some things can already provide contextual information from their environment and some cannot. Most notably, HTML5 and native smartphones and tablets are API-enabled, whereas a temperature sensor on a factory floor connected via a wireless sensor network (WSN) is not.

If you have sensor nodes participating in a flat or two-tier sensor network, you aren’t really doing IoT unless you can get your data to higher-end computational devices. In these so-called brown field deployments, sensors may be working in complete isolation. With a smart device, on the other hand, sensors are coupled to a device that already speaks Web APIs – let’s call these API devices. Without APIs your sensors are stranded, shouting continuous or discrete data into the ‘ether’, and getting brown field sensors to join the IoT requires a bolt-on approach or technology bridge to speak APIs. By contrast, green field deployments build sensor networks with IoT in mind from the beginning, potentially lowering API on-ramping costs. In either case, brown or green, an intermediate layer is needed to connect south-side sensors and networks to north-side APIs, clouds, datacenters and devices.

In fact, middleware and gateway technology is far from optional: the lack of an effective coordination layer has the potential to kill IoT dead in its tracks. With an extremely large number of sensors undergoing constant chatter, integration costs will be far too high unless organizations rely on proven, well-established communication paradigms such as APIs, and well-understood coordination patterns. On the other hand, as more things are connected through middleware, more data is available, which has a positive impact on apps that use APIs. This virtuous cycle illustrates the power of complements – more Internet-connected things-as-a-service (TaaS) drive increased adoption of APIs, which reinforces the IoT vision. In this sense, middleware and gateways are the great enablers of IoT.

In the conceptual architecture shown in the diagram, sensor networks communicate with sensor middleware which can be thought of as one step closer to the raw sensor behavior compared to the gateway. In some cases the gateway itself may subsume all functions of the sensor middleware, depending on the use case. For example, if sensors are enabled with higher level protocols such as CoAP these nodes and networks may be able to talk directly to the gateway itself.

Sensor middleware typically provides the following key capabilities:

Abstraction support – The ability to provide a homogeneous and holistic view of a sensor network in the face of substantially different sensor hardware and capabilities

Data fusion & enrichment – The ability to enrich data from sensors with environmental and contextual information, forming a higher level view of the data suitable for consumption by other applications.

Dynamic network – If sensors are added, moved or removed, sensor middleware must be able to handle ad hoc changes in the underlying network topology with graceful impact to higher-level services

Scalability – Middleware should sustain long, reliable operation while managing a potentially very large number of sensors

Security – If deployed sensors collect sensitive information, data protection schemes and encryption are required. This is further challenged by the fact that sensors can be low-power devices with limited compute capability

Network Delivery & Quality of Service – Maintain consistent availability and behavior in the face of high latency, network failures and bandwidth challenges
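To make the fusion and enrichment capabilities above concrete, here is a minimal Python sketch: overlapping samples from co-located sensors are fused into one value, then tagged with contextual metadata. The field names are hypothetical.

```python
from statistics import mean

def fuse(samples):
    """Fuse overlapping samples from co-located sensors into a single value."""
    return {"metric": samples[0]["metric"],
            "value": mean(s["value"] for s in samples)}

def enrich(raw, context):
    """Attach environmental/contextual metadata to a fused sample, forming
    the higher-level record that upstream applications actually consume."""
    record = dict(raw)
    record["context"] = context
    return record

room_temp = fuse([{"metric": "temperature", "value": 20.0},
                  {"metric": "temperature", "value": 22.0}])
record = enrich(room_temp, {"building": "JF3", "room": "2-101"})
```

Real middleware would add unit conversion, outlier rejection and time alignment, but the shape of the transformation is the same.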

Sensor middleware alone, however, is typically not enough to produce a managed API suitable for use by the Enterprise, other back-office applications, and smart devices. Here you need to add a second layer of API management gateways that can provide further value to the data.

API Management and IoT Gateways

API management completes the square for IoT. Gateway, hybrid and SaaS offerings provide the face of communication for raw sensor networks with robust, managed interfaces able to speak to any other API out there, providing universal communication. For IoT specifically, there are a number of other capabilities that gateway technology contributes to the Internet of Things (IoT):

Complete Context & Orchestration – Context for a sensor is any information that characterizes its situation. Complete context opens up sensor data to data from other APIs, applications, services, social networks and devices. IoT Gateways provide the ability to mash up and orchestrate data across any API to bring higher-level information to the application, especially from other sensor networks

Adaptive Analytics & Big Data – Sensors are poised to generate a 10-fold increase in data over the next 5-10 years. IoT Gateways bring well-defined secure interfaces to large scale data sets stored in big data repositories, enabling insights and access to other APIs, applications and devices.

Compliance and Privacy – IoT Gateways form the control point for data protection. As data is made available through APIs, gateways provide selective data encryption, tokenization, and leak protection, helping to protect privacy and ensure compliance. Even if data cannot be protected at the sensor level, it can be protected at the API level, enabling and enhancing security.

Multi-Tenancy – The only way the shared services required for connected sensor networks can be offered is through a shared multi-tenant API management layer on which different developer ecosystems can land as tenants, using the same sensor data in different ways. A 'silo' approach compromises both the developer experience and data availability: without multi-tenancy, data enriched by other developer ecosystems is unavailable.

Onboarding and Discovery – If nobody knows about your data or your API, how can it ever be used? The use of a developer-facing API catalog with self-service capabilities shortens the time to market and makes APIs that include sensor-derived data available to the widest possible developer audience, whether public, partner or internal developers.
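A toy illustration of the mash-up/orchestration capability: joining live occupancy data from one API with booking data from a second to answer a question neither source answers alone. The input dictionaries below stand in for real API responses and are invented for this sketch.

```python
def orchestrate(occupancy, schedule):
    """Gateway-side mash-up: join occupancy sensor data with booking data
    from a second API to answer a higher-level question ('free right now?')."""
    rooms = sorted(set(occupancy) | set(schedule))
    return [{"room": r,
             "booked": schedule.get(r, False),
             "occupied": occupancy.get(r, False),
             "free_now": not occupancy.get(r, False)}
            for r in rooms]

# Room 2-101 is booked AND occupied; 2-103 is booked but nobody showed up.
status = orchestrate({"2-101": True, "2-102": False},
                     {"2-101": True, "2-103": True})
```

The interesting row is the booked-but-empty room: only the combination of the two APIs reveals it.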

How about some real-world examples? Most notably, we made raw sensor information available through APIs to an HTML5-enabled mobile device here at Intel.

See Travis's blog on the IoT project entitled Citizen Developers Empowered by APIs and HTML5. It uses room sensors to maximize the use of conference space and is a great example of APIs and IoT reinforcing each other. This demo required the use of data fusion, scalability, data availability, security, and governance. If you are thinking about an IoT Gateway, check out the Intel(R) Expressway API Management product, which can help enable your IoT vision.

Concern over big government surveillance and security vulnerabilities has reached global proportions. Big data/analytics, government surveillance, online tracking, behavior profiling for advertising and other major tracking activity trends have elevated privacy risks and identity based attacks. This has prompted review and discussion of revoking or revising data protection laws governing trans-border data flow, such as EU Safe Harbor, Singapore government privacy laws, Canadian privacy laws, etc. Business impact to the cloud computing industry is projected to be as high as US $180B.

The net effect is that the need for privacy has emerged as a key decision factor for consumers and corporations alike. Data privacy, and more importantly identity-protected, risk-mitigated data processing, are likely to grow further in importance as major new privacy-sensitive technologies emerge. These include wearables, the Internet of Things (IoT), APIs, and the social media that powers both big data and analytics, all of which further increase the associated privacy risks and concerns. Brands that establish and build trust with users will be rewarded with market share, while those that repeatedly abuse user trust with privacy faux pas will see both trust and market share erode. Providing transparency and protection for users' data, regardless of how it is stored or processed, is key to establishing and building user trust. This can only happen if providers are willing to extend this location and processing transparency to the corporations that use them.

Disaster waiting to happen

With big data or analytics/BI (Business Intelligence), processing location is the key as it determines regulatory and data protection law compliance requirements and risk, for example, from government surveillance. Location transparency includes geographic location of data centers and cluster nodes that store and process the sensitive personal information of users. While most of the Big Data providers are able to provide security for the storage and transmission of sensitive data, most implementations don’t provide location transparency or location contingent data processing.

Providing corporations and their target consumers with visibility into where and how their information is processed can establish and build trust. User power would increase as consumers are able to choose where their data is processed, or stored, as opposed to being at the mercy of the big corporations and data consolidators.

Once consumers become aware of this issue, specific location processing could become a positive service differentiator in a highly competitive market. Currently, big data/analytics processing is often purely a function of processing capability and availability. However, given processing-location information and the applicable regulations and data protection laws, one could envision rule-driven big data/analytics, where the processing location for sensitive personal information is also a function of the available processing locations, user choices/consent options, and policies.

How can it be solved?

Given the multi-node processing capabilities of Big Data, you should be able to choose where and how (such as with what level of security) certain data from certain users will be processed. Given today's technology, it is possible to build more secure clouds (including using technologies that verify a known clean state free of malware and viruses, such as Intel Trusted Execution Technology – TXT) and have some of the big data nodes process the data more securely from within such highly secure clouds.

Conceptually, GRC (Governance, Risk and Compliance) collects the locations of data subjects and processing resources. Armed with location information, policy rules, and data subject choices, GRC can drive the data collection gateway to route personal information from data subjects in compliance with those rules and choices, taking into consideration the locations of both the data subject and the processing resources, and the level of security of the processing resources. Data can be scrubbed and protected before entering a Hadoop cluster, and checked for leaks at the API level, mitigating PII exposure at the outset. In particular, with technologies such as tokenization via Intel Expressway Tokenization Broker, you can scrub for personal data without intrusively modifying your applications. Intelligent gateways such as Intel Expressway API Manager or Service Gateway can perform this context-, user-, sensitivity- and policy-based routing dynamically.

Data subjects may also specify their preferred location and level of security of processing, further enhancing privacy in the areas of access and participation. For example, a person in Germany participating in an online service that involves big data/analytics, perhaps for targeted advertising, may prefer that their data be processed in Germany with a higher level of security. In this case their data is routed to a high-security compute environment in Germany, whether a data center or specific Hadoop cluster nodes. Aside from this general example of citizens of a given nation preferring their data processed within their country, another example could be controversial services such as online gambling, where data subjects around the world would prefer any processing of their sensitive personal information, including for big data/analytics, to occur in geographies whose regulations and data protection laws are more compatible with the particular online service, with levels of processing security that take into consideration the value of their particular data and the associated risk.

We propose a data classification tagging scheme to enable such routing, with levels such as "highly secure processing", "geo restricted", "medium" or "none". For example, data tagged "none" will be executed on the next available cluster, regardless of location, in the fastest, cheapest possible way. This could also enable service providers to charge based on the classification level: if you guarantee enterprise-grade secure processing, you can charge a premium to go with it. A geo-restricted label would ensure that processing happens within a specific country or geographic zone (such as the EU). The history of data movement and processing can be audited, tracked, and tuned to fit specific needs.
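A minimal sketch of such tag-driven routing, with a hypothetical cluster inventory and a simplified three-level tag set (the cluster names, zones and security flags are all invented for illustration):

```python
# Hypothetical cluster inventory: location and security level per node pool.
CLUSTERS = [
    {"name": "us-east-1", "zone": "US",   "secure": False},
    {"name": "fra-1",     "zone": "EU",   "secure": True},
    {"name": "sg-1",      "zone": "APAC", "secure": True},
]

def route(tag, zone=None):
    """Choose a processing cluster from a data-classification tag:
    'none'   -> first available cluster, fastest/cheapest wins;
    'geo'    -> cluster must sit inside the required zone;
    'secure' -> cluster must offer high-security processing."""
    for cluster in CLUSTERS:
        if tag == "none":
            return cluster["name"]
        if tag == "geo" and cluster["zone"] == zone:
            return cluster["name"]
        if tag == "secure" and cluster["secure"]:
            return cluster["name"]
    raise LookupError("no cluster satisfies tag=%r zone=%r" % (tag, zone))
```

A production scheduler would also weigh load, cost and data-movement history, but the policy decision itself reduces to a lookup like this one.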

We can also use this approach to let the service provider enforce cleansing operations based on location. For example, if data is processed somewhere not considered a high-security location, destroy the data objects and clean up any residue after the operation.

This is an enhancement we are proposing to our Big Data group. Subsequently, we hope to influence all major big data distributions.

APIs and Your Personal Cloud
http://blogs.intel.com/application-security/2013/10/01/apis-and-your-personal-cloud/
Tue, 01 Oct 2013 23:44:26 +0000

You didn’t know you had a personal cloud did you? I was a bit shocked myself.

Well, we might not all have a personal cloud yet, but Rackspace's Robert Scoble gave an intriguing keynote talk today at Dataweek 2013 on what he calls the age of context, which promises such a thing for rapid "context adopters" such as himself. It was a great talk and I was able to see Google Glass up close.

So just what is this new age of context?

Age of Context

Robert described five changes that contribute to the age of context:

Social network data is going up exponentially

Location data is going up exponentially

New types of databases are proliferating, especially for sensor type data (such as MongoDB and Hadoop/HBase)

The number of sensors around us is increasing

Mobile devices are changing and the use of 'wearables' is increasing.

This new age of context implies two new things: Personalized products that act on this contextual information and new assistive technologies.

I think there is another aspect that binds all of these items together, and that is of course Web APIs. APIs form the conduit for the personal cloud and make the data move within it. Mobile devices use APIs; APIs form the entry and exit point for data from all sorts of clouds and provide that ubiquitous, well-understood communications channel.

So in this sense, APIs enable the age of context. Without them there would be only data silos.

Robert's keynote was very interesting and the audience was eventually steered towards the hot-button issue of privacy. Robert's main stance was a popular one, to paraphrase: "I just let it all hang out." Not everyone in the audience shared that view, and at least one objector noted an imbalance: corporations and governments tend to keep their data secret or ultra-secret, while the age of context does the exact opposite with consumer data. It appears to trade a measure of "fun" for diminished, or even 'near-zero', privacy.

If you are at the conference I’ll be speaking tomorrow at 4:00pm at the main stage (Festival Pavilion) on Enterprise APIs and will try to relate some of these concepts to the Enterprise, especially the connection to mobile devices and APIs, as well as some of the concerns we’ve seen from the largest Enterprises in the world as they put API management in practice. Hope to see you there.

Bulletproof API Management
http://blogs.intel.com/application-security/2013/09/30/bulletproof-api-management/
Mon, 30 Sep 2013 22:31:04 +0000

With McAfee Focus underway this week I wanted to revisit security, risk and compliance in the context of providing Bulletproof API management. So what does it take? There is some prevailing wisdom out there that security for APIs and API management is licked and considered a solved problem. With the widespread use of standards like OAuth 2 and SSL/TLS it seems that Enterprises have nothing to worry about when considering authentication, authorization and transport level security for their APIs. Well, almost nothing – modulo back-doors in the pseudo-random number generation from the NSA (scary if true). For most public or open APIs, OAuth 2 or shared-secret style API keys seems like security enough.

Simply relying on these standards without looking at all possible attack vectors can lull the Enterprise into a false sense of security. What do I mean?

Bulletproof API Management Tip: Content Threats

As APIs continue their prodigious rise across all sectors of the world economy let’s not forget that they are a data and function window into your organization. I’ve used the term universal tunnel before and I think the term is as apt as ever. Moreover, if we consider some of the business models where the API is the product itself, bulletproofing the API takes on even greater import. APIs allow programmatic access to your Enterprise assets, and the more valuable & interconnected you make your API, the more you will need bulletproof API management.

Bulletproof API Management Against Code Injection

This is not a new concept, but simply a variation of code injection applied to JSON content as it travels between a device (or client) and your Enterprise. When an API call is executed there is an assumption that the system exposing the API endpoint is strongly correlating function or method calls to parts of the JSON content. In other words, your phone (or API client) is actually causing server-side function calls – it's this essential behavior the attacker is trying to exploit.

These concerns potentially multiply when we move to a world of connected "IoT" devices, which may be more susceptible to physical tampering in the wild. Where and how exactly this happens in your environment is a function of how you designed your API and your adherence to various levels of RESTfulness, but it's always the weakest link: it's the authenticated and authorized clients that will best be able to play with, discover and exploit the limits of your API. Attackers and rogue users can discover the undocumented parts and behaviors of your API by hacking different types of requests and function calls, especially if you don't have an active strategy in place for limit or quota enforcement or input validation. Even better, the code injection is sent over SSL/TLS, so it's protected from prying eyes and authorized with an OAuth 2 secret. Phew, good thing we have security in place…
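A gateway (or the API implementation itself) can blunt this class of attack with strict input validation. The sketch below, with a made-up two-field schema, rejects unknown fields, wrong types and out-of-range values before any server-side function is invoked:

```python
import json

# Hypothetical schema for a room-booking call: field -> (type, range check).
SCHEMA = {
    "room_id": (str, lambda v: v.isalnum()),
    "minutes": (int, lambda v: 1 <= v <= 480),
}

def validate(body):
    """Reject requests whose JSON payload strays outside the documented API
    surface: unknown fields, wrong types, or out-of-range values."""
    data = json.loads(body)
    if set(data) - set(SCHEMA):
        raise ValueError("unexpected fields")
    for field, (typ, check) in SCHEMA.items():
        if field not in data or not isinstance(data[field], typ) or not check(data[field]):
            raise ValueError("invalid field: %s" % field)
    return data
```

The key design choice is allow-listing: anything not explicitly described by the schema is refused, so undocumented parameters never reach server-side dispatch.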

Bulletproof API Management Tip: Data Leaks

The second piece of achieving Bulletproof API Management is to mitigate risks to the Enterprise from data leakage (DLP), compliance violations and exposure of personally identifiable information. Most modern application servers make it easy to expose a RESTful model; take Ruby on Rails (RoR), for example, which exposes a CRUDable interface nearly by default through Rails routing. This means that developers can easily enable RESTful access, potentially exposing data of unknown sensitivity.

This is great for productivity and is exactly how developers should be thinking about APIs, after all, when APIs begin their life it’s not entirely clear how they will be accessed and by whom. With time to market pressures, however, the development team may be more worried about getting the new API up and running and the CISO may not know if parts of the database contain trade secrets, employee data, patent data, price lists, revenue, or sensitive PII data. With more and more data moving between devices and APIs, Enterprises need a DLP strategy for APIs.

So what to do? For starters, obfuscating exceptions and errors is an important first step to mitigate code injection attempts. Exceptions and stack traces will give the attacker information about the underlying system that implements the API. Second, you need some type of input validation which ensures that data sent in RESTful requests undergoes the proper data-type checking and range checking. Third, if you are exposing an internal system as an API without the use of an API management gateway, you really have to ensure that internal APIs are inaccessible, which could mean expensive code auditing and further software development costs.
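The first step above, obfuscating exceptions, can be as simple as the following sketch: full diagnostic detail is logged server-side under a correlation id, and the caller sees only a generic message. This is an illustrative pattern, not Expressway's actual behavior.

```python
import logging
import uuid

log = logging.getLogger("api")

def safe_error_response(exc):
    """Log full diagnostic detail server-side, but return the caller only a
    generic message plus an opaque correlation id: no class names, stack
    frames or SQL fragments an attacker could mine for system details."""
    ref = uuid.uuid4().hex[:8]
    log.error("request failed [ref=%s]: %r", ref, exc)  # detail stays internal
    return {"error": "internal error", "ref": ref}

resp = safe_error_response(ValueError("SELECT * FROM users failed"))
```

The correlation id lets support staff find the real stack trace in the logs without ever exposing it across the API boundary.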

Bulletproof API Management with Expressway API Manager

The core problem here is that security holes can be inadvertently introduced by a strong coupling between Enterprise middleware, databases, or application servers and the published API; it's always a good idea to have a level of separation. The API management gateway can trap these types of attacks in a distinct security layer, check outgoing payloads against a DLP engine (such as McAfee DLP), protect PII through format-preserving encryption or tokenization, and act as a security policy enforcement point before threats reach your backend systems or sensitive data leaves the Enterprise.

If you’re in the Las Vegas area you can see a demo of an API gateway in action to protect against these types of threats. Visit the Intel booth and be sure to catch our turbo talks this week. We’ll be showing the Intel Expressway gateway @ 1:50p & 2:30p. Intel Expressway solutions are now available through both Intel and McAfee channels as well!

Mobile Access: Citizen Developers Empowered by APIs and HTML5
http://blogs.intel.com/application-security/2013/09/30/mobile-access-from-citizen-developers/
Mon, 30 Sep 2013 17:00:00 +0000

It has been several years since Gartner first made their prediction that Citizen Developers will create at least 25% of business applications by 2014. We have quite a few of these at Intel, and I recently shared one of my favorites at IDF. Some of my colleagues outside of IT built a little app that provides mobile access to real-time conference room availability so you can find an empty room. Here’s how that app came to exist.

First, you have to understand something about Intel’s conference rooms: they’re always booked. It’s a vicious circle. Because they’re always booked, people frequently book a room “just in case”. The image below shows room reservations for one of our bigger buildings:

IT’s Web-Based Conference Room Reservation Tool

However, if you go walking through our hallways at 10 past the hour you will frequently find an empty room because of a no-show. Corporate Services has stepped in to help by offering a set of “reservationless” conference rooms: collaboration rooms, rooms for stand-up meetings (no chairs!), and phone booths. These are all available on a first come, first served basis. The challenge, of course, is finding an empty room since there is no system of record to show availability.

Mobile access to room availability data using a map-based visualization

Some of my colleagues over in IT had a great idea to help with this: sensors. We already had sensors in the rooms to turn off the lights when no one was there. What if those sensors could communicate room availability to the outside world? They did a pilot to upgrade the sensors, using motion and sound to determine whether anyone was in the room. That data was fed through some APIs to a panel display at the end of a hallway. The result was a list of nearby rooms and occupancy status.

That was a great improvement, but what about mobile access? The original implementation required someone to walk to a central location to check the room status. And what if there were no empty rooms nearby? You’d have to walk to another floor or building to check.

Some of our citizen developers provided a solution. They used the Intel XDK to create an HTML5 app that consumes APIs from both the sensors and the conference room scheduling system. We provided them with a redacted version of the conference room API, ensuring that no PII (names, email addresses) or other sensitive data (project names & status, e.g. “SuperSecret ProjectX Go/No-Go Decision Meeting”) leaks out. They created an intuitive, touch-friendly interface that overlays room availability on a map.
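The redaction step can be sketched in a few lines; the field names below are hypothetical, since the real conference-room API schema isn't published:

```python
# Hypothetical field names for a meeting record; anything here is stripped
# before the record crosses the API boundary.
SENSITIVE_FIELDS = {"organizer_name", "organizer_email", "subject"}

def redact(meeting):
    """Drop PII and project-revealing fields so only occupancy-relevant
    data (room, start, end) survives into the public-facing API."""
    return {k: v for k, v in meeting.items() if k not in SENSITIVE_FIELDS}

public = redact({"room": "2-101", "start": "10:00", "end": "11:00",
                 "organizer_email": "jane@example.com",
                 "subject": "ProjectX Go/No-Go"})
```

An allow-list of safe fields would be stricter still; the deny-list form is shown only because it reads more directly against the story above.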

This little app is at the confluence of the Internet of Things, enterprise API Management, and HTML5. It pulls in sensor data through secure, well-defined enterprise APIs. It provides a visualization of that data, and uses HTML5 to deliver mobile access as well as desktop use. It’s just one of many applications I have seen that iterate on a corporate tool, repackaging the underlying data in a way that serves a different usage model.

For more information, and an entertaining view of how API management and HTML5 combine to enable BYOD and other mobile access programs, check out this video. For a roadmap on how to get started, click on the image below to download VisionMobile’s poster, illustrating the full spectrum of mobile app dev tools for enterprises.

What about an Obamacare API?
http://blogs.intel.com/application-security/2013/09/27/apis-cross-agency-data-sharing-and-obamacare/
Fri, 27 Sep 2013 15:00:06 +0000

APIs are big news this week for the federal government. First we have the former U.S. CTO calling on APIs as a means to accelerate data sharing across agencies, and second we have a preview from NPR of what it might be like to actually sign up for "Obamacare" insurance on October 1st. Why not make it easy for folks? Let's push for an Obamacare API. The NPR article implies there's homework involved, suggesting that applying for health insurance through the website would be closer to a root canal procedure than buying socks on Amazon.com.

Rather than publish and own the healthcare.gov website and the associated user interface and portal, the federal government should follow in the steps of private industry and open an Obamacare API instead. With the proper API, platform providers could compete to become the best "health insurance portal" and provide a more usable website, with an HTML5 or native application, that would improve usability and accelerate adoption of "Obamacare" insurance. Technology should be an enabler, and a predictably unusable website is like an artificial speed limiter on your new Tesla Model S.

The government isn’t in the business of providing the most usable interfaces, especially to normal humans, so why not create an incentive structure and the proper API to really improve the experience for consumers, bringing us ever-so-closer to that utopia of universal health care coverage?

Indeed, this is the model followed by Expressway customer Blue Cross Blue Shield Association (BCBSA), which used an API layer and SaaS developer portal to expose interfaces ensuring that consistent health care information was made available to all 38 independent Blue Cross Blue Shield agencies.

As for agency API sharing, this is a problem we routinely solve with an API Management Gateway. There are well-known API patterns for cloud and mobile that agencies can use to surface legacy XML feeds or unstructured data as first-class APIs, and then make those APIs available through a developer-facing API catalog. Oh, and one more thing: it's not enough to simply publish the layer with consistent, secure interfaces; any API that sends or receives sensitive agency information and performs message-level security needs the appropriate certifications and compliance posture. Agencies may need to ensure their API infrastructure is certified against common requirements such as FIPS, Common Criteria and DoD certifications.
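Surfacing a legacy XML feed as a JSON API reduces, at its simplest, to a transformation like the one below; a gateway would layer security, quotas and mediation policies on top. The feed structure shown is invented for illustration.

```python
import json
import xml.etree.ElementTree as ET

def xml_feed_to_json(xml_text):
    """Mediate a legacy XML feed into the JSON response a modern API
    client expects: one dict per record, keyed by child element name."""
    root = ET.fromstring(xml_text)
    items = [{child.tag: child.text for child in record} for record in root]
    return json.dumps({"items": items})

feed = "<records><r><id>7</id><status>open</status></r></records>"
body = json.loads(xml_feed_to_json(feed))
```

Real feeds carry namespaces, attributes and nesting that need a richer mapping, but the mediation pattern is the same: parse once at the edge, publish a clean contract outward.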

Are you an agency looking to publish an API and share data to citizens or other agencies? Need compliance and certificates? Need scale and reach? Expressway can help.

Enterprise Class Multitenant API Management
http://blogs.intel.com/application-security/2013/09/26/enterprise-class-multi-tenant-api-management/
Thu, 26 Sep 2013 17:00:00 +0000

Here is a free lesson to start-up companies trying to position their products for large scale Enterprises: plain and simple, your products need to support multitenancy.

All of the prevailing trends such as IoT – including connected devices and wearables, digital media and gaming, Telco APIs, hybrid clouds, and SaaS require an API layer that provides elasticity and efficiency beyond run-of-the-mill virtualization. Virtualization and over-provisioning of infrastructure may work in a mid-sized Enterprise, but when it comes to scale, only a truly multitenant infrastructure will do.

The largest enterprises are diversified, and with the increased adoption of APIs, multiple departments will want to own and control their own API definitions, life-cycle management and API policies – for both production and development.

This assumes an “on-premise” or “owned” model where the Enterprise owns and manages the infrastructure to expose the API themselves. I’ve talked before about the blind faith we sometimes put into SaaS; it’s the religion of our time. For those that want a more quantitative view, this simple TCO calculator can do wonders. Before you place your banner down on one side of the argument, look at the numbers for yourself and actually calculate which is better for your organization.

For Enterprise API Management, a mid-sized organization might address these concerns by deploying a number of independent clusters of virtual API Gateways (software or appliances) to ensure isolation for security and availability. While this model works, it is not efficient as the Enterprise may buy more licenses than are justified by throughput alone, not to mention the operational overhead of managing each API gateway itself.

Even if a mid-sized Enterprise can get away with it, a large service provider that needs to worry about driving costs out of its IT budget cannot as the savings multiply per instance.

Single Tenant API Management

For example, take the first diagram as an example case study. Here a customer uses API gateways to surface APIs, with projects originating in different departments, each with its own audience. Here we have three tenants or business groups: sales & marketing, the CIO Team, and the cloud service architects. The sales and marketing team has a new content-rich tablet application that accesses relevant partner and social feeds exposed by the Enterprise, the CIO Team has opened internal APIs for integration and mobile employee productivity apps, and the cloud architects have exposed APIs for external B2B and partner access.

In each case gateways are provisioned as a set of units specific for these tenants. In this environment there is a tendency to over-provision, no matter how accurate you think your sizing will be in terms of number of API calls and data throughput. Based on actual throughput, each department is likely replicating costs & resources for fail-over, high availability and operational maintenance.

If we take this example and extrapolate to a larger Enterprise, the repeated costs can really add up. This is where a true multitenant API Management platform helps.

Multitenant API Management adds the correct measure of control & resource allocation to drive costs out of the system. In the multitenant case, we've reduced the number of licenses (including gateway, O/S, and other software licenses) by nearly 40%. Rather than maintaining three distinct clusters, the same separation of concerns, manageability and policy isolation, as well as fail-over and throughput, is handled by 10 gateways. Multitenancy brings consolidation and efficiency to API management.

Multitenant API Management

While all of this is conceptually simple, actually building the feature in a production product is difficult and takes careful engineering to ensure the system is resilient to tenant changes yet remains stable in the face of potentially millions of API requests. This is exactly what we’ve done in Expressway for API Management over the last 8-10 years working with the Fortune 50. Despite claims made by others, your product probably doesn’t support true multitenancy that scales to production use cases unless you are an Expressway customer.

Many of the products in the market go only ‘halfway’, supporting a set of views or domains, but never support a true separation of statistics, logs, roles, and insulated policy changes for production environments. Halfway doesn’t cut it when there are a trillion devices out there looking to access your API.

Expressway Multitenant API Management Capabilities:

Insulated tenants – Application data is protected from view from other tenants in the system

Log Separation – Statistics and logs produced by one tenant are only viewable within a tenant context

Distinct Roles – Tenants have unique administrative roles that are separated from system management

Policy Lifecycle Separation – APIs and their associated policies can be updated and changed independent of other tenants’ administrative operation and runtime processing

Scriptable Configuration – Expressway multi-tenancy is controllable from the command line by scripting languages such as Python and Perl, automating API deployment into an Enterprise’s API layer

Global Manager Control – The entire tenancy system is controlled by a global manager role used to manage tenants, provide a consolidated view and manage clustering, all with the Fortune 50 CIO in mind.
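As a sketch of how the scriptable-configuration capability above might be driven, the snippet below builds the command-line invocations for provisioning a tenant and deploying an API. The `expresswayctl` command and all of its arguments are invented for illustration; the product’s actual CLI will differ.

```python
# Hypothetical automation sketch: 'expresswayctl' and its arguments are
# invented for illustration and are NOT the real Expressway CLI.
def deployment_commands(tenant, api_spec):
    """Build the CLI invocations to provision a tenant and deploy an API."""
    return [
        ["expresswayctl", "tenant", "create", tenant],
        ["expresswayctl", "api", "deploy", "--tenant", tenant, api_spec],
    ]

# Each command could then be executed with subprocess.run(cmd, check=True),
# failing fast if any provisioning step errors out.
```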

API Gateways, Security, and Innovation
http://blogs.intel.com/application-security/2013/09/25/api-gateways-security-innovatio/ (Thu, 26 Sep 2013 02:15:00 +0000)

Securosis has a new analyst report out called “API Gateways: Where Security Enables Innovation”. The paper describes how API gateways simultaneously enable security and software development. It shows how security can be enforced practically, without becoming an impediment to productivity and creativity. The paper covers a pretty broad range of topics, from developer tools to key management to implementation. It also includes a helpful buyer’s guide, which can be used to craft an RFI.

I thought the paper made a number of challenging concepts much more accessible. It takes an end-to-end view, putting developer experience at the forefront. Also, being security experts, the authors include some sound advice on core topics like key management and attack prevention.

Download the report today, and find out how you can enable your developers while protecting your corporate assets. Also, please join me at 10a PT / 1p ET on Tuesday, October 15th for a conversation with the authors. We’ll be discussing the paper and other related API security topics.

Long Live API Management
http://blogs.intel.com/application-security/2013/09/24/long-live-api-management/ (Tue, 24 Sep 2013 17:00:00 +0000)

Its death came furiously and quickly, like an earthquake shaking the carefully constructed buzzword tower engineered by Enterprise software marketers around the world. Anne Thomas Manes proclaimed the death of SOA back in 2009 in her seminal blog post “SOA is dead; Long Live Services.” Yet here we are in late 2013, over four years later, still wrestling with this undead terminology. We have analysts publishing new reports referencing SOA, popular blogs with references to the dreaded “SOA-saurus,” and companies with SOA in their name (still). And while #SOA now denotes the hashtag for the popular show “Sons of Anarchy,” there was in fact a time when this term was heralded. With all of the recent activity and ‘fire’ in the API Management space, let’s resolve to engulf and bury SOA once and for all.

SOA is Still Dead

SOA promised to be the goose that laid a golden egg for Enterprises, and then we killed it. Whoops. So what’s a good Enterprise to do now? APIs and API Management are all the rage, yes, but there is a lot of confusion out there. There is talk of open APIs, public APIs, and private APIs. Talk of developer communities and partner developers. There is talk of governance (this word is just a little bit too close to SOA; it scares me) and policy life-cycles (hold me back from the excitement). And then there are the artificial categories. Some say that if you are doing internal stuff, that’s “SOA,” and if you are doing external stuff, that’s “APIs.” I’m not so sure… ever heard of WS-Security for external B2B WS-* exchanges?

Or, if you’re doing “business agility” and “application integration,” well, that’s “SOA,” and if you are doing developer communities, well, *that’s APIs*. Or, if it’s social and cool, that’s an API, and if it’s boring and ‘Enterprisey’, that’s SOA. All of this is hogwash. Let’s agree now that Anne had it right – long live services.

#SOA star Tara Knowles

And let’s also agree that while the term “APIs” is a bit jargony (is that a word?) and thumbs its nose at CS degree holders everywhere, it is infinitely more descriptive than SOA. Here comes the popular-culture Netflix analogy: if orange is the new black, APIs are the new SOA, but APIs are a long way from a low-security women’s detention facility. Furthermore, API Management and services should run deep within the organization; an artificial split between SOA and APIs is nonsensical. Amazon, in its famous example of adopting a service model, simply declared that its internal systems would have published, stable interfaces or the developers would be fired. That’s what we call a services strategy, folks. The concept of API Management is more descriptive and includes all of the important concepts of SOA. Web services are APIs. Call them APIs; don’t call them SOA unless you want to artificially date yourself. Remember: in the tech world, 4 years is like 40 years in real life, so do the math. It’s time to bring these two worlds together, to forcefully bring these two old friends to the bar, agree they are the same thing, and share a drink. Don’t be fooled into a false dichotomy.

It’s an unfortunate aspect that the history of this subject brings these categories which are hard to shake. Let’s talk in plain language about what is needed.

Enterprise API Management Requirements

Enterprises need to:

1. Turn existing business assets into well-defined, stable interfaces, suitable for internal, partner, or public consumption.

2. Provide scalability, security, and control for these interfaces and corresponding back-end systems.

3. Ensure compliance for data transferred through these interfaces.

4. Publish interface definitions and accelerate use among internal, partner, or public developers.

5. Deploy such infrastructure in a flexible, yet cost-effective means.

6. And lastly: manage change throughout items 1 through 5.

What about API lifecycle? That’s covered in #1. What about hosting choices? That’s covered in #5. What about SOA registry/repository? That’s covered in #4. What about Hackathons? That’s covered in #4. What about reuse? That’s covered in #4. What about OAuth? That’s covered in #2. What about a developer portal? That’s covered in #4.

The truth is, once you boil down the requirements to the essentials, it’s easy to see that SOA and APIs live rather harmoniously. It isn’t a given that REST is always easier than SOAP. What if you’ve been using SOAP for the last 10 years in your organization? The cost to rip and replace could be quite high. Similarly, it isn’t a given that JSON should be the format for all API calls; what if you aren’t (gasp) exposing data to a mobile application? The answer is both. Enterprises should use whatever weapon is available to get the job done, and if they want real business agility, ditch the jargon.

This is the way we see it at Intel with Expressway and API Management. We want to help Enterprises get moving on their projects. We can help you progress toward deploying cost-effective API management solutions, no matter how they are deployed and wherever you stand across items 1-5 above. APIs are becoming a larger part of the daily lexicon. Just the other day I heard my non-technical sister-in-law talking about the unofficial Vine API. When members of the general public understand your job, your ship has come in. Intel can help your ship come in with API Management today.

Tokenization for De-Identifying APIs
http://blogs.intel.com/application-security/2013/09/23/tokenization-for-deidentifying-apis/ (Mon, 23 Sep 2013 17:31:56 +0000)

De-identifying Data in APIs

I was catching up on my RSS feeds over the weekend, reading all the things I missed while I was at IDF, when I saw this great post from Kin Lane calling for “A Masking, Scrubbing, Anonymizing API“. It reminded me of a conversation I had at IDF about Kaggle, which is a platform for crowdsourcing solutions to big data problems. In both cases, the goal is to surface data in a way that protects personal information. It got me thinking about how compliance intersects with API strategies. With APIs being a universal tunnel into the enterprise, it’s important not to neglect security compliance in API content! Fortunately, a Tokenization proxy or API Manager can be used to address these types of usage models.

Tokenization vs Encryption vs Redaction

Tokenization is the process of replacing a string with another, randomized string. Expressway Tokenization Broker can perform this operation as a proxy for any API response, storing the PII in a secure vault. The only way to recover the original data is through a detokenization routine performed by a system with access to the secure vault. This is somewhat similar to the mechanism Kin describes (replacing actual values with fake values), except that the tokens are not likely to be human-readable (i.e., instead of replacing Kin Lane with John Doe it might wind up reading zAe N8fc). Unlike simple masking, however, tokenization preserves correlation: if you replaced every instance of every name with “John Doe,” you would lose the ability to make associations across data sets, whereas a given value always maps to the same token. The retail industry has been using this mechanism for years, adopting tokenization of Payment Account Numbers (PAN) as a best practice for PCI compliance. We have recently seen adoption of this tokenization capability for other types of PII, particularly where there are compliance and audit concerns.
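To make the mechanism concrete, here is a minimal, illustrative sketch of vault-based tokenization. A Python dictionary stands in for the secure vault; a real broker such as Expressway Tokenization Broker uses a hardened datastore, and the function names here are invented for the example.

```python
import secrets

_vault = {}   # token -> original value (stand-in for the secure vault)
_issued = {}  # original value -> token, so repeated inputs reuse one token

def tokenize(value):
    """Replace a sensitive string with a random, non-human-readable token."""
    if value not in _issued:
        token = secrets.token_urlsafe(6)  # e.g. 'zAeN8fc_' rather than 'John Doe'
        _vault[token] = value
        _issued[value] = token
    return _issued[value]

def detokenize(token):
    """Recover the original value; only possible with access to the vault."""
    return _vault[token]
```

Because `tokenize("Kin Lane")` always returns the same token, records belonging to the same person can still be joined after de-identification, which is exactly the correlation property described above.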

Tokenization of Payment Account Numbers for PCI Compliance

Format-Preserving Encryption (FPE) is another mechanism for de-identifying data, and it is available in all of our Expressway products. In this case, the data is encrypted into ciphertext that conforms to the same format as the input data. For example, the SSN 123-45-1234 might encrypt to 789-12-3456. This ensures that the ciphertext will pass any downstream format checking that may occur. Unlike tokenization, however, FPE is reversible without access to a secure vault: the ciphertext behaves like ordinary encrypted data, so applications holding the shared secret can decrypt it independently.
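The format-preserving property can be illustrated with a toy digit-substitution sketch. This is emphatically not real FPE (standardized FPE modes such as NIST FF1 are full block-cipher constructions, which is what a product would use); it only demonstrates that ciphertext keeps the shape of the plaintext, so downstream format checks still pass.

```python
def toy_fpe_encrypt(ssn, key):
    """Toy keyed digit shift that preserves the XXX-XX-XXXX shape.
    NOT cryptographically sound; real FPE uses modes like NIST FF1."""
    return "".join(
        str((int(ch) + key + i) % 10) if ch.isdigit() else ch
        for i, ch in enumerate(ssn)
    )

def toy_fpe_decrypt(ct, key):
    """Invert the toy shift; anyone holding the key can decrypt (no vault)."""
    return "".join(
        str((int(ch) - key - i) % 10) if ch.isdigit() else ch
        for i, ch in enumerate(ct)
    )
```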

Finally, data can be anonymized using redaction, which is also supported in all of our products. This is the process of eliminating PII entirely rather than replacing it. This is the most surefire mechanism for keeping PII out of the wrong hands, but it comes with a potential downfall: it may prevent records from being associated with the same owner, particularly across data sets. This correlation can be the most valuable opportunity in many types of big data analysis.

De-Identification Using the Façade API Proxy Pattern

We have seen customers take advantage of regular expressions to identify personally identifiable information (PII). There are standard policies that can pick out Social Security Numbers, email addresses, and other common types of PII in any API. Nonstandard types of PII can be detected as well, provided that they conform to a well-defined structure (generally alphanumeric with a fixed length, although other patterns can be identified as well). Once the PII has been identified, the data can be de-identified using tokenization or encryption (including format-preserving encryption). Or the data can be anonymized completely via redaction. This policy can be generalized to proxy several APIs and replace any PII that passes through. This works particularly well for credit card or social security numbers, both of which follow a very well-defined and relatively unique pattern.
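A minimal sketch of this pattern in Python follows; the regular expressions are simplified examples, not the standard policies shipped with any product.

```python
import re

# Simplified PII patterns for illustration; production policies are stricter.
PII_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),        # Social Security Number
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),  # email address
]

def redact(body):
    """Anonymize an API response body by removing PII matches entirely."""
    for pattern in PII_PATTERNS:
        body = pattern.sub("[REDACTED]", body)
    return body

response = '{"name": "Jane Doe", "ssn": "123-45-6789", "email": "jane@example.com"}'
print(redact(response))
# -> {"name": "Jane Doe", "ssn": "[REDACTED]", "email": "[REDACTED]"}
```

Swapping `"[REDACTED]"` for a call to a tokenization or FPE routine turns the same policy into de-identification rather than outright removal.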

Anonymization policies can also be tailored to specific APIs that have well-defined schemas (along the lines of the Swagger example that Kin suggested), matching based on the JSON or XML field information. For example, a colleague and I were playing with the idea of stashing employee information in DynamoDB. An employee record might look like:

Within this data set, email addresses, SSNs, driver’s license numbers, and zip codes follow well-established rules that lend themselves to regular expressions. However, the zip code rule (a 5-digit number) could also match the salary field. Obviously you could enforce Zip+4 and decimal inputs (XXXXX-XXXX for the zip code, XXXXX.XX for CurrentSalary), but it would probably be safer to match on the field name rather than the value for this data set.

Another benefit of the anonymizing facade API pattern is that it can support conditional de-identification. For example, I may want to allow the PII to be read within my network but have it de-identified for external clients. Or I may want to tokenize internally but redact externally. We can define a workflow that uses any of a number of factors to make the decision at API request time, allowing access to live data rather than a snapshot.
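Such a conditional policy might be sketched as below, deciding per request based on the client address. The network range and the surrogate-value scheme are invented for illustration.

```python
import hashlib
from ipaddress import ip_address, ip_network

INTERNAL_NET = ip_network("10.0.0.0/8")  # hypothetical corporate range

def deidentify(value, client_ip):
    """Return live data to internal clients, a deterministic surrogate otherwise."""
    if ip_address(client_ip) in INTERNAL_NET:
        return value  # internal caller: pass PII through unchanged
    digest = hashlib.sha256(value.encode()).hexdigest()[:8]
    return "tok_" + digest  # external caller: stable surrogate value
```

Because the surrogate is derived deterministically, the same PII always yields the same external token, so external consumers can still correlate records without ever seeing the live data.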

Summary

I’m excited about the potential for APIs to allow faster problem solving through crowdsourcing. Kaggle looks like a very interesting platform for enabling this. I’m also happy to see folks like Kin working to make government more open and accessible through the use of APIs. API gateways can play a role in those transformations by sanitizing the data, reducing the risk of PII being compromised. As Mark Silverberg pointed out in the comments on Kin’s blog, the safest way to protect PII is to scrub the data set before it goes out. By using a tokenizing or encrypting proxy facade, the “scrubbing” is made internal, minimizing the risk of an escape.

As I noted above, our products are unique in the API management space, in that they support high-performance de-identification policies. They also include powerful regular expression libraries that can be used to identify (and then de-identify) PII that is contained in an API response. I did a webinar with John Kindervag recently that touched on many of these topics as well. You can watch the replay to learn more, or try out FPE and redaction for yourself using Expressway API Manager on Amazon Web Services.

The API Economy
http://blogs.intel.com/application-security/2013/09/20/the-api-economy/ (Fri, 20 Sep 2013 20:57:00 +0000)

The API Economy is launched and it’s not too late to join in the fun. According to Pew Research, 91% of American adults have a cell phone and 34% of American adults own a tablet computer. The proliferation of mobile devices opens up a multitude of business growth opportunities through the exposure of Application Programming Interfaces (APIs). API Economy leaders are generating real revenues from APIs. Salesforce.com generates more than half of its revenue through APIs. Twitter, Google, and Amazon are deriving revenue from API transactions that number in the billions. How can my business make money from APIs? The answer will depend on your product and business model, but can include more than one strategy.

Direct Revenue

Direct Revenue is the most apparent business model for generating API revenue. In this model the consumer is billed for usage of the API. Amazon Web Services is a classic success story for the pay-as-you-go consumer usage revenue model. Another effective direct revenue use case can be seen in PayPal’s transaction fee model for API usage. Traditional businesses such as software providers can plug into the direct revenue business model by exposing software as a service (SaaS). Intel’s EC2 Security services are available on a pay-as-you-go basis. Revenue sharing models pay the API consumer for posting ads on their sites. Google’s AdSense pays 20% to developers for revenue generated from posted images, fotoglif.com pays 50% to photographers, and shopping.com pays by the click.

Distribution Channel

APIs are an outlet for the syndication of content and data. Traditional publishers of content, such as The New York Times, have capitalized on APIs as a news delivery mechanism. Google Maps has been hugely successful following the content syndication model. The Google Maps API had 200 million users in 2011 and is reported by Programmable Web to have 2510 mashups or syndication partners. Entertainment industry businesses like Netflix, ESPN and CBS are well suited to benefit from exposing content to mobile platforms through APIs. Gaming companies like Ubisoft are using Intel’s Expressway API Manager to expose their content and they are leveraging API Analytics to provide their customers with a customized gaming experience.

Marketing Channel

APIs can be a tool for expanding market awareness through 3rd party distribution. Free APIs encourage 3rd party developer usage. The Freemium business model is followed by Amazon, eBay and Netflix. They allow potential customers to view content in a “kick-the-tires” mode thus promoting sales and memberships. A free API is free advertising. Facebook and Google also provide free access to most of their APIs. Their business model is to grab the consumer’s attention and to then sell advertising to marketers very willing to pay for access to that consumer attention. Traditional Retailers can leverage REST APIs to expose commodities, build brand awareness and to enable content acquisition.

Application Enablement

Exposing content via APIs has encouraged innovation and is also the cheapest, fastest way to get applications built. Facebook is one example of this economy. Facebook’s use of APIs has quickly broadened its users’ experience. The Facebook API subscriber applications FarmVille, Candy Crush Saga, Spotify, and Skype all contribute to the Facebook product. Salesforce.com has created a preeminent partner environment for customer relationship management through application enablement. Part of the Salesforce offering is a marketplace where apps can be browsed and downloaded. Automotive organizations could tap into this business model by publishing APIs and encouraging the development of applications that tie cars to maps, music, retail, and traffic.

Distribute Services

Another benefit of APIs is to enable mobile access for all platforms. eCommerce APIs go from web transactions to making sales from phones, tablets, TVs, and automobiles. In-store price checks are made possible with quick and easy phone scanner applications. Also consider how telecommunication firms can promote even more mobile usage by publishing API access to traditional services such as telephony, data and location services. Mobile application developers can become value added resellers for the Telco industry exposing and distributing Telco assets.

3rd Party Innovation

Mobile application developers are the innovators that enable the publishers of APIs to address the long tail of markets and segments. Long tail marketing concentrates on less popular products and relies on low inventory cost, carrying lots of items, targeted product descriptions, product tracking, easy access by customers, and customer-posted reviews. That sounds a lot like Amazon, but the strategy could be applied to an industry like Healthcare too. Walgreens has published the Pharmacy Prescription Refill API, building on its success with the QuickPrints API, and has been proactively reaching out to developers to include the refill API in existing applications. Healthcare APIs can equip Healthcare service providers with the tools to address these long tail markets.

Lock-In

Integrated enterprise code does not change often. Your business becomes sticky when your APIs are built into the fabric of other businesses’ operations. Gift certificates are an example of this economic model. And once your API fits operationally, it can be difficult to replace. Utilities could leverage this model by exposing service rates and usage as APIs. Home devices such as thermostats, refrigerators, air conditioners, and lights can build on the utility APIs and encourage consumers to lock in.

The API Economy is not just for the early adopters. Whatever business model you pursue, the marketplace has plenty of room for more APIs. Intel’s Expressway API Manager can help you quickly and securely expose APIs. Intel’s API Management portal options, cloud and on-premise, provide self-service developer onboarding. Leverage Intel’s guidance on how to evangelize your APIs and be on your way to increasing your bottom line.

Connecting Enterprise APIs to Mobile Development
http://blogs.intel.com/application-security/2013/09/20/connect-enterprise-apis-to-mobile-application-development/ (Fri, 20 Sep 2013 17:00:00 +0000)

I am very excited to be speaking alongside Andreas Constantinou from VisionMobile next week in a joint webinar entitled “Connect Enterprise APIs to Mobile Application Development.”

We’ll be talking about the explosion of mobile application development tools and the complexity this brings to Enterprises looking to mobile-enable their applications. Andreas will go through this explosion of tools: more than 700 tools in 25 categories that span the mobile application development lifecycle. He’ll walk through the lifecycle developed by VisionMobile, which includes six categories or stages: (i) integrate, (ii) develop, (iii) test, (iv) deploy, (v) measure, and (vi) market.

I’ll be walking through some slides and talking about ideas around architectural changes you can expect to see in the datacenter as the Enterprise begins to emphasize mobile-ready architectures rather than legacy web server/app server architectures for Enterprise APIs. I’ll also discuss the tie between mobile enablement and API management, and then explore an example use case done at Intel with HTML5 and Intel Expressway for API Management. The future is very exciting for Enterprise APIs if you can keep costs down!

Building an API Strategy? We Can Help!
http://blogs.intel.com/application-security/2013/09/19/building-an-api-strategy-we-can-help/ (Thu, 19 Sep 2013 17:00:00 +0000)

My colleague Blake posted yesterday with a response to Daniel Jacobson’s thought-provoking post, “Why you probably don’t need an API strategy”. Blake spells out some pretty clear reasons why you do need an API strategy and outlines some of the different things to consider when formulating one. If you’re starting from the ground up, or looking to change direction or incorporate a gateway into your architecture, we have a couple of programs that can help.

First, we’re excited to announce a new program from our partnership with Kin Lane, API Evangelist and Presidential Innovation Fellow. For a limited time, we’re offering 30 minutes of free API consulting to qualified enterprises. If you’re looking for an outside party to help shape your API strategy, Kin’s your man. Even though we pay for the consultation, it’s 1:1 with no Intel sales or consulting staff on the line.

API Strategy sessions with Kin Lane

Second, if you’re just looking for more information on our product or would like a longer conversation with an API implementation expert, we have enabled live chat through our Big Data and Application Security portal. Our team can give guidance on tools and solution architectures for API management, along with different deployment options. From on-prem to SaaS to hybrid, we’ve seen and implemented pretty much every variant imaginable. These can be longer sessions as needed, allowing you to get a better feel for what your fellow travelers are doing to jump start their API strategy. As an added benefit, our team also covers the Intel Distribution for Apache Hadoop, so this live chat can get you in contact with Hadoop experts and folks who specialize in surfacing and protecting APIs for Big Data (as well as broader Hadoop implementation and training topics).

Connect with us for help building your API Strategy

If neither of those meet your needs but you still want more information, of course we have traditional methods of contact as well. We’d be happy to set up a live demo or provide you with more information about our professional services offerings.

It’s been clear for a while now that pretty much every brand is going to need an API to survive. Sure, you can take the “ready, fire, aim” approach, but it’s well worth the investment to create and periodically revisit an API strategy. If you agree but you’re not sure where to start, we’re here to help.