there are just a few days left before my live O'Reilly Implementing Hypermedia online tutorial on february 9th (11AM to 5PM EST), and i'm spending the day tweaking the slides and working up the six hands-on lessons. as i do this, i'm really looking forward to the interactive six-hour session. we'll be covering quite a bit in a single day, too.

by the time the day is done, everyone will have a fully-functional Hypermedia API service up and running and a Cj-compliant general-purpose hypermedia client that works w/ ANY Web API that supports the Collection+JSON media type.

Greenville Hypermedia Day

the tutorial is geared toward both individual and team participation. i know some companies are arranging a full-day session with their own dev teams for this, too. and i just heard about a cool event in Greenville, SC for people who want to get the "team" spirit...

i found out that Benjamin Young is hosting a Hypermedia Day down in Greenville, SC on feb 9th. If you're in the area, you can sign up, show up, join the tutorial in progress, and chat it up w/ colleagues. I know Benjamin from our work together for RESTFest and he's a good egg w/ lots of skills. He'll be doing a Q&A during the breaks in the tutorial modules and i think he might have something planned as an "after-party" thing at the end of the day.

if you're anywhere near Greenville, SC on feb-09, you should join Benjamin's Hypermedia Day festivities!

cut me some slack

i know most of the attendees are going "solo" -- just you, me, and the code -- that's cool. O'Reilly is hosting a live private Slack channel for everyone who signs up for the tutorial. I'll be around all day (and probably some time after that, too) so we can explore the exercises, work out any bugs, and just generally chat.

it's all ready!

so, as i wrap up the slides, the hands-on lessons, the github repo, and the heroku-hosted examples, i encourage you to sign up and join us for a full day of hypermedia, NodeJS, and HTML5.

see you there!

the week of january 11th i'll be in Dallas for two events. this is my first trip of 2016 and i'm looking forward to catching up w/ my Dallas peeps. I'll be visiting with the great folks at DFW API Professionals on Jan-13 and addressing a gathering of Dallas-area IT dignitaries at AT&T Stadium during the day on the 14th.

DFW API Professionals

i've known Traxo's Chris Stevens for several years and, when i learned i would be in Dallas in January, we were able to arrange an oppty for me to address his meetup group: DFW API Professionals. I'll be talking about and demoing hypermedia API client coding patterns and taking questions, too. check out the event and, if you can, join me and the whole DFW API Pro membership.

API Management Best Practices Discussion

on thursday, i'll be at the AT&T Stadium with my fellow CA colleagues and folks from Perficient to join in the discussion on API mgmt and a look into the near future. i get to share the podium w/ CA SVP and Distinguished Engineer, Scott Morrison. in a lively open discussion (no slideware), we'll be covering API design, deployment, DevOps, Microservices, and IoT with Perficient's Director of Emerging Platform Solutions, Annel Adzem. stellar conversation, stunning view of the field -- what's not to like?

just the beginning

of course, this is just the start of my travels for 2016. i've already got the cities of Vancouver, Washington DC, E. Brunswick, Seoul, Tokyo, Melbourne, Sydney, San Francisco, Sao Paulo, Rio, Buenos Aires, and New York on my agenda. and that's just the first few months of 2016!

gonna be another great year with the API Academy! if you're anywhere near those cities, keep in touch and i hope we meet up sometime soon.

A number of maxims have gained currency among the builders and users of microservices to explain and promote their characteristic style:

(i) Make each microservice do one thing well.
To do a new job, build afresh rather than complicate old microservices by adding new features.

(ii) Expect the output of every microservice to become the input to another, as yet unknown, microservice.
Don't clutter output with extraneous information. Avoid strongly-typed or binary input formats.
Don't insist on object trees as input.

(iii) Design and build microservices to be created and deployed early, ideally within weeks.
Don't hesitate to throw away the clumsy parts and rebuild them.

(iv) Use testing and deployment tooling (in preference to manual efforts) to lighten a programming task,
even if you have to detour to build the tools and expect to throw some of them out after you've finished using them.

this is my ring of keys -- just three of them: work, home, car. i've been focusing over the last couple years on reducing. cutting back. lightening my load, etc. and the keys are one of my more obvious examples of success.

i've also been trying to lighten my load cognitively -- to reduce the amount of things i carry in my head and pare things down to essentials. i think it helps me focus on the things that matter when i carry fewer things around in my head. that's me.

staring at my keys today led me to something that's been on my mind lately. something i am seeing quite often when i visit customers. the approach these companies use for governing their IT development lacks the clarity and focus of my "three keys." in fact, most of the time as i am reading these companies' governance documents, they make me wince. why? because they're bloated, overbearing, and -- almost all of them -- making things worse, not better.

over-constraining makes everyone non-compliant

i am frequently asked to provide advice on design and implementation of API-related development programs -- most often APIs that run over HTTP. and, in that process, i am usually handed some form of "Design-Time Governance" (DTG) document that has been written in-house. sometimes it is just a rough draft. sometimes it is a detailed document running over 100 pages. but, while the details vary, there are general themes i see all too often.

Constraining HTTP

almost every DTG approach i see lists things like HTTP methods (and how to use them), HTTP response codes (and what they mean), and HTTP Headers (including which new REQUIRED headers were invented for this organization). all carefully written. and all terribly wrong. putting limits on the use of standard protocols within your organization means every existing framework, library, and tool is essentially non-compliant for your shop. that's crazy. stop that! if your shop uses HTTP to get things done, just say so. don't try to re-invent, "improve", or otherwise muddle with the standard -- just use it.

Designing URLs

another thing i see in DTG documents is a section outlining the much-belabored and elaborate URL design rules for the organization. Yikes! this is almost always an unnecessary level of "bike-shedding" that can only hold you back. designing URLs for your org (esp. large orgs) is a fool's errand -- you'll never get it right and you'll never be done with it. just stop. there are more than enough agreed standards on what makes up a valid URL and that's all you need to worry about. you should resist the urge to tell people how many slashes or dashes or dots MUST appear in a URL. it doesn't improve anything.

look, i know that some orgs want to use URL design as a way to manage routing rules -- that's understandable. but, again, resist the urge to tell everyone in your org which URLs they can use for now and all eternity. some teams may not rely on the same route tooling and will use different methods. some may not use routing tools at all. and, if you change tooling after five years, your whole URL design scheme may become worthless. stop using URLs as your primary routing source.

Canonical Models

i really get depressed when i see all the work people put into negotiating and defining "canonical models" for the organization. like URL designs, this always goes badly sooner or later. stop trying to get everyone/every-team to use the same models! instead, use the same message formats. i know this is hard for people to grasp (i've seen your faces, srsly) but i can't emphasize this enough. there are several message formats specifically designed for data transfer between parties. use them! the only shared agreement that you need is the message format (along with the data elements carried in the message).

Versioning Schemes

here's one that just never seems to go away -- rules and processes for creating "new versions" of APIs. these things are a waste of time. the phrase "new version" is a euphemism for "breaking changes" and this should never happen. when you build sub-systems that are used by other teams/customers you are making a promise to them that you won't break things or invalidate their work (at least you SHOULD be making that promise!). it is not rocket science to make backward-compatible changes -- just do it. once you finally accept your responsibility for not breaking anyone using your API, you can stop trying to come up w/ schemes to tell people you broke your promise to them and just get on with the work of building great software that works for a long time.
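
to make the point concrete, here's a tiny sketch of a backward-compatible (additive) change in javascript. the record shape and field names are made up for illustration -- the point is that the update only adds an optional field and never renames or removes anything an existing consumer depends on.

```javascript
// what v1 consumers were promised: "name" and "email" (hypothetical fields)
function customerResponseV1(record) {
  return { name: record.name, email: record.email };
}

// the updated response adds "phone" without touching anything else, so
// old clients keep working and new clients can use the new field
function customerResponseUpdated(record) {
  return {
    name: record.name,           // unchanged -- old clients still find it
    email: record.email,         // unchanged
    phone: record.phone || null  // new and optional -- ignored by old clients
  };
}

const record = { name: "Mabel", email: "mabel@example.org", phone: "555-0100" };
const updated = customerResponseUpdated(record);

// a simple guard: every field a v1 client depends on must still be present
for (const key of Object.keys(customerResponseV1(record))) {
  if (!(key in updated)) throw new Error("breaking change: " + key);
}
```

a check like that last loop makes a nice build-time test: it goes "red" when someone drops a promised field, which is exactly the kind of public-interface breakage most test tooling misses.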

so, stop constraining HTTP, stop designing URLs, stop trying to dictate shared models, and forget about creating an endless series of breaking changes. "What then," you might ask, "IS the proper focus of design-time governance?" "How can I actually govern IT systems unless I control all these things?"

three keys form the base of design-time governance

ok, let me introduce you to my "three keys of DTG". these are not the ONLY things that need the focus of IT governance, but they are the bare minimum -- the essential building blocks. the starting point from which all other DTG springs.

Protocol Governance

first, all IT shops MUST provide protocol-level governance. you need to provide clear guidance and control over which application-level protocols are to be used when interacting with other parts of the org, other sub-systems, etc. and it is as simple as saying which protocols are REQUIRED, RECOMMENDED, and OPTIONAL. for example...

"Here at BigCo, Inc., all installed components that provide an API MUST support HTTP. These components SHOULD also support XMPP and MAY also support CoAP. Any components that fail to pass this audit will be deemed non-compliant and will not be promoted to production."

you'll notice the CAPITALIZED words here. these are all special words taken from the IETF's RFC2119. they carry particular meaning here and your DTGs SHOULD use them.

Format Governance

another essential governance element is the message formats used when passing data between sub-systems. again, nothing short of clear guidance will do here. and there is no reason to invent your own message-passing formats when there are so many good ones available. for example...

"All API data responses passed between sub-systems MUST support HTML. They SHOULD also support one of the following: Collection+JSON, HAL, Siren, or UBER. sub-systems MAY also support responses in Atom, CSV, or YAML where appropriate. When accepting data bodies on requests, all components MUST support FORM-URLENCODED and SHOULD support request bodies appropriate for related response formats (e.g. Collection+JSON, Siren, etc.). Any components that fail to pass this audit will be deemed non-compliant and will not be promoted to production."

you'll notice that my sample statement does not include TXT, JSON or XML as compliant API formats. why? because all of them suffer the same problem -- they are insufficiently structured formats.
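
here's what i mean, sketched in javascript (the URLs and field names are made up). plain JSON leaves the structure implicit -- the client has to know the object shape ahead of time. a Collection+JSON-style message spells out every field as a name/value pair, so a generic client can walk it without knowing the model:

```javascript
// plain JSON: the shape is implicit; clients must be coded to this model
const plain = { fname: "Mabel", email: "mabel@example.org" };

// a Collection+JSON-style message (illustrative content): every field is an
// explicit name/value/prompt triple inside a "data" array
const cj = {
  collection: {
    version: "1.0",
    href: "http://api.example.org/customers",
    items: [
      {
        href: "http://api.example.org/customers/1",
        data: [
          { name: "fname", value: "Mabel", prompt: "First Name" },
          { name: "email", value: "mabel@example.org", prompt: "Email" }
        ]
      }
    ]
  }
};

// a generic client can render ANY Cj response the same way -- no per-model code
const fields = cj.collection.items[0].data.map(d => `${d.prompt}: ${d.value}`);
```

that `map` is the whole client-side "binding" -- it works unchanged no matter what fields the service decides to send.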

Vocabulary Governance

the first two keys are easy. have a meeting, argue with each other about which existing standards are acceptable and report the results. done. but, this last key (Vocabulary Governance) is the hard one -- the kind of work for which enterprise-level governance exists. the one that will likely result in lots of angry meetings and may hurt some feelings.

there MUST be an org-level committee that governs all the data names and action names for IT data transfers. this means there needs to be a shared dictionary (or set of them) that is the final arbiter of what a data field is named when it passes from one sub-system to the other. managing the company domain vocabulary is the most important job of enterprise-level governance.

the careful reader will see that i am not talking about governing storage models or object models here -- just the names of data fields passed within messages between sub-systems. understanding this is most critical to the success of your IT operations. models are the responsibility of local sub-systems. passing data between those sub-systems is the responsibility of IT governance.
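
here's a tiny sketch of what vocabulary enforcement can look like in practice (the dictionary contents and the check itself are hypothetical -- your org's shared dictionary is the real artifact):

```javascript
// the org-level shared dictionary: the final arbiter of field names
// used in messages passed between sub-systems (contents illustrative)
const vocabulary = new Set(["givenName", "familyName", "emailAddress"]);

// return any field names in an outbound message that are NOT in the
// shared dictionary -- a non-empty result means the message is non-compliant
function nonCompliantFields(message) {
  return Object.keys(message).filter(name => !vocabulary.has(name));
}

const ok = nonCompliantFields({ givenName: "Mabel", emailAddress: "m@example.org" });
const bad = nonCompliantFields({ first_name: "Mabel" }); // local model name leaked out
```

note what this does NOT check: how the sub-system stores or models the data internally. only the names crossing the wire are governed.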

what about all those "ilities"?

as i mentioned at the opening, these three keys form the base of a solid DTG. there are still many other desirable properties of a safe and healthy IT program including availability, reliability, security, and many more. this is not about an "either/or" decision ("Well, I guess we have to choose between Mike's three keys and everything else, right?" -- ROFL!). we can discuss the many possible/desirable properties of your IT systems at some point in the near future -- after you implement your baseline.

so, there you have it. protocol, format, vocabulary. get those three right and you will be laying the important foundation for an IT shop that can retain stability without rigidity; that can adapt over time by adding new protocols, formats, and vocabularies without breaking existing sub-systems or ending up in a deep hole of technical-debt.

tight coupling is trouble

tight coupling to any external component or service -- what i call a fatal dependency -- is big trouble. you don't want it. run away. how do you know if you have a fatal dependency? if some service or component you use changes and your code breaks -- that's fatal. it doesn't matter what code framework, software pattern, or architectural style you are using -- breakage is fatal -- stop it.

the circuit

you can stave off fatalities by wrapping calls to dependencies in what Nygard, in his book Release It!, calls a Circuit Breaker. but that requires that you either 1) have an alternate service provider (or set of them), or 2) write your code such that an unavailable dependency doesn't render your code essentially unusable ("Sorry, our bank is unable to perform deposits today."). and the Circuit Breaker pattern is not meant for use when services introduce breaking changes anyway -- it's for cases when the dependent service is temporarily unavailable.
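
for reference, here's a minimal circuit-breaker sketch in javascript (thresholds and names are illustrative, not taken from Nygard's book): after enough consecutive failures the breaker "opens" and fails fast instead of hammering a dead dependency, then allows a trial call once the reset window passes.

```javascript
// minimal circuit breaker: wraps a call to a dependency
class CircuitBreaker {
  constructor(fn, maxFailures = 3, resetMs = 30000) {
    this.fn = fn;                 // the dependent call being protected
    this.maxFailures = maxFailures;
    this.resetMs = resetMs;
    this.failures = 0;            // consecutive failure count
    this.openedAt = null;         // timestamp when the circuit opened
  }

  call(...args) {
    if (this.openedAt !== null) {
      if (Date.now() - this.openedAt < this.resetMs) {
        throw new Error("circuit open -- failing fast");
      }
      this.openedAt = null;       // half-open: allow one trial call through
    }
    try {
      const result = this.fn(...args);
      this.failures = 0;          // success closes the circuit
      return result;
    } catch (err) {
      this.failures += 1;
      if (this.failures >= this.maxFailures) this.openedAt = Date.now();
      throw err;                  // caller still needs a fallback plan
    }
  }
}
```

notice that the caller still has to catch the "circuit open" error and do something sensible -- which is exactly the point above: the breaker buys you fast failure, not a substitute for the missing dependency.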

a promise

you're much better off using services that make a promise to their consumers that any changes to that service will be non-breaking. IOW, changes to the interface will be only additive. no existing operations, arguments or process-flows will be taken away. this is not really hard to do -- except that existing tooling (code editors, build-tools, and testing platforms) makes it really easy to break that promise!

there are lots of refactoring tools that make it hard to break existing code, but not many focus on making it hard to break existing public interfaces. and it's rare to see testing tools that go 'red' when a public interface changes even though they are great at catching changes in private function signatures. bummer.

so you want to use services that keep the "no breaking changes" pledge, right? that means you also want to deploy services that make that pledge, too.

honoring the pledge

but how do you honor this "no breaking changes" pledge and still update your service with new features and bug fixes? it turns out that isn't very difficult -- it just takes some discipline.

here's a quick checklist for implementing the pledge:

promise operations, not addresses

service providers SHOULD promise to support a named operation (shoppingCartCheckOut, computeTax, findCustomer) instead of promising exact addresses for those operations (http://myservice.example.org/findCustomer). on the Web you can do that using properties like rel or name or id that have predetermined values that are well-documented. when this happens, clients can "memorize" the name instead of the address.
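
a quick sketch of that in javascript (the link document, `rel` values, and URLs below are all made up for illustration): the client memorizes the operation name and resolves the address at runtime from the response.

```javascript
// a response that advertises operations by name; the addresses can change
// from release to release without breaking anyone (content illustrative)
const response = {
  links: [
    { rel: "shoppingCartCheckOut", href: "http://myservice.example.org/cart/co" },
    { rel: "findCustomer", href: "http://myservice.example.org/findCustomer" }
  ]
};

// resolve the address from the well-documented name at runtime
function resolve(doc, name) {
  const link = doc.links.find(l => l.rel === name);
  return link ? link.href : null;
}

const target = resolve(response, "findCustomer");
```

if the service moves `findCustomer` to a new URL tomorrow, this client doesn't change at all -- the name is the contract, not the address.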

promise message formats, not object serializations

object models are bound to change -- and change often for new services. trying to get all your service consumers to learn and track all your object model changes is just plain wrong. and, even if you wanted all consumers to keep up with your team's model changes, that means your feature velocity is tied to the slowest consumer in your ecosystem - blech! instead, promise generic message formats that don't require an understanding of object models. formats like VoiceXML and Collection+JSON are specifically designed to support this kind of promise. HTML, Atom, and other formats can be used in a way that maintains this promise, too. clients can now "bind" to the message format, not the object model -- changes to the model on the service don't leak out to the consumer. when this happens, adding new data elements in the response will not break clients.
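
here's the "bind to the format" idea in a few lines of javascript (field names are made up): the client walks whatever name/value pairs arrive, so a model change on the service side is a non-event.

```javascript
// a client bound to the message format: it renders name/value pairs,
// whatever they happen to be -- no knowledge of the service's object model
function render(item) {
  return item.data.map(d => `${d.name}=${d.value}`).join("&");
}

// the response before the service team changed their model...
const before = { data: [{ name: "fname", value: "Mabel" }] };

// ...and after they added a field -- no client change required
const after = {
  data: [
    { name: "fname", value: "Mabel" },
    { name: "region", value: "SE" }  // new element, just rendered along
  ]
};
```

the same `render` function handles both responses; the service's feature velocity is no longer tied to its slowest consumer.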

promise transitions, not functions

service providers SHOULD treat all public interface operations as message-based transitions, not fixed functions with arguments. that means you need to give up on the classic RPC-style implementation patterns so many tools lead you into. instead, publish operations that pass messages (using registered formats like application/x-www-form-urlencoded) that contain the arguments currently needed for that operation. when this happens, clients only need to "memorize" the argument names (all pre-defined in well-written documentation) and then pay attention to the transition details that are supplied in service responses. some "old skool" peeps call these transition details FORMs, but it doesn't matter what you call them as long as you promise to use them.
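
a sketch of a message-based transition in javascript (the transition document, field names, and URL are hypothetical): the service describes the operation FORM-style, and the client builds a urlencoded message from the supplied field names rather than calling a fixed function signature.

```javascript
// a transition supplied in a service response: name, method, address,
// and the argument names currently needed (all content illustrative)
const transition = {
  name: "computeTax",
  method: "POST",
  href: "http://api.example.org/tax",   // discovered, not memorized
  fields: ["subtotal", "region"]
};

// build an application/x-www-form-urlencoded body from the FORM's own fields
function buildBody(form, values) {
  return form.fields
    .map(f => `${encodeURIComponent(f)}=${encodeURIComponent(values[f] ?? "")}`)
    .join("&");
}

const body = buildBody(transition, { subtotal: "100.00", region: "SC" });
```

if the service later adds an argument to the transition, the client's `buildBody` loop picks it up from the response -- no recompiled function signature anywhere.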

promise dynamic process-flows, not static execution chains

services SHOULD NOT promise fixed-path workflows ("I promise you will always execute steps X then A, then Q, then F, then be done."). this just leads consumers to hard-code that nonsense into their app and break when you want to modify the workflow due to new business processes within the service. instead, services SHOULD promise operation identifiers (see above) along with a limited set of process-flow identifiers (start, next, previous, restart, cancel, done) that work with any process-flow that you need to support. when this happens clients only need to "memorize" the generic process-flow keywords and can be coded to act accordingly.
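
here's what a flow-keyword client looks like in javascript (the step names and canned responses are made up): the client only knows "start", "next", and "done", and follows whatever order the service hands back.

```javascript
// canned service responses keyed by step (illustrative); each response
// carries generic flow identifiers, never a promised fixed path
const responses = {
  start: { step: "X", links: { next: "A" } },
  A:     { step: "A", links: { next: "Q" } },
  Q:     { step: "Q", links: { next: "F" } },
  F:     { step: "F", links: { done: true } }  // no "next" -- flow is finished
};

// follow "next" until the service stops supplying it; if the service
// reorders its steps tomorrow, this loop runs unchanged
function runFlow(docs) {
  const visited = [];
  let doc = docs.start;
  while (doc) {
    visited.push(doc.step);
    doc = doc.links.next ? docs[doc.links.next] : null;
  }
  return visited;
}

const path = runFlow(responses);
```

the client never hard-codes "X then A then Q then F" -- that order lives entirely on the service side, where the business process actually changes.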

not complicated, just hard work

you'll note that all four of the above promises are not complicated -- and certainly not complex. but they do represent some hard work. it's a bummer that tooling doesn't make these kinds of promises easy. in fact, most tools do the opposite. they make promised addresses, object serializations, fixed-argument functions, and static execution chains easy -- in some tools these are the defaults and you just need to press "build and deploy" to get it all working. BAM!

so, yeah. this job is not so easy. that's why you need to be diligent and disciplined for this kind of work.

eliminating dependencies

and -- back to the original point here -- decoupling addresses, operations, arguments, and process-flow means you eliminate lots of fatal dependencies in your system. it is now safer to make changes in components without so much worry about unexpected side-effects. and this will be a big deal for all you microservice fans out there because deploying dozens of independent services explodes your interface-to-operation ratio and it's just brutal to do that with tightly-coupled interfaces that fail to support these promises inherent in a loosely-coupled implementation.

for the win

so, do not fear. whether you are a "microservice" lover or a "service-oriented" fan, you'll do fine as long as you make and keep these four promises. and, if you're a consumer of services, you now have some clear measures on whether the service you are about to "bind" to will result in fatalities in your system.