Hi everybody, remember us? After a long downtime between actual posts, we figured it was time to clue you in on what’s been happening in the wonderful world of the REST API. Here’s our recap of the goings-on.

Version 1.2

The astute among you will have noticed we recently released version 1.2 of the API. If you’re not using it already, you’re missing out on Cross-Origin Resource Sharing (CORS) support, new actions and filters to support hijacking requests (for things like caching), and a tonne of bug fixes. Our thanks go out to all 29 (!!!) contributors to this release, and to Brian Krogsgard for our fantastic new plugin header.

It’s also with a little bit of sadness and a lot of excitement that we announce 1.2 will be the final major release for the 1.x branch. While version 1 has served us well, it’s time to move on to better things and prepare the project for core integration. Version 2 is well underway, and we’re hoping to have a beta out in the next month or so.

Version 1.9?

As @rachelbaker mentioned in the release post for 1.2, there’ll actually be another almost-release on the 1.x branch. From the start of the project, I’ve always pledged compatibility with whatever goes into core. This almost-release will be a final release on the v1 branch, with no new features or bug fixes; instead, it will strip out the internals of the version 1 code and hook them up to version 2.

This will allow existing code to work essentially forever into the future, using the version 1 interface around the version 2 implementation. Bug fixes on version 2 should then also be carried down to the version 1 code.

I’ve taken to calling this shim Version 1.9, although it may not ship under that name. Whatever the case, it won’t be counted as a full version 1 release, as it will only be a wrapper around version 2. We do, however, plan to maintain full backwards compatibility, as with every other release.

Version 2

Version 2 is under pretty heavy development right now. For those not familiar, this is the non-backwards-compatible edition of the API intended specifically for core integration. Version 2 is unlikely to ever have a full standalone release; however, we’re planning on releasing betas in the lead-up to core merge.

Note that while version 2 isn’t backwards compatible with version 1, it is an iteration on v1, so the API will be both inwardly and outwardly familiar to anyone using v1. We’re following a policy of not changing things purely for the sake of it, so the eventual version will be easy to adapt to for anyone using it now.

So far, our core focus has been around two key elements: extensibility and consistency. We’ve refactored and rearchitected a fair chunk of the core endpoints to make them more easily reusable, as well as ensuring that they all follow a similar structure to make them easier to learn. As part of this, we’ve also introduced better support for common tasks like checking permissions, and changed the way endpoints interface with the infrastructure. One key change is that endpoints now receive only a single Request object parameter (modeled after PSR-7, for those keeping count). This means parameter registration has moved out of the function signature and into the endpoint registration instead.

We’re also working on making requests and responses consistent across the board, including swapping our dodgy filter parameters out for better-supported querying. Fields are being renamed (albeit not without careful consideration first) to make the API easier to learn and use, as well as to help clients be more robust. Internal linking is also changing to match the HAL specification, along with support for “embedding” related resources. This is designed to help mobile clients and similar avoid excessive round-trips, at the cost of larger response bodies; we’ve left it up to client authors to decide and calculate the tradeoffs.
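To sketch what HAL-style linking with embedding looks like (the field names, IDs, and URLs here are illustrative, not final), a response for a post might contain:

```json
{
    "id": 123,
    "title": "Hello world",
    "_links": {
        "self": [
            { "href": "https://example.com/wp-json/posts/123" }
        ],
        "author": [
            { "href": "https://example.com/wp-json/users/42", "embeddable": true }
        ]
    },
    "_embedded": {
        "author": [
            { "id": 42, "name": "Jane Doe" }
        ]
    }
}
```

A client that opts in to embedding gets the author inline and saves a round-trip; one that doesn’t simply follows the href when it needs the related resource.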

We also occasionally replace these meetings with voice and video meetings on Google Hangouts, depending on availability and agenda. In these cases, we’ll post the hangout link in the channel at the start of the hour. Everyone is welcome to join and listen in or participate, but please appreciate the limited amount of time and energy we have for these meetings.

Danke

Thank you again to all the wonderful people who make this API possible, including but not limited to the amigos (Rachel, Daniel, and Joe), our core team minders (Gary, Dion, and nacin, among others), lovable lurkers (Demitrious), everyone speaking about the API (I don’t know how Jack Lenox gets anything else done), our wonderful journalists (Brian and Sarah), and everyone else. And you, especially you. ❤

We’re also opening this comment thread for any thoughts, feedback, feelings, or otherwise that you’d like to post. If there’s something you want to tell us or talk about, here’s the opportunity to do so.

Hey everyone! Quick heads up on some of the work that’s been going on this week:

The CLI client is now functional – The readme runs through how to connect WP-CLI to your site and get started. Currently, it only contains user and post reading functionality; however, post creation/editing/deletion can be achieved relatively easily! This is more a question of time; building out the remaining functionality should be relatively painless. Volunteers welcome, as always!

Post endpoint testing is getting filled out – @rachelbaker and I have been working on getting these up, with the aim of 100% coverage of the post endpoint code. We’re slowly getting there! Reminder that we can always use help with writing tests, as there’s plenty to tackle here.

JS client is getting filled out – Thanks to Taylor Lovett, Matthew Haines-Young and K. Adam White for a huge push on the JS client recently. This client is significantly better than the version in 1.0 thanks to the tireless effort from these folks; that said, contributors are always welcome.

BuddyPress now has an API plugin – modemlooper has created a plugin that adds API endpoints for BuddyPress. Take a look if you’re interested in using BuddyPress data!

Pods now has an API plugin – Scott Kingsley Clark has created a plugin that adds API endpoints for Pods. Check it out!

1.0 is out! Thanks to everyone who helped out with this release and made it the best so far.

That said, progress never halts! We’re working on 1.1 now, and there’s already a heap of issues open. We need to start on a huge documentation and testing push, which @rachelbaker has already started on.

I’ve published the initial draft of the core integration plan for the API on GitHub. It covers motivation behind the API, rationale for the current design, and concerns of the integration itself. Feedback extremely welcome.

One of the longest-standing feature requests for the API is support for modifying post meta. While at first it may seem like a reasonably simple request, it becomes quite the rabbit hole once you begin digging in. Here’s a quick summary of the issues surrounding meta, and how we’re looking at solving them.

The Issues

The biggest issue with post meta is the difference between the data model in WordPress, and the most common usage of meta. In general, the main meta usage is as a key-value store; that is, a one-to-one mapping between a meta key and a meta value. However, the data model in WordPress allows multiple values per key. Handling these in a coherent way is challenging, as we want to handle the most common case easily while also making the multiple value model possible.

The other main issue with meta is serialized data. As anyone who’s used post meta in WP knows, WP will store any type of data (except resources or closures) by serializing the data under the hood. This isn’t usually an issue, as the process is transparent for most uses of post meta. However, this presents a problem when accessing the data via the API, as we cannot expose this data transparently.

JSON has no distinction between associative arrays and objects, as associative arrays only exist in PHP. In addition, JSON cannot convey an object’s type; that is, the representations of stdClass and a custom MyAwesomeObject are identical in JSON. Exposing this data using the default JSON encoding semantics would cause data loss. On the flip side, while exposing the raw serialized string would not cause data loss, it would expose protected and private properties on objects, as well as internal implementation details, including the class name. This could expose critical internal data.
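As a tiny illustration (the property name is hypothetical), a stdClass instance and a MyAwesomeObject instance holding the same properties both encode to the identical JSON below, so the original class can never be recovered on the way back in:

```json
{
    "level": 5
}
```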

In addition to these issues, combining the two can cause further problems. With a naive approach of mapping key-to-value in a JSON object and allowing multiple keys and serialization, we could have a result like:

{
    "my_key": [
        "value1",
        "value2"
    ]
}

However, it’s impossible to tell whether this is a key with multiple values, or a key with a single value of a PHP array.

If we treat this as a key with multiple values, updating the values could prove problematic. How do we distinguish between adding elements, updating existing elements, and removing elements? Simply leaving elements out does not necessarily mean we want to remove them, as we may just want to reduce the amount of data being sent over the wire.
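As an illustration, suppose the server has stored the two values shown earlier and a client sends back:

```json
{
    "my_key": [
        "value1"
    ]
}
```

It’s impossible to tell whether the client wants "value2" removed, or is only sending a partial update to save bandwidth.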

Proposals

With these issues considered, there are a few resolutions we need to implement.

The first resolution is to not handle serialized data at all: neither displaying it nor allowing it to be modified. For all intents and purposes, serialized data will be treated as protected meta. We cannot avoid this, due to the object data-loss issue.

We’ve now come up with two proposals on how to handle updates.

First Proposal

The first proposal by Rachel Baker and Taylor Lovett uses the following format for reading the data:
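In this format, each meta key maps to an array of its values, along these lines (keys and values are illustrative):

```json
{
    "post_meta": {
        "my_key": [
            "value1",
            "value2"
        ]
    }
}
```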

This approach has the advantage of keeping data access fairly simple: the most common case of key-value storage is simply post_meta.my_key[0], while code handling multiple values can iterate over post_meta.my_key.

However, it has the disadvantage that the input format does not match the output format. This means you cannot send the post data straight back to the server without causing an error. In addition, it mixes actions into the data itself that is sent to the server. Multiple values are also not handled; however, this could be corrected by including a previous_value when updating.

Second Proposal

The second proposal by myself builds on Rachel and Taylor’s work, but changes the format slightly. Data looks like the following when reading a post:
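Here, each value carries its database meta ID, roughly like this (IDs, keys, and values are illustrative):

```json
{
    "post_meta": [
        {
            "ID": 40,
            "key": "my_key",
            "value": "value1"
        },
        {
            "ID": 41,
            "key": "my_key",
            "value": "value2"
        }
    ]
}
```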

This approach has the advantage that the input format matches the output format. Submitting the post data back unchanged will have no effect, as the values will already match. In addition, the action to take is implied by the data rather than specified alongside it: updating a value requires just updating the value, adding a value requires adding an entry without an ID, and deleting a value requires setting the value to null (that is, specifying an empty value). This format uses the meta ID from the database as the primary key, allowing easy manipulation of multiple values.

One of the disadvantages of this approach is that reading data becomes more complicated. Accessing all data for a key now involves filtering the meta values on the client side, rather than a simple lookup. While this is reasonably easy to achieve, it’s not as obvious as a straight access (and also has worse performance characteristics). The updating format is less obvious, as it’s implied from the data format rather than being spelled out explicitly.

Other Approaches

One approach that isn’t considered above is using the first approach’s data for the post itself, and exposing meta in the second form via another endpoint (e.g. /posts/[id]/meta). While this would enable both simple and complicated uses nicely, it also introduces significant fragmentation and duplication. This means developers would need to learn and support two separate methods of achieving the same result, and also work out internally which to use. In practice, clients would end up simply supporting a single approach for consistency. This approach would also violate the Decisions, Not Options mantra of WordPress.

Decisions

We need to make a decision on how we handle meta data. Personally, I’m biased towards the solution I wrote, but it’s not the perfect solution, and we’ll never have a truly perfect situation. Both approaches are a compromise, and we need to decide on which compromise we want to choose.

I’d love to hear thoughts on which approach people would prefer, and anything we may have missed during consideration.