Here we are, almost 2.5 years after the announcement of WebRTC. I recently attended WebRTC Global Summit. What I was most surprised about was the level and intensity of whining about what is missing in WebRTC.

This got me thinking – what really is missing in WebRTC – and why? Remember: WebRTC is a technology that is meant to be a native part of web browsers. Without WebRTC, there’s no way to interact in real time via the browser without plugins. With WebRTC, the sky is the limit.

Being only a technology also means that it is but a building block in a larger solution. What are the things Google left for us to whine about and implement on our own? Here are 8 of them, in no specific order.

1. Interoperability

As Serge Lachapelle of Google said at that same conference – WebRTC wasn’t created with Telecom in mind. Neither was it created with UC in mind. It was, and still is, about the web (and web developers).

This is why WebRTC doesn’t really care about interoperability in any meaningful way:

It has G.711, but no wideband voice codec other than Opus, which is even newer than WebRTC itself

It has no mandatory video codec yet, but the only browser implementations out there are using VP8 and not H.264

It allows no plain RTP. It mandates DTLS-SRTP instead of SDES-SRTP for key exchange

It forces the implementation of new or uncommon RFCs
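Because browsers today offer only Opus and VP8 beyond G.711, interoperating with a legacy endpoint usually starts with inspecting the SDP exchanged during negotiation. A minimal sketch of that check (the `sdpHasCodec` helper and the trimmed sample SDP are illustrative, not part of any WebRTC API):

```javascript
// Check whether an SDP blob advertises a given codec in its rtpmap lines.
// SDP rtpmap format: "a=rtpmap:<payload type> <codec name>/<clock rate>"
function sdpHasCodec(sdp, codecName) {
  const re = new RegExp("^a=rtpmap:\\d+ " + codecName + "/", "mi");
  return re.test(sdp);
}

// A trimmed-down audio m-section, roughly as a browser's createOffer
// might produce it (illustrative sample, not captured output).
const sampleSdp = [
  "m=audio 9 UDP/TLS/RTP/SAVPF 111 0",
  "a=rtpmap:111 opus/48000/2",
  "a=rtpmap:0 PCMU/8000",
].join("\r\n");

console.log(sdpHasCodec(sampleSdp, "opus")); // true  - Opus is offered
console.log(sdpHasCodec(sampleSdp, "PCMU")); // true  - G.711 mu-law is offered
console.log(sdpHasCodec(sampleSdp, "G722")); // false - no other wideband codec
```

A gateway that needs H.264 or G.722 on the far side has to do this kind of inspection, and then transcode when the intersection is empty.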

Interoperability is something left out of WebRTC intentionally. A wide gap left for the incumbents to fill.

The reason Google decided to ignore interoperability? Trying not to be mired down by the past.

2. Popular Voice and Video Codecs

While this affects interoperability, there’s more to it than that.

Voice and video codecs today are expensive and hard to purchase – to the point of impossibility in a lot of cases.

If you haven’t tried to license a voice codec, I urge you to go and try it. This isn’t an easy task, to say the least.

7. Server-Side Media Processing

Want to do recording? Fancy a multi-point video use case? How about broadcasting a session to thousands of participants?

All these require server-side media processing, and Google hasn't provided any infrastructure here for developers. This is where Backend-as-a-Service offerings for WebRTC come into play.

There’s also a need for media engines and SDKs that can be adopted and used on servers and assist WebRTC developers. Luckily – there are quite a few already.
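To see why multipoint pushes you toward a server, compare the upstream load per browser in a full mesh (every participant sends a copy of its stream to every other participant) with an SFU topology, where each browser sends once and the server fans out. A back-of-the-envelope sketch (function names are illustrative):

```javascript
// Streams each participant must SEND in an n-way call.
function meshUpstreamPerPeer(n) {
  return n - 1;         // full mesh: one copy per remote peer
}
function sfuUpstreamPerPeer(n) {
  return n > 1 ? 1 : 0; // SFU: one copy to the server, which fans out
}

for (const n of [2, 5, 10]) {
  console.log(`${n}-way call: mesh sends ${meshUpstreamPerPeer(n)}, ` +
              `SFU sends ${sfuUpstreamPerPeer(n)}`);
}
// A 10-way mesh call means 9 simultaneous encodes and uploads per browser;
// with an SFU it stays at 1 regardless of call size.
```

Uplink bandwidth and encoder CPU are the scarce resources on the client, which is why anything beyond small group calls needs that server-side component.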

8. Non-SPA Capabilities

Want to start a chat in one window and keep browsing during that session? Today it must all happen within the same web page. This model is called SPA – Single Page Application – and it is baked into WebRTC browser implementations today.

The problem? The moment you navigate to another page, the WebRTC session you had open gets severed. This makes it hard to embed WebRTC into existing websites.

It would be super nice if WebRTC enabled starting a session on one page, and somehow keeping it open while browsing to other pages inside the same website.
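Since WebRTC itself offers no way to keep a peer connection alive across navigation, services typically re-establish the session and hide the gap at the signaling layer. One common workaround pattern, sketched here with entirely hypothetical names: the signaling side hands out a resume token, the page stores it before navigating, and the next page exchanges it for the same logical session and builds a fresh peer connection.

```javascript
// A toy in-memory session registry standing in for a signaling server.
// All names are illustrative; this is a pattern, not a WebRTC API.
const sessions = new Map();

// Page 1: register the session and get a token to stash
// (e.g. in sessionStorage) before navigating away.
function createSession(roomId) {
  const token = "resume-" + Math.random().toString(36).slice(2);
  sessions.set(token, { roomId, createdAt: Date.now() });
  return token;
}

// Page 2: exchange the stored token for the room to rejoin,
// then build a brand-new peer connection for that room.
function resumeSession(token) {
  const record = sessions.get(token);
  return record ? record.roomId : null;
}

const token = createSession("room-42"); // before navigation
console.log(resumeSession(token));      // after navigation: "room-42"
console.log(resumeSession("bogus"));    // unknown token: null
```

The media itself still drops during the page transition; the pattern only makes rejoining seamless enough that users land back in the same conversation.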

Somehow, existing websites got the same treatment as existing VoIP networks – they are just ignored by the WebRTC spec.

Challenges or Opportunities?

Here’s the thing. You can view these gaps as challenges, and then decide to ignore WebRTC as a viable technology. Or you can decide it is actually an opportunity – and fill these gaps in your own service or for others.

I just published a report on choosing an API platform for WebRTC. Taking the time to review 13+ platforms and seeing which gaps they close for developers – and how – reinforced my belief that WebRTC was production ready yesterday.

It is just a matter of viewpoint.
