My Experience with WebRTC for iOS App Development


WebRTC (the RTC stands for Real-Time Communications) is an open-source project that lets you stream data between browsers or other applications using peer-to-peer* technology.

German Polyanskiy, iOS developer



*If you want some background on WebRTC, I recommend starting with an introductory article on the platform.

How I Came to Use WebRTC

The project’s objective:

We need to connect two random users together and allow them to transmit real-time video streaming data to one another.
How do we solve this problem?

Option 1:

Connect both users to a server and relay the video between them through that server.

Cons:

Costly server equipment

Limited scalability due to the limited resources of the server

Increased video delay time due to the intermediary

Pros:

The user doesn’t have to do anything – it’s our job to control the server

Option 2:

Connect the two users directly, creating a data transmission tunnel so that each user receives the video stream straight from the other rather than from a server.

Cons:

Requires a continuous connection between the two users

The complexities of creating a peer-to-peer connection

NAT-related difficulties when transferring data directly

Requires multiple servers to create a connection

Pros:

Reduces the load on the server

Once established, the video stream is not interrupted even if the servers stop working

Video quality depends on the quality of the users’ data connection

End-to-end encryption can be implemented

Control over the connection lies in the hands of the users

One of the key goals of the task was to create a cost-effective solution with a minimal server load. Based on this information, we decided to go with option 2.
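Even with option 2, the peers still need a few helper servers: a signaling server to introduce them, a STUN server so each can discover its public address behind NAT, and a TURN server as a relay fallback when a direct path cannot be established. A minimal sketch of how that server list might be modeled (all URLs and credentials below are hypothetical placeholders, not real endpoints):

```swift
import Foundation

// One ICE server entry, in the shape WebRTC expects:
// STUN entries need only a URL; TURN entries also need credentials.
struct IceServer: Codable {
    let urls: [String]
    var username: String? = nil
    var credential: String? = nil
}

// Hypothetical configuration; a real deployment would load
// these values from the application server.
let iceServers = [
    IceServer(urls: ["stun:stun.example.com:3478"]),
    IceServer(urls: ["turn:turn.example.com:3478"],
              username: "user",
              credential: "secret"),
]

// The TURN entries are the costly ones: they relay media whenever
// NAT traversal fails and a direct peer-to-peer path is impossible.
let turnServers = iceServers.filter { $0.username != nil }
```

This is exactly the trade-off listed above: the STUN/TURN servers are cheap compared with relaying every video stream, which is what made option 2 cost-effective.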

After a diligent search on the internet, I discovered the WebRTC platform. I felt it would be the perfect solution for this project.

Implementation

So began the second stage of my search. During this stage, I came across a lot of different online tutorials that tried to solve my WebRTC-related problems. However, I found minimal information related specifically to iOS.

Then, by chance, I was browsing the Google Chrome (Chromium) source repository and noticed a folder called "WebRTC". Inside it were several files and two folders, "Android" and "iOS". Eventually, I stumbled upon a directory called "Example" that contained a project named AppRTC.

I immediately launched the project on two iOS devices. I was quickly able to connect the two phones together – so I knew video chat integration over iOS was a possibility. Key video chat functions like registration and authorization were already available through AppRTC.

By this time, the project had reached a stage where we needed to implement the server-side infrastructure. We decided to find examples of servers that used AppRTC. We spent a lot of time on server-side debugging before the client and server finally "made friends."

The app was coming along quickly, but we soon encountered problems during the debugging phase. We were faced with the fact that the library was completely unstable: constant disconnects, poor video quality, processor usage above 120%, low refresh rates, and more.

We spent a lot of time on optimization, but we still couldn't get the service to the level we wanted. We were able to introduce minor speed improvements, but we ran into other problems – like the fact that many resources required VP8 and other codecs that the service did not support.

Eventually, we realized that we had to completely update the library. We ultimately rewrote 80% of the library – but it still did not work. The server part refused to work with the new version of the library.

Based on this frustrating development, we decided not to use our own server for WebRTC. In other words, the application server only matched connected users and invited them to connect; the application then worked with the AppRTC server, which handled all interaction with packet transmission and the STUN and TURN servers. All that remained was to rewrite the application against the latest version of the library.
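With this split, our application server's only job is matchmaking, while the SDP offer/answer and ICE candidate exchange flows through the AppRTC signaling server. A rough pure-Swift sketch of the kind of JSON messages such a signaling exchange carries (the message names and fields are illustrative, not the actual AppRTC protocol):

```swift
import Foundation

// Illustrative signaling messages: a session description
// ("offer" or "answer") plus ICE candidates, serialized as JSON
// for transport over the signaling channel.
enum SignalMessage: Codable {
    case sdp(type: String, description: String)
    case candidate(sdpMid: String, sdpMLineIndex: Int, sdp: String)
}

func encode(_ message: SignalMessage) throws -> Data {
    try JSONEncoder().encode(message)
}

func decode(_ data: Data) throws -> SignalMessage {
    try JSONDecoder().decode(SignalMessage.self, from: data)
}
```

Once both sides have exchanged an offer, an answer, and their candidates, media flows peer-to-peer and the signaling channel goes quiet.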

During the rewrite, I was able to increase the number of frames per second and considerably reduce the processor load by encoding video with the integrated H.264 codec.
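The gain comes from the fact that iPhones encode H.264 in hardware, whereas VP8 runs in software. In the iOS WebRTC SDK this kind of preference boils down to ordering the codec list handed to the encoder factory; a simplified pure-Swift sketch of that selection logic (the function is illustrative, not the SDK's API):

```swift
import Foundation

// Reorder the supported codec names so the preferred one is tried
// first; if it is unavailable, the original order is kept.
func prioritize(preferred: String, in supported: [String]) -> [String] {
    guard supported.contains(preferred) else { return supported }
    return [preferred] + supported.filter { $0 != preferred }
}

// On devices with a hardware H.264 encoder we put it first;
// VP8/VP9 remain as software fallbacks for negotiation.
let negotiationOrder = prioritize(preferred: "H264",
                                  in: ["VP8", "VP9", "H264"])
```

If the remote peer cannot decode H.264, negotiation simply falls through to the next codec in the list.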

Ultimately, we completed the rewrite, and the application worked even faster than before. As a result, the WebRTC library was moved into a separate Pod.

Conclusion

We solved the problem: we now have an application that connects two random users. Each user downloads the app, launches it, and connects with the other.

We achieved this with virtually no material costs. Best of all, the video quality depends on the bandwidth of each user’s internet connection.

Ultimately, we developed several solutions to optimize the AppRTC library and sent all of them to its developers.