HLS, or HTTP Live Streaming, is a video streaming format first introduced by Apple in May 2009. It breaks streams into small file-based segments that are made available for download over HTTP, and it is now a widely supported format for viewing streams in almost real time. I say almost only because the protocol, by its very nature, introduces a lot of latency.

We’ve been getting a lot of questions and inbound interest from our customers about HLS. What are the advantages of using it? What are the disadvantages? How does HLS compare to WebRTC? Will Apple approve apps that don’t use HLS for streaming? This post is an attempt to address some of these questions.

Advantages

There are many reasons you would want to use HLS for your live streams, and this is why we recently added support for it to Red5 Pro.

Ubiquity

First of all, HLS is widely supported. Although originally conceived by Apple for QuickTime, iOS and the Safari browser, it’s now implemented on virtually every browser out there. Most of the leading browsers also support a comparable standard called MPEG-DASH, but since Apple’s Safari and iOS devices don’t support that one, we think HLS is currently the better choice.

Adaptive Bitrates

Another huge advantage of using HLS is that it allows for clients to choose from a variety of quality streams depending on the available bandwidth.

Disadvantages

So if it does all that, why wouldn’t I want to use HLS for my live streaming app?

Terrible Latency

It turns out that while HTTP Live Streaming was designed to deal efficiently with multiple quality streams, it wasn’t built for delivering video quickly. Practically speaking, HLS introduces at least 20 seconds of latency (often more!) in your streams.

Here’s why: HLS requires three segments in queue before it will allow playback, and the segments are divided at the keyframes in the video. The only way to create a super-low-latency stream (let’s say under one second) with HLS is to encode the outgoing video with a keyframe every 250 ms. The HLS playlist’s moving window would be four items long, with a segment duration of a quarter of a second. This would of course produce high-bandwidth video, add to the number of HTTP calls happening (at least four per second), and put additional load on the server.
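The arithmetic behind those latency figures is simple enough to sketch. The helper below is our own illustration (the class and method names are not part of any SDK); it just multiplies the three-segment buffering requirement by the segment duration:

```java
public class HlsLatency {
    // HLS players must buffer this many segments before playback begins.
    static final int SEGMENTS_REQUIRED = 3;

    // Minimum startup latency in milliseconds for a given segment duration.
    // Real-world latency is higher: encoding, delivery, and playlist
    // refresh delays all add to this floor.
    static long minStartupLatencyMs(long segmentDurationMs) {
        return SEGMENTS_REQUIRED * segmentDurationMs;
    }

    public static void main(String[] args) {
        // Apple's originally recommended 10-second segments:
        System.out.println(minStartupLatencyMs(10_000)); // 30000 (30 s)
        // The aggressive scenario above, with 250 ms segments:
        System.out.println(minStartupLatencyMs(250));    // 750
    }
}
```

Even in the aggressive 250 ms scenario the buffering floor alone is 750 ms, before any network or encoding delay is counted.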

The whole point of keyframes in codecs like H.264 is to reduce the number of times you need to send a full frame of image data; the frames in between only encode differences. In the scenario above, you might as well be sending the frames as a sequence of JPEG images. There’s a lot more to this, like the fact that media has to be packaged in 188-byte packets, which creates added overhead when you do it too much, but hopefully now you’ve got the gist of it: HLS is a poor choice when it comes to low-latency video streaming.
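To put a rough number on that 188-byte packaging overhead, here is a back-of-the-envelope sketch (our own helper, not part of any SDK): each MPEG-TS packet is 188 bytes, of which at least 4 bytes are header, leaving at most 184 bytes of payload per packet.

```java
public class TsOverhead {
    static final int TS_PACKET_SIZE = 188;
    static final int TS_HEADER_SIZE = 4; // minimum; adaptation fields add more
    static final int TS_PAYLOAD_SIZE = TS_PACKET_SIZE - TS_HEADER_SIZE; // 184

    // Number of 188-byte packets needed to carry `payloadBytes` of media.
    static long packetsFor(long payloadBytes) {
        return (payloadBytes + TS_PAYLOAD_SIZE - 1) / TS_PAYLOAD_SIZE;
    }

    // Total bytes on the wire, including the fixed per-packet header cost.
    static long wireBytesFor(long payloadBytes) {
        return packetsFor(payloadBytes) * TS_PACKET_SIZE;
    }

    public static void main(String[] args) {
        // One second of ~500 kbps video is 62,500 payload bytes,
        // which needs 340 packets = 63,920 bytes on the wire.
        long payload = 62_500;
        long wire = wireBytesFor(payload);
        System.out.printf("%.1f%% overhead%n", 100.0 * (wire - payload) / payload);
    }
}
```

Roughly 2% is the best case; the last packet of every segment is padded out, so slicing streams into many tiny segments wastes proportionally more.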

No Publishing

HLS is a subscriber-only protocol. Unlike WebRTC, which has a spec for publishing from a browser, HTTP Live Streaming only supports playing streams. If you want to publish a live video stream from a device, you simply have to look to other technology. Luckily, Red5 Pro provides alternative SDKs for mobile that allow you to create publishing apps that utilize RTP; you can then relay those streams over HLS so folks can view them right in their browsers. You can check out our HLS player example using video.js on GitHub. We are also developing full WebRTC support for Red5 Pro that will include a JavaScript SDK. This implementation will feature out-of-the-box tools like a WebRTC publisher and a player that supports WebRTC, HLS and RTMP (as a fallback), so stay tuned for that update as well.

Apple iOS HLS App Rules

Another question that comes up when our customers get ready to submit their iOS apps to the App Store is: will Apple reject my app if it’s not using HLS? As many of you know, our SDK uses RTP streaming on iOS, and it’s commonly believed that Apple requires all apps to use HLS for streaming. That’s not quite true, however. Apple states the following in its App Store submission guidelines:

“9.3 Audio streaming content over a cellular network may not use more than 5MB over 5 minutes.”

“9.4 Video streaming content over a cellular network longer than 10 minutes must use HTTP Live Streaming and include a baseline 192 kbps or lower HTTP Live stream.”

What we’ve also found is that if the app is a communication app, meaning you have some form of two-way communication like Periscope has with live chats, then Apple tends to group the app in a different category. Apple also considers video calling apps like Skype to be in a different category, so the live streaming restriction of having to use HLS doesn’t apply. The other good news is that as the popularity of apps like Periscope and Meerkat continues to grow, Apple is getting used to the idea of live real-time streaming apps, and is gradually becoming more flexible with the restrictions.

So with that in mind, because of HLS’s high latency, Apple will approve apps that use other protocols if there’s a need for real-time communication. We simply recommend making a note of why you can’t use HLS when you submit your app.

Summary

As you can see, the ubiquitous support of HLS on a variety of browsers, mobile phones, and operating systems makes it a great choice for distributing your streams to the largest number of viewers. However, since it’s a rather slow protocol, if you are building any kind of app that relies on near real-time communication, you should look at other options. Finally, while Apple’s rules do seem quite rigid when it comes to their iOS streaming requirements, they are actually flexible when the need for something else is justified. What are your thoughts: are you currently using HLS in your apps? Have you submitted a non-HLS based streaming app to Apple’s App Store? How did that go? Let us know in the comments.

Reflecting on 2015 and what we’ve built over the last couple of years, I started thinking about Red5 Pro and the reasons why we built it in the first place. So I figured I should write a post.

Why did we spend so much time and energy building this thing? Not only did we see a trend of people wanting to build new mobile experiences based on live streaming, like Periscope and Meerkat, but we also weren’t happy with the developer tools available for building these kinds of experiences. We saw two approaches to creating live streaming tools for developers, and we didn’t like either one. First, we saw media servers doing a poor job of supporting mobile, and… well, anything but Adobe Flash. Second, we saw platform-as-a-service companies providing good but very limited tools.

Media Servers

One thing is increasingly clear: traditional media servers by themselves aren’t sufficient for building a live streaming app today. Why? Because all of them focus on the server side and rely on others to provide the client. That’s not a full-stack solution, and when you don’t control both the client and server endpoints, things can get messy rather quickly.

So, what is the origin of this issue? When these servers were originally designed, they relied on Flash as the client. That made sense, because at the time Flash was the only viable client for streaming. Now, of course, the world has shifted to other platforms: iOS, Android, and even modern desktop browsers don’t properly support Flash, and the direction is towards native apps or WebRTC.

Before we came out with Red5 Pro, if you wanted to build your own mobile streaming app, you would need to find a usable SDK for iOS and for Android. You would then need to install something like Wowza on the server and stream to it with your third-party SDK. What we found is that many of the open source SDKs weren’t well supported, and the paid ones just didn’t work that well. The flaws were obvious: they lacked flexibility and extensibility, and they all relied on RTMP, an older Flash-based protocol. We decided this just wasn’t acceptable.

Hosted PaaS

The other trend we saw happening for live streaming solutions was the advent of hosted PaaS (Platform as a Service) products. Companies like LiveStream and TokBox are among the best in this category. What we found is that these solutions don’t provide enough control for the developer. Companies like TokBox have done a good job providing easy-to-use SDKs, and they make things super simple to set up since you don’t need to install a server, but this comes at a price.

Lack of control is a big one.

You have to rely on what the platform gives you. Either you are forced to include advertising in your streams, or you can’t easily modify portions of the SDK to grab a video source other than the device’s camera. Maybe it’s latency you are having trouble with, as with Kickflip, or it could be any number of things you want your app to do that the provider’s SDK won’t allow.

The PaaS solutions are also harder to scale and will cost more if you do achieve massive scale. The primary issue here is that you are basically locked into them handling the server for you. There is simply no way to host it where you want it. You can’t modify the server program easily to do things like live transcoding, moving recorded files to S3 buckets, integrating third party software like FFMPEG, etc. To make it worse, some companies can’t use cloud solutions and need to be deployed on networks that they control. Do you think your bank is going to allow video calls about proprietary financial information to flow through a 3rd party that they don’t control? How about medical software? I don’t think so–good luck getting around HIPAA.

Another thing to consider is that the major cloud hosting platforms offer credits to startups, and it would make sense to be able to easily move between platforms to take advantage of these deals (think of it like how consumers switch cable providers to get better deals). We designed Red5 Pro to be hosting agnostic so that where you host your solution is up to you.

Our Solution

We think we’ve taken the right approach by building the whole stack and providing super flexible SDKs for mobile (JavaScript WebRTC SDK coming soon!). Plus, we put the power of where the server is hosted in your hands as the developer. We are always looking for ways to improve, so if there’s something you are looking for that we don’t currently do, please let us know.

A lot of developers have been asking us detailed questions about recording streams using Red5 Pro. The thing that comes up most frequently is how to create an MP4 file for easy playback on mobile devices. While you can use our SDK to play back the recorded file as is (Red5 records in FLV format), that requires users to always use your native app, and sometimes it would just be nice to provide a link to an MP4 for progressive download.

So with that, here’s a quick guide to get you up and running with a custom Red5 application that converts recorded files to MP4 for easy playback. You can find the source for this example on GitHub.

Install ffmpeg

The first step is to install ffmpeg on your server. ffmpeg is a great command-line utility for manipulating video files, such as converting between formats, which is exactly what we want to do in this example.

You can find precompiled packages for your platform on ffmpeg’s website, although if you need to convert your recordings to formats that require an extra module on top of ffmpeg, you might need to compile from scratch. Once you have ffmpeg on your server, issue a simple command to make sure it’s working. The simplest thing you can do is call ffmpeg with no params, like this: `./ffmpeg`.

If it outputs the version number and other build info, you have a working version installed on your machine. Note that I’m running on my local Mac, but the same concept applies on any other OS. Also note that for production use, you would want to put the binary in a better place than `~/Downloads`, and you would want to update your `PATH` so the ffmpeg application can be run from anywhere.

Start Your Own Red5 Application

Now that you have ffmpeg on your server, the next step is to build your own custom server side application for Red5, extending the live example here. You will use this as the beginning of your custom app that will convert your recorded files to MP4.

Override MultiThreadedApplicationAdapter.streamBroadcastClose()

Now on to calling ffmpeg from your custom Red5 Application.

In your MultiThreadedApplicationAdapter class, you now need to override the streamBroadcastClose() method. As the name suggests, this method is called automatically by Red5 Pro once a stream broadcast has finished and closed. Even though the broadcast is finished, that doesn’t mean the file has finished being written. To handle that, we create a thread that waits, allowing the file writer to finish the FLV write-out. Then we access the file and convert it to an MP4 using ffmpeg.

Let’s see what that looks like in code:

1. Override the method.

```java
public void streamBroadcastClose(IBroadcastStream stream) {
```

2. Check to make sure there is a file that was recorded.

```java
if (stream.getSaveFilename() != null) {
```

3. Set up some variables to hold the data about the recorded file.

```java
final String name = stream.getPublishedName();
final String theFlv = stream.getSaveFilename();
final IScope fileScope = stream.getScope();
```

4. Create a thread to wait for the FLV file to be written. Five seconds should be plenty of time.

```java
new Thread(new Runnable() {
    @Override
    public void run() {
        // let the server finish writing the file.
        try {
            Thread.sleep(5000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
```

5. Next we need to get a reference to the FLV file.

```java
Resource res = fileScope.getResource(theFlv);
```

6. Make sure that there is a file before continuing.

```java
if (res != null) {
```

7. Get the path to the file on the file system and the path to where you want the MP4 version saved.
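To finish the thought, here is roughly what that path handling and conversion step could look like. This is our own sketch, not the exact code from the example repository: it swaps the .flv extension for .mp4 and shells out to ffmpeg with `-c copy`, which remuxes without re-encoding (assuming the FLV already contains H.264 video and AAC audio). Adapt the paths to where ffmpeg lives on your server.

```java
import java.util.Arrays;
import java.util.List;

public class FlvToMp4 {
    // Derive the output path by swapping the .flv extension for .mp4.
    static String mp4PathFor(String flvPath) {
        return flvPath.replaceAll("\\.flv$", "") + ".mp4";
    }

    // Build the ffmpeg command line. "-y" overwrites an existing output,
    // "-c copy" copies the streams instead of re-encoding them.
    static List<String> buildCommand(String flvPath) {
        return Arrays.asList("ffmpeg", "-y", "-i", flvPath,
                "-c", "copy", mp4PathFor(flvPath));
    }

    // Run the conversion and wait for it to finish.
    static void convert(String flvPath) throws Exception {
        Process p = new ProcessBuilder(buildCommand(flvPath))
                .redirectErrorStream(true)
                .start();
        p.waitFor();
    }
}
```

Calling `convert(theFlv)` from inside the thread shown in step 4 would produce an MP4 sitting next to the recorded FLV.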

Now that you know how to intercept one of the methods of MultiThreadedApplicationAdapter in Red5 Pro, you can start to look at others you could override to add custom behavior to your app. Many people who record their files want to move them to an Amazon S3 bucket for storage. Adding a few lines using Amazon’s API to do that wouldn’t be too challenging.

What other things would you want to do with a custom Red5 application? We would love to hear from you.

We have been working hard on a number of features that we are super excited about including clustering and HLS support. You can get the new release here. Make sure to be signed in with your account first. Also, expect new SDKs to follow soon as well. Cheers!

RED5 PRO SERVER 0.2 RELEASE NOTES

This release has two new features our customers have been requesting.

CLUSTERING
This version of the server allows you to configure servers together to achieve massive scale. This first version of the feature is limited to round-robin load balancing of Edge servers against an Origin server. Many configurations are possible with this setup, including support for multiple Origins and Edges that connect across Origins. If there’s a setup you are looking for that our solution doesn’t seem to support, please let us know.

OUT OF THE BOX HLS SUPPORT
In addition to clustering, the new server now supports HLS. Whereas before you had to install the open source Red5 HLS plugin, Red5 Pro now comes with the feature out of the box. Alongside the feature, we have also created an HTML5 example using Video.js that connects to our server via HLS.

Let us know if you have any questions. We look forward to seeing what you do with this!

Our CTO Dominick Accattato just finished a great video tutorial on how to set up iOS streaming with the Red5 Pro server in six easy steps, all within a matter of minutes. In this example Dominick demonstrates how to stream from iOS to the web and then from the web back down to iOS.

Was this video useful? We’d love to hear your feedback on how we can continue to enhance our user support!

Don’t take this the wrong way; while we are thrilled about WebRTC and its potential to open up live streaming and communication within browsers, we want to pose the following question: do you actually need WebRTC for your use case? After talking with many of our customers, we’ve found that nine times out of ten the answer is no.

Adobe Flash
I know what you are thinking. Flash is dead: it’s a memory hog, it has terrible security flaws, and it’s an antiquated relic that should be banished from every browser on the planet. Yes, yes, much of this is undoubtedly true. Regardless of its obvious weaknesses, Flash is still a strong solution, and it can do the job that WebRTC does pretty much universally on desktops and laptops today. We originally built Red5 to integrate with Flash, and one of the most logical reasons we didn’t just start from scratch with Red5 Pro is that it still works great in this scenario. There’s a reason that solutions like video.js have a Flash fallback feature. Flash exists on these older browsers, it’s consistent, and it still works remarkably well considering its age.

Native Apps
Mobile publishing (meaning accessing the camera, encoding to a video format, and live streaming out) typically has to be done via native apps. This is especially true on iOS, as the Safari browser doesn’t support WebRTC today. The latest versions of Chrome on Android support WebRTC, and in a lot of cases it’s a viable way to implement live streaming. However, the most successful apps out there today are all native, such as Skype, Periscope and WhatsApp. In this regard, most of our customers feel it’s important to make a native app, and WebRTC doesn’t buy you anything if you have to implement it natively.

If you have the time and patience you can get the WebRTC project to compile and integrate it into your mobile app, but most developers don’t want to be bothered with the complexities of integrating protocols at this level; they prefer a nice layer of abstraction in an easy-to-implement SDK. While there are excellent WebRTC-based SDKs like TokBox’s, most developers don’t actually care what the underlying protocol is. In the end, they just want it to work.

We chose RTSP/RTP for our protocol within the Red5 Pro mobile SDKs as it’s both extremely fast and efficient. In addition, even after we’ve finished our current WebRTC implementation, which will be released in the Spring of 2016, we won’t be switching the protocol for the mobile endpoints. As an aside, if you study the WebRTC standard you will see that they actually use RTP as the base level protocol for the transport anyway.

What do you think? Do you really care what protocol your preferred streaming SDK is using under the covers, or do you just want it to work flawlessly? We would love to hear your feedback on this.

HLS
It also turns out that if you want to build an app that simply views live streams in a browser, there are other ways to do it. HLS, or HTTP Live Streaming, has already become the standard streaming protocol on iOS devices, and there’s support for it on newer Android devices. I mentioned the video.js player, and there are commercial players like JW Player which support HLS as well.

On many Android devices, RTSP is a standard that can be relied upon as a way to consume live streams. With Red5 Pro we support both HLS and RTSP.

Not to Discount WebRTC
Clearly there’s a future in WebRTC, and as more browsers begin to support the standard and the protocols begin to be unified, we will see it become the go-to strategy for live streaming and live communication apps. We strongly believe in the future of WebRTC, and this is why we are working diligently on implementing WebRTC support in Red5 Pro. In the meantime though, there are many alternatives which work well, and for the foreseeable future we will still need these other protocols and platforms to work as fallbacks. Red5 Pro supports what you need today to get live streaming apps up and running for 90% of the use cases.

Let us know what you think. Are you currently using alternatives to WebRTC in your apps today? How much longer do you think solutions like Flash will stay relevant?

With live streaming becoming increasingly prevalent in 2015, developers are focused on creating applications to address the public’s fascination with streaming media. Periscope is the prime example of such an application and the sheer size of Periscope’s user base and class-leading engagement metrics validate its dominance in the space.

But what does it take to build a live streaming and communication platform such as Periscope, with the capability to broadcast to one hundred thousand or even one million subscribers? What if I told you that you could build a live streaming application with Periscope-like functionality and scalability in just 10 minutes?

Before we created Red5 Pro it took some serious effort to build this kind of server-side infrastructure and tackle the high level of complexity to build a native Android and iOS video encoder/decoder that works with the server. We saw this trend of a new kind of mobile app that connects people in real-time, and we saw these early adopters cobble together inefficient software wasting tons of time and energy. We couldn’t allow this to happen anymore, so we decided to make it easy for developers. With Red5 Pro, you truly have the ability to build the guts of the next live streaming phenomenon in a matter of minutes, and here’s how:

Let’s first start with all the pieces, and what you would need to build if you were to do this from scratch.

The Fundamentals

1. Publish from the mobile client:

Access the camera

Encode the video

Encode microphone data

Negotiate a connection with a media server

Implement a low-latency streaming protocol for transport

Stream the data to the server

2. Intercept with a media server

Intercept the stream

Relay to other clients

and/or

Re-stream to a CDN (adds latency)

Record the stream (optional)

3. Implement client side subscribing:

HLS in WebView (even more latency)

and/or

Setup connection with media server

Implement streaming protocol

Mix the audio and video

Decode video/audio

Render video and play the audio

*Note: this is actually a simplified list of the tasks involved. Try doing all of this on multiple threads and getting it to perform well; it is complicated! It’s truly a rabbit hole that most developers don’t want to venture down. Given the awesome tools and libraries that exist for us developers, we thought it was ridiculous that an easy-to-use and extensible live streaming platform just didn’t exist. That’s why we built Red5 Pro.

Red5 Pro to the Rescue

Let’s uncomplicate this. The Red5 Pro streaming SDKs provide what we think is an intuitive and flexible API that removes the complexity while retaining tremendous control if you need it. Let’s take a look at the classes our SDKs provide (they are the same on Android and iOS).

Let’s step through an example using these classes, piece by piece.

The Publisher

R5Configuration:

The first step is to create an R5Configuration. This class holds the various settings used by your streaming app: things like the address of your server, the ports used, protocols, etc. In this example we are connecting to a server running at 192.168.0.1 on port 8554 via the RTSP protocol. This Red5 Pro server has an app called “live” running on it, and that is what we want to connect to via the context name. Finally, we set the buffer time to half a second.

iOS

Objective-C

```objc
//Setup a configuration object for our connection
R5Configuration *config = [[R5Configuration alloc] init];
config.host = @"192.168.0.1";
config.contextName = @"live";
config.port = 8554;
config.protocol = 1;
config.buffer_time = 0.5;
```

Android

Java

```java
R5Configuration config = new R5Configuration("rtsp", "192.168.0.1",
        8554, "live", 0.5f);
```

R5Connection:

Next, you create an R5Connection object, passing in your configuration. This establishes a connection to the Red5 Pro media server.

iOS

Objective-C

```objc
R5Connection *connection = [[R5Connection alloc] initWithConfig:config];
```

Android

Java

```java
R5Connection connection = new R5Connection(config);
```

R5Stream:

Now you create a stream object passing in the connection object you just created. Note that the R5Stream is also used for incoming streams, which we will get to in a bit.

iOS

Objective-C

```objc
//Create our new stream that will utilize that connection
self.publishStream = [[R5Stream alloc] initWithConnection:connection];

//Setup our listener to handle events from this stream
self.publishStream.delegate = self;
```

Android

Java

```java
//setup a new stream using the connection
publishStream = new R5Stream(connection);
```

R5Camera:

Next we create a camera object and attach it to the R5Stream as a video source.

R5RecordTypeAppend – Stream and append the recording to any existing save.

If you compiled and ran this app, configured to point to a running Red5 Pro server, you would be able to see the stream in your browser. Open a browser window and navigate to http://your_red5_pro_server_ip:5080/live/streams.jsp to see a list of active streams, then click on the Flash version to subscribe to your stream.

The Subscriber

Now that we’ve built the publisher we have established a live stream being published to the server. Yes, we did see the stream in Flash, but in order to consume that stream on mobile we need to build the subscriber client. Let’s dig into that now.

R5Configuration:

Just as before, we set up a configuration object holding the details of our connection and protocols.

Finally, we tell the stream to play by using the play method and passing in the unique stream name that the publisher is using.

iOS

Objective-C

```objc
//start subscribing!!
[self.subscribe play:@"myUniqueStreamName"];
```

Android

Java

```java
subscribe.play("myUniqueStreamName");
```

Voila, you can now build your own one-to-many live streaming experience, all within minutes with the help of Red5 Pro. What do you think, are there ways we could make this even easier? We love hearing feedback, so let us know in the comments or email us directly. Happy Coding!

A new iOS SDK is now available: it includes a critical fix for the latest iOS devices, covering the iPad Pro, iPhone 6s and iPhone 6s Plus. This build also includes a few updates and new examples, including how to create a custom R5VideoSource and how to publish using the Swift language. Full release notes are below.

Release Notes: iOS SDK v0.8.41.1

The iOS streaming SDK has been loaded up on Red5Pro.com.

Updates in this SDK include:

- fixed a memory leak in publishing
- fixed issue with new iOS devices (A9 chip) creating bad streams and microphone errors
- support for image capture during subscribing to a stream
- support for publishing from a custom video source

Today Clutch published its first report highlighting Top Boston App Developers, and we are extremely pleased to have been included on the list. It’s an honor to be counted among these top app development shops, and as we transition our focus to Red5 Pro, we look forward to collaborating with them by helping developers create awesome live streaming experiences. Check out Clutch’s post on Infrared5 and the other winners here:

Here’s what Ryan Stevens, Senior Analyst at Clutch had to say about the mobile app development landscape in Boston and New England more generally: “The Boston and Greater New England Area continues to become a more competitive development landscape, but many of these firms have been longstanding staples in this development community. These firms have proven that they can take a client’s concept and develop an application that more than delivers on expectations.”