"twilio-video" posts

Before eLearning, a student needed to commute across town or even move to a different country to get a quality education. Improvements in technology, especially in WebRTC, have taken the hassle out of connecting students with great teachers and schools.

When we started four years ago, we made a bet that WebRTC would be the video technology of the future. Users much prefer the seamless experience of not having to download an external app or software. The quality has been getting better every year, and it’s already superior to many established video-conferencing providers.

We’re going to use Twilio Video with the AdonisJs framework to create a system where a user can host a video, and viewers can watch their presentation. AdonisJs is a full-stack, open-source MVC framework for Node.js that was inspired by the Laravel framework and borrows some of its concepts. AdonisJs saves you time and effort because it ships with a lot of features out of the box.

This system could be extended so that users can sign up, schedule talks, and even pay to use it. But we’re going to keep our project simple so it is easier to build your initial application.

Most smartphones come with a front and a back camera; when you’re building a video application for mobile, you may want to choose or switch between them.

If you’re building a chat app you probably want the front camera, but if you’re building a camera app then you’re more interested in the rear camera. In this post we’re going to see how to choose or switch between cameras using the mediaDevices API and media constraints.
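The key is the `facingMode` media constraint: `"user"` selects the front camera and `"environment"` the back. Here's a minimal sketch of switching between them; the helper names are mine, not from the original post:

```javascript
// Build a getUserMedia constraints object for a given camera.
// facingMode "user" selects the front camera, "environment" the back.
function cameraConstraints(facing) {
  return { audio: true, video: { facingMode: { exact: facing } } };
}

// In the browser, switch cameras by stopping the current stream's tracks
// first (many phones can only open one camera at a time), then requesting
// a new stream with the other facingMode.
async function switchCamera(currentStream, facing) {
  currentStream.getTracks().forEach((track) => track.stop());
  return navigator.mediaDevices.getUserMedia(cameraConstraints(facing));
}
```

Using `{ exact: facing }` makes the request fail if the device has no such camera; drop `exact` if you'd rather fall back to whatever camera is available.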

What you’ll need

To follow along with this post you’ll need:

An iOS or Android device with two cameras to test with; if you have two webcams, this will work on your laptop too

ngrok so you can easily access the project from your mobile device (and because I think ngrok is awesome)

In other posts we have investigated how to capture screen output in Chrome and built a screen sharing video chat application. There was one feature missing though. The Chrome extension made screen capture possible, but didn’t test whether it had been installed before the application tried to use it. In this post we are going to build a Chrome extension that can be detected from the front end.

Getting set up

We’re going to use the extension we built for screen capture and add the functionality to make it detectable. We’ll then build an example to show handling the two cases, with and without the extension.

Download the source for the extension from the GitHub repo, or clone the building-extension-detection branch
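One common way to make an extension detectable is a ping/pong handshake over `window.postMessage`: the page broadcasts a ping, the extension's content script replies, and the page treats silence as "not installed". This is a sketch of that pattern, with message names of my own choosing rather than the ones the actual extension uses:

```javascript
// Pure check kept separate so it is easy to test.
function isExtensionReply(data) {
  return Boolean(data) && data.type === 'SCREEN_SHARE_PONG';
}

// Ask the extension's content script to identify itself and wait briefly
// for a reply. Resolves true if the extension answered, false on timeout.
function detectExtension(timeoutMs = 500) {
  return new Promise((resolve) => {
    const timer = setTimeout(() => resolve(false), timeoutMs);
    window.addEventListener('message', function onMessage(event) {
      if (isExtensionReply(event.data)) {
        clearTimeout(timer);
        window.removeEventListener('message', onMessage);
        resolve(true);
      }
    });
    window.postMessage({ type: 'SCREEN_SHARE_PING' }, '*');
  });
}
```

The content script side would listen for the ping and post the matching pong back to the page.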

The extension

In recent posts we’ve seen how to capture a user’s screen in Chrome and Firefox. Now it’s time to combine this with a real video chat application and share screens as part of a video chat.

What we’re building

In this post we’ll take the Twilio Video quickstart application and add screen sharing to it. When we are done your application will let you make calls between browsers and share screens from one to the other.

If you’ve built an app with our Programmable Video SDKs, you are familiar with the concept of Tracks. Tracks represent an individual stream of audio from a microphone or video from a camera, shared by a Participant in one of your Programmable Video Rooms.

Today we’ve added another API that should help you make your Programmable Video apps just a bit richer. DataTracks, a simple API for publishing real-time data among Room Participants, lets you build shared whiteboarding, collaboration features, augmented reality apps, and more. And coming soon, the Media Sync API will let you ...
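As a taste of the whiteboarding use case, here's a sketch of sending stroke data over a DataTrack. The serialization helpers and stroke shape are assumptions for illustration; `LocalDataTrack`, `publishTrack`, `send`, and the remote track's `message` event are part of the twilio-video JavaScript SDK:

```javascript
// Whiteboard strokes are serialized to JSON before going over the wire.
function encodeStroke(x, y, color) {
  return JSON.stringify({ x, y, color });
}

function decodeStroke(message) {
  return JSON.parse(message);
}

// With the twilio-video SDK in the browser (sketch, assuming a connected
// Room), a LocalDataTrack carries the strokes to other Participants:
//
//   const { LocalDataTrack } = Twilio.Video;
//   const dataTrack = new LocalDataTrack();
//   room.localParticipant.publishTrack(dataTrack);
//   dataTrack.send(encodeStroke(120, 80, '#f22f46'));
//
// Remote Participants listen for 'message' events on the subscribed track:
//
//   track.on('message', (data) => drawStroke(decodeStroke(data)));
```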

Component based UI libraries are a popular way of building modern applications. Angular and React are the heavyweights at the moment, but the humble browser and its native APIs are never far behind. Web Components were first introduced in 2011 and are the browsers’ attempt to bring componentisation to the web platform.

There are a few libraries available for writing Web Components, most notably Google’s Polymer, but also X-Tag and Bosonic. To really get a grip on what the platform can achieve on its own, I’m going to show you how to build a Web Component using the APIs available in browsers today. There are many “hello world” examples of Web Components, so we’re going to build something a bit trickier today: a video chat widget using Twilio Video. By the end of the post it will look a bit like this:
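The skeleton of such a widget is a custom element with an encapsulating shadow root. The tag name, `identity` attribute, and helper below are assumptions for this sketch, not the widget's final API:

```javascript
// Fall back to a random identity when the attribute is missing.
function parseIdentity(attrValue) {
  return attrValue || `guest-${Math.floor(Math.random() * 10000)}`;
}

// Guarded so the pure helper above can also run outside a browser.
if (typeof HTMLElement !== 'undefined') {
  class TwilioVideoChat extends HTMLElement {
    connectedCallback() {
      // Shadow DOM keeps the widget's markup and styles encapsulated
      // from the host page.
      const shadow = this.attachShadow({ mode: 'open' });
      shadow.innerHTML = '<div id="local"></div><div id="remote"></div>';
      this.identity = parseIdentity(this.getAttribute('identity'));
      // Fetch an access token for this.identity and connect with
      // Twilio.Video.connect(...) here.
    }
  }
  customElements.define('twilio-video-chat', TwilioVideoChat);
}
```

Once defined, the widget drops into any page as `<twilio-video-chat identity="alice"></twilio-video-chat>`.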

Twilio Video simplifies building multi-person video chat applications and minimizes complicated WebRTC boilerplate. The Twilio docs have a thorough quickstart which will assist you in creating a production ready Video application, but we are going to build a more bare bones JavaScript application to get up and running as quickly as possible.
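A bare-bones join really is only a few lines with the SDK. `Twilio.Video.connect` and the `participantConnected` Room event are the SDK's real API; the room name, video width, and helper function are placeholder choices for this sketch:

```javascript
// Connect options for a bare-bones room join.
function connectOptions(roomName) {
  return { name: roomName, audio: true, video: { width: 640 } };
}

// In the browser, with twilio-video loaded and an access token fetched
// from your server (sketch):
//
//   const room = await Twilio.Video.connect(token, connectOptions('demo'));
//   room.on('participantConnected', (participant) => {
//     console.log(`${participant.identity} joined`);
//   });
```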

Context is everything, and there’s nothing more annoying than using a mobile application and then being told you need to switch to a different app to communicate with other users or technical support. In my last blog post I showed you how to add IP communications into an existing application using the Twilio IP Messaging SDK for Android.

Twilio also offers a Video SDK which can be used to add video communications into the apps you’re already building.

In this blog post I will show you how to use the Twilio Video SDK to build video applications on Android that let you have a video conversation between a device and a browser.