CaptionHub, the market-leading video & audio captioning platform, announced its latest broadcast and media client, VICE, the leading global youth media company reaching audiences in over 80 international territories across mobile, digital and linear channels.

After a proof-of-concept phase spanning 2018, which involved rigorous internal testing and a technical integration into VICE’s production systems, VICE has rolled out CaptionHub across its European distribution network for online video subtitling.

Using CaptionHub, VICE can provide its cross-platform, award-winning shows in a variety of languages and formats.

CaptionHub will significantly speed up video reach across VICE’s distribution territories, allowing faster and more cost-effective delivery of high-quality content to larger audiences.

CaptionHub was selected due to its leading position in caption production, blending the latest technology in automatic speech recognition, machine transcription and translation, speaker identification, frame-accurate caption alignment and team collaboration for global post-production teams in broadcast.

Daniel Elias, Head of Post Production at VICE said: “The vision has allowed VICE to bring innovations across the business including content velocity thanks to the capabilities introduced by CaptionHub.”

He added: “We can control key technical aspects of captioning from a centralised system while acting as an open portal to our global offices for translation, ensuring the VICE voice remains relevant to its discerning local youth audiences.”

“VICE has achieved an incredible reputation for its content production, rapidly reaching cult status seemingly out of nowhere” said Tom Bridges, CaptionHub Founder and CEO. “We look forward to helping VICE deliver the best possible content and captions to their audience around the world.”

Last month at London Tech Day, CaptionHub, the market-leading video & audio captioning platform, announced its integration with Brightcove, the leading provider of cloud services for video.

Brightcove, which powers video distribution and management for some of the world’s most popular brands, publishers, and broadcasters, will enable mutual clients to use CaptionHub’s AI-enabled collaboration platform to publish original language and translated subtitles directly to their video content.

With CaptionHub and Brightcove, mutual clients have access to the most comprehensive and advanced features available for creating, editing and publishing subtitles for audio and video.

CaptionHub provides the capability to produce high-quality, interactive and engaging content for audiences who have English as a second language, are hard of hearing or are watching video in silent mode. CaptionHub enables customers to create highly-customized subtitle experiences, including language-specific subtitle translations. CaptionHub significantly speeds up video reach to new geographies.

“We strive to provide our customers with the tools needed to distribute the best content to their audiences, while accounting for accessibility and language requirements. Our partnership with CaptionHub is one way we do that. We are excited for this venture and look forward to seeing the enhanced video experiences our customers get from this new partnership.”

What is CaptionHub on-premise?

The first thing to point out is that, unlike some consumer applications that also power businesses (1Password, for example), an enterprise-first platform like CaptionHub isn’t a straightforward single-unit application. It’s a multi-faceted platform, which makes it technically harder to deploy. Unlike a desktop application, CaptionHub is a browser-served subtitling platform: its speech recognition engine, its machine translation memory and its machine translation engine are all components and applications in their own right, each of which must be installed on a server before CaptionHub can be used from a browser pointing at that server. This is the case whether it’s running in our cloud infrastructure, used by cloud customers from Buenos Aires to London, or on-premise, used by our clients’ global users from Cupertino to Berlin.

It’s not a trivial thing to do, nor to maintain. It also takes us, as a company, away from a pure cloud business model. We have to price differently, support differently, and maintain differently. In previous companies I’ve sat in executive board and ops meetings alike, unswervingly defending the exact opposite strategy: “we must maintain a 100% cloud approach!” Fortunately, that strategy helped steer the company towards a happy ending for it and its customers. So why would we now defend and proactively pursue a dual model, offering either cloud or on-premise?

Cloud eats world, but can’t swallow on-premise

Cloud versus on-premise has been a debate at the centre of software strategy for the last two decades. We’ve experienced an unprecedented trend towards cloud, driven by monumental improvements in reliability, availability, tooling and services – with the natural commoditisation of those services and tools driven by AWS and Google Cloud Platform.

But 65% of companies are still running their own data centres. Depending on your company culture and security requirements, it might not make sense to run everything in the cloud. Multi-cloud extends the simple hybrid enterprise architecture approach: it essentially means having different servers run by different vendors (Azure, Google, AWS) alongside, of course, your own data centres. We’re not seeing the need for on-premise going away, and nor are our customers.

Why offer an on-premise solution now, in 2018?

The answer is relatively simple: it’s because we are on a mission. Our mission at CaptionHub is to be the best online technology solution for subtitling.

Our mission drives our commercial and technology strategy. We want to fit into the video production ecosystem as tightly as possible to improve efficiency and workflows for our customers, and not create yet another layer of overhead between the creative department starting a project and the post-production team deploying it, be it OTT or on-demand.

If this means being on our customers’ servers, or placing a component of the CaptionHub platform on their servers, then it’s simply what we have to do to fulfil our mission.

Deploying to on-premise is not trivial, but it is getting more replicable as a process. It means we can and are developing sophisticated methods for rigorous, robust and repeatable deployments and maintenance, support & updates to those deployments.

Andrew McDonough, Senior Engineer at CaptionHub, is quick to evangelise how much easier platform deployment is than it was years ago: “… with the vast evolution and availability of tools like Docker, we can now bundle the components of our platform in portable containers and run them in any environment, which makes our deployment possibilities pretty interesting for on-premise – ironically, driven by tools from a cloud-driven marketplace.”

It means we can integrate with our partners’ video platforms, such as Qumu VCC (on-premise) or Brightcove (cloud).

It means we can support enterprises in their choice of enterprise architecture, rather than create another challenge for them. And fortunately, thanks to the tools that have come to market from the proliferation of cloud computing, we can deploy platforms far more gracefully than ever before.

We want to go deeper into the enterprise video production system, and being able to deploy as cloud – and now hybrid or on-premise – lets us do exactly that. We can deploy and integrate more tightly and broadly than ever before to make the captioning process faster, more secure and more available.

It also helpfully guides our roadmap, with recent and near-term features for integrating video platforms, offering Custom Vocabulary, and using Auth0 to handle the full range of authentication requirements we might bump into when deploying CaptionHub.

And of course CaptionHub On-Premise is available now – so please get in touch if you would like to know more.

CaptionHub’s built-in renderer offers the fastest and easiest way to create burnt-in subtitles, otherwise known as open captions. Just select the language you need in the Download tab, and click on the “Render” button. Once it’s done, click on the “Download render” button.

That said, sometimes you need more control over look and feel; for instance, you might need to format your subtitles with a particular corporate font.

It makes a lot of sense to create burnt-in subtitles as part of the last stage of the edit workflow, where all of the other elements that constitute the video are being assembled. Many of our clients use Adobe Premiere as their editing tool of choice, and this post explains how to import subtitles directly into Premiere, where you’ll have plenty of control over look and feel.

Adobe Premiere is a powerful editor, but it’s fair to say that Adobe’s support for captioning and subtitling workflows has been historically patchy. Even now, with Adobe Premiere Pro CC 2018, if you import an SRT, you’ll find that the very last caption gets ignored. If that SRT is in Cyrillic, say, then it’ll be imported as garbled characters.

Fortunately, we have a good workaround. Here’s what you’ll need to do to get captions into Premiere:

From CaptionHub, download captions as the broadcast standard EBU-STL. (Sorry: this is only available to Pro and Enterprise customers). Ensure that your character limit is set to a maximum of 38 characters per line – the EBU-STL specification doesn’t accept any more than that.
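Before exporting, it can be useful to verify the 38-character constraint yourself. Here is a minimal Python sketch that flags over-long lines in an SRT-style caption file; the function name and sample data are illustrative, not part of CaptionHub's tooling:

```python
# Flag caption lines that exceed the EBU-STL per-line character limit.
MAX_CHARS = 38  # maximum characters per line accepted by EBU-STL

def over_limit_lines(srt_text):
    """Return (cue_number, line) pairs whose text exceeds MAX_CHARS."""
    violations = []
    for block in srt_text.strip().split("\n\n"):
        lines = block.splitlines()
        if len(lines) < 3:
            continue  # malformed cue: expect index, timing, then text
        cue_number, text_lines = lines[0], lines[2:]
        for line in text_lines:
            if len(line) > MAX_CHARS:
                violations.append((cue_number, line))
    return violations

sample = """1
00:00:01,000 --> 00:00:03,000
This line is fine.

2
00:00:04,000 --> 00:00:06,000
This particular line is much too long to fit the limit."""

print(over_limit_lines(sample))  # cue 2's text is over 38 characters
```

Running a check like this before download avoids a failed export on cues that would violate the specification.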

Making good tools work together

Businesses are exposed to an ever-growing range of platform options that can improve how they do business: speeding things up, improving quality, automating, evolving. But as the number of tools increases, the transactional overhead can become costly: more training, more authentication, more security issues, and ultimately more time and cost to manage between systems.

We wanted to work on bringing together internal systems that currently sit next to each other in the business ecosystem. Systems in the enterprise video ecosystem (for internal, external and broadcast usage) essentially work side by side, but in relative isolation in terms of data flow and authentication – carrying out sequential tasks towards the same outcome: creating and distributing the best possible video content to employees and customers. Systems not speaking to each other is, in our minds, akin to not collaborating with your own employees. Without that communication there’s a cost; with communication between systems there’s a tangible gain: time saved in the end-to-end workflow, tighter security, and an overall better experience for the person operating.

CaptionHub, we think, should play its part in bringing happiness to the enterprise video ecosystem. So, working with CaptionHub partner Qumu, we built the CaptionHub–Qumu integration. We’re very excited to be creating this technology partnership with Qumu, who are leading the way in enterprise video – their credentials speak for themselves. We wanted employees and freelancers using Qumu to be able to move between QumuCloud and CaptionHub seamlessly.

Global teams, global problems

CaptionHub teams are, by and large, geographically dispersed, working across timezones and borders. This is especially true of linguists and translators, who more often than not are located in their home countries. It means having to accommodate permanent and freelance staff in multiple locations – in the office and, of course, at home. Security can’t be compromised, so enterprise tools have to flex. Access, onboarding and authentication need to be as seamless and effortless as they possibly can be.

So, goal number one for us was to ensure that marrying the two systems enhanced enterprise security. Since new projects in CaptionHub can be created directly from QumuCloud, no video asset has to be manually downloaded from Qumu, stored locally or off the secure grid, and then uploaded into CaptionHub. Videos are selected and sent securely between the two platforms.

Security is important but time is our global currency

When we’re talking about a lot of video – and often we are dealing with volume – removing the need to upload video manually to CaptionHub, and, at the other end of the workflow, being able to send multiple caption sets in multiple languages straight back to the original video in Qumu, is a huge timesaver. With the Qumu integration, the user simply selects which video they want to work on from CaptionHub (which lists all available videos from Qumu) and can then go to work on that video. Once captions are approved, the ‘Send to Qumu’ button sends the caption sets back and integrates them directly into the original video, making the captions globally live, instantly.

So, selecting a video project directly from Qumu means no assets are taken out of platform – secure and instant:

But also as important – caption assets are sent back directly from CaptionHub to Qumu (we talk a bit more about time saved below):

We are obsessively talking about the year of the API at CaptionHub HQ, for good reason. We released the CaptionHub API in 2017 and have been rapidly developing how it locks into the wider ecosystem. In 2018 we’ll be integrating with more systems to help bring the video ecosystem together. If you’d like to find out more about our plans, or suggest an idea to us, we’d love to talk to you. You can contact us here.
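To make the idea of API-driven captioning workflows concrete, here is a hedged Python sketch of what creating a captioning project over a REST API could look like. The base URL, endpoint path and field names below are illustrative assumptions, not CaptionHub's documented API; the request is built but deliberately not sent:

```python
# Sketch: building an authenticated JSON request to create a captioning
# project via a hypothetical REST endpoint. Nothing here is sent over
# the network; we only construct the request object.
import json
from urllib.request import Request

API_BASE = "https://api.example.com/v1"  # placeholder base URL

def build_create_project_request(api_key, video_url, languages):
    """Build (but do not send) a POST request with a JSON payload."""
    payload = json.dumps({
        "video_url": video_url,     # source video to caption
        "languages": languages,     # target subtitle languages
    }).encode("utf-8")
    return Request(
        f"{API_BASE}/projects",
        data=payload,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_create_project_request(
    "secret-key", "https://example.com/video.mp4", ["en", "de"]
)
print(req.get_method(), req.full_url)
```

In a real integration the request would be sent with `urllib.request.urlopen` (or a client like `requests`), and the response would carry the new project's identifier for polling caption status.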

Edit Detection

We’ve written earlier about the importance of frame-accurate captions. On a human level, captions that don’t cross frame boundaries are significantly less jarring and much easier to read. So there’s a moral imperative to do what you can to help people who may be hard of hearing, or people who might not speak the same language.

When I’m explaining CaptionHub to people who haven’t spent their professional lives in broadcast video, I often have to pause. Some things about broadcast video, frankly, are just not that interesting.

Automatically align a transcript

We bang on a lot about our amazing speech recognition, now in 28 languages. Quite rightly: with well-recorded audio, it’s a massive time saver. We’ve had people write in to say that they’re seeing savings of up to 80%.

Introducing burnt-in subtitles

We’re delighted to announce that CaptionHub now has the ability to export burnt-in subtitles (open captions) directly from a web browser. At last! If it’s something you’d like to road test, then please contact us. “Burned In Subtitles” refers to caption/subtitle text that is baked into the video.

The decision to use subtitles has always, until recent years, been one of practicality or necessity: for example, foreign-language films being distributed globally, programming for the hard of hearing, and even documentary-style programmes where the speech is muffled or difficult to understand.