Video Accessibility and SEO

How do you implement video metadata, closed captioning, and transcripts to ensure that both search engines can crawl the content and screen readers can read it?

For example, for a mostly text-based video with a simple audio track, hosted on Brightcove and embedded into our site, we want to make sure that 1) Google can crawl the text in the video, and 2) a vision-impaired viewer can use a screen reader to hear the text in the video.

... particularly in that the author is considering WAI-ARIA, which makes web content more accessible to disabled people.

You can see where the author starts getting into WAI-ARIA here. He's just doing very basic things, like making sure the web player itself (the buttons and such for playback) can be operated. I'm sure, though, that there are ways to take this kind of thing further.
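To give a flavour of that basic ARIA work on a self-hosted HTML5 player, it might look something like this (a minimal sketch of my own; the ids, labels, and filenames are illustrative, not taken from the author's article):

```html
<video id="player" src="our-video.mp4"></video>

<!-- Custom controls: real <button> elements are keyboard-focusable by default,
     and aria-label gives the screen reader something meaningful to announce -->
<div role="group" aria-label="Video player controls">
  <button type="button" aria-label="Play">▶</button>
  <button type="button" aria-label="Mute">🔊</button>
</div>
```

The key point is using native, focusable elements with accessible names, rather than bare `<div>`s wired up with click handlers.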

The problem is, this is a basic HTML5 video player. If your videos are embedded using an external video player (like YouTube's, or this Brightcove one), then you often have no opportunity to recode the player. If you can't do that, you can't make it any more accessible than it is out of the box.

If you're happy to supply extra content alongside the video to help search engines read it, then you'd be looking at the transcript property, which is part of the VideoObject schema type. Google does use the VideoObject schema, but they don't explicitly state in their primary documentation that they look at the transcript.

On the official SEMrush blog, they seem to think that transcript schema is 'fantastic' for SEO (Ctrl + F for "transcript" on the post). This relatively recent study seems to find that transcript schema doesn't really boost rankings all that much (though I have my suspicions that it could heighten relevance, if nothing else!).

This page from the University of Washington is quite detailed on creating more accessible videos. It seems that at least some support would be required from Brightcove: even if you managed to caption your video, on the Brightcove end their system would have to be capable of adding your captions to their web-delivered video files. The post also contains information on generating accessible transcripts.
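For reference, on a self-hosted HTML5 video (i.e. not the Brightcove case), browser-native captioning works by pointing a `<track>` element at a WebVTT file. A sketch, with hypothetical filenames:

```html
<video controls src="our-video.mp4">
  <!-- kind="captions" tells the browser this track also covers non-speech audio -->
  <track kind="captions" src="our-video.en.vtt" srclang="en" label="English" default>
</video>
```

The `.vtt` file itself is just timestamped plain text, e.g.:

```
WEBVTT

00:00:00.000 --> 00:00:04.000
Welcome to the video...
```

With an embedded player like Brightcove's, you'd instead be uploading a caption file of this sort through their system, which is why their support matters here.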

From the sounds of it, the best implementation for the transcript would be to have it visible on the page (in a place where a blind user's screen reader would pick it up and read it, as if it were standard content). You could then wrap it in transcript schema, so for this one I'd probably use a microdata implementation rather than JSON-LD.
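A microdata implementation along those lines might look like the following (a sketch only; the title, date, and transcript text are placeholders, and as noted above, Google doesn't promise to use the transcript property):

```html
<div itemscope itemtype="https://schema.org/VideoObject">
  <meta itemprop="name" content="Our video title">
  <meta itemprop="uploadDate" content="2020-01-01">
  <!-- The visible on-page transcript doubles as the schema.org transcript
       property: screen readers and crawlers both read it as normal content -->
  <div itemprop="transcript">
    <p>Full text of what is said and shown in the video goes here...</p>
  </div>
</div>
```

The nice thing about microdata here is that the transcript only lives in one place: the same markup serves the reader, the screen reader, and the structured data, with no risk of a JSON-LD copy drifting out of sync with the visible text.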

After that you could do more research and see how much more you could fold in from accessibility initiatives like WAI-ARIA.

Captioning... I've tried to provide you with some resources, but I'm still not quite sure about it myself.