With this fix we could autoplay a video element that has audio as long as the user previously approved a getUserMedia request.
In the current stable Safari on desktop, it's not enough to approve a getUserMedia request at the beginning of the session. The user has to actively capture the mic or webcam to make video autoplay work.
This is a regression. In our app, one user broadcasts and several others only view. Capturing a viewer's mic just to make autoplay work doesn't make sense.

> In current Safari stable on desktop it's not enough to approve a
> getUserMedia request in the beginning of the session. The user has to
> actively capture mic or webcam to make video autoplay work.
> This is a regression. In our app we have one user broadcasting and several
> viewers. Capturing their mic just to make autoplay work doesn't make sense.
The principle is that a user should make a gesture to activate sound.
That gesture can be the getUserMedia prompt, or a click on a video element, a play button, or an "activate sound" button.
Once a page is producing audio content, other video elements should autoplay.
I am not sure what your exact request is, or which regression you are pointing at.
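As a rough sketch of that principle (the `wireActivateSound` helper and the "activate sound" button are assumptions for illustration, not part of the report), a click handler can unmute and start existing video elements, since `play()` calls made synchronously inside the handler run under the user gesture:

```javascript
// Hypothetical sketch: one user gesture ("activate sound" click)
// unmutes and starts playback of the given video elements.
function wireActivateSound(button, videos) {
  button.addEventListener('click', () => {
    for (const video of videos) {
      video.muted = false;
      // play() returns a promise; log rather than throw if the
      // browser's autoplay policy still rejects it.
      video.play().catch(err => console.warn('autoplay blocked:', err));
    }
  });
}
```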

To summarize, your issue is:
- an AudioContext is started on a user click and produces audio
- a video element added later will not autoplay, even though Web Audio is already producing audio
There are two workarounds I can think of right now:
- play the video element's audio through the AudioContext instead of through the element itself
- call play() on the video element inside the same click handler that starts the AudioContext
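The two workarounds above could be sketched roughly like this (the helper names are hypothetical; a real app would handle rejected `play()` promises and AudioContext state more carefully):

```javascript
// Workaround 2 (hypothetical sketch): the same click gesture that
// resumes the AudioContext also calls play() on the video element,
// so both are authorized by the gesture.
function setupOnClick(button, audioCtx, video) {
  button.addEventListener('click', () => {
    audioCtx.resume();
    video.play().catch(err => console.warn('play() rejected:', err));
  });
}

// Workaround 1 (hypothetical sketch): route the element's audio
// through the Web Audio graph instead of playing it directly.
// Note the reporter's caveat below: this can break lip sync.
function routeThroughContext(audioCtx, video) {
  const source = audioCtx.createMediaElementSource(video);
  source.connect(audioCtx.destination);
  return source;
}
```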

Thank you for the workarounds. Mixing all the audio and playing it through a single AudioContext works with WebRTC streams, but I think it will break lip sync.
It also doesn't help with autoplaying HLS and YouTube videos in the web conference.