I posted a video of my son the other day, and I was able to both shoot and participate in it. I shot the video with an Insta360 ONE 360 camera on a light stand, then "re-shot" it as traditional video using Insta360's "FreeCapture" feature in the iOS app. All I did was frame the video in real time while watching it in FreeCapture mode and export it as a traditional video. It worked pretty well, but the export took many minutes, which was challenging to complete on a smartphone: if the phone goes to sleep, the camera turns off and the export is canceled.

In "FreeCapture," the 360 video is projected as fisheye or rectilinear and cropped in to look like normal video, and camera movements are keyframed and smoothed out. This yields a bit of a robotic feel, but I expect this style of shooting, re-framing, and sharing to be a common way to tell stories in the future, especially as camera resolutions scale up and software is refined. 4K 360 video is barely high enough resolution for this to look good when outputting traditional video in 720p. GoPro's upcoming Fusion camera is 5.2K; they are calling this feature "OverCapture." In theory, this can be done with any wide-angle or 360 footage (I used to do this using an iPhone + Reflector / AirPlay recordings, and using specialized software like Assimilate SCRATCH VR), but in the end, it's going to be consumer software that turns the idea into reality for (mass-market) end users.

I want physical zoom controls for this: a connected hardware interface (Bluetooth would be fine) that simulates the feel of operating real camera controls.

Depth data and volumetric/lightfield capture would make a lot more possible, including fully-virtualized shooting sessions after the fact! This sort of virtual shooting is already commonplace in big-budget films and animated movies, but it's specialized and expensive.