I would like to develop a mobile (iOS/Android) app that edits these images and then encodes them back to an .mp4 file, without having to use the desktop DualfishBlender application.

It seems DualfishBlender applies a warp transformation and stitches the two spherical 180-degree images into a sequence of equirectangular frames that make up the .mov file.
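For reference, that warp can be sketched as an inverse mapping: for every equirectangular output pixel, compute where to sample in one of the two fisheye circles. The sketch below assumes an ideal equidistant lens, a 190° field of view, and side-by-side fisheye circles with centered optical axes — all assumptions, not the Theta's real calibration, which is exactly what DualfishBlender knows and we don't.

```python
import numpy as np

def equirect_to_dual_fisheye_maps(out_w, out_h, fish_size, fov_deg=190.0):
    """Build (u, v) sampling maps from an equirectangular output into a
    side-by-side dual-fisheye frame, assuming an ideal equidistant lens.
    fov_deg and the centered lens axes are placeholders, not measured
    Theta calibration values."""
    j, i = np.meshgrid(np.arange(out_w), np.arange(out_h))
    lon = (j / out_w) * 2 * np.pi - np.pi        # -pi .. pi
    lat = np.pi / 2 - (i / out_h) * np.pi        # pi/2 .. -pi/2
    # Unit direction vector for each output pixel
    x = np.cos(lat) * np.sin(lon)
    y = np.sin(lat)
    z = np.cos(lat) * np.cos(lon)
    front = z >= 0
    # Angle from each lens's optical axis (front lens looks +z, back -z)
    theta = np.where(front, np.arccos(np.clip(z, -1, 1)),
                            np.arccos(np.clip(-z, -1, 1)))
    fov = np.radians(fov_deg)
    r = theta / (fov / 2) * (fish_size / 2)      # equidistant projection
    phi = np.arctan2(y, np.where(front, x, -x))  # mirror x for back lens
    u = fish_size / 2 + r * np.cos(phi)
    v = fish_size / 2 + r * np.sin(phi)
    u = np.where(front, u, u + fish_size)        # back lens in right half
    return u.astype(np.float32), v.astype(np.float32)
```

With maps like these, each frame can be resampled in one pass (e.g. with `cv2.remap` or `scipy.ndimage.map_coordinates`); blending the seam between the two circles is a separate problem.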

Would it be possible to get these algorithms, or an API to do so? Also, what are the image sizes of the frames in the .mov file? Are these lossless images? Are they cropped in some manner? Is extracting an image file from the .mov, pre-conversion, the best way to get the images from a video file on the Ricoh Theta m15? Is there a way to get a RAW video file?

They are OpenGL instructions, although I'm not sure how to implement them.

I'm writing a Python script that breaks the .MOV into individual frames; splits, rotates, and de-fisheyes each frame; recombines them into an x264-encoded video; and then runs the YouTube .py script to enable the spherical-video parameters on the video as a whole.
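That decode → per-frame edit → x264 re-encode pipeline can be driven with ffmpeg from Python. Here is a minimal sketch that only builds the two ffmpeg command lines (run them with `subprocess.run(cmd, check=True)` around your per-frame step); the PNG naming pattern and the 30 fps default are assumptions — probe the real frame rate from the source with ffprobe in practice.

```python
def build_pipeline_cmds(src_mov, dst_mp4, frames_dir, fps=30):
    """Build the two ffmpeg command lines for the pipeline: decode the
    .MOV into numbered PNG frames, then re-encode the (edited) PNGs with
    libx264. fps=30 is a placeholder; read the real rate from the source
    file (e.g. with ffprobe) in real use."""
    pattern = f"{frames_dir}/f%06d.png"
    decode = ["ffmpeg", "-i", src_mov, pattern]
    encode = ["ffmpeg", "-framerate", str(fps), "-i", pattern,
              "-c:v", "libx264", "-pix_fmt", "yuv420p", dst_mp4]
    return decode, encode
```

Between the two commands, the split/rotate/de-fisheye step just edits the PNGs in place.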

The de-fisheye step is the one I'm having the hardest time with, primarily because I don't have the calibration constants for the camera in video mode. If anyone has these constants, I'll be happy to share my Python script so we can convert on whatever hardware we choose. Similarly, if someone can help me port mbirth's scripts to PyOpenGL commands, I'll implement them there as well or instead.
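Until the real constants turn up, a common placeholder for the de-fisheye step is a polynomial radial model; the k1/k2 coefficients below are hypothetical stand-ins for the Theta's unknown video-mode calibration, which would normally come from a checkerboard calibration.

```python
import numpy as np

def undistort_points(pts, center, k1, k2):
    """Apply a two-coefficient polynomial radial correction to an (N, 2)
    array of pixel coordinates. k1/k2 are hypothetical stand-ins for the
    Theta m15's (unknown) video-mode calibration constants."""
    pts = np.asarray(pts, dtype=float)
    center = np.asarray(center, dtype=float)
    d = pts - center                              # offset from image centre
    r2 = (d ** 2).sum(axis=1, keepdims=True)      # squared radius per point
    scale = 1.0 + k1 * r2 + k2 * r2 ** 2          # radial scaling factor
    return center + d * scale
```

With k1 = k2 = 0 the mapping is the identity, so any candidate constants can be A/B-tested against the stitched output of DualfishBlender.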

Looking at this video: https://www.youtube.com/watch?v=QUkt4y1idpY it seems there's yaw/pitch/roll information for each frame in the MOV file somewhere. At least that's the only plausible reason for the image tilts in the corners. This metadata would be needed, too, for successful conversion.

mbirth wrote: Looking at this video: https://www.youtube.com/watch?v=QUkt4y1idpY it seems there's yaw/pitch/roll information for each frame in the MOV file somewhere. At least that's the only plausible reason for the image tilts in the corners. This metadata would be needed, too, for successful conversion.

When I convert a video file that was not recorded by a Theta using DFB, there is an error message like this: "loadTiltStream(1.MOV) failed, but ignored. loadAfnTable[A](1.MOV) failed." So I think the tilt information is stored in the "TiltStream" of the MOV file.
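One way to investigate is to walk the .MOV's QuickTime atom tree and inventory what the file contains. Note that atom types are exactly four characters, so "TiltStream" is more likely a track or stream name stored inside an atom than an atom type itself. A minimal top-level atom walker, assuming a well-formed file read into memory:

```python
import struct

def iter_atoms(data, offset=0, end=None):
    """Walk QuickTime atoms in a byte buffer, yielding
    (type, payload_offset, payload_size). Container atoms like 'moov'
    can be recursed into by calling this again on the yielded span.
    Whether the tilt data lives in a dedicated track or a vendor atom
    is an open question; this just lists what's there."""
    end = len(data) if end is None else end
    while offset + 8 <= end:
        size, = struct.unpack(">I", data[offset:offset + 4])
        atype = data[offset + 4:offset + 8].decode("latin-1")
        if size == 1:                       # 64-bit extended size follows
            size, = struct.unpack(">Q", data[offset + 8:offset + 16])
            yield atype, offset + 16, size - 16
        elif size == 0:                     # atom extends to end of buffer
            yield atype, offset + 8, end - offset - 8
            size = end - offset
        else:
            yield atype, offset + 8, size - 8
        offset += size
```

Searching the raw bytes for the string "TiltStream" (e.g. inside `moov`/`trak`/`udta` payloads) would then show which atom carries it.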

Hey mistapottaOHS, I'm trying to stitch a dual-fisheye stream into an equirectangular video stream. You mentioned a Python script? Have you managed to do it? Where are you at? Could you share your script for turning dual fisheye into an x264 video? It would be really helpful to find out how you did it. Thanks a lot!

These OpenGL instructions are very similar, if not identical, to the syntax used in Quartz Composer custom plugins. I've been hacking away at this for a few hours, and this is what I've found so far: https://github.com/kfarr/theta-s-quartz

In theory we could replace the fish2sphere code with the OpenGL code pasted above, pulled from the Theta with a hex editor...

A simple algorithm for correcting lens distortion

One of the new features in the development branch of my open-source photo editor is a simple tool for correcting lens distortion. I thought I’d share the algorithm I use, in case others find it useful. (There are very few useful examples of lens correction on the Internet – most articles simply refer to existing software packages, rather than explaining how the software works.)

Lens distortion is a complex beast, and a lot of approaches have been developed to deal with it. Some professional software packages address the problem by providing a comprehensive list of cameras and lenses – then the user just picks their equipment from the list, and the software applies a correction algorithm using a table of hard-coded values. This approach requires way more resources than a small developer like myself could handle, so I chose a simpler solution: a universal algorithm that allows the user to apply their own correction, with two tunable parameters for controlling the strength of the correction.
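Two-parameter radial corrections like the one described are usually some variant of a polynomial in the radius. The quoted post's exact formula may differ, but a common form looks like this, with coordinates normalised so the image centre is the origin:

```python
def correct_radial(xn, yn, k1, k2):
    """Map one destination pixel (normalised to roughly [-1, 1] about the
    image centre) to the source location to sample from, using a
    two-parameter polynomial radial model. k1 and k2 are the user-tuned
    strength parameters; positive values push samples outward, correcting
    barrel distortion. This is the common polynomial form, not necessarily
    the exact formula the quoted editor uses."""
    r2 = xn * xn + yn * yn          # squared distance from centre
    f = 1.0 + k1 * r2 + k2 * r2 * r2
    return xn * f, yn * f
```

Applied per pixel as an inverse map (destination → source), this slots into the same remap machinery a de-fisheye step would use.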
