Would an import-video feature be considered, e.g. to support action cameras? Not for reading the cameras live over WiFi, but for importing videos from a set of cameras after recording. If these could be imported into the iPi Recorder software and exported in the format iPi Mocap Studio requires, it would ease recording and getting a steady frame rate (there are some issues with synchronization, but that could be set manually in the import as an offset).

Hi. We have support for offline videos in our plans, with no specific time frame. As for action cameras, they usually have very wide-angle lenses, so their videos suffer from significant distortion. It will require additional work on our part to make action camera videos usable for tracking.
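To illustrate why wide-angle distortion matters more at the frame edges than at the center, here is a small sketch of the standard Brown-Conrady radial distortion model. The coefficients are hypothetical, just roughly in the range of a wide-angle lens; real values would come from a checkerboard calibration of the actual camera:

```python
import numpy as np

def radial_distortion(x, y, k1, k2):
    """Brown-Conrady radial model: map an undistorted normalized
    image coordinate (x, y) to its distorted position."""
    r2 = x * x + y * y
    factor = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * factor, y * factor

# Hypothetical coefficients for a wide-angle (barrel-distorting) lens
k1, k2 = -0.3, 0.1

# Near the image center the displacement is tiny...
cx, cy = radial_distortion(0.05, 0.05, k1, k2)
# ...but toward the corner of the frame it is large.
ex, ey = radial_distortion(0.8, 0.8, k1, k2)

center_shift = np.hypot(cx - 0.05, cy - 0.05)
edge_shift = np.hypot(ex - 0.8, ey - 0.8)
print(center_shift, edge_shift)
```

The edge displacement comes out orders of magnitude larger than the center displacement, which is why the center region of an action camera's frame stays usable while straight lines near the edges bow badly.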

Oh, I never even thought about GoPro for Mocap Studio. I agree about the distortion issue but the center region should be usable and it's still much higher res than PS Eye after cropping.

I have an older GoPro Hero 3+. It was way too expensive to get multiples for Mocap Studio and I still feel the same about the newer models, but I wonder if any of the cheaper 'knock offs' selling for $80-$90 would work. Here's one:

I asked iPi about higher-res cameras and was told the resolution doesn't really need to be higher than what the PS Eyes offer. Maybe the focal length of a higher-res camera would give the capture volume more area, but I am limited to 6 cams with Basic, so it wouldn't really be beneficial to me.

Yeah, the cost would be too great for any action cams, even at $80, or for DSLR cams for the majority of users, but any cams that use DirectShow drivers should work.

Good point. Where resolution is important is in the visual coverage. Higher resolutions allow you to capture a larger space. That said, processing time will increase dramatically depending on the number of cameras used. Besides, the PS3 Eye cameras can already cover about 20 x 20 feet and most people don't even have that much space. (Me included. Even back when I was recording with PS3 Eye cameras in our 'two-car' garage, my capture space was pretty limited.) :)
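As a rough back-of-the-envelope on the resolution/coverage trade-off: with a simple pinhole model (ignoring distortion), the horizontal pixels landing on each foot of the subject depend only on sensor resolution, field of view, and distance. The FOV and resolution figures below are assumptions for illustration, not measured specs:

```python
import math

def pixels_per_foot(h_res, hfov_deg, distance_ft):
    """Approximate horizontal pixels covering one foot of the
    subject at a given distance (pinhole model, no distortion)."""
    width_ft = 2.0 * distance_ft * math.tan(math.radians(hfov_deg) / 2.0)
    return h_res / width_ft

# Assumed specs: a 640-wide PS3 Eye vs. a 1920-wide camera,
# both with a hypothetical 75-degree horizontal FOV, at 10 feet
ps_eye = pixels_per_foot(640, 75, 10)
hi_res = pixels_per_foot(1920, 75, 10)
print(round(ps_eye, 1), round(hi_res, 1))
```

At the same FOV and distance, the 1920-wide camera puts 3x the pixels on the subject; equivalently, it can stand roughly 3x farther back (covering a larger volume) while keeping the same pixel density the PS3 Eye delivers up close.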

Technically, these cameras are capable of higher frame rates, which does improve motion quality, especially with very fast motions. I know the GoPro is capable of 60, 90 and 120 fps at 1080p, and up to 240 fps at 720p.

If you dropped the res down to the camera's lowest resolution (WVGA, or 848 x 480), it's still slightly higher res than the PS3 Eye and capable of up to 240 fps. (Curiously, I don't think you can use lower frame rates at the WVGA setting, though.)

I'm not sure what the knock-offs are capable of. I imagine the specs are somewhat comparable but I haven't actually looked.

(And, no, I'm not seriously thinking of purchasing multiple GoPro knock-offs... at least not in the near future.) :)

I believe a higher recording frame rate would only really matter with higher-Hz monitors. As far as iPi processing goes, most people are still on 1080p 60 Hz monitors (soon not to be the norm, though), so they only see 60 fps anyway. Higher fps would really just add to the timeline length in iPi, and so to the processing time. If it were a marker-based system, higher frame rate recording would help with tracking the much smaller markers.

Playback at a higher frame rate would look smoother, but I'm not sure that applies to iPi processing, or to the quality of the tracking for the most part.

Yes, I see your point: if someone had a large recording area to set up in, to get the full focal length out of the PS Eyes or other cameras, they would also need many more cameras, which would add to the processing time even on high-end machines. But if you had that much area and could buy many higher-res cameras, plus a processing machine to match, they could definitely be a plus, and for recording in outdoor conditions I could see them as viable too.

Maybe some larger studios using iPi are already doing this, but you don't hear much from them in this forum :)

iPi designed their system for use in smaller areas, which gives it more appeal to small studios and home users, as does the cost of the Basic version. In my opinion they have done pretty well with the output quality for just cheap PS Eye cams or depth sensors.

Recent video processed with the new algorithm, using 6 PS Eyes in a 10' x 12' capture volume in a 12' x 22' room (any glitching is usually from the screen recording process):

https://youtu.be/VJ8X8PCIcEI ... Basic loop tested using iPi copy/paste pose 2x at the end of the timeline, then basic adjusting using Webanimate, but the speed is off a little.

Because Mocap Studio is not a realtime capture system, the monitor playback rate shouldn't have anything to do with how Mocap Studio constructs its point cloud.

A higher frame rate during capture provides the program with more motion data (i.e., 'smoothness'), so the program can rely less on interpolation between frames. This increases the time it takes to process the data, but the result should be a higher degree of accuracy.

Early on, the developers investigated using the PS3 Eye's ability to record 120 fps to improve capture quality. The reason they dropped pursuing 120 fps support was the poor video quality from the PS3 Eye when capturing frames at that speed. It wasn't that the video was too fast for iPi DMC (1.0); it was just too grainy to be useful.

A better camera would not have this problem. I believe higher end optical systems use frame capture rates of 120 to 160 fps.

A 'cheap' GoPro-type camera might do the trick if you had a good way to sync the data.

Side note: Many years ago, before mocap was really affordable or practical for many commercial productions, we would use multiple video cameras to record 'motion data' of a performance. Then we could bring this footage into our 3D animation program, project the images against reference planes, and manually 'roto-capture' our rigged characters to the frames. This was tedious but it worked surprisingly well in several video game commercials I worked on.

Anyway, the way we synchronized the multiple pieces of footage was by simply clapping our hands over our heads and using that clap as the start point. Funny how laughably 'low-tech' this seems now, but technically we were doing the same thing. :)
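Incidentally, the clap trick automates nicely: a sharp transient in two audio tracks can be lined up by finding the peak of their cross-correlation. A minimal sketch with synthetic signals standing in for the real recordings (the sample counts and noise level are made up for the demo):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two synthetic "audio" tracks: the same clap transient buried in
# noise, with camera B's track shifted 480 samples earlier
# (~10 ms at 48 kHz) to simulate a start-time offset.
n, true_offset = 4000, 480
clap = np.zeros(n)
clap[1000:1010] = 5.0  # sharp spike standing in for the clap
track_a = clap + 0.1 * rng.standard_normal(n)
track_b = np.roll(clap, -true_offset) + 0.1 * rng.standard_normal(n)

# Full cross-correlation; the peak position gives A's lag behind B.
corr = np.correlate(track_a, track_b, mode="full")
lag = np.argmax(corr) - (n - 1)
print(lag)
```

The recovered lag equals the 480-sample offset, which you could then feed into an import tool as the per-camera offset mentioned at the top of this thread.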

Yep, exactly how mocap was done years ago: hand-positioning every 3-5 video frames as pose-to-pose animation. And yes, it works well, and yes, it's tedious. Many still use this process even with mocap data, as a key-reduction technique in cleanup and to create animations in different programs.

You are correct, and I wasn't disputing the 120 fps point; higher-fps capture from a device could result in smoother tracking, depending on the cameras used. Referring to iPi and 2D cameras, I was just pointing at the amount of time it would add to iPi tracking to process twice as many frames. Maybe that's another reason they backed off 120 fps and went with a better trajectory smoothing filter, but I don't know.

A PS Eye could run at 120 fps, I'm sure, but right, that's not as useful for how iPi Studio works (I always set my cam exposure to 60 on all of them). With iPi, could too fast a frame rate also cause a duplicate-frame scenario, maybe causing a stutter look in the animation? Sometimes it even does this at 60 fps.

Most higher-end optical systems use IR and luminescent balls now; the video is just used as a witness-camera reference, not for the tracking, with the exception of Organic Motion's markerless system, and they use an add-on piece of hardware to process in real time.

There have been many markerless mocap systems developed using the video-tracking method; many looked very viable and good, but it seems they never made it to the production stage.

I just meant that a monitor can't refresh faster than its rating: 60 Hz is 60 fps, so you will only see 60 fps of a 120 fps recorded video, no matter the speed of the capturing device. That's about playback, not the tracking process iPi uses. The resolution you see (4K, 1440p, 1080p or 720p) also depends on the monitor, not just the camera used. But I think 120 fps video would only add frames to the mocap timeline, and for iPi that seems bad to me from a purely processing-time point of view.

That's also a reason why Kinect processing loses tracking much more easily: with its 30 fps video frame rate it doesn't always keep up with the motion speed during processing, especially on faster motions, in my experience with them. It makes the processing faster, but less accurate. Then again, I haven't tried the Kv2 with the new updates, or the new algorithm, so maybe they have remedied some of this.
