Latest revision as of 10:29, 6 April 2009

SoC2009 Mokhtar Khorshid

My original proposal is to allow Hugin to take into account camera shifts between shots. However, I am open to modifying the scope of my proposal as the community sees fit. We have been discussing this on hugin-ptx, so this page will be updated to reflect the final agreement.

After playing around with Hugin's various tools while trying to conceptualize how my code might work, it seems to me that there is room for improvement in Hugin's preview rendering. Keeping with my original concept of accounting for camera movements, I would like to suggest the following idea:

Hugin is currently efficient at detecting stitching points between multiple photos (and allows control points to be tuned manually). The photos are, however, all displayed in 2D and then transformed into a panorama. To simplify stitching photos taken from different points (or simply to make the current control points easier to visualize), we could render the images in 3D space instead. One photo would be used as an anchor with its orientation specified; each of the remaining photos would be represented by a quad in space. Using the stitching points Hugin identifies, we can orient each of these quads so that it is displayed in its expected position (for example, above the anchor and tilted 20 degrees to match its neighboring photo). This would allow us to estimate the spatial configuration of the different photos or, if there is not sufficient overlap, allow the user to define how the photos relate to one another in space.
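To make the quad idea above concrete, here is a minimal Python sketch (not part of Hugin's codebase; all function names are hypothetical) of how each photo quad could be placed: the four corners of a unit quad are pushed out in front of the viewer and rotated by yaw/pitch/roll angles, so a neighboring photo tilted 20 degrees above the anchor is just the same quad with a 20-degree pitch:

```python
import math

def rotation_matrix(yaw, pitch, roll):
    """Build a 3x3 rotation matrix from yaw (y-axis), pitch (x-axis)
    and roll (z-axis) angles, given in degrees."""
    y, p, r = (math.radians(a) for a in (yaw, pitch, roll))
    cy, sy = math.cos(y), math.sin(y)
    cp, sp = math.cos(p), math.sin(p)
    cr, sr = math.cos(r), math.sin(r)
    ry = [[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]]
    rx = [[1, 0, 0], [0, cp, -sp], [0, sp, cp]]
    rz = [[cr, -sr, 0], [sr, cr, 0], [0, 0, 1]]

    def matmul(a, b):
        return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
                for i in range(3)]

    # Apply roll first, then pitch, then yaw (one possible convention).
    return matmul(matmul(ry, rx), rz)

def quad_corners(yaw=0.0, pitch=0.0, roll=0.0, distance=1.0):
    """Return the four 3D corners of a unit photo quad placed `distance`
    in front of the viewer and rotated about the viewer's position."""
    base = [(-0.5, -0.5, -distance), (0.5, -0.5, -distance),
            (0.5, 0.5, -distance), (-0.5, 0.5, -distance)]
    R = rotation_matrix(yaw, pitch, roll)
    return [tuple(sum(R[i][k] * v[k] for k in range(3)) for i in range(3))
            for v in base]

# Anchor photo: identity orientation, straight ahead of the viewer.
anchor = quad_corners()
# Neighboring photo tilted 20 degrees upward, as in the example above.
neighbour = quad_corners(pitch=20.0)
```

The resulting corner positions could then be fed straight to the renderer (e.g. as OpenGL quad vertices), and the same angles could be taken from Hugin's estimated orientations or set interactively by the user.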

We will need to define an actual scope for this project and nail down more specific objectives.