On Wed, 07 Feb 2007 15:25:25 +0100, Pablo dAngelo wrote:
> Hi Yuv,
>
>> [quoted text muted]
>
> I agree that time spent by the human in front of the computer needs to be
> minimized. However, have you measured the time that the optimizer takes
> in your workflow? I estimate the time spent in the optimizer is on the
> order of 1-2 seconds, at least for the typical 360 deg panorama with
> fisheye images. IMHO, saving 50% of that time is a marginal improvement
> that is not really worth many hours of work. I guess it takes longer
> to read the optimisation result and press the OK button.
>
> For a 200 image panorama this is different, and to achieve dramatic
> improvements, an optimization algorithm that is designed for problems with
> many variables is required.
For me autopano-sift is the bottleneck: you can't get to work until it
completes, and it's *slow*.
It occurs to me that Hugin could include some statistical methods to
suppress the impact of bad control points and greatly speed up
"wall-clock time" optimization, which for me often involves more time
trimming crummy CPs than actual calculation time (which, as Pablo
mentioned, takes only 1-2 seconds).
After the rough first optimizer run, it could provisionally disable
all control points which deviate in their image (or across all images)
by N times the standard deviation of CP offsets, where N is
user-configurable (3-5 might be a good range). Since the number of
CPs is small in a given image, using the whole pano might improve the
statistics.
These CPs wouldn't be deleted, just disabled, and their offsets would
still be computed, so that a new round of rejection could commence.
Some might "come back to life" after a given round. An iterative
optimizer that repeatedly ran in this way until the average control
point distance converged would be very useful. I.e. press "Robust
Optimize" and the optimizer:
1. Runs once, with all CPs active.
2. Evaluates and disables `bad' CPs.
3. Runs again.
4. Repeats steps 2 and 3 until convergence.
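In rough C++ terms the loop might look like this (the ControlPoint
fields and the optimize callback are made up for illustration; this is
not Hugin's actual API):

#include <algorithm>
#include <cmath>
#include <functional>
#include <limits>
#include <vector>

struct ControlPoint {
    double error;   // CP distance after the most recent optimizer run
    bool   enabled; // disabled CPs are kept and re-evaluated, never deleted
};

// 'optimize' stands in for one optimizer pass: it should fit only the
// enabled points but refresh the error of *all* points, so a disabled
// CP can come back to life in a later round.
void robustOptimize(std::vector<ControlPoint>& cps,
                    const std::function<void(std::vector<ControlPoint>&)>& optimize,
                    double nSigma = 3.0,   // the user-configurable N
                    int maxRounds = 10,
                    double tol = 1e-3)
{
    if (cps.empty()) return;
    double prevMean = std::numeric_limits<double>::max();
    for (int round = 0; round < maxRounds; ++round) {
        optimize(cps);                                    // steps 1 and 3

        // offset statistics over the whole pano, as suggested above
        double sum = 0.0, sumSq = 0.0;
        for (const ControlPoint& cp : cps) {
            sum   += cp.error;
            sumSq += cp.error * cp.error;
        }
        const double mean  = sum / cps.size();
        const double sigma =
            std::sqrt(std::max(0.0, sumSq / cps.size() - mean * mean));

        for (ControlPoint& cp : cps)                      // step 2
            cp.enabled = (cp.error <= mean + nSigma * sigma);

        if (std::fabs(prevMean - mean) < tol)             // step 4
            break;
        prevMean = mean;
    }
}

Computing the statistics over all CPs each round, not just the enabled
ones, is what lets a disabled point be re-enabled once the fit improves.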
JD

I am running optimizations that can take upwards of 1 week to complete
(thousands of images, tightly packed).
I was also going to ask about a sparse optimizer, so I'm glad you
brought it up. I've seen sba, Sparse Bundle Adjustment
(http://www.ics.forth.gr/~lourakis/sba/) and it looked like it might
be just the thing.
I guess I'll dig a little deeper and see if I can figure out what's
involved in replacing the optimizer with a parallel or sparse version.
Fortunately, there appears to be a completely clean divide at the
LM interface, since the current LM solver is straight out of MINPACK.
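For anyone poking at the same spot: MINPACK-style LM drivers are fed
through a single user-supplied residual callback, roughly of this shape
(a sketch of the general lmdif-style convention, not the exact signature
hugin uses):

// MINPACK-style LM drivers call one user function that fills the
// residual vector fvec (length m) for the current parameters x
// (length n). Roughly:
typedef void (*lm_residual_fn)(int m,           // residual count (2 per CP pair)
                               int n,           // parameters being optimized
                               const double* x, // current parameter estimate
                               double* fvec,    // out: residuals at x
                               int* iflag);     // set negative to abort

// Swapping in sba or a parallel LM should mostly mean re-wiring this
// one callback, since everything above it only sees parameters in,
// residuals out.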
-- Noel
On 2/7/07, Pablo dAngelo <Pablo.dAngelo@...> wrote:
> [quoted text muted]

On Wed 07-Feb-2007 at 05:15 -0800, Yuval Levy wrote:
> --- Pablo dAngelo <Pablo.dAngelo@...> wrote:
> > For me the optimizer is normally not the bottleneck,
> > but remapping and blending is.
>
> For me (and this is *the* main reason why I use PTgui
> and not hugin, sorry Pablo) it is exactly the
> opposite.
>
> Setting CP and the optimizer are the bottleneck for
> me, because they impose a waiting time on the user.
The optimizer is almost instantaneous since Rik fixed it. The only
time I remember waiting for it recently was when calculating TCA and
that was with 40000 control points.
Setting control points is a real bottleneck; the other things that
take time for me are editing masks for enblend and manually
retouching parallax errors in hand-held shots.
--
Bruno

Hi Yuv,
> Setting CP and the optimizer are the bottleneck for
> me, because they impose a waiting time on the user.
>
> I could not care less about the CPU time required for
> remapping and blending, because it is a batch process
> that does not require my time.
I agree that time spent by the human in front of the
computer needs to be minimized. However, have you
measured the time that the optimizer takes in your workflow?
I estimate the time spent in the optimizer is on the order of
1-2 seconds, at least for the typical 360 deg panorama
with fisheye images. IMHO, saving 50% of that time is a
marginal improvement that is not really worth many
hours of work. I guess it takes longer to read
the optimisation result and press the OK button.
For a 200 image panorama this is different, and to
achieve dramatic improvements, an optimization
algorithm that is designed for problems with many variables
is required.
> Whatever does the job (i.e. reduce
> the time I spend in front of the computer to achieve
> the same result) is good for me. My benchmark is 5
> minutes HIT (Human Input Time) for my typical
> 6@...°/8Mpx equirectangular shot. If you get me down to
> 4 minutes HIT you'd save me easily 10-20 hours per
> year.
ciao
Pablo

--- Pablo dAngelo <Pablo.dAngelo@...> wrote:
> For me the optimizer is normally not the bottleneck,
> but remapping and blending is.
For me (and this is *the* main reason why I use PTgui
and not hugin, sorry Pablo) it is exactly the
opposite.
Setting CP and the optimizer are the bottleneck for
me, because they impose a waiting time on the user.
I could not care less about the CPU time required for
remapping and blending, because it is a batch process
that does not require my time.
> However, the performance of the optimizer could
> probably be improved much more by using a sparse
> optimisation routine that calculates only the
> first and second derivatives that are not zero, than
> by utilizing multiple CPUs
you lost me in outer space here, but I trust your
technical judgment. Whatever does the job (i.e. reduce
the time I spend in front of the computer to achieve
the same result) is good for me. My benchmark is 5
minutes HIT (Human Input Time) for my typical
6@...°/8Mpx equirectangular shot. If you get me down to
4 minutes HIT you'd save me easily 10-20 hours per
year.
Yuv

Noel Gorelick <gorelick@...> wrote:
> I'm writing to see if anyone's done any work on making the optimizer
> utilize more than 1 CPU, or more to the point, if anyone's thought
> about how to go about this.
>
> As I understand it, at each step the optimizer evaluates a gazillion
> functions, one for each control point pair. Presumably these could be
> done in parallel on a multi-CPU system.
For me the optimizer is normally not the bottleneck, but remapping and
blending is.
However, it is probably not hard to use OpenMP to parallelize the
current code; this just requires a recent compiler with OpenMP support.
Obviously, all functions executed in parallel need to be reentrant,
which I'm not sure they are.
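For illustration, the per-control-point evaluation would parallelize
along these lines (all names here are invented; this is not the actual
hugin code):

#include <cmath>
#include <vector>

struct CPResidual { double dx, dy; };  // one control point pair's error terms

// Dummy stand-in for reprojecting CP i under the current parameters;
// the real function would do the camera/pano geometry.
CPResidual evaluateControlPoint(int i, const std::vector<double>& params)
{
    return { std::sin(params[0] + i), std::cos(params[0] + i) };
}

// Each iteration is independent of the others, so OpenMP can spread the
// loop over all CPUs -- but only if evaluateControlPoint() is reentrant,
// i.e. touches no shared mutable state.
void evaluateAll(const std::vector<double>& params,
                 std::vector<CPResidual>& residuals)
{
    #pragma omp parallel for
    for (int i = 0; i < static_cast<int>(residuals.size()); ++i)
        residuals[i] = evaluateControlPoint(i, params);
}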
However, the performance of the optimizer could probably be improved
much more by using a sparse optimisation routine, one that calculates
only the first and second derivatives that are not zero, than by
utilizing multiple CPUs.
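To make the sparsity argument concrete: each control point links
exactly two images, so its residual has nonzero derivatives only with
respect to those two images' parameters. A toy sketch that just builds
that pattern (all names hypothetical):

#include <cstdio>
#include <vector>

struct CPLink { int imgA, imgB; };      // the two images a CP connects

// Hypothetical: each image contributes this many variables (e.g. y/p/r).
constexpr int PARAMS_PER_IMAGE = 3;

// For each CP's residual row, list the Jacobian columns that can be
// nonzero. A dense LM solver differentiates every residual against
// every variable; a sparse one visits only the entries flagged here.
std::vector<std::vector<int>> sparsityPattern(const std::vector<CPLink>& cps)
{
    std::vector<std::vector<int>> cols(cps.size());
    for (size_t r = 0; r < cps.size(); ++r) {
        const int imgs[2] = { cps[r].imgA, cps[r].imgB };
        for (int img : imgs)
            for (int p = 0; p < PARAMS_PER_IMAGE; ++p)
                cols[r].push_back(img * PARAMS_PER_IMAGE + p);
    }
    return cols;
}

int main()
{
    // 200-image chain: every CP row touches 6 of the 600 columns.
    std::vector<CPLink> cps;
    for (int i = 0; i < 199; ++i) cps.push_back({ i, i + 1 });
    const auto cols = sparsityPattern(cps);
    std::printf("row 0 touches %zu of %d columns\n",
                cols[0].size(), 200 * PARAMS_PER_IMAGE);
}

In this toy 200-image case each residual row touches 6 of 600 columns,
so a sparse routine skips about 99% of the derivative work a dense one
would do.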
ciao
Pablo

I'm writing to see if anyone's done any work on making the optimizer
utilize more than 1 CPU, or more to the point, if anyone's thought
about how to go about this.
As I understand it, at each step the optimizer evaluates a gazillion
functions, one for each control point pair. Presumably these could be
done in parallel on a multi-CPU system.
Additionally, there appear to be a number of parallel-enabled LM
solvers available. Anyone have any thoughts on this?
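One place parallelism pays off inside any LM solver is the
finite-difference Jacobian, since each column depends only on its own
perturbed parameter. A generic sketch, not tied to any particular
solver's internals:

#include <functional>
#include <vector>

// fn(x, fvec) fills the residual vector for parameter vector x; it has
// to be reentrant for the parallel loop below to be safe.
using ResidualFn =
    std::function<void(const std::vector<double>&, std::vector<double>&)>;

// Forward-difference Jacobian, one column per parameter. The columns
// are independent, so this -- usually the dominant cost of an LM
// step -- is embarrassingly parallel.
std::vector<std::vector<double>> jacobian(const ResidualFn& fn,
                                          const std::vector<double>& x,
                                          int m, double eps = 1e-6)
{
    const int n = static_cast<int>(x.size());
    std::vector<double> f0(m);
    fn(x, f0);                            // baseline residuals

    std::vector<std::vector<double>> J(n, std::vector<double>(m));
    #pragma omp parallel for
    for (int j = 0; j < n; ++j) {
        std::vector<double> xj = x;       // private copy per column
        std::vector<double> fj(m);
        xj[j] += eps;
        fn(xj, fj);
        for (int i = 0; i < m; ++i)
            J[j][i] = (fj[i] - f0[i]) / eps;
    }
    return J;
}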