Yes,
I was testing on a very small sample and realized that some of your sequences were not giving the right results. I'm doing a new run right now and will check the results tomorrow (they should look a whole lot better).

Yes, I was thinking about this problem and how you might get around it.

Would it be possible to calculate some expected angle buckets? In the editor, you can normalize a sequence, which aligns all images with the direction of the associated line. If you created a bucket for images that match that forward angle, and a bucket for everything else, that might fix the issue.

I guess I’m not sure if you want to create scripts that would solve this issue for all cases, or if you’re looking to just run a refresh on my region. If the latter, I could give you more information on my techniques, and you could tailor the scripts to match that? I’m the only contributor in this area except for one sequence by JB.

For each day of the problem sequences (which would have occurred between late July and early September of 2018), I ran two cameras concurrently, one facing forward (0/360 degrees) and one to the left (at 300 degrees, I think). I interpolated the location for each camera using the same GPX file, so that's why they coincide. I tended to add a slight offset of a fraction of a second so the images weren't on top of each other, but this wasn't always the case. Angles were calculated by averaging the look-behind angle (to the previous image) and the look-ahead angle (to the next one) for each image, except for the first and last images (the first just looks ahead; the last repeats the second-to-last angle).
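A minimal sketch of that averaging scheme (illustrative only, not pkoby's actual script; `bearing` uses a flat-earth approximation, and the circular mean avoids the 359/1 degree wrap-around problem):

```python
import math

def bearing(p, q):
    """Approximate compass bearing from point p to q (lat, lon in degrees).

    Uses a flat-earth (equirectangular) approximation, fine over the
    few metres between consecutive images."""
    dlon = math.radians(q[1] - p[1]) * math.cos(math.radians((p[0] + q[0]) / 2))
    dlat = math.radians(q[0] - p[0])
    return math.degrees(math.atan2(dlon, dlat)) % 360

def mean_angle(a, b):
    """Average two bearings via unit vectors, so 350 and 10 average
    to 0 instead of 180."""
    x = math.cos(math.radians(a)) + math.cos(math.radians(b))
    y = math.sin(math.radians(a)) + math.sin(math.radians(b))
    return math.degrees(math.atan2(y, x)) % 360

def sequence_angles(points):
    """Per-image angles as described: the first image looks ahead, the
    last repeats the second-to-last angle, and the rest average the
    look-behind and look-ahead bearings. Assumes at least two points."""
    angles = [bearing(points[0], points[1])]
    for i in range(1, len(points) - 1):
        angles.append(mean_angle(bearing(points[i - 1], points[i]),
                                 bearing(points[i], points[i + 1])))
    angles.append(angles[-1])
    return angles
```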

So to fix my area, you might be able to look for the images that occur within two seconds of each other (my cameras had a two-second delay), with one sequence matching the expected angles and one with an offset. So if you start with an image with an angle of 0, the next one should be to the north. That next image might have an angle of 45 (the average of the look-behind bearing of 0 and a look-ahead bearing of 90), which would put the image after it to the east. And so on.
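That pairing heuristic could be sketched roughly like this (an illustration with hypothetical data shapes, not the actual backend code; each image is assumed to be a `(timestamp_seconds, angle_degrees)` tuple, sorted by time):

```python
def pair_concurrent(seq_a, seq_b, max_dt=2.0):
    """Pair images from two concurrently-run cameras by capture time.

    Returns the median angular offset of camera B relative to camera A
    (degrees, wrapped to (-180, 180]), so the off-axis sequence can be
    identified; None if nothing pairs up within max_dt seconds."""
    offsets = []
    j = 0
    for t_a, ang_a in seq_a:
        # advance j to the image in seq_b closest in time to t_a
        while j + 1 < len(seq_b) and abs(seq_b[j + 1][0] - t_a) <= abs(seq_b[j][0] - t_a):
            j += 1
        t_b, ang_b = seq_b[j]
        if abs(t_b - t_a) <= max_dt:
            # signed angle difference, wrapped to (-180, 180]
            offsets.append((ang_b - ang_a + 180) % 360 - 180)
    offsets.sort()
    return offsets[len(offsets) // 2] if offsets else None
```

With a forward camera and one at 300 degrees, the returned offset would come out around -60, flagging the second sequence as the left-facing one.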

EDIT TO ADD:
I looked at some single sequences too, not concurrent with other angles. These suffer from some of the same issues, though it's not as clear why. Perhaps limiting the next image in a sequence to a distance threshold might help? A time threshold would be possible too, but pauses at lights and whatnot would throw that off. Though a split to a new sequence in those cases wouldn't be the worst thing. Again, for my area, the real issues really only occurred in August (during the Q3 challenge).
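The distance-threshold idea could look something like this (a sketch only; the 30 m default is an arbitrary placeholder, and the distance uses an equirectangular approximation):

```python
import math

def split_on_gaps(points, max_gap_m=30.0):
    """Split an ordered list of (lat, lon) images into sub-sequences
    wherever consecutive images are more than max_gap_m apart.

    The 30 m default is a placeholder; the right threshold depends on
    capture interval and driving speed."""
    def dist_m(p, q):
        # equirectangular approximation, accurate enough at these scales
        x = math.radians(q[1] - p[1]) * math.cos(math.radians((p[0] + q[0]) / 2))
        y = math.radians(q[0] - p[0])
        return 6371000.0 * math.hypot(x, y)

    runs = [[points[0]]]
    for prev, cur in zip(points, points[1:]):
        if dist_m(prev, cur) > max_gap_m:
            runs.append([])  # gap too large: start a new sub-sequence
        runs[-1].append(cur)
    return runs
```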

Hope this information helps.

As a last-ditch option, you could also possibly batch-delete all the offending sequences from August 2018, but re-uploading would be kind of a pain. I'd get to it eventually, though.

@pkoby, thanks for the details! We want to solve this problem for the generic use case, so I will try to cluster the offsets within a time-calculated sequence in order to find the right offset buckets, which in this case are not (-45 to +45 degrees, pointing ahead) and (-125 to -45 degrees, pointing to the left), but rather ahead and a bit more to the right. If I can find these values (let's say assuming 4 buckets in total), we should catch most of the outliers that are creating the spiderwebs right now.
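One simple way to discover such buckets, sketched under the assumption that each image already has an angular offset computed against its time-cut track direction (the bin width and share threshold are placeholder values):

```python
from collections import Counter

def offset_buckets(offsets, bin_width=15, min_share=0.25):
    """Cluster per-image angular offsets (degrees, relative to the
    time-cut track direction) into coarse buckets by histogramming.

    Returns the centers of all bins holding at least min_share of the
    images, i.e. the dominant camera orientations. Caveat: a camera
    pointing near 0 degrees can straddle the first and last bins;
    merging wrap-around bins is left out of this sketch."""
    bins = Counter(int((d % 360) // bin_width) for d in offsets)
    threshold = min_share * len(offsets)
    return sorted(b * bin_width + bin_width / 2
                  for b, n in bins.items() if n >= threshold)
```

The bin centers only approximate the true camera headings; a circular mean over the members of each bucket would then give the actual offset values.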

I have some other things to do today but will get back to this ASAP - this is a very good training ground for getting things right, and we want to solve this in a good way, for obvious reasons.

Thanks for the help @pkoby - I’m actually thrilled to finally get to this.

So, today I hacked together a little debugging HTML page to see the recalculated sequences (different colors per sequence, so you can see how they stretch and where they cross) before putting them into the backend system. This is the result after today's work, @pkoby - it looks MUCH better. Now I'm applying this to the system; let's see what you think!

I think you lost a lot of photos. Hopefully they’re still in the database?
Some of those empty roads showing up weren't empty before. I didn't notice these regions being broken or missing prior, so they seem to have just been left out.

@pkoby, I have now re-rendered the whole area with dynamic clustering of the camera-offset variations within each time-cut sequence, separating the two cameras quite nicely - please let me know what you think. Also, the 7th/8th of Aug 2018 etc. are back, see screenshot.

This definitely is an improvement. I’m still seeing some issues around tight corners, but again, it sounds like that’s tough for a computer to guess. There are also some weird short bits in sequences that don’t really make sense to me (e.g. here), but I don’t see any conjoined directions nor photos where there aren’t any, and those were the two big issues for me.

Thanks for the feedback! I'm now trying to sort out the recalculation of tracks with more than 2 cameras, each with its own track; this involves averaging the (e.g. 4) tracks and then calculating each camera's offset against that averaged track in order to reduce noise. When that is working, we are ready to reprocess the older imagery without shredding things.
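The multi-camera averaging described here might look roughly like so (an illustration only, assuming the tracks are already time-aligned lists of per-image angles of equal length; a circular mean keeps tracks near the 0/360 seam from averaging to the opposite direction):

```python
import math

def average_track(tracks):
    """Average several time-aligned angle tracks (equal-length lists of
    degrees) into one reference track using the circular mean, then
    return each track's median offset against that reference."""
    n = len(tracks[0])
    reference = []
    for i in range(n):
        x = sum(math.cos(math.radians(t[i])) for t in tracks)
        y = sum(math.sin(math.radians(t[i])) for t in tracks)
        reference.append(math.degrees(math.atan2(y, x)) % 360)
    offsets = []
    for t in tracks:
        # signed differences wrapped to (-180, 180]; median damps noise
        ds = sorted((a - r + 180) % 360 - 180 for a, r in zip(t, reference))
        offsets.append(ds[len(ds) // 2])
    return reference, offsets
```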

It requires some more work but I think this is a first good step, thanks for the patience and testing so far!

Yeah, no problem! I went through all of August checking my tracks, and I noticed that the sequences often split at traffic lights. I assume that's because I delete photos during processing, so there's a jump of a number of seconds. Other than that, I found no issues.
