Topics - driftertravel

Howdy, since no one responded to my last post saying that I shouldn't put a for-sale up here, I thought I'd go ahead. Mods, take it down if you deem it necessary. This workstation was built primarily for PhotoScan, and does better on the AnandTech benchmark than anything I've seen them review. It works quite well for video editing and rendering as well. PM or email for more info. Short-list specs are:

Hey all, earlier this year I built a behemoth rig calibrated to run PhotoScan. It just crushes large projects. The long-term project I had lined up that required all that processing power turned out not to be as long-term as I thought, so I'm thinking I may want to sell the computer, even though I put a lot of time/research/money into the build. Just wondering if people think it would be appropriate to put up a 'for sale' post here; since the machine is really geared towards working with PhotoScan and people are always asking about builds, there may be some interest. It'll also be a pretty good deal.

Hi, I'm curious about pair preselection. I'm under the impression, and let me know if I'm wrong here, that pair preselection is really only helpful when one has 'a lot' of photos. My question is: how many is 'a lot'? At what point does using this function start to pay off in processing time? Thanks!
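For intuition on why photo count matters so much here, a rough back-of-envelope sketch (this is illustrative arithmetic, not PhotoScan's actual matcher): without preselection, alignment has to consider every photo against every other photo, which grows quadratically, while preselection caps the candidate pairs per photo at some constant.

```python
# Illustrative only, not PhotoScan internals: compare how the number of
# candidate image pairs grows with and without preselection.

def exhaustive_pairs(n):
    """Every photo compared against every other photo: n*(n-1)/2."""
    return n * (n - 1) // 2

def preselected_pairs(n, k=40):
    """Assume preselection keeps roughly k candidate neighbors per photo."""
    return min(exhaustive_pairs(n), n * k)

for n in (50, 200, 1000, 5000):
    print(n, exhaustive_pairs(n), preselected_pairs(n))
```

With the assumed k of 40 neighbors per photo, the two counts are about equal in the low hundreds of photos; above that, exhaustive matching pulls away fast (a thousand photos is ~500k pairs exhaustively versus ~40k preselected), which suggests why small projects see little benefit.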

I'd love to be able to set different regions (via a 'new region' tool or similar) that I could then queue for batch dense cloud and mesh creation with export. The function would need to select the appropriate cameras for each region and not process the other cameras in the scene. That would make a lot of things much easier and pretty much eliminate memory problems for those steps. There may already be a way to do this with chunks, so let me know if that's true. Thanks!
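The camera-selection part of the idea is easy to sketch in plain Python. This is purely illustrative logic with hypothetical data structures, not the PhotoScan Python API; a real script would pull camera transforms and the region from the project.

```python
import math

# Illustrative only: hypothetical stand-ins for camera/region data, not
# the PhotoScan Python API. The idea: keep only cameras close enough to
# a region of interest before running the dense/mesh steps on it.

def cameras_for_region(cameras, region_center, radius):
    """Return names of cameras within `radius` of the region center.

    cameras: dict mapping camera name -> (x, y, z) position.
    """
    cx, cy, cz = region_center
    selected = []
    for name, (x, y, z) in cameras.items():
        if math.dist((x, y, z), (cx, cy, cz)) <= radius:
            selected.append(name)
    return selected

cams = {"IMG_001": (0, 0, 2), "IMG_002": (1, 0, 2), "IMG_003": (30, 0, 2)}
print(cameras_for_region(cams, (0, 0, 0), 10))  # IMG_003 is too far away
```

A smarter version would test view-frustum intersection with the region box rather than plain distance, but the distance filter already captures the "don't process the other cameras" idea.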

Hello all, I'm currently working on a project that will involve 3-4 thousand photos, preferably processed at high quality. I can process in chunks, but I'm curious about what happens when I go to merge the dense point clouds. I've got 128 GB of RAM, but I'm wondering if anyone has an idea of where the upper limit on point cloud size lies. I'm thinking I can generate a mesh in another program so I don't have to worry about running out of RAM there, but I'm hoping I can get a quality point cloud of the whole thing to export. Any thoughts?
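For a very rough ceiling, one can do back-of-envelope arithmetic on points per gigabyte. The 50-bytes-per-point figure below is an assumption on my part (position, color, normal, plus overhead), not a measured PhotoScan number, and the 50% headroom factor is equally arbitrary.

```python
# Back-of-envelope only: how many dense-cloud points fit in RAM if each
# point costs `bytes_per_point` in memory? 50 bytes/point and 50%
# headroom are assumptions, not measured PhotoScan figures.

GIB = 2 ** 30

def point_budget(ram_gib, bytes_per_point=50, headroom=0.5):
    """Points that fit using `headroom` fraction of RAM for the cloud."""
    return int(ram_gib * GIB * headroom / bytes_per_point)

print(f"{point_budget(128):,}")  # on the order of a billion points
```

Even with pessimistic per-point costs, that puts the budget for 128 GB in the billions-of-points range, so merging is more likely to hit limits in the merge step's own overhead than in raw point storage.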

Hello all, I haven't tried this yet, so forgive me if it's a stupid question. I have a rather large project coming up (inside/outside of a medium-sized house for real-time applications) and, unlike most of what I have been doing, this will require quite a bit more than a single texture map, as I can only use 4K maps in-engine and my target texel density is quite high. I am quite used to using multiple UV spaces on a single model to increase texture quality; however, I'm not quite sure how to project multiple UV maps in PhotoScan. I know there is an option to create multiple UV maps in the texture generation process, but will this automatically recognize where the UV tiles are on the model when I select 'keep UVs'?

The alternatives I can think of if "auto-magic" is not an option are: A. Export an FBX with camera locations into Mari and re-project the texture, but seeing as there will be thousands of images I'm not sure how viable that is... B. Export a vertex-colored FBX to ZBrush, reproject onto the re-topologized model, and use its multi-tile UV feature to bake textures from vertex colors. The problem here is I lose a lot of the texture quality that is the purpose of using PhotoScan in the first place, plus the inevitable screw-ups with ZBrush re-projection that either take forever or are impossible to clean up... C. Export many copies of the retopo model, moving each set of UVs into 0-1 space one at a time, and bake textures one at a time... kind of a pain, but it should work.
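The bookkeeping behind option C is simple to sketch. Assuming a UDIM-style layout where tile 0 covers u in [0, 1), tile 1 covers [1, 2), and so on (hypothetical data, just to show the translation step):

```python
# Sketch of option C's bookkeeping: UVs laid out across several tiles
# along U (tile 0 covers u in [0,1), tile 1 covers [1,2), ...). To bake
# one tile at a time, shift that tile's UVs back into 0-1 space.

def shift_tile_to_unit(uvs, tile):
    """Return the UVs belonging to `tile`, translated into 0-1 space."""
    out = []
    for u, v in uvs:
        if tile <= u < tile + 1:
            out.append((u - tile, v))
    return out

uvs = [(0.25, 0.5), (1.75, 0.5), (2.1, 0.9)]
print(shift_tile_to_unit(uvs, 1))  # [(0.75, 0.5)]
```

Run once per tile, export that copy of the model, bake, repeat; the only state to track is which tile each exported copy came from.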

Hello, motorized panoheads seem like an easy way to capture images of interiors and such, and I'm wondering if this is a good idea from a technical perspective. If I put the panohead in various positions in a room and took sets of 360-degree shots with roughly 50% overlap, would PhotoScan be apt to process these well? I ask because one of the no-no capture scenarios in the manual is taking photos from a stationary 360-degree position...

This might be beyond the scope of PhotoScan, but I'm finding that stripping frames from 4K video works pretty well; the main problem is that every second or so frame stripped winds up blurry and rates a zero on PhotoScan's image quality analysis. If one could load a video clip into PhotoScan and set a stripping rate (e.g. every 10 frames), and during import PhotoScan ran the image quality analysis on each frame, discarded it if worthless, and took a neighboring frame instead, it would make taking large scans from video extremely viable and speed up the process dramatically...
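The selection logic being proposed can be sketched in a few lines. Here `scores` stands in for PhotoScan's per-image quality metric and the threshold is an arbitrary placeholder; real code would also need to decode the video and hand the surviving frames to alignment.

```python
# Sketch of the proposed import logic: sample every `step`-th frame, and
# if that frame scores below a sharpness threshold, fall back to the
# best-scoring neighbor. `scores` stands in for PhotoScan's image
# quality metric; the 0.5 threshold is an arbitrary placeholder.

def pick_frames(scores, step=10, threshold=0.5, search=3):
    """Return indices of usable frames, roughly one per `step` frames."""
    picked = []
    for i in range(0, len(scores), step):
        if scores[i] >= threshold:
            picked.append(i)
            continue
        # Blurry: look at nearby frames and keep the sharpest usable one.
        lo, hi = max(0, i - search), min(len(scores), i + search + 1)
        best = max(range(lo, hi), key=lambda j: scores[j])
        if scores[best] >= threshold:
            picked.append(best)
    return picked
```

If a whole neighborhood is blurry (say, during a fast pan), the window yields nothing and that sample is simply dropped, which matches the "discard if worthless" behavior.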

So, today I was up for a challenge and wanted to see what I could do scanning objects that are typically difficult if not impossible to scan. I chose my phone, because it was in my pocket. The phone has a glass front face, with a little shiny plastic above and below the screen. The sides and back are one piece of machined and polished aluminum.

Overall the item in question is quite reflective, which in my past experiments has left me unable to align photos properly, or given me a mesh with holes or missing chunks, or (in the case of glass) surfaces that simply aren't recognized.

I feel my results were pretty good; I threw everything at it to try to make it come out well. Considering what it is, it could have been worse. Check the attached pics; they show the dense cloud, sparse cloud, and renders.

Still, the experiment got me thinking about how people go about using PhotoScan on reflective objects. So, I thought I'd post here and see what experience people have with scanning shiny stuff. I'd love to hear tips, tricks, and techniques for doing so.

Hello all, I've noticed that the 35mm focal length EXIF field is empty for my images. Should PhotoScan be calculating this and showing it there? Is 35mm focal length an EXIF tag? I'm copying EXIF data to these TIFFs and I'm afraid something is being lost in the conversion. Should I put the crop-factor-adjusted focal length in the focal length field under Tools > Camera Calibration?
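For reference, the 35mm-equivalent value itself is just the actual focal length times the sensor's crop factor; the sketch below shows the arithmetic with an example crop factor of 1.5 (typical of APS-C), which is an assumption, not anything read from these files.

```python
# 35mm-equivalent focal length is actual focal length times crop factor.
# The 1.5 crop factor below is an example (typical APS-C), an assumption
# rather than a value taken from any particular camera.

def focal_35mm_equivalent(focal_mm, crop_factor):
    return focal_mm * crop_factor

print(focal_35mm_equivalent(50, 1.5))  # 75.0
```

Whether PhotoScan's calibration dialog wants this equivalent value or the real focal length plus sensor pixel size is exactly the question here, so worth checking the manual before entering either.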

I'm using EXR files and they get stripped of EXIF data. It would be great if I could just input focal length/f-stop/ISO/35mm focal length into the appropriate fields for multiple selected photos. It doesn't seem too hard, but exiftool won't write to .exr and I'm having trouble figuring out a way to batch-change the EXIF data of these files. I'd rather not convert the files, but I suppose I can if necessary. Let me know if there is any other way of accomplishing this. Thanks.

Hello all, I just began experimenting with PhotoScan, and this is my first attempt. I didn't have much around to scan, so I took off my shoe and did that. One man, one camera, das Boot.

I'm looking for suggestions to improve here. I can see the potential and just want to keep improving. A few things went wrong that I'm aware of: the laces moved as I turned the shoe to get the shots, the photos used were only 10 MP, I didn't spend any time cleaning the mesh in ZBrush, and I only took a few minutes on the low poly. Nonetheless, here it is for your consideration.

Just wondering: if it were possible to set up an automated focus-stacking system for a photogrammetry rig, would this massively increase the usable data present in each image? Or would it be just as easy to run the separately focused images through PhotoScan directly, or would stacking otherwise mess up the depth calculations?