06-08-2016 12:07 PM
nosys70 said:
Well, you are mixing several concepts here.
Photogrammetry is the use of 2D pictures to measure objects (or the distances between objects).
Photogrammetry for VR is a 3D extension of that: making a 3D reconstruction, sometimes with a scanning device (lidar).
The result of that reconstruction is numeric data (a cloud of points).
Using textures shot with another camera, you can recreate the illusion of 3D objects (a room or a single object),
but a cloud of points is not a 3D object.
The first problem is that you need to scan the place from many viewpoints to avoid occlusion, or else you only get partial object data. That is easy for some objects, like a statue, but even a simple chair requires a lot of work.
The second problem is that your cloud of points makes no distinction between the chair and the carpet it is standing on. So at some point you need to convert the cloud of points to a mesh and edit the mesh to separate the objects
(you can also do that at the point-cloud level).
This can be automated relatively easily for some situations (assuming the flat floor is one object and the things on the floor are separate objects), but it definitely requires a huge amount of editing to separate, say, the books in a library.
So usually what you see is a mix of technologies, where spaces are scanned to get a rough idea of the volume, and the common objects are replaced by proper 3D models that you can eventually interact with.
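To make the floor-versus-objects split concrete, here is a minimal sketch using the Open3D library (my choice for illustration; the file name room_scan.ply and the thresholds are placeholder assumptions): fit the dominant plane with RANSAC and treat it as the floor, then cluster whatever is left into candidate objects.

```python
# Minimal sketch: separate the floor from the objects standing on it.
import numpy as np
import open3d as o3d

pcd = o3d.io.read_point_cloud("room_scan.ply")   # hypothetical scan file

# Fit the dominant plane with RANSAC and call it the floor.
plane_model, floor_idx = pcd.segment_plane(distance_threshold=0.02,
                                           ransac_n=3,
                                           num_iterations=1000)
floor = pcd.select_by_index(floor_idx)
rest = pcd.select_by_index(floor_idx, invert=True)

# Cluster the remaining points into candidate objects (chair, table, ...).
labels = np.array(rest.cluster_dbscan(eps=0.05, min_points=50))
print(f"{labels.max() + 1} candidate objects found above the floor")
```

In a real room you would still have to merge, split and name those clusters by hand, which is exactly the editing work described above.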
06-08-2016 03:33 PM
Microsoft has a project like this where they recreate a place from the thousands of pictures taken by people and posted on the web (https://photosynth.net/). So you can imagine that the capture work is already pretty much done for most tourist spots.
People are already scanning archaeological sites in Africa to save them (digitally) from destruction by war.
On the other hand, if you need a quick and dirty reconstruction, you can do it with one of the many depth cameras available, like the Kinect, the PrimeSense, the Intel RealSense, etc.
For example, in a few minutes I can scan my living room and get a complete 3D capture that I can use for a VR visit (check programs like KScan3D, Scenect or ReconstructMe).
Unfortunately, as posted before, this kind of scan is pretty much unable to differentiate objects, so if you just want to visit a room it can do the job, but if you expect some interaction (opening a door, moving a chair) it is pretty useless.
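For reference, here is roughly what one frame of such a depth-camera capture looks like in code, again sketched with Open3D and with the frame file names assumed; the dedicated scanning programs above essentially do this for every frame and fuse the results into one model.

```python
import open3d as o3d

# One colour + depth frame saved from the sensor (file names are placeholders).
color = o3d.io.read_image("frame_color.png")
depth = o3d.io.read_image("frame_depth.png")
rgbd = o3d.geometry.RGBDImage.create_from_color_and_depth(
    color, depth, convert_rgb_to_intensity=False)

# Open3D ships default PrimeSense/Kinect-style camera intrinsics.
intrinsics = o3d.camera.PinholeCameraIntrinsic(
    o3d.camera.PinholeCameraIntrinsicParameters.PrimeSenseDefault)

# Back-project the depth image into a coloured point cloud and show it.
pcd = o3d.geometry.PointCloud.create_from_rgbd_image(rgbd, intrinsics)
o3d.visualization.draw_geometries([pcd])
```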
The main problem with scanning versus 3D editing is mesh topology.
When you scan, you get a random cloud of points that is then reduced to a mesh (basically, thinning the points down to a reasonable number and then linking them into triangles).
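That thinning-and-linking step can be sketched like this, again assuming Open3D and a placeholder file name; Poisson surface reconstruction is one common way to link the points into triangles (the scanning tools mentioned earlier use their own variants).

```python
import open3d as o3d

pcd = o3d.io.read_point_cloud("arm_scan.ply")     # hypothetical scan
pcd = pcd.voxel_down_sample(voxel_size=0.005)     # reduce the points to a reasonable amount
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.02, max_nn=30))
pcd.orient_normals_consistent_tangent_plane(30)   # Poisson needs consistently oriented normals

# Link the remaining points into a triangle mesh.
mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=9)
o3d.io.write_triangle_mesh("arm_mesh.ply", mesh)
```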
Now imagine you scan your arm and get a very convincing result.
The problem comes when you want to apply deformation to that arm (when you bend it).
In reality your arm bends at the elbow in a very particular way.
If you bend the digital arm, there is almost no chance it gives a realistic result, because the mesh carries no indication of where and how it must bend, so at best it will bend like a garden hose.
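To see what the mesh is missing, here is a tiny linear blend skinning sketch in NumPy: a toy arm bends smoothly at the elbow only because each vertex carries hand-painted weights saying how much it should follow the forearm bone. A raw scan comes with no such weights (all numbers here are made up for illustration).

```python
import numpy as np

def rotation_z(angle_rad):
    """3x3 rotation matrix about the Z axis."""
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# Toy "arm": vertices along the X axis, elbow at x = 1.0 (units are arbitrary).
verts = np.array([[x, 0.0, 0.0] for x in np.linspace(0.0, 2.0, 9)])
elbow = np.array([1.0, 0.0, 0.0])

# Skinning weights: how much each vertex follows the forearm bone.
# A smooth ramp around the elbow gives a smooth bend; a raw scan has no weights at all.
w_forearm = np.clip((verts[:, 0] - 0.8) / 0.4, 0.0, 1.0)

# Forearm bone transform: rotate 60 degrees about the elbow.
R = rotation_z(np.radians(60.0))
forearm_pose = (verts - elbow) @ R.T + elbow

# Linear blend skinning: blend the "stay put" and "follow the forearm" poses per vertex.
skinned = (1.0 - w_forearm)[:, None] * verts + w_forearm[:, None] * forearm_pose
print(np.round(skinned, 2))
```

With hard 0/1 weights the same code produces the sharp garden-hose crease; the smooth ramp is exactly the information a properly retopologized and rigged mesh carries and a raw scan does not.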
Actually, this is a very common issue and some people are working on it:
https://3dideas.wordpress.com/tag/topology/
Retopology is a time-consuming process, so most of the time it is faster to take a generic shape (like a human body) with correct mesh topology and edit it (you can use the scan as a guide).
There is a company doing this from two shots (face/profile): https://www.bodylabs.com/
06-11-2016 07:26 AM
You can minimize this particular issue with better-quality photos. The higher the resolution, the easier the point-cloud calculation; the sharper the image (good spatial resolution, no depth-of-field blur, no motion blur), the better the calculation works; and good, bright, even lighting with no specular highlights or artifacts means less sensor noise to disturb the point-cloud solve and better-quality textures. All of that reduces the number of images you need. Most consumer lenses and camera sensors (well, pretty much all of them) are not ideal for photogrammetry, and the worse the photos are, the more of them you will need and the more clean-up the resulting point cloud/mesh will require. This means you need at least prosumer-level lenses (try to go with primes rather than zooms) and sensors, ideally at or above 36 megapixels, if you want to minimize the number of photos and the post-calculation clean-up.
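As a small practical aid for the blur point above, one cheap way to screen a photo set is the variance-of-the-Laplacian sharpness test; here is a minimal sketch with OpenCV, where the folder name and the threshold are assumptions to be tuned per camera and lens.

```python
import glob
import cv2

# Variance of the Laplacian is a common, cheap sharpness score: images with
# depth-of-field or motion blur score low and can be dropped before they
# pollute the point-cloud solve.
SHARPNESS_THRESHOLD = 100.0   # hypothetical cut-off, tune per camera/lens

for path in sorted(glob.glob("capture/*.jpg")):   # hypothetical folder
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    score = cv2.Laplacian(gray, cv2.CV_64F).var()
    verdict = "keep" if score >= SHARPNESS_THRESHOLD else "reject (too blurry)"
    print(f"{path}: sharpness={score:.1f} -> {verdict}")
```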
nosys70 said:
The problem with pure photogrammetry (working from plain 2D photos) is the huge number of pictures required, and the difficulty of capturing a crowded place while taking dozens of pictures.
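A rough back-of-envelope calculation shows why the picture count climbs so quickly; every number below is an assumption for illustration, not a measured value.

```python
import math

# How many photos does one orbit around a subject take at a given overlap?
horizontal_fov_deg = 54.0   # roughly a 35 mm lens on a full-frame body
overlap = 0.7               # photogrammetry tools usually want 60-80 % overlap
distance_m = 3.0            # camera-to-subject distance
subject_radius_m = 1.0      # e.g. a statue

frame_width = 2 * distance_m * math.tan(math.radians(horizontal_fov_deg / 2))
step = frame_width * (1 - overlap)            # new ground gained per photo
orbit = 2 * math.pi * (distance_m + subject_radius_m)

photos_per_ring = math.ceil(orbit / step)
print(f"about {photos_per_ring} photos per ring, times several heights")
```

With two or three rings at different heights plus detail shots, a single statue already needs on the order of a hundred photos, which is the "huge number of pictures" problem mentioned in the quote above.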