Forum Discussion

7 Replies

  • That would probably work, although for stuff that close to the cameras, two cameras would probably get you superior results. Thanks for sharing about Hyperlapse; that is really cool. I'd love to see a stereoscopic hyperlapse video, either by shifting the images or by having two actual camera angles.
  • Didn't look at the link; the technology to do this is several years old. I remember looking up After Effects techniques to take 2D footage and create a 3D video project. If I am not mistaken, the idea is to create two layers of the same video: the bottom layer gets a red curves adjustment, the top layer a blue one. Set both layers to overlay, and offset them.

    As for the shakiness, that is just a matter of shake correction, easy enough to do even with Windows Media Player (I did a tutorial for some people at mtbr.com for GoPro cams).

    I believe there is an AE plugin to transform 2D video into 3D.

    Found the article, dated 2011:

    http://www.fxguide.com/featured/art-of-stereo-conversion-2d-to-3d/
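    The red/blue layer-offset trick described above is essentially a naive anaglyph. As a toy illustration only (the channel layout and shift amount are my assumptions, not from the article), it can be sketched with NumPy by shifting the red channel of an RGB frame horizontally:

    ```python
    import numpy as np

    def make_anaglyph(frame, shift=8):
        """Naive red/cyan anaglyph from a single RGB frame (H, W, 3).

        Shifting the red channel horizontally fakes the parallax that a
        second camera would provide; green and blue stay in place.
        """
        out = frame.copy()
        out[:, shift:, 0] = frame[:, :-shift, 0]  # red channel shifted right
        out[:, :shift, 0] = 0                     # no red data at the left edge
        return out
    ```

    A fixed shift applies the same fake depth to the whole image, which is why real 2D-to-3D conversion (per-pixel depth) looks so much better than this trick.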
    Nexa
    Honored Guest
    There's also software that'll turn some images into a 3d model.

    Anyway, I would love for Microsoft to release 2D-video-to-3D-model conversion software.
  • Please look at the link. It has nothing to do with 2D-to-3D conversion! This technique creates smooth, shake-free, first-person 2D videos. Impressive!
    Nexa
    Honored Guest
    If it's the same video I'm thinking of (can't watch right now), then they convert the video into point-cloud-based models, then convert those into 3D meshes. Then they make a virtual camera travel a path along those meshes, which are compiled from the video and used to determine the approximate camera path and smooth it out. Something like that.
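    The "smooth it out" step above is, in the actual research, a full path optimization over the reconstructed scene. As a much simpler stand-in to show the idea (my own sketch, not Microsoft's method), a per-frame camera trajectory can be smoothed with a moving average:

    ```python
    import numpy as np

    def smooth_path(positions, window=15):
        """Smooth a per-frame camera trajectory of shape (N, 3).

        A moving average over `window` frames removes high-frequency
        shake; a toy substitute for Hyperlapse's path optimization.
        """
        kernel = np.ones(window) / window
        return np.stack(
            [np.convolve(positions[:, axis], kernel, mode="same")
             for axis in range(3)],
            axis=1,
        )
    ```

    The real system also has to keep the smoothed path close to well-reconstructed geometry so the re-rendered frames have image data to draw from; a plain moving average ignores that constraint.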
  • "noquarter" wrote:
    Please look at the link. It has nothing to do with 2D-to-3D conversion! This technique creates smooth, shake-free, first-person 2D videos. Impressive!


    I admit, I only saw your thread title and the word Microsoft. :oops: Even so, I'm sure they acquired it from somewhere :lol:
  • "Nexa" wrote:
    If it's the same video I'm thinking of (can't watch right now), then they convert the video into point-cloud-based models, then convert those into 3D meshes. Then they make a virtual camera travel a path along those meshes, which are compiled from the video and used to determine the approximate camera path and smooth it out. Something like that.


    The key point is that they made a virtual camera travel a path along those meshes. If they made two virtual cameras, it would enable stereoscopy, and those two virtual cameras could even be re-rendered with different offsets to match each individual's IPD.

    Is my understanding correct?