Forum Discussion

🚨 This forum is archived and read-only. To submit a forum post, please visit our new Developer Forum. 🚨
BlueSpeedwell
Honored Guest
10 years ago

Browsing the Web with the Rift

I've started tinkering with C++/OpenGL/Windows in the evenings with ideas for a web browsing app specifically for the Rift - I believe there is something similar for the Gear on mobile.

I have a simple concept working pretty well that creates a number of "browser surfaces" in 3D space, renders web content to them in real time and allows input via mouse and keyboard. The underlying browser technology is the same as the latest release version of Chrome so it renders all modern web technology including HTML5, CSS3, WebGL, PDFs, video and soon, WebVR. The next step is to make it work with the Rift - trouble is I don't have a DK2 and I don't think you can buy them any more.
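The per-frame texture update described above can be sketched in plain C++. Chromium-based embeds (for example CEF's off-screen rendering mode, which may be what is in use here) typically hand back a 32-bit BGRA pixel buffer from their paint callback. Before uploading with glTexSubImage2D you either pass GL_BGRA directly (where supported) or swizzle to RGBA first. A minimal, dependency-free sketch of that swizzle; the function name is hypothetical:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Convert a 32-bit BGRA pixel buffer (the format Chromium-style off-screen
// rendering callbacks typically deliver) into RGBA, ready to upload with
// glTexSubImage2D(..., GL_RGBA, GL_UNSIGNED_BYTE, data). If the GL_BGRA
// extension is available you can skip this copy and upload directly.
std::vector<std::uint8_t> bgraToRgba(const std::uint8_t* bgra,
                                     int width, int height) {
    std::vector<std::uint8_t> rgba(static_cast<std::size_t>(width) * height * 4);
    for (std::size_t i = 0; i < rgba.size(); i += 4) {
        rgba[i + 0] = bgra[i + 2]; // R (stored third in BGRA)
        rgba[i + 1] = bgra[i + 1]; // G
        rgba[i + 2] = bgra[i + 0]; // B (stored first in BGRA)
        rgba[i + 3] = bgra[i + 3]; // A
    }
    return rgba;
}
```

In a real app this (or the direct GL_BGRA upload) would run in the browser's paint callback, ideally restricted to the dirty rectangles so a mostly static page costs almost nothing per frame.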

My question: Is it worth diving into this wholeheartedly and trying to borrow one from a friend, or are there already dozens of similar products out there? Searching suggests there aren't, but I don't know if I'm looking in the right places.

Then, if it is worth doing and I can find one to borrow for a little while, is there a canonical starting point people tend to use for a C++ OpenGL/Rift app that isn't Unity-based?

I think this could be pretty interesting to work on, so any insight or advice is much appreciated.

Cheers.

7 Replies

  • nosys70
    Expert Protege
    reading text with a Rift is painful, and mouse navigation is nonexistent or useless.
    find another idea
  • good point re: text - planning to use gaze tracking or voice for input though.
  • nosys70
    Expert Protege
    the interesting feature of the Rift is the stereoscopic picture.
    That means you can add depth to information.
    for example you could extract valuable information from a web page and show its depth (how much information is linked to that subject).
    it would look like a game where you go into a maze and can guess where you are going.
    shape the information, e.g. give it a colour depending on whether it is a forum, a PDF document or a video, so you can navigate selectively.
    turn your browser surface into a browser space and you've got your product.
  • interesting ideas - I get access to both the raw RGB(A) pixels and the web content itself, so parsing the latter and modifying the former based on a set of rules is not impossible.

    it would certainly be neat to get this working and let people experiment with ideas like that - just means I'll have to get up early on Wednesday :)
  • "BlueSpeedwell" wrote:
    My question: Is it worth diving into this wholeheartedly and trying to borrow one from a friend, or are there already dozens of similar products out there? Searching suggests there aren't, but I don't know if I'm looking in the right places.


    If you want to create an app that renders lots of 2D UI into a 3D space, I'd look into Qt and its support for QQuickRenderControl. This lets you render UI directly to an OpenGL texture and inject mouse and keyboard events into it. I used this in my ShadertoyVR application, and my company is using it to develop a metaverse-style application.
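    Whatever embed you use, injecting mouse events into an offscreen surface means mapping a 3D hit on the quad back to 2D pixel coordinates first. A minimal, dependency-free sketch of that mapping, assuming a quad described by a corner and two edge vectors (all type and function names here are hypothetical illustrations, not part of any library):

    ```cpp
    #include <array>
    #include <cmath>
    #include <optional>
    #include <utility>

    using Vec3 = std::array<float, 3>;

    static Vec3 sub(Vec3 a, Vec3 b) { return {a[0]-b[0], a[1]-b[1], a[2]-b[2]}; }
    static float dot(Vec3 a, Vec3 b) { return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]; }
    static Vec3 cross(Vec3 a, Vec3 b) {
        return {a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0]};
    }

    // A browser surface: a quad with one corner at 'origin' and edge vectors
    // 'u', 'v', showing surfaceWidth x surfaceHeight pixels of web content.
    struct BrowserSurface {
        Vec3 origin, u, v;
        int surfaceWidth, surfaceHeight;
    };

    // Intersect a pick ray (mouse or gaze) with the quad and return the pixel
    // coordinate to inject the event at, or nullopt if the ray misses.
    std::optional<std::pair<int, int>> pickPixel(const BrowserSurface& s,
                                                 Vec3 rayOrigin, Vec3 rayDir) {
        Vec3 n = cross(s.u, s.v);                           // plane normal
        float denom = dot(n, rayDir);
        if (std::fabs(denom) < 1e-6f) return std::nullopt;  // ray parallel to quad
        float t = dot(n, sub(s.origin, rayOrigin)) / denom;
        if (t < 0.0f) return std::nullopt;                  // quad is behind viewer
        Vec3 hit = {rayOrigin[0] + t*rayDir[0], rayOrigin[1] + t*rayDir[1],
                    rayOrigin[2] + t*rayDir[2]};
        Vec3 rel = sub(hit, s.origin);
        float a = dot(rel, s.u) / dot(s.u, s.u);            // 0..1 across the quad
        float b = dot(rel, s.v) / dot(s.v, s.v);
        if (a < 0 || a > 1 || b < 0 || b > 1) return std::nullopt;
        return std::make_pair(static_cast<int>(a * s.surfaceWidth),
                              static_cast<int>(b * s.surfaceHeight));
    }
    ```

    The resulting (x, y) is what you would feed to the embed's event API, e.g. a synthesized QMouseEvent sent via QCoreApplication::sendEvent in the Qt case.
    
    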
  • Yep, thanks. An older version of this used Qt - specifically Qt/WebKit but I wanted a more modern browser. The version of WebKit (now Blink??) is still pretty old and gets integrated infrequently. The browser I'm using matches the Release channel of Chrome.
  • "BlueSpeedwell" wrote:
    Yep, thanks. An older version of this used Qt - specifically Qt/WebKit but I wanted a more modern browser. The version of WebKit (now Blink??) is still pretty old and gets integrated infrequently.

    Yes, which is why Qt is deprecating it.

    "BlueSpeedwell" wrote:
    The browser I'm using matches the Release channel of Chrome.

    Qt's new web rendering functionality is called WebEngine and is also based on Chromium (it actually runs a Chromium process in the background to render the content).

    Here's a screenshot of the app I work on. The starry background is part of the 3D environment. Everything else is rendered using Qt's QML to an offscreen OpenGL context, in a separate thread. This includes the cursor, so I'm able to switch from HMD mode, present the UI elements on a surface inside the 3D environment, and still interact with them.

    Note, the particularly bright and large visual style of the windows and elements is deliberate, so they're functional even in VR, where you can't really use small text. QML is easy to style so that your elements can look like whatever you want, including something very close to native controls, so it doesn't have to look like that.