Forum Discussion

🚨 This forum is archived and read-only. To submit a forum post, please visit our new Developer Forum. 🚨
mptp
Explorer
11 years ago

[Hypothetical] Input Device API

I just had a thought...
There seem to be two big issues with controllers for VR right now:
    1. There are so damn many of them

    2. Oculus have hinted very strongly that they will be releasing a controller, so nobody is willing to bet on a particular controller in case they pick the wrong (often very expensive) one.

    Both of these problems could be solved if Oculus released some kind of API layer for all controllers in VR.
    What I mean by this is game developers code their game only for the degree of control players will have, rather than having to implement a specific controller setup. Controller manufacturers ensure that their software talks correctly to the OVR Input API, and can be assured that all the control information that their device can produce will be used correctly.
    The way this would be achieved is by setting several 'standards' for VR experiences. A game developer picks the standard that their game will require, and goes for it. A standard is made up of certain device requirements, and each controller can satisfy one or more of those requirements.
    Here are some example standards:
      Standard A might require '10 buttons, two analog sticks'

      Standard B might require 'hand position tracking, hand rotation tracking, 10 buttons, analog stick'

      Standard C might require 'finger tracking, leg-based locomotion'


And here's a little table of how a few of the devices that are currently on the market would fit in:

[Table image not preserved in this archive.] Don't worry about the clipping; the Xbox controller was the last one on the table anyway.

So, to meet standard A, you could use an Xbox One controller. However, you could also use a Razer Hydra and just not use the motion tracking data.
To meet standard B, you'd need a Razer Hydra, or ControlVR + an Xbox One controller.
To meet standard C, you'd need ControlVR and the Omni, or Leap Motion and the Omni
(obviously there are more ways to meet these hypothetical standards right now, I'm just using examples from the table)
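To make the capability-matching idea concrete, here's a rough sketch in Python of how a game could declare a standard and check whether the player's connected devices cover it. All the standard, device, and capability names below are invented for illustration (borrowed from the examples above); none of this is a real Oculus API.

```python
# Hypothetical sketch of the proposed input-standards idea. All names are
# invented for illustration; this is not a real Oculus API.

# Each standard is a set of required capabilities.
STANDARDS = {
    "A": {"buttons_10", "analog_stick", "analog_stick_2"},
    "B": {"hand_position", "hand_rotation", "buttons_10", "analog_stick"},
    "C": {"finger_tracking", "leg_locomotion"},
}

# Each device advertises the capabilities it can contribute.
DEVICES = {
    "xbox_one_pad": {"buttons_10", "analog_stick", "analog_stick_2"},
    "razer_hydra": {"hand_position", "hand_rotation", "buttons_10",
                    "analog_stick", "analog_stick_2"},
    "control_vr": {"hand_position", "hand_rotation", "finger_tracking"},
    "leap_motion": {"hand_position", "finger_tracking"},
    "omni": {"leg_locomotion"},
}

def meets_standard(standard: str, connected: list) -> bool:
    """True if the connected devices together cover the standard's needs."""
    offered = set().union(*(DEVICES[d] for d in connected))
    return STANDARDS[standard] <= offered

# An Xbox pad alone satisfies Standard A; a Hydra would too (extra data unused).
print(meets_standard("A", ["xbox_one_pad"]))                # True
print(meets_standard("B", ["control_vr", "xbox_one_pad"]))  # True
print(meets_standard("C", ["leap_motion", "omni"]))         # True
```

The set-union check is exactly the combination logic in the examples: ControlVR plus an Xbox pad jointly meet Standard B even though neither device does alone.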

I feel like a system like that would solve a lot of the issues everyone's having with VR input ('how much should I spend?', 'what's worth developing for?', etc.). Since one device can't do everything, Oculus doesn't have to worry about shooting themselves in the foot by encouraging competition, and this would allow more people to enter the controller-design scene than otherwise. And obviously, for the consumer, you get lower-priced input devices with much greater flexibility.

8 Replies

  • nuB
    Honored Guest
    I would not be surprised if OculusVR is developing their own tracking system like the STEM system.
    It's unlikely, as more VR peripherals enter the market, that they will all have their own tracking solutions. The market simply will not support multiple tracking solutions for every device out there; a standard tracking product that can be attached to anything seems much more likely to succeed.

    In saying this, while I am aware OculusVR purchased the Carbon Design Group, I believe their current focus is primarily with the design of CV1 as they slowly narrow down the constraints within which it is being designed.
  • "nuB" wrote:
    The market simply will not support multiple tracking solutions for every device out there, a standard tracking product that can be attached to anything seems much more likely to succeed.

    In saying this, while I am aware OculusVR purchased the Carbon Design Group, I believe their current focus is primarily with the design of CV1 as they slowly narrow down the constraints within which it is being designed.


    Well, that's kind of the whole reason for a hypothetical input API - it would mean that the market could support many different input devices, since there would be several tracking standards that any number of input devices could be used to fulfil. Of course, the current turmoil in device competition will die down (e.g. either the Omni or the Virtualiser will succeed, not both), but I think that VR input should work like mouse input - many manufacturers, several different standards (common mouse, trackpad, motion mouse, trackball, etc.)
    Obviously we would want a certain common 'minimum level' of 3D input - something like what the STEM system delivers with two controllers - and I expect Oculus to be the company that delivers this common minimum level. But I don't think a single input setup is going to be sufficient for the entire VR market - there will always be room for improvement, and a subset of users willing to spend the money to capitalise on this improvement.
  • As VR becomes more ubiquitous, it's inevitable that there will be a growing market for add-ons, like new layers growing on an onion. Even if a single standard control scheme emerges as dominant, users will still seek out multi-modal experiences, and niches will emerge and grow -- just as we've seen accessories for game consoles on TV-based entertainment systems, we'll see peripheral add-ons for VR-based gaming experiences.

    Like you said, there's so much diversity with regards to the types of input devices that are currently available -- it's a waiting game to see if this trend will continue. From our end, we're working to make it easier for devs to add Leap Motion support into existing Unity projects, lowering some of the development overhead.
  • mystify
    Honored Guest
    I really don't see how an API like this would be feasible. It seems like most things are only provided by a single product right now, and squishing their input through a common API will strip them of their benefits.
    I don't think the problem right now is "there are too many controllers". The problem is "we don't know what /types/ of inputs we want". Even if we just look at hand tracking: do you want something like the Hydra/STEM, which gives you coarse position of the hands but retains access to buttons and joysticks? Do you want something like the Kinect or Leap that can track your hands and their precise position without anything in them? Do you want some kind of glove to give that tracking? Do you want gesture recognition, or precise tracking of each finger? How many degrees of freedom on the fingers should you get? What if it's a motion-tracked gun?
    Say you do create an API that can encompass every device out there. What happens when somebody invents a new controller that provides even more information? What happens when the precision differences between devices make it unworkable? For instance, a gun controller might be designed to be very precise, especially in terms of rotation, but not necessarily perfectly accurate, so you can compensate for its drift while still aiming well; another hand controller might not be particularly precise, but it keeps your hands at the right distance from you. If you try to use the latter for a game designed for the former, it will go very poorly, even though they both provide the same basic information.
    We can't normalize all of this yet because we don't know where to normalize to. Cramming a more powerful device through an API will make the result more restrictive and cripple the device.

    What we need is for developers to try out various input methods, and find a set where they go "This. This provides the best experience", and then make a game to cater to it. Then customers who have those inputs can try it out and go "Yes, here is where it is at", and then spread that by word of mouth. Once we have narrowed down which inputs are great, we can start trying to normalize it so you have multiple options to meet the demands. Trying to do that now will make an API that is cumbersome, unwieldy, limiting, and not useful.
  • You know what, I have decided I agree with you like 90%. Enough to say that yes, an input API wouldn't work right now.
    I think that it will probably wind up being important later on once people know what works in VR and what they want to be able to do (for the reasons stated in the original post). But right now there's too much experimentation and innovation going on to attempt to encapsulate everything into a single API.

    So basically...yeah. Touché! :D
  • "mptp" wrote:

    There seem to be two big issues with controllers for VR right now:
      1. There are so damn many of them

      2. Oculus have hinted very strongly that they will be releasing a controller, so nobody is willing to bet on a particular controller in case they pick the wrong (often very expensive) one.

      Both of these problems could be solved if Oculus released some kind of API layer for all controllers in VR.
      What I mean by this is game developers code their game only for the degree of control players will have, rather than having to implement a specific controller setup. Controller manufacturers ensure that their software talks correctly to the OVR Input API, and can be assured that all the control information that their device can produce will be used correctly.

      mptp, It looks like you and I have been thinking about the same topics (c.f. the locomotion issue),
      but you are more motivated to write up a post on the topic :D

      In fact I am quite surprised that Oculus has not announced that they will be releasing a common API for interfacing with VR peripheral hardware. Given their position as pioneers and champions of the new VR era, I would think it is in their interest to create a large and vibrant ecosystem for hardware, and not just software.

      It should not matter what form factor or capture method the particular device takes.

      They are all just trying to represent the position, orientation, and perhaps the action that a body part is performing.

      Therefore, each body part could be represented by its name and its current position in space, for example:

      Right Hand Wrist (x, y, z, yaw, pitch, roll)
      Right Hand Index Finger (x, y, z, yaw, pitch, roll)
      Right Hand Middle Finger (x, y, z, yaw, pitch, roll)
      Left Knee (x, y, z, yaw, pitch, roll)
      Left Ankle (x, y, z, yaw, pitch, roll)
      Navel (x, y, z, yaw, pitch, roll)
      Seventh Cervical Vertebra (x, y, z, yaw, pitch, roll)

      and so on...
      for any part of the body that a particular device wishes to capture/emulate.

      As long as Oculus defines the format of these data points in a common API,
      it should be just a matter for the device makers to format their data accordingly, so that they can interface with it.

      The game and software makers can then rest assured that they have a common data format representing the player's body parts, REGARDLESS of the particular device the player may be using.
      And it is simply up to them to decide whether to use that body-part data as a control mechanism for their game/software, not to worry about whether that particular device is going to sell, or will still be around next year.

      I may be overlooking some technical or political issues standing in the way,
      but I imagine, and hope, that this is an issue Oculus could quite easily solve. :)
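A body-part record like the one described above could be sketched as follows. Everything here (field names, units, part names) is hypothetical; pinning these conventions down would be exactly the job of the common API.

```python
# Hypothetical sketch of a per-body-part pose record; all names, fields,
# and units are invented for illustration.
from dataclasses import dataclass

@dataclass
class BodyPartPose:
    part: str      # e.g. "right_wrist", "left_knee", "navel"
    x: float       # position (the API would fix the units and origin)
    y: float
    z: float
    yaw: float     # orientation (the API would fix degrees vs. radians)
    pitch: float
    roll: float

# A device driver publishes whichever parts it can actually track:
hydra_frame = [
    BodyPartPose("right_wrist", 0.3, 1.1, -0.2, 10.0, -5.0, 0.0),
    BodyPartPose("left_wrist", -0.3, 1.1, -0.2, -12.0, 3.0, 0.0),
]

# A game then looks up the parts it cares about, regardless of the device:
poses = {p.part: p for p in hydra_frame}
wrist = poses.get("right_wrist")
```

A glove would publish finger parts as well, a treadmill might publish ankles and knees, and a game simply ignores any parts it doesn't use.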
  • mystify
    Honored Guest
    "NeoTokyoNori" wrote:

    In fact I am quite surprised that Oculus has not announced that they will be releasing a common API for interfacing VR peripheral hardware. [...]

    Therefore, each body part could be represented by its name and its current position in space [...]

    This type of API is even less useful. It only encompasses body tracking, not other forms of input, and the specifics are constraining. Each device will fill a different subset of those inputs, so as a developer I still couldn't make an input scheme with it that will work with whatever devices people have. I will, say, pick the STEM controller to develop for, use it to track hand position/orientation, then work with all of the buttons, and someone who has a hand controller that gives hand position/orientation without all of the buttons will be out of luck. Or if I design something to use per-finger tracking, devices that give coarser feedback on hand position won't work. The end effect is that you still have to design your program for specific devices, but instead of doing it directly with the device and being able to tune to that device's properties, you are constrained by an API that neither the device creator nor the developer designed.
  • Yeah, I'm with mystify on this one - it's all very well to have the API serve, say, the quaternion of the wrist joint's rotation to an application, with the API calculating that value from the input of a STEM controller if you want the STEM controller to control the rotation of the wrist. But if you want to use the STEM rotation to control the look-rotation of a camera for some VR cinema viewer (just hypothetically), you're shit out of luck.
    Not only that, but say you're developing a motion control system that doesn't output a rotation at all, but uses some revolutionary new way of tracking motion. Suddenly you find that your device needs to be specifically supported by Oculus in the Input API, and you have this huge barrier to entering the market.

    That being said, I do think that manufacturers of 3D input devices should standardise on a format for output values. For example, when I'm outputting the Euler angles from my Wiimote, they are given as values from -180 to 180. If I'm outputting the same angles from my iPad via Control, they're given in -π/2 to π/2 (from memory). That kind of thing is what really shoots developers in the foot when it comes to developing for multiple devices. If a dev could just say 'wrist rotation of avatar is controlled by gyroscope 1', and gyroscope 1 could then be set to such-and-such device, the whole process becomes easier and faster.
    But that's a whole separate issue. :P
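The unit/range standardisation argued for in that last point could be as small as agreeing on one angle convention. Here's a sketch, assuming the device ranges mentioned in the post (degrees for the Wiimote, radians for the iPad app); the function name is invented:

```python
import math

def normalise_angle(value: float, in_degrees: bool) -> float:
    """Map any Euler angle into one convention: radians in (-pi, pi]."""
    radians = math.radians(value) if in_degrees else value
    # atan2 of (sin, cos) wraps the angle back into the principal range.
    return math.atan2(math.sin(radians), math.cos(radians))

# Both devices can now feed the same 'gyroscope 1' slot in a common format:
print(round(normalise_angle(90.0, in_degrees=True), 4))           # 1.5708
print(round(normalise_angle(-math.pi / 2, in_degrees=False), 4))  # -1.5708
```

With one agreed-upon range and unit, 'gyroscope 1' can be backed by any device whose driver performs this conversion.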