Forum Discussion

pathfinderpathfinder
28 days ago

How does the Meta Interaction SDK's interaction design differ structurally from Unity's XR Interaction Toolkit?

Before starting the discussion:

To avoid the usual “hardware support” discussion: I’m not asking about platform/device compatibility. I’m trying to understand the interaction system design differences versus Unity XR Interaction Toolkit, and why Meta Interaction SDK is structured the way it is.

I would like to ask for advice from anyone who has experience using Unity’s XR Interaction Toolkit (XRI), or who has been involved in designing or implementing the Meta Interaction SDK.

Since I am still in the process of understanding the architecture of both XRI and the Meta SDK, my questions may not be as well-formed as they should be. I would greatly appreciate your understanding.

I’m also aware of other SDKs such as AutoHand and Hurricane in addition to XRI and the Meta SDK. Personally, I felt that the architectural design of those two SDKs was somewhat lacking; however, if you have relevant experience, I would be very grateful for any insights or feedback regarding them as well. Thank you very much in advance for your time and help.

 

Question 1) Definition of the “Interaction” concept and the resulting structural differences
I’m curious how the two SDKs define “Interaction” as a conceptual unit.
How did this way of defining “Interaction” influence the implementation of Interactors and Interactables, and what do you see as the core differences?

Question 2) XRI’s Interaction Manager vs. Meta SDK’s Interactor Groups
For example, in XRI, each Interactor’s flow depends on the central Interaction Manager.
In contrast, in the Meta Interaction SDK, each Interactor has its own flow (its Drive function is called from that Interactor’s own update), and when interaction priority needs to be coordinated, it is handled through Interactor Groups.
I’m wondering what design philosophy differences led to these architectural choices.
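To make the contrast in Question 2 concrete, here is a minimal, language-agnostic sketch (written in TypeScript purely for illustration; none of these interfaces or class names are the real XRI or Meta Interaction SDK APIs) of the two coordination models as I understand them: a central manager that ticks every interactor, versus self-driven interactors whose conflicts are arbitrated locally by a priority-ordered group.

```typescript
// Illustrative sketch only: these types are invented for comparison,
// not the actual XRI or Meta Interaction SDK interfaces.

interface Interactor {
  wantsInteraction(): boolean; // e.g. a hover/select candidate exists
  tick(): void;                // advance this interactor's state one frame
}

// Model A (XRI-style): a central manager owns the update flow and
// decides when every registered interactor runs.
class CentralManager {
  private interactors: Interactor[] = [];
  register(i: Interactor) { this.interactors.push(i); }
  update() {
    // The manager, not the interactor, drives the per-frame flow.
    for (const i of this.interactors) i.tick();
  }
}

// Model B (Meta-style): each interactor drives itself from its own
// update; a group is itself an Interactor that only arbitrates
// priority among its children.
class InteractorGroup implements Interactor {
  constructor(private children: Interactor[]) {} // ordered by priority
  wantsInteraction() {
    return this.children.some(c => c.wantsInteraction());
  }
  tick() {
    // First child that wants the interaction wins this frame; the rest
    // are skipped, which is how e.g. poke-vs-ray conflicts resolve.
    const winner = this.children.find(c => c.wantsInteraction());
    winner?.tick();
  }
}
```

One way to read the trade-off: a central manager can mediate globally (for example, one interactable contested by interactors on different hands), while group-based arbitration keeps each interactor self-contained and composes priority decisions locally, since a group is just another Interactor that can be nested.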

Question 3) Criteria for Interactor granularity
Both XRI and the Meta SDK provide various Interactors (ray, direct/grab, poke, etc.).
I’d like to understand how each SDK decides how to split or categorize Interactors, and what background or rationale led to those decisions.

In addition to the questions above, I’d greatly appreciate it if you could point out other aspects where differences in architecture or design philosophy between the two SDKs become apparent.
