**Is this a BUG REPORT or FEATURE REQUEST?**

Potentially both: a bug if existing mechanisms should prevent this, a feature request if new logic is needed.

**App version, VRCFaceTracking Module**

* VRCFaceTracking Version: 5.2.3.0
* QuestProOpenXRTrackingModule Version: Latest

**The Issue:**

When using the Meta Quest Pro with "Natural Facial Expressions" (which provides data via `XR_FACE_TRACKING_DATA_SOURCE2_VISUAL_FB` according to [Meta's OpenXR documentation](https://developers.meta.com/horizon/documentation/native/android/move-ref-api/)), there is noticeable audio interference. Even when the user is silent, background noise or breathing can cause the avatar's mouth to move. These audio-driven movements appear to be blended into the visual data stream by Meta's API and are passed through VRCFaceTracking to VRChat as regular facial expressions (e.g., `jawOpen`, `mouthPucker`). This makes it difficult to achieve truly silent expressions or purely camera-driven lip sync, because the application cannot distinguish these audio-induced movements from intentional, camera-tracked facial movements.

**Meta's API Context:**

Meta's documentation for `XrFaceTrackingDataSource2FB` states that `XR_FACE_TRACKING_DATA_SOURCE2_VISUAL_FB` "may also use audio to further improve the quality of the tracking." While this can be beneficial, the current implementation appears to introduce unwanted artifacts while the user is silent. The API also offers `XR_FACE_TRACKING_DATA_SOURCE2_AUDIO_FB` for purely audio-driven expressions.

**Feature Request/Desired Behavior:**

Could the VRCFaceTracking Quest Pro module investigate the following:

1. **Current Handling of `XrFaceTrackingDataSource2FB`:** Is the module aware of this flag? If the data source is `XR_FACE_TRACKING_DATA_SOURCE2_VISUAL_FB`, is there any current logic to address the "optional audio" component, especially during periods of user silence?
2. **Filtering/Mitigation Option:** Would it be possible to introduce an option within VRCFaceTracking (perhaps a sensitivity threshold or a toggle) to aggressively filter or reduce the impact of minor lip movements when overall facial movement and voice activity are below a certain threshold, specifically when using the Quest Pro's visual tracking? This could help suppress the phantom audio movements (see the filter sketch at the end of this issue).
3. **Exposing Data Source Information:** Could VRCFaceTracking expose or log the reported `XrFaceTrackingDataSource2FB` state? This might help users and developers diagnose issues (see the sketch after the reproduction steps below).
4. **Advocacy for Upstream API Improvement:** If the Meta OpenXR API itself doesn't provide sufficient means to separate or disable the audio component within the `VISUAL_FB` stream, could the VRCFaceTracking developers consider raising this feedback with Meta? More granular control on Meta's side would be the ideal solution.

**Actual Behavior:**

When using the Quest Pro with "Natural Facial Expressions" enabled via VRCFaceTracking, the avatar's mouth often moves in response to ambient sounds or breathing, even when the user is intentionally silent and making no facial expression. This is passed to VRChat as valid expression data.

**Steps to Reproduce (User-Provided Example):**

1. Enable "Natural Facial Expressions" on the Quest Pro.
2. Use VRCFaceTracking.
3. Enter VRChat with a compatible avatar.
4. Remain silent in a quiet environment, then introduce subtle background noise or vary breathing, without speaking or intentionally moving the mouth.
5. Observe the avatar's mouth reacting to these sounds.
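For points 1 and 3 above, here is a minimal sketch of how a native module could read and log the reported data source each frame, assuming it uses the `XR_FB_face_tracking2` OpenXR extension with a recent `openxr.h`. The function name and plumbing (`LogFaceDataSource`, how the tracker handle and timestamp are obtained) are illustrative only, not the module's actual code:

```c
// Sketch only: assumes an XrSession with XR_FB_face_tracking2 enabled and
// an XrFaceTracker2FB handle already created. The extension function must
// be loaded via xrGetInstanceProcAddr as usual.
#include <stdio.h>
#include <openxr/openxr.h>

static PFN_xrGetFaceExpressionWeights2FB pfnGetFaceExpressionWeights2FB;

void LogFaceDataSource(XrFaceTracker2FB faceTracker, XrTime time)
{
    float weights[XR_FACE_EXPRESSION2_COUNT_FB];
    float confidences[XR_FACE_CONFIDENCE2_COUNT_FB];

    XrFaceExpressionInfo2FB info = { XR_TYPE_FACE_EXPRESSION_INFO2_FB };
    info.time = time;

    XrFaceExpressionWeights2FB result = { XR_TYPE_FACE_EXPRESSION_WEIGHTS2_FB };
    result.weightCount     = XR_FACE_EXPRESSION2_COUNT_FB;
    result.weights         = weights;
    result.confidenceCount = XR_FACE_CONFIDENCE2_COUNT_FB;
    result.confidences     = confidences;

    if (XR_SUCCEEDED(pfnGetFaceExpressionWeights2FB(faceTracker, &info, &result))
        && result.isValid)
    {
        // dataSource reports whether this frame came from the cameras
        // (VISUAL, which may still blend in audio) or the audio-only path.
        const char *source =
            result.dataSource == XR_FACE_TRACKING_DATA_SOURCE2_VISUAL_FB ? "visual" :
            result.dataSource == XR_FACE_TRACKING_DATA_SOURCE2_AUDIO_FB  ? "audio"  :
                                                                           "unknown";
        printf("face tracking data source: %s\n", source);
    }
}
```

Note that even with this logging in place, the runtime only reports `VISUAL_FB` vs. `AUDIO_FB` per frame; it does not flag which individual weights were audio-influenced, which is why point 4 (upstream API feedback) may still be necessary.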
**Environment:**

* **Hardware:** Meta Quest Pro
* **PCVR Connection Method:** Virtual Desktop
* **Operating System:** Windows 11
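Finally, one possible shape for the mitigation suggested in point 2: a deadzone applied to mouth-related weights that only engages while overall mouth activity is low (i.e. the user appears silent). All names (`FilterPhantomMouthMovement`, `MOUTH_DEADZONE`, `FACE_ACTIVITY`) and threshold values here are hypothetical and would need tuning; nothing like this currently exists in the module:

```c
// Hypothetical mitigation sketch: suppress small mouth movements while the
// mouth is otherwise nearly still. Thresholds are illustrative only.
#include <stddef.h>
#include <math.h>

#define MOUTH_DEADZONE 0.08f /* weights below this are treated as noise */
#define FACE_ACTIVITY  0.15f /* mean activity that counts as intentional */

/* weights: full blendshape weight array from the runtime.
 * mouthIndices: indices of the mouth-related blendshapes to filter. */
void FilterPhantomMouthMovement(float *weights,
                                const size_t *mouthIndices,
                                size_t mouthIndexCount)
{
    if (mouthIndexCount == 0)
        return;

    /* Estimate overall mouth activity as the mean absolute weight. */
    float activity = 0.0f;
    for (size_t i = 0; i < mouthIndexCount; ++i)
        activity += fabsf(weights[mouthIndices[i]]);
    activity /= (float)mouthIndexCount;

    /* Above the activity threshold the movement is likely intentional;
     * pass everything through untouched. */
    if (activity >= FACE_ACTIVITY)
        return;

    /* Otherwise zero out sub-threshold weights so breathing and ambient
     * audio artifacts don't reach the avatar. */
    for (size_t i = 0; i < mouthIndexCount; ++i)
        if (fabsf(weights[mouthIndices[i]]) < MOUTH_DEADZONE)
            weights[mouthIndices[i]] = 0.0f;
}
```

Gating the deadzone on overall activity is meant to avoid dulling intentional expressions: filtering only kicks in when the mouth is nearly still, which is exactly when the phantom audio movements are most visible.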