The present disclosure generally relates to a calibrating mechanism, and in particular, to a method for calibrating a virtual object and a calibrating device.
See FIG. 1, which shows an application scenario of an input device and the corresponding virtual object.
In the virtual world of FIG. 1, a virtual object 102 corresponding to an input device 101 and a ray 103 associated with the virtual object 102 may be rendered according to the tracked pose of the input device 101.
However, as the time the user spends immersed in the virtual world grows, the tracking errors of the poses of the input device 101 may accumulate, such that the virtual object 102 and the ray 103 may not be accurately rendered.
As shown in FIG. 1, the virtual object 102 and the ray 103 may drift away from the actual pose of the input device 101 in the real world, which deteriorates the user experience.
Accordingly, the disclosure is directed to a method for calibrating a virtual object and a calibrating device, which may be used to solve the above technical problems.
The embodiments of the disclosure provide a method for calibrating a virtual object, applied to a calibrating device. The method includes: determining, by the calibrating device, a reference direction associated with a first virtual object in a virtual world; obtaining, by the calibrating device, current motion data of an input device corresponding to the first virtual object from a motion detection circuit of the input device; obtaining, by the calibrating device, a predetermined heading of the motion detection circuit of the input device; determining, by the calibrating device, a current pose of the first virtual object with respect to the input device based on the predetermined heading of the motion detection circuit and the current motion data of the input device; determining, by the calibrating device, a calibrating factor based on the reference direction and the current pose of the first virtual object; and calibrating, by the calibrating device, an object pose of the first virtual object based on the calibrating factor.
The embodiments of the disclosure provide a calibrating device including a storage circuit and a processor. The storage circuit stores a program code. The processor is coupled to the storage circuit and accesses the program code to perform: determining a reference direction associated with a first virtual object in a virtual world; obtaining current motion data of an input device corresponding to the first virtual object from a motion detection circuit of the input device; obtaining a predetermined heading of the motion detection circuit of the input device; determining a current pose of the first virtual object with respect to the input device based on the predetermined heading of the motion detection circuit and the current motion data of the input device; determining a calibrating factor based on the reference direction and the current pose of the first virtual object; and calibrating an object pose of the first virtual object based on the calibrating factor.
The accompanying drawings are included to provide a further understanding of the invention, and are incorporated in and constitute a part of this specification. The drawings illustrate embodiments of the invention and, together with the description, serve to explain the principles of the disclosure.
Reference will now be made in detail to the present preferred embodiments of the invention, examples of which are illustrated in the accompanying drawings. Wherever possible, the same reference numbers are used in the drawings and the description to refer to the same or like parts.
See FIG. 2, which shows a schematic diagram of the calibrating device according to an embodiment of the disclosure.
In various embodiments, the calibrating device 200 may be any smart device and/or computer device. In some embodiments, the calibrating device 200 can be any input device that can be worn by the user for performing specific operations.
In FIG. 2, the calibrating device 200 includes a storage circuit 202 and a processor 204. The storage circuit 202 may be, for example, any type of fixed or movable memory that stores the modules and/or the program code that can be accessed and executed by the processor 204.
The processor 204 may be coupled with the storage circuit 202, and the processor 204 may be, for example, a general purpose processor, a special purpose processor, a conventional processor, a digital signal processor (DSP), a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs), any other type of integrated circuit (IC), a state machine, and the like.
In a first embodiment, the calibrating device 200 may be an input device, and the calibrating device 200 may be connected with a host for providing the visual content of a reality service, wherein the host may be, for example, a head-mounted display (HMD), and the reality service may be a VR service, an augmented reality (AR) service, a mixed reality (MR) service, and/or an extended reality (XR) service, etc.
In this case, the input device (which may be a wearable device worn by the user) may be used by the user to interact with the visual content (e.g., a VR world) provided by the host. For example, the calibrating device 200 may be the input device 101 in FIG. 1, but the disclosure is not limited thereto.
In a second embodiment, the calibrating device 200 may be the host (e.g., the HMD) that can display the visual contents of the considered reality services for the user to see. In the second embodiment, the user can also use an input device (e.g., the input device 101 in FIG. 1) to interact with the visual content provided by the host.
In the first and/or second embodiment, the host can render a virtual object corresponding to the input device in the visual content, such as the virtual object 102 in FIG. 1. However, as mentioned in the above, the rendered virtual object may gradually drift away from the input device as the tracking error accumulates.
Accordingly, the embodiments of the disclosure provide a method for calibrating a virtual object, which may be used to resolve the problem, and the associated details would be provided in the following.
In the embodiments of the disclosure, the processor 204 may access the modules and/or the program code stored in the storage circuit 202 to implement the method for calibrating a virtual object provided in the disclosure, which would be further discussed in the following.
See the flow chart of the method for calibrating a virtual object according to an embodiment of the disclosure in the accompanying drawings, wherein the method of this embodiment may be executed by the calibrating device 200 in FIG. 2, and the details of each step would be discussed in the following.
In step S310, the processor 204 determines a reference direction (represented by Vref) associated with a first virtual object in a virtual world.
In one embodiment, the first virtual object has an indicator in the virtual world, and the reference direction Vref corresponds to a designated direction of the indicator.
In various embodiments, the reference direction Vref may be determined in different ways.
For better understanding, the following application scenario would be used as an example, but the disclosure is not limited thereto.
In the considered scenario, the user may input a sliding direction 399 via the input device 301.
In the first embodiment, since the calibrating device 200 may be an input device as mentioned in the above, the calibrating device 200 may be assumed to be the input device 301 in the considered scenario, but the disclosure is not limited thereto.
In the embodiment, the host may render a first virtual object 302 (e.g., a virtual model of the input device 301) according to the tracked pose of the input device 301, and the first virtual object 302 may have an indicator 303 (e.g., a ray) in the virtual world.
As mentioned in the above, the first virtual object 302 and the associated indicator 303 may drift as time goes by. In this case, the user may activate a calibration function of the host, wherein the calibration function may be used to calibrate the object pose of the first virtual object 302.
In one embodiment, after the calibration function has been activated, the host may provide a calibration mode that instructs the user to input the sliding direction 399. In the embodiment, the sliding direction 399 may correspond to the direction in which the user requires the indicator 303 to point.
From another perspective, the user may select any required direction that the indicator 303 should indicate (e.g., point to) as the sliding direction 399, but the disclosure is not limited thereto.
In one embodiment, the input device 301 (e.g., the calibrating device 200) may be disposed with an optical sensor that is capable of sensing the sliding operation inputted by the user. In this case, the user may perform the sliding operation corresponding to the sliding direction 399 on the optical sensor for inputting the sliding direction 399 to the calibrating device 200.
In one embodiment, the optical sensor may be an optical finger navigation (OFN) sensor. In this case, the user may use, for example, the thumb thereof to slide toward the sliding direction 399 on the OFN sensor, such that the OFN sensor can detect the sliding direction 399.
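As a minimal sketch of how the sliding direction 399 might be recovered from such a sensor, the following Python snippet accumulates 2-D displacement samples into a unit direction; it assumes the OFN sensor reports per-poll (dx, dy) displacement counts, and the data format here is a hypothetical illustration rather than any actual OFN driver interface.

```python
import math

def detect_sliding_direction(deltas):
    """Accumulate 2-D OFN displacement samples (dx, dy) into a unit
    sliding direction expressed in the sensor's own (first) coordinate
    system. `deltas` is an iterable of hypothetical (dx, dy) counts
    reported by the OFN sensor during the sliding operation."""
    sx = sum(d[0] for d in deltas)
    sy = sum(d[1] for d in deltas)
    norm = math.hypot(sx, sy)
    if norm == 0.0:
        raise ValueError("no sliding motion detected")
    # Express the 2-D swipe as a 3-D vector on the sensor's X-Y plane.
    return (sx / norm, sy / norm, 0.0)

# Example: a swipe mostly along the sensor's Y-axis.
print(detect_sliding_direction([(0, 3), (1, 5), (0, 4)]))
```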
However, since the object pose of the first virtual object 302 may be mainly determined based on the motion data provided by the motion detection circuit (e.g., an inertial measurement unit (IMU)) disposed in the input device 301, the representation of the sliding direction 399 with respect to the motion detection circuit needs to be derived.
In the embodiment, the representation of the sliding direction 399 with respect to the motion detection circuit may be understood as the designated direction of the indicator 303, but the disclosure is not limited thereto.
Therefore, after the input device 301 (e.g., the calibrating device 200) has detected the sliding direction 399 by using the optical sensor, the calibrating device 200 may convert the sliding direction 399 into the designated direction based on a relative position between a first coordinate system of the optical sensor and a second coordinate system of the motion detection circuit (e.g., the IMU) of the input device 301.
In the embodiment, it is assumed that the first coordinate system 310 of the optical sensor and the second coordinate system 320 of the motion detection circuit have a known relative position, as exemplarily illustrated in the accompanying drawings, but the disclosure is not limited thereto.
In this case, the calibrating device 200 may accordingly convert the sliding direction detected by the optical sensor into the corresponding designated direction.
For example, if the detected sliding direction is (0, 1, 0) in the first coordinate system 310 (i.e., the direction corresponding to the Y-axis of the first coordinate system 310), the corresponding designated direction in the second coordinate system would be (1, 0, 0) (i.e., the direction corresponding to the X-axis of the second coordinate system 320).
For another example, if the detected sliding direction is (1, 0, 0) in the first coordinate system 310 (i.e., the direction corresponding to the X-axis of the first coordinate system 310), the corresponding designated direction in the second coordinate system would be (0, −1, 0) (i.e., the direction corresponding to the minus Y-axis of the second coordinate system 320).
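As a minimal sketch of this conversion, the following Python snippet applies a fixed rotation matrix mapping the first coordinate system 310 to the second coordinate system 320; the particular matrix is an assumption chosen only so that it reproduces the two examples above, since the true mapping depends on how the optical sensor and the motion detection circuit are physically mounted.

```python
# Assumed fixed rotation from the optical sensor's (first) coordinate
# system to the motion detection circuit's (second) coordinate system;
# chosen to reproduce the two examples above.
R_IMU_FROM_OFN = (
    (0.0, 1.0, 0.0),
    (-1.0, 0.0, 0.0),
    (0.0, 0.0, 1.0),
)

def to_designated_direction(sliding_dir):
    """Convert a sliding direction (first coordinate system) into the
    designated direction (second coordinate system)."""
    return tuple(
        sum(R_IMU_FROM_OFN[i][j] * sliding_dir[j] for j in range(3))
        for i in range(3)
    )

print(to_designated_direction((0, 1, 0)))  # -> (1.0, 0.0, 0.0)
print(to_designated_direction((1, 0, 0)))  # -> (0.0, -1.0, 0.0)
```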
In the first embodiment, the calibrating device 200 may determine the designated direction as the reference direction Vref considered in step S310.
In another exemplary scenario, the indicator 303 may not originate from the front end of the first virtual object 302 due to variations in human hand shapes/contours. Nevertheless, the input device 301 may still detect the sliding direction 399 during the calibration mode as described in the above discussions.
Likewise, the calibrating device 200 may determine the designated direction as the reference direction Vref considered in step S310.
In the second embodiment where the calibrating device 200 may be the host (e.g., the HMD), the calibrating device 200 may track a hand gesture 520 of a hand 510, wherein the hand 510 has a first joint 511 wearing the input device 301.
In the embodiment, the hand gesture 520 of the hand 510 may be characterized by the corresponding skeleton map, but the disclosure is not limited thereto.
In one embodiment, the calibrating device 200 may determine whether the hand gesture 520 has performed a target gesture. If yes, the calibrating device 200 may obtain a joint pose 521 of the first joint 511 of the hand 510.
In some embodiments, the target gesture may be determined based on the requirements of the designer, such as one of the predetermined gestures 531-533, but the disclosure is not limited thereto.
For better understanding, the predetermined gesture 532 (e.g., an OK gesture) would be used as an example of the target gesture, but the disclosure is not limited thereto.
In this case, when the calibrating device 200 determines that the hand gesture 520 indicates that the hand 510 has performed the predetermined gesture 532, the calibrating device 200 may obtain the joint pose 521 of the first joint 511 based on the tracked hand gesture 520.
In the embodiment, after obtaining the joint pose 521 of the first joint 511, the calibrating device 200 may determine a reference direction 599 that has a predetermined relative pose with respect to the joint pose 521 of the first joint 511.
In this case, the reference direction 599 may be regarded as the reference direction Vref considered in step S310, but the disclosure is not limited thereto.
In the embodiments where other predetermined gestures are used as the target gesture, the predetermined relative pose between the reference direction 599 and the joint pose 521 of the first joint 511 may be adapted accordingly, but the disclosure is not limited thereto.
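As a minimal sketch of this gesture-based variant, the following Python snippet derives a reference direction from the tracked orientation of the first joint 511; the quaternion rotation formula is standard, while the +Z relative direction used as the predetermined relative pose is a hypothetical placeholder that would be tuned per target gesture.

```python
import numpy as np

def rotate(q, v):
    """Rotate vector v by unit quaternion q = (w, x, y, z)."""
    w, x, y, z = q
    u = np.array([x, y, z])
    return v + 2.0 * np.cross(u, np.cross(u, v) + w * v)

def reference_from_joint(joint_orientation,
                         relative_dir=np.array([0.0, 0.0, 1.0])):
    """Derive the reference direction from the tracked joint pose.

    `joint_orientation` is the unit quaternion of the first joint;
    `relative_dir` stands in for the predetermined relative pose (the
    +Z choice is an assumed placeholder, not a value from the source).
    """
    v = rotate(joint_orientation, relative_dir)
    return v / np.linalg.norm(v)

# Identity joint orientation -> reference direction equals relative_dir.
print(reference_from_joint(np.array([1.0, 0.0, 0.0, 0.0])))
```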
In step S320, the processor 204 obtains current motion data of the input device 301 corresponding to the first virtual object 302 from the motion detection circuit of the input device 301.
In the embodiment, the current motion data may be the current reading of the motion detection circuit. In the embodiment where the motion detection circuit of the input device 301 is the IMU, the current motion data may be the current IMU reading provided by the IMU of the input device 301, which may be represented by a quaternion form of (Qw, Qx, Qy, Qz), but the disclosure is not limited thereto.
In step S330, the processor 204 obtains a predetermined heading of the motion detection circuit of the input device 301.
In one embodiment, the predetermined heading of the motion detection circuit may also be understood as the initial heading of the motion detection circuit, which may be characterized by a unit vector Vinit.
In one embodiment, the predetermined heading Vinit of the motion detection circuit may be, for example, (0, 0, 1), but the disclosure is not limited thereto.
In step S340, the processor 204 determines a current pose of the first virtual object 302 with respect to the input device 301 based on the predetermined heading (e.g., Vinit) of the motion detection circuit and the current motion data of the input device 301.
In one embodiment, the processor 204 may derive a transforming matrix based on the current motion data of the input device 301.
In one embodiment, the transforming matrix may be characterized by the rotation matrix corresponding to the current motion data (Qw, Qx, Qy, Qz), for example:

Q =
[1−2(Qy²+Qz²)  2(QxQy−QwQz)  2(QxQz+QwQy)]
[2(QxQy+QwQz)  1−2(Qx²+Qz²)  2(QyQz−QwQx)]
[2(QxQz−QwQy)  2(QyQz+QwQx)  1−2(Qx²+Qy²)]

but the disclosure is not limited thereto.
Afterwards, the processor 204 may transform the predetermined heading of the motion detection circuit into the current pose of the first virtual object 302 with respect to the input device 301 by using the transforming matrix (e.g., Q).
In one embodiment, the current pose of the first virtual object 302 with respect to the input device 301 may be characterized by:

Vimu = Q·Vinit

but the disclosure is not limited thereto.
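As a minimal sketch of steps S320 to S340 under the above formulas, the following Python snippet converts a current IMU quaternion (Qw, Qx, Qy, Qz) into the standard rotation matrix Q and applies it to the predetermined heading Vinit; the example quaternion (a 90-degree rotation about the Y-axis) is an arbitrary stand-in for a real IMU reading.

```python
import numpy as np

def quat_to_matrix(q):
    """Standard rotation matrix for a unit quaternion (Qw, Qx, Qy, Qz)."""
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

# Current IMU reading (Qw, Qx, Qy, Qz); here a 90-degree turn about Y.
q_imu = np.array([np.cos(np.pi/4), 0.0, np.sin(np.pi/4), 0.0])
v_init = np.array([0.0, 0.0, 1.0])      # predetermined heading Vinit
v_imu = quat_to_matrix(q_imu) @ v_init  # current pose Vimu
print(v_imu)                            # ~ (1, 0, 0)
```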
In step S350, the processor 204 determines a calibrating factor based on the reference direction and the current pose of the first virtual object 302.
In one embodiment, the processor 204 may determine a first angle based on an inner product result of the reference direction Vref and the current pose of the first virtual object 302.
For example, the first angle may be characterized by θ = cos⁻¹(Vref·Vimu), but the disclosure is not limited thereto.
In addition, the processor 204 may determine a normal direction based on an outer product result of the reference direction Vref and the current pose of the first virtual object 302.
For example, the normal direction may be characterized by n=(nx, ny, nz)=Vref⊗Vimu, but the disclosure is not limited thereto.
Next, the processor 204 may determine the calibrating factor based on the inner product result and the outer product result.
In one embodiment, the calibrating factor may be characterized by Qfix = (cos(θ/2), nx·sin(θ/2), ny·sin(θ/2), nz·sin(θ/2)), but the disclosure is not limited thereto.
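As a minimal sketch of step S350, the following Python snippet computes the calibrating factor Qfix from the inner product result (the first angle θ) and the outer product result (the normal direction n) described above; normalizing n so that Qfix is a unit quaternion is an added assumption, since the formulas above implicitly take n as a unit axis.

```python
import numpy as np

def calibrating_factor(v_ref, v_imu):
    """Quaternion Qfix built from the angle/axis between Vimu and Vref."""
    v_ref = v_ref / np.linalg.norm(v_ref)
    v_imu = v_imu / np.linalg.norm(v_imu)
    # First angle from the inner product result.
    theta = np.arccos(np.clip(np.dot(v_ref, v_imu), -1.0, 1.0))
    if np.isclose(theta, 0.0):
        return np.array([1.0, 0.0, 0.0, 0.0])  # already aligned
    # Normal direction from the outer (cross) product result, normalized
    # here so that Qfix is a unit quaternion. (A 180-degree case would
    # need an arbitrary perpendicular axis, omitted in this sketch.)
    n = np.cross(v_ref, v_imu)
    n = n / np.linalg.norm(n)
    return np.array([np.cos(theta / 2), *(n * np.sin(theta / 2))])

print(calibrating_factor(np.array([1.0, 0.0, 0.0]),
                         np.array([0.0, 1.0, 0.0])))
# ~ (0.7071, 0, 0, 0.7071): a 90-degree rotation about +Z
```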
In step S360, the processor 204 calibrates the object pose of the first virtual object 302 based on the calibrating factor Qfix.
In one embodiment, the processor 204 may calibrate an orientation of the first virtual object 302 by the calibrating factor Qfix.
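As a minimal sketch of step S360, the following Python snippet applies Qfix to the object orientation with a Hamilton product; whether Qfix is multiplied on the left or the right (or conjugated first) is an assumption that depends on the quaternion conventions of the rendering engine, which the disclosure does not fix.

```python
import numpy as np

def quat_mul(a, b):
    """Hamilton product of quaternions given as (w, x, y, z)."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return np.array([
        aw*bw - ax*bx - ay*by - az*bz,
        aw*bx + ax*bw + ay*bz - az*by,
        aw*by - ax*bz + ay*bw + az*bx,
        aw*bz + ax*by - ay*bx + az*bw,
    ])

def calibrate_orientation(q_object, q_fix):
    """Apply the calibrating factor to the object orientation.
    Left-multiplication is assumed here; the correct side depends on
    the engine's conventions."""
    q = quat_mul(q_fix, q_object)
    return q / np.linalg.norm(q)

q_object = np.array([1.0, 0.0, 0.0, 0.0])    # current (drifted) orientation
q_fix = np.array([0.7071, 0.0, 0.0, 0.7071]) # calibrating factor Qfix
print(calibrate_orientation(q_object, q_fix))
```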
In the first embodiment where the calibrating device 200 is the input device 301 (e.g., the calibrating device 200 and the input device 301 are the same device), the calibrating device 200 may provide the reference direction Vref and the calibrated object pose of the first virtual object 302 to the host (e.g., the HMD).
In this case, the host may accordingly display the first virtual object 302 with the calibrated object pose and display the indicator 303 of the first virtual object 302 that indicates the reference direction Vref.
After the calibration, the first virtual object 302 is displayed with the calibrated object pose, and the indicator 303 of the first virtual object 302 accurately indicates the reference direction Vref.
In the second embodiment where the calibrating device 200 is the host connected with the input device 301, the calibrating device 200 may display the first virtual object 302 with the calibrated object pose and display the indicator 303 of the first virtual object 302 that indicates the reference direction Vref, and the associated results may be similar to those described in the first embodiment.
In the first embodiment where the calibrating device 200 is the input device 301, the calibrating device 200 may further include the motion detection circuit 206 and the optical sensor 208 coupled to the processor 204, as exemplarily illustrated in the accompanying drawings, and how the calibrating device 200 implements the proposed method of the disclosure may be referred to the above descriptions, which would not be repeated herein.
In the second embodiment where the calibrating device 200 is the host 799 connected with the input device 301, the calibrating device 200 may implement the proposed method of the disclosure based on the information provided by the motion detection circuit 206 and the optical sensor 208 in the input device 301, and how the calibrating device 200 implements the proposed method may be referred to the above descriptions, which would not be repeated herein.
In summary, the embodiments of the disclosure provide a solution to re-align the object pose of the virtual object in the virtual world with the input device in the real world based on the direction inputted by the user. Accordingly, the user experience would not be affected by the virtual model and/or the ray drifting away from the input device in the real world.
It will be apparent to those skilled in the art that various modifications and variations can be made to the structure of the present invention without departing from the scope or spirit of the disclosure. In view of the foregoing, it is intended that the present disclosure cover modifications and variations of this invention provided they fall within the scope of the following claims and their equivalents.
This application claims the priority benefit of U.S. provisional application Ser. No. 63/598,935, filed on Nov. 14, 2023. The entirety of the above-mentioned patent application is hereby incorporated by reference herein and made a part of this specification.