The present disclosure generally relates to simulation in a virtual world, and in particular, to a virtual object operating system and a virtual object operating method.
Technologies for simulating senses, perception, and/or environment, such as virtual reality (VR), augmented reality (AR), mixed reality (MR), and extended reality (XR), are popular nowadays. The aforementioned technologies can be applied in multiple fields, such as gaming, military training, healthcare, remote working, etc.
In a virtual world, a user may interact with one or more virtual persons. In general, these virtual persons are configured with predefined actions, such as specific dialogues, deals, fights, services, etc. However, only one type of predefined action is usually configured for each virtual person. For example, a virtual soldier only fights with the avatar of the user, even if the avatar performs a handshaking gesture. In reality, a person exhibits many kinds of interacting behaviors. Therefore, the interacting behaviors in the virtual world should be improved.
Accordingly, the present disclosure is directed to a virtual object operating system and a virtual object operating method, to simulate the behavior of a virtual object and/or an avatar of the user in a virtual reality environment.
In one of the exemplary embodiments, a virtual object operating method includes the following steps. A manipulating portion on a virtual object pointed to by a user and an object type of the virtual object are identified. The object type includes a virtual creature in a virtual reality environment. A manipulating action performed by the user is identified. The manipulating action corresponds to the virtual object. An interacting behavior of an avatar of the user with the virtual object is determined according to the manipulating portion, the object type, and the manipulating action.
In one of the exemplary embodiments, a virtual object operating system includes, but is not limited to, a motion sensor and a processor. The motion sensor is used for detecting a motion of a human body portion of a user. The processor is coupled to the motion sensor. The processor identifies a manipulating portion on the virtual object pointed to by the user and an object type of the virtual object, identifies a manipulating action based on a sensing result of the motion of the human body portion detected by the motion sensor, and determines an interacting behavior of an avatar of the user with the virtual object according to the manipulating portion, the object type, and the manipulating action. The object type includes a virtual creature in a virtual reality environment. The manipulating action corresponds to the virtual object.
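As a non-limiting illustration of the method summarized above, the following sketch outlines the three steps in Python. All names (VirtualObject, operate_virtual_object, the example gesture labels, and the mapping rules) are hypothetical placeholders and are not defined by the disclosure.

```python
from dataclasses import dataclass

@dataclass
class VirtualObject:
    object_type: str       # e.g. "virtual_creature", "abiotic", "floor", "seat"
    pointed_portion: str   # the manipulating portion currently pointed to by the user

def identify_manipulating_action(sensing_result: dict) -> str:
    # Placeholder: a real system would classify a gesture, motion, or speech here.
    return sensing_result.get("gesture", "none")

def operate_virtual_object(virtual_object: VirtualObject, sensing_result: dict) -> str:
    portion = virtual_object.pointed_portion
    object_type = virtual_object.object_type
    action = identify_manipulating_action(sensing_result)
    # Illustrative decision only; the actual mapping is implementation-specific.
    if object_type == "virtual_creature" and portion == "hand" and action == "open_palm":
        return "handshaking"
    if object_type in ("floor", "seat"):
        return "teleport"
    return "no_interaction"

# Example usage:
print(operate_virtual_object(VirtualObject("virtual_creature", "hand"),
                             {"gesture": "open_palm"}))   # handshaking
```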
It should be understood, however, that this Summary may not contain all of the aspects and embodiments of the present disclosure, is not meant to be limiting or restrictive in any manner, and that the invention as disclosed herein is and will be understood by those of ordinary skill in the art to encompass obvious improvements and modifications thereto.
The accompanying drawings are included to provide a further understanding of the disclosure, and are incorporated in and constitute a part of this specification. The drawings illustrate embodiments of the disclosure and, together with the description, serve to explain the principles of the disclosure.
Reference will now be made in detail to the present preferred embodiments of the disclosure, examples of which are illustrated in the accompanying drawings. Wherever possible, the same reference numbers are used in the drawings and the description to refer to the same or like parts.
The motion sensor 110 may be an accelerometer, a gyroscope, a magnetometer, a laser sensor, an inertial measurement unit (IMU), an infrared ray (IR) sensor, an image sensor, a depth camera, or any combination of the aforementioned sensors. In the embodiment of the disclosure, the motion sensor 110 is used for sensing the motion of one or more human body portions of a user for a time period. The human body portion may be a hand, a head, an ankle, a leg, a waist, or another portion. The motion sensor 110 may sense the motion of the corresponding human body portion to generate motion-sensing data from the sensing result of the motion sensor 110 (e.g. camera images, sensed strength values, etc.) at multiple time points within the time period. For one example, the motion-sensing data comprises 3-degree-of-freedom (3-DoF) data, and the 3-DoF data is related to the rotation data of the human body portion in three-dimensional (3D) space, such as accelerations in yaw, roll, and pitch. For another example, the motion-sensing data comprises a relative position and/or displacement of a human body portion in the 2D/3D space. In some embodiments, the motion sensor 110 could be embedded in a handheld controller or a wearable apparatus, such as a wearable controller, a smartwatch, an ankle sensor, an HMD, or the like.
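As an illustration only, the motion-sensing data described above could be organized as in the following sketch; the field names and types are assumptions made for this example, not structures required by the disclosure.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class MotionSample:
    timestamp: float                                  # time point within the sensing period
    rotation: Tuple[float, float, float]              # yaw, roll, pitch (3-DoF rotation data)
    displacement: Tuple[float, float, float] = (0.0, 0.0, 0.0)  # optional displacement in 3D space

@dataclass
class MotionSensingData:
    body_portion: str                                 # e.g. "right_hand", "head", "ankle"
    samples: List[MotionSample] = field(default_factory=list)
```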
The display 120 may be a liquid-crystal display (LCD), a light-emitting diode (LED) display, an organic light-emitting diode (OLED) display, or another display. In the embodiment of the disclosure, the display 120 is used for displaying images, for example, the virtual reality environment. It should be noted that, in some embodiments, the display 120 may be a display of an external apparatus (such as a smartphone, a tablet, or the like), and the external apparatus can be placed on the main body of an HMD.
The memory 130 may be any type of fixed or movable Random-Access Memory (RAM), Read-Only Memory (ROM), flash memory, a similar device, or a combination of the above devices. The memory 130 can be used to store program codes, device configurations, buffer data, or permanent data (such as motion-sensing data, sensing results, predetermined interactive characteristics, etc.), and these data will be introduced later.
The processor 150 is coupled to the motion sensor 110, the display 120, and/or the memory 130, and the processor 150 is configured to load the program codes stored in the memory 130, to perform a procedure of the exemplary embodiment of the disclosure.
In some embodiments, the processor 150 may be a central processing unit (CPU), a microprocessor, a microcontroller, a digital signal processing (DSP) chip, a field programmable gate array (FPGA), etc. The functions of the processor 150 may also be implemented by an independent electronic device or an integrated circuit (IC), and operations of the processor 150 may also be implemented by software.
It should be noted that the processor 150 may not be disposed in the same apparatus as the motion sensor 110 or the display 120. However, the apparatuses respectively equipped with the motion sensor 110 and the processor 150, or the display 120 and the processor 150, may further include communication transceivers with compatible communication technology, such as Bluetooth, Wi-Fi, and IR wireless communications, or a physical transmission line, to transmit or receive data with each other. For example, the display 120 and the processor 150 may be disposed in an HMD while the motion sensor 110 is disposed outside the HMD. For another example, the processor 150 may be disposed in a computing device while the motion sensor 110 and the display 120 are disposed outside the computing device.
To better understand the operating process provided in one or more embodiments of the disclosure, several embodiments will be exemplified below to elaborate the operating process of the virtual object operating system 100. The devices and modules in the virtual object operating system 100 are applied in the following embodiments to explain the virtual object operating method provided herein. Each step of the virtual object operating method can be adjusted according to actual implementation situations and should not be limited to what is described herein.
Referring to
In another embodiment, the manipulating portion of the virtual object is related to a collision event with an avatar of the user. The processor 150 may form a first interacting region associated with a body portion of the avatar of the user and form a second interacting region associated with the virtual object. The first and second interacting regions are used to define the positions of the human body portion and the virtual object, respectively. The shape of the first or the second interacting region may be a cube, a plate, a dot, or another shape. The first interacting region may surround or simply be located at the human body portion, and the second interacting region may surround or simply be located at the virtual object. The processor 150 may determine whether the first interacting region collides with the second interacting region to determine the manipulating portion, and the manipulating portion is related to a contact portion between the first interacting region and the second interacting region. The collision event may happen when the two interacting regions overlap or contact each other.
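A minimal sketch of such a collision check, assuming the first and second interacting regions are axis-aligned boxes; the AABB type and helper function are hypothetical and only illustrate the overlap test described above.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class AABB:
    min_corner: Tuple[float, float, float]
    max_corner: Tuple[float, float, float]

def contact_region(first: AABB, second: AABB) -> Optional[AABB]:
    """Return the overlapped region if a collision event happens, else None."""
    lo = tuple(max(first.min_corner[i], second.min_corner[i]) for i in range(3))
    hi = tuple(min(first.max_corner[i], second.max_corner[i]) for i in range(3))
    if all(lo[i] <= hi[i] for i in range(3)):
        return AABB(lo, hi)   # the contact portion indicates the manipulating portion
    return None               # no collision event

# Example usage: a hand region touching an object region.
hand = AABB((0.0, 1.0, 0.0), (0.2, 1.2, 0.2))
cup = AABB((0.15, 1.1, 0.1), (0.35, 1.3, 0.3))
print(contact_region(hand, cup))
```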
For example,
In one embodiment, the object type of the virtual object could be a virtual creature (such as a virtual human, dog, cat, etc.), an abiotic object, a floor, a seat, etc. created in the virtual reality environment, and the processor 150 may identify the object type of the virtual object pointed to by the user. In some embodiments, if the object type of the virtual object for interaction is fixed, the identification of the object type may be omitted.
In one embodiment, the virtual object is formed from a real object such as a real creature, a real environment, or an abiotic object. The processor 150 may scan the real object in a real environment through the motion sensor 110 (which is an image sensor) to generate a scanning result (such as the color, texture, and geometric shape of the real object), identify the real object according to the scanning result to generate an identification result (such as the real object's name, type, or identifier), create the virtual object in the virtual reality environment corresponding to the real object in the real environment according to the scanning result, and determine at least one predetermined interactive characteristic of the virtual object according to the identification result. The predetermined interactive characteristic may include a predefined manipulating portion and a predefined manipulating action. Each predefined manipulating portion could be located at a specific position on the virtual object. For example, a predefined manipulating portion of a virtual coffee cup is its handle. The predefined manipulating action could be a specific hand gesture (referred to hereinafter as a predefined gesture) or a specific motion of a specific human body portion of the user (referred to hereinafter as a predefined motion). Taking
In some embodiments, the virtual object could be predefined in the virtual reality environment.
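For illustration, the mapping from an identification result to the predetermined interactive characteristics described above could be realized as a simple lookup, as in the following sketch; the table entries, class, and function names are assumptions, not content of the disclosure.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class InteractiveCharacteristic:
    predefined_portion: str   # e.g. the handle of a virtual coffee cup
    predefined_action: str    # a predefined gesture or a predefined motion

# Hypothetical lookup table keyed by the identification result (e.g. object name).
CHARACTERISTICS: Dict[str, List[InteractiveCharacteristic]] = {
    "coffee_cup": [InteractiveCharacteristic("handle", "grab_gesture")],
    "dog":        [InteractiveCharacteristic("head", "patting_motion")],
}

def predetermined_characteristics(identification: str) -> List[InteractiveCharacteristic]:
    """Return the predetermined interactive characteristics for an identified object."""
    return CHARACTERISTICS.get(identification, [])

print(predetermined_characteristics("coffee_cup"))
```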
In one embodiment, one virtual object may be defined with multiple predefined manipulating portions. In some embodiments, these predefined manipulating portions may not overlap with each other. The processor 150 may determine which of the predefined manipulating portions matches the manipulating portion. For example, the processor 150 may determine a distance or an overlapped portion between the manipulating portion formed by the collision event and each predefined manipulating portion, and the processor 150 may select the predefined manipulating portion having the nearest distance to, or the largest overlap with, the manipulating portion.
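The selection of the matching predefined manipulating portion could, for example, be implemented as a nearest-distance search, as sketched below; representing each portion by a center point is an assumption made only for this example.

```python
import math
from typing import Dict, Tuple

Point = Tuple[float, float, float]

def match_predefined_portion(contact_center: Point,
                             predefined_portions: Dict[str, Point]) -> str:
    """Pick the predefined manipulating portion nearest to the contact portion."""
    return min(predefined_portions,
               key=lambda name: math.dist(contact_center, predefined_portions[name]))

# Example usage with two hypothetical predefined portions of a virtual creature.
portions = {"head": (0.0, 1.7, 0.0), "hand": (0.4, 1.1, 0.1)}
print(match_predefined_portion((0.35, 1.05, 0.1), portions))   # "hand"
```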
Referring to
In one embodiment, the manipulating action could be a hand gesture of the user. The processor 150 may identify the hand gesture in the image captured by the motion sensor 110. The processor 150 may further determine which of the predefined gestures matches the hand gesture. Taking
In another embodiment, the manipulating action could be the motion of a human body portion of the user. The motion may be related to at least one of the position, the pose, the rotation, the acceleration, the displacement, the velocity, the moving direction, etc. The processor 150 may determine the motion information based on the sensing result of the motion sensor 110 and determine which of the predefined motions matches the motion of the human body portion. For example, the motion sensor 110 embedded in a handheld controller, which is held by the user's hand, obtains 3-DoF information, and the processor 150 may determine the rotation information of the hand of the user based on the 3-DoF information.
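As an illustrative sketch of deriving rotation information from the 3-DoF data of a handheld controller, the angular rates could be integrated over the sensed time period; the sample format (a time step paired with yaw/roll/pitch rates) is an assumption for this example.

```python
from typing import Iterable, Tuple

AngularRate = Tuple[float, float, float]   # yaw, roll, pitch rates

def integrate_rotation(samples: Iterable[Tuple[float, AngularRate]]) -> AngularRate:
    """Accumulate (yaw, roll, pitch) angles from (time step, angular rate) pairs."""
    yaw = roll = pitch = 0.0
    for dt, (wy, wr, wp) in samples:
        yaw += wy * dt
        roll += wr * dt
        pitch += wp * dt
    return yaw, roll, pitch

# Example usage: three samples at 10 ms intervals.
print(integrate_rotation([(0.01, (1.0, 0.0, 0.5))] * 3))
```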
In some embodiments, the manipulating action could be the speech of the user. The processor 150 may detect one or more predefined keywords from the speech.
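To illustrate the gesture-based matching of a manipulating action described above, the following sketch compares a detected hand gesture against predefined gestures; encoding a gesture as finger-extension flags is purely an assumption for this example and is not how the disclosure defines gestures.

```python
from typing import Dict, Tuple

Gesture = Tuple[int, int, int, int, int]   # thumb..pinky: 1 = extended, 0 = curled

PREDEFINED_GESTURES: Dict[str, Gesture] = {
    "fist":            (0, 0, 0, 0, 0),
    "index_finger_up": (0, 1, 0, 0, 0),
    "open_palm":       (1, 1, 1, 1, 1),
}

def match_gesture(detected: Gesture) -> str:
    """Pick the predefined gesture differing from the detection in the fewest fingers."""
    return min(PREDEFINED_GESTURES,
               key=lambda name: sum(a != b for a, b in zip(detected, PREDEFINED_GESTURES[name])))

print(match_gesture((0, 1, 0, 0, 1)))   # closest to "index_finger_up"
```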
After the manipulating portion, the object type, and the manipulating action are identified, the processor 150 may determine an interacting behavior of an avatar of the user with the virtual object according to the manipulating portion, the object type, and the manipulating action (step S250). Specifically, different manipulating portions, different object types, or different manipulating actions may result in different interacting behaviors of the avatar. The interacting behavior could be a specific motion or any operating behavior related to the virtual object. For example, the interacting behavior could be handshaking, a hug, a talk, a grabbing motion, a throwing motion, a teleport action, etc.
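A minimal sketch of step S250, assuming the interacting behavior is resolved through a lookup keyed by the object type, manipulating portion, and manipulating action; the table entries are illustrative assumptions only.

```python
from typing import Dict, Tuple

BehaviorKey = Tuple[str, str, str]   # (object type, manipulating portion, manipulating action)

BEHAVIOR_TABLE: Dict[BehaviorKey, str] = {
    ("virtual_creature", "hand",  "open_palm"):       "handshaking",
    ("virtual_creature", "torso", "open_palm"):       "hug",
    ("floor",            "floor", "index_finger_up"): "teleport_action",
    ("abiotic",          "body",  "fist"):            "grabbing_motion",
}

def interacting_behavior(object_type: str, portion: str, action: str) -> str:
    """Resolve the interacting behavior of the avatar with the virtual object."""
    return BEHAVIOR_TABLE.get((object_type, portion, action), "no_interaction")

print(interacting_behavior("virtual_creature", "hand", "open_palm"))   # handshaking
```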
In one embodiment, the object type is identified as the virtual creature, and different predefined manipulating portions correspond to different interacting behaviors. Taking
In one embodiment, the manipulating action is a hand gesture of the user, and different predefined gestures correspond to different interacting behaviors. Taking
In one embodiment, the manipulating action is the motion of a human body portion of the user, and different predefined motions correspond to different interacting behaviors. Taking
In one embodiment, the processor 150 may further determine the speed of the motion of the human body portion, and different predefined speeds correspond to different interacting behaviors. Taking
In another embodiment, the processor 150 may further determine the moving direction of the motion of the human body portion, and different predefined moving directions correspond to different interacting behaviors. Taking
It should be noted that there are still many other factors that can change the motion status; for example, the factor could be the rotation, the acceleration, the continuity time, etc., and different factors of the motion of the human body portion may result in different interacting behaviors.
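As a hypothetical illustration of how such motion factors could select different interacting behaviors, the following sketch thresholds the speed and moving direction; the threshold values and behavior names are assumptions, not values from the disclosure.

```python
import math
from typing import Tuple

def behavior_from_motion(speed: float, direction: Tuple[float, float, float]) -> str:
    """Select an interacting behavior from the speed and moving direction of the motion."""
    upward = direction[1] / (math.dist((0, 0, 0), direction) or 1.0)
    if speed > 2.0 and upward > 0.5:    # fast, mostly upward motion
        return "throwing_motion"
    if speed > 2.0:                     # fast motion in another direction
        return "waving_motion"
    return "patting_motion"

print(behavior_from_motion(3.0, (0.0, 1.0, 0.0)))   # throwing_motion
```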
In one embodiment, the object type is identified as a floor or a seat, and the processor 150 may determine the interacting behavior as a teleport action to the floor or the seat. For example,
For another example, it is assumed that a manipulating portion is located at the floor. The hand gesture of the user's hand 601 changes from the one-index-finger-up gesture to the fist gesture and conforms to the predefined gesture for selecting an object. The teleport location would be the floor. The avatar 605 of the user would teleport to the floor and stand on it.
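The teleport action could, for instance, be realized by relocating the avatar to the manipulating portion when the object type is a floor or a seat, as in the following sketch; the function and parameter names are hypothetical.

```python
from typing import Tuple

Position = Tuple[float, float, float]

def teleport_avatar(avatar_position: Position,
                    target_portion: Position,
                    object_type: str) -> Position:
    """Move the avatar to the selected manipulating portion of a floor or seat."""
    if object_type in ("floor", "seat"):
        return target_portion     # the avatar stands on the floor or sits on the seat
    return avatar_position        # other object types do not trigger a teleport

print(teleport_avatar((0.0, 0.0, 0.0), (2.0, 0.0, 3.0), "floor"))   # (2.0, 0.0, 3.0)
```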
In one embodiment, the object type is identified as an abiotic object (such as a virtual ball, a virtual dart, etc.), and the processor 150 may determine the interacting behavior as a grabbing motion or a picking motion to grab or pick up the virtual object.
It will be apparent to those skilled in the art that various modifications and variations can be made to the structure of the present disclosure without departing from the scope or spirit of the disclosure. In view of the foregoing, it is intended that the present disclosure cover modifications and variations of this disclosure provided they fall within the scope of the following claims and their equivalents.