The disclosure claims priority to Chinese patent application No. 202210253972.6, filed on Mar. 15, 2022 and entitled “Interaction Method and Apparatus, Device, and Computer-Readable Storage Medium”, which is incorporated herein by reference in its entirety.
The disclosure relates to the technical field of virtual reality, and particularly relates to a method, apparatus, device, and computer-readable storage medium for interaction.
In the related art, when interaction between a user and an interface in a virtual reality scene is implemented, the point at which a ray associated with the pointing direction of a gamepad is aligned with the interface in the virtual scene is generally determined based on the orientation of the gamepad, and a further operation on the gamepad by the user is responded to directly. This approach is prone to false responses when determining the user's operation on the interface, and has poor anti-interference capability.
The examples of the disclosure provide an implementation solution different from the related art, so as to solve the technical problem in the related art of poor anti-interference capability in the manner of interaction between a user and an interface in a virtual reality scene.
In a first aspect, the disclosure provides a method for interaction. The method includes:
In a second aspect, the disclosure provides an apparatus for interaction. The apparatus includes:
In a third aspect, the disclosure provides an electronic device. The electronic device includes:
In a fourth aspect, an example embodiment of the disclosure provides a computer-readable storage medium. The computer-readable storage medium stores a computer program. The computer program implements any one of the methods in the first aspect or the possible embodiments of the first aspect when executed by a processor.
In a fifth aspect, an example of the disclosure provides a computer program product. The computer program product includes a computer program. The computer program implements any one of the methods in the first aspect or the possible embodiments of the first aspect when executed by a processor.
The disclosure provides a solution including: obtaining a target control instruction corresponding to a target object; determining, according to the target control instruction, a target control region corresponding to the target control instruction in a virtual scene, where the target control region displays at least one element; determining a target element according to the target control instruction and the target control region, where the target element is included in the at least one element; and responding to the target control instruction based on the target element. With this solution, the target element can be determined in combination with recognition of the region where the element is located, so that the anti-interference capability is strong and the user experience is improved.
In order to more clearly illustrate the technical solutions in the examples of the disclosure or in the related art, a brief introduction to the accompanying drawings required for the description of the examples or the related art is provided below. Apparently, the accompanying drawings in the following description show some of the examples of the disclosure, and those of ordinary skill in the art would also be able to derive other drawings from these drawings without creative effort. In the drawings:
Examples of the disclosure are described in detail below, and instances of the examples are illustrated in the drawings. The examples described below by reference to the drawings are illustrative for explaining the disclosure and are not to be construed as limiting the disclosure.
The terms “first”, “second”, and so forth in the description and claims of the examples of the disclosure and in the accompanying drawings are used to distinguish between similar objects and not necessarily to describe a particular order or sequence. It should be understood that the data used in this way may be interchanged where appropriate, such that the examples of the disclosure described herein can, for example, be implemented in sequences other than those illustrated or described herein. Furthermore, the terms “comprise”, “include”, “have”, and any variations thereof are intended to cover non-exclusive inclusions; for example, processes, methods, systems, products, or devices that include a series of steps or units are not necessarily limited to the explicitly listed steps or units, but may include other steps or units not explicitly listed or inherent to these processes, methods, products, or devices.
First, some terms in the examples of the disclosure are explained below to facilitate understanding by those skilled in the art.
VR: virtual reality. The virtual reality technology is a practical technology that integrates computer, electronic information, and simulation technologies. Its basic implementation mode is that a computer simulates a virtual environment so as to provide people with a sense of environmental immersion.
The technical solutions of the disclosure and how the technical solutions of the disclosure solve the above technical problem are described in detail below with specific examples. The following specific examples may be combined with each other, and the same or similar concepts or processes may not be repeated in some examples. The examples of the disclosure will be described in detail below with reference to the accompanying drawings.
Specifically, the control device 11 is configured for a user to trigger a target control instruction and transmit the target control instruction to the display device 10.
The display device 10 is configured to: obtain a target control instruction corresponding to a target object; determine, according to the target control instruction, a target control region corresponding to the target control instruction in a virtual scene, where the target control region displays at least one element; determine a target element according to the target control instruction and the target control region, where the target element is included in the at least one element; and respond to the target control instruction based on the target element.
In some alternative examples of the disclosure, the control device 11 may be a gamepad 112 or a pen-shaped holding device 111.
In some examples, the display device may also directly recognize a gesture of a hand of the user, so that the user gives a target control instruction through the gesture action of the hand and thereby controls the display device.
Alternatively, the display device 10 may be a head-mounted device.
Alternatively, the display device 10 may include a data processing device and a head-mounted device. The data processing device is configured to: obtain a target control instruction corresponding to a target object; determine, according to the target control instruction, a target control region corresponding to the target control instruction in a virtual scene, where the target control region displays at least one element; determine a target element according to the target control instruction and the target control region, where the target element is included in the at least one element; and respond to the target control instruction with the head-mounted device based on the target element. The head-mounted device is configured to display a corresponding picture. The data processing device may be a user terminal device, a personal computer (PC), or another device having a data processing function.
Reference may be made to the description of each method example below for specific working principles and specific interaction processes of the component units in this system example, such as the control device 11 and the display device 10.
Alternatively, the target object may be the control device in the example corresponding to
Alternatively, the target object may also be a hand of the user.
When the target object is the control device, the target control instruction may be a control instruction received from the control device. When the target object is the hand of the user, to determine the target control instruction, the method further includes:
Specifically, the image information may be captured by a camera set arranged on the display device, and alternatively, may be captured by a capturing apparatus arranged outside the display device and transmitted to the display device.
Alternatively, in S02, the step of determining action information of a user according to the image information includes:
Alternatively, the action information includes one or more of movement track information of the hand and motion information of at least one finger joint of the hand. The movement track information of the hand includes movement track information of a palm center. The motion information of the finger joint may be motion information of the finger joint relative to the palm center. The motion information of the finger joint may be movement information of a three-dimensional position of a joint point corresponding to the finger joint.
Alternatively, with respect to the gesture information set in S03, the gesture information set includes a plurality of groups of hand action information, and gesture types corresponding to the groups of hand action information. The method further includes:
A gesture type corresponding to the target result is a target gesture type corresponding to the action information of the user.
The control instruction corresponding to the target gesture type is a control instruction corresponding to the action information (that is, action information of the user).
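By way of illustration only, the matching of action information against the gesture information set described above might be sketched as follows in Python; the feature representation, the Euclidean similarity measure, and all names (GESTURE_SET, INSTRUCTION_MAP, match_gesture) are hypothetical assumptions, not part of the disclosure.

```python
import math

# Hypothetical gesture information set: each entry pairs a group of hand
# action information (here, a flattened feature vector) with a gesture type.
GESTURE_SET = [
    {"features": [0.9, 0.1, 0.1, 0.1, 0.1], "gesture_type": "fist_clench"},
    {"features": [0.1, 0.9, 0.1, 0.1, 0.1], "gesture_type": "index_press"},
]

# Hypothetical mapping from gesture type to control instruction.
INSTRUCTION_MAP = {"fist_clench": "CONFIRM", "index_press": "CLICK"}

def match_gesture(action_features):
    """Return the gesture type whose template is closest to the observed
    action information (Euclidean distance as a stand-in similarity)."""
    best_type, best_dist = None, math.inf
    for entry in GESTURE_SET:
        dist = math.dist(action_features, entry["features"])
        if dist < best_dist:
            best_type, best_dist = entry["gesture_type"], dist
    return best_type

def instruction_for(action_features):
    """Map the user's action information to a target control instruction."""
    target_gesture_type = match_gesture(action_features)  # the target result
    return INSTRUCTION_MAP.get(target_gesture_type)

print(instruction_for([0.85, 0.15, 0.1, 0.1, 0.12]))  # -> CONFIRM
```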
In some alternative examples of the disclosure, the pose information may include position information and pose angle information. The position information is three-dimensional coordinate information. The pose angle information is rotation information about the coordinate axes corresponding to the three-dimensional coordinate information. For example, the pose angle information includes angle information of rotation around an X axis, angle information of rotation around a Y axis, and angle information of rotation around a Z axis.
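To make the pose representation concrete, the following is a minimal sketch of position information plus pose angle information as just described; the class name and field layout are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Pose:
    # Position information: three-dimensional coordinate information.
    x: float
    y: float
    z: float
    # Pose angle information: rotation around each coordinate axis (degrees).
    rot_x: float  # angle of rotation around the X axis
    rot_y: float  # angle of rotation around the Y axis
    rot_z: float  # angle of rotation around the Z axis

pose = Pose(x=0.2, y=1.5, z=-0.8, rot_x=10.0, rot_y=-35.0, rot_z=0.0)
```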
Alternatively, in S202, the step of determining a target control region corresponding to the target control instruction in a virtual scene according to the target control instruction may include:
Alternatively, in S221, the step of obtaining pose information corresponding to the target object includes:
Alternatively, when the target object is a control device, the pose information received from the target object is determined as the pose information corresponding to the target object. The pose information may or may not be included in the target control instruction.
Alternatively, the pose information received from the target object is pose information of the target object itself, determined by the target object with its own measurement module.
Alternatively, when the target object is the hand of the user, the image information corresponding to the target object is obtained; and the pose information corresponding to the target object is determined according to the image information.
It should be noted that the target object in the disclosure may also be other parts of the user, such as arms, eyes, a face, etc.
Alternatively, the pose information corresponding to the hand may include three-dimensional position information of the palm center and pointing information of the hand. The pointing information of the hand is direction information of a line connecting a wrist joint of the hand and a root joint of the middle finger, directed towards the root joint of the middle finger. The direction information may include pose angle information of the three coordinate axes corresponding to the three-dimensional position information of the palm center.
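A minimal sketch of deriving the pointing information from two hand key points, assuming the wrist joint and the root joint of the middle finger are available as three-dimensional coordinates (the function and variable names are hypothetical):

```python
import math

def hand_pointing_direction(wrist, middle_root):
    """Unit vector along the line from the wrist joint toward the root
    joint of the middle finger, i.e., the hand's pointing direction."""
    d = [m - w for w, m in zip(wrist, middle_root)]
    norm = math.sqrt(sum(c * c for c in d))
    if norm == 0:
        raise ValueError("wrist and middle-finger root coincide")
    return [c / norm for c in d]

# Example: wrist at the origin, middle-finger root ahead and slightly up.
print(hand_pointing_direction([0.0, 0.0, 0.0], [0.0, 0.02, -0.10]))
```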
Alternatively, in S222, the step of determining a region corresponding to the pose information in the virtual scene according to the pose information includes:
Specifically, the first corresponding relation information includes a plurality of pose intervals, and a region corresponding to each pose interval in the plurality of pose intervals.
Alternatively, in S2023, the step of determining the region corresponding to the pose information according to the target region includes: taking the target region as a region corresponding to the pose information.
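As an illustration, the first corresponding relation information might be realized as a lookup from pose intervals to regions, as in the following sketch; the use of a single yaw angle, the interval bounds, and the region identifiers are all assumptions.

```python
# Hypothetical first corresponding relation information: each entry maps a
# pose interval (here a yaw-angle interval in degrees) to a region.
FIRST_RELATION = [
    ((-60.0, -20.0), "region_left"),
    ((-20.0, 20.0), "region_center"),
    ((20.0, 60.0), "region_right"),
]

def region_for_pose(yaw_deg):
    """Return the target region whose pose interval contains the pose."""
    for (low, high), region in FIRST_RELATION:
        if low <= yaw_deg < high:
            return region  # region corresponding to the pose information
    return None  # pose falls outside every configured interval

print(region_for_pose(5.0))   # -> region_center
print(region_for_pose(45.0))  # -> region_right
```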
Specifically, the size of the region corresponding to the pose information may be related to the sensitivity of the control device. Accordingly, the size of the target control region corresponding to the target control instruction may also be related to the sensitivity of the control device. Specifically, as shown in
Alternatively, the method further includes:
Alternatively, in S203, the step of determining a target element according to the target control instruction and the target control region includes:
Alternatively, when the target object is the control device, the corresponding target control instruction may be an instruction of clicking a key, an instruction of double-clicking a key, an instruction of pressing a key for a long time, etc.
Alternatively, when the target object is the hand, the corresponding target control instruction may be fist clenching, index finger pressing, etc.
Alternatively, in S2032, the step of determining the target element corresponding to the target control instruction according to the target control instruction and the set of control instructions comprises: taking an element corresponding to a first control instruction identical to the target control instruction in the set of control instructions as the target element.
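The matching in S2032 might be sketched as follows; the contents of the set of control instructions and the element identifiers are hypothetical.

```python
# Hypothetical set of control instructions for the elements displayed in
# the target control region: control instruction -> element identifier.
CONTROL_INSTRUCTION_SET = {
    "click_key": "heart_identifier",
    "double_click_key": "forward_identifier",
    "long_press_key": "delete_identifier",
}

def target_element_for(target_instruction):
    """Return the element whose control instruction is identical to the
    target control instruction, or None if nothing matches."""
    return CONTROL_INSTRUCTION_SET.get(target_instruction)

print(target_element_for("click_key"))  # -> heart_identifier
```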
In some alternative examples of the disclosure, when the target object is the control device and the target control instruction is an instruction of clicking a preset key, the target element may be a heart identifier “♥”.
Furthermore, in S204, the step of responding to the target control instruction according to the target element includes:
Alternatively, the target task information may be control information for the target element, such as a “like”, or task information for controlling the screen content of the target control region, implemented through an operation on the target element, such as deletion, enlargement, or pause.
Alternatively, in S2043, the step of responding to the target control instruction according to the target task information includes: controlling execution of the target task information.
In some alternative examples of the disclosure, when the target object is the control device and the target control instruction is an instruction of clicking a preset key, the target element is a heart identifier “♥”. When the target task information is a “like”, the step of responding to the target control instruction according to the target task information includes: controlling the heart identifier “♥” to change color.
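As a concrete illustration of the “like” response above, the following toy sketch executes the target task information by toggling the heart identifier's color; the state handling and all names are illustrative assumptions.

```python
class HeartElement:
    """Toy model of the heart identifier displayed in the control region."""

    def __init__(self):
        self.liked = False
        self.color = "gray"

    def execute_like(self):
        # Respond to the target control instruction by executing the target
        # task information: change the heart identifier's color.
        self.liked = not self.liked
        self.color = "red" if self.liked else "gray"

heart = HeartElement()
heart.execute_like()
print(heart.color)  # -> red
```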
Alternatively, the method further includes:
Specifically, the update information may include any one or more of addition, deletion, and modification of a second control instruction in the second corresponding relation information, or an update of task information corresponding to the second control instruction.
Alternatively, to further enhance the user experience, the method further includes:
Specifically, the step of displaying the target ray element based on the pose information includes: determining feature information of the target ray element based on the pose information; and displaying the target ray element according to the feature information. The feature information includes one or more of the following: start position information, center position information, color information, orientation information, length information, and thickness parameter information. The center position information is three-dimensional coordinate information of a center of the target ray element. The orientation information is pose angle information of the target ray element. The pose angle information is rotation information, about the center of the target ray element, around the coordinate axes corresponding to the three-dimensional coordinate information.
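A minimal sketch of deriving the target ray element's feature information from the pose information; which features are populated and the default values are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class RayFeatures:
    start: tuple        # start position information (x, y, z)
    orientation: tuple  # orientation information (pose angles)
    length: float       # length information
    color: str          # color information
    thickness: float    # thickness parameter information

def ray_features_from_pose(position, pose_angles):
    """Build the target ray element so that it starts at the target
    object's position and points along the target object's orientation."""
    return RayFeatures(
        start=position,
        orientation=pose_angles,
        length=5.0,       # assumed default length in scene units
        color="#00aaff",  # assumed default color
        thickness=0.01,   # assumed default thickness
    )

print(ray_features_from_pose((0.2, 1.5, -0.8), (10.0, -35.0, 0.0)))
```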
Alternatively, the method further includes:
Alternatively, the step of displaying a corresponding effect according to the receiving timing and the target ray element includes:
Alternatively, a model corresponding to the target object may also be displayed. In some alternative examples of the disclosure, a display mode of the target ray element may be as shown in
Alternatively, the method further includes: displaying a corresponding special collision effect if it is detected that a distance between position information of the effect element and position information of the target element is less than a first preset distance.
Alternatively, the method further includes: displaying a corresponding special collision effect if it is detected that a distance between position information of the effect element and position information of a center of the target control region is less than a second preset distance.
Alternatively, an identifier of the effect element may be set by the user, such as a heart identifier, a cartoon identifier, etc.
The position information of the effect element and the position information of the target element may respectively refer to position information of the effect element in the virtual scene and position information of the target element in the virtual scene. Correspondingly, the position information of the center of the target control region is also position information of the center of the target control region in the virtual scene.
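A minimal sketch of the two distance checks above, treating the position information as three-dimensional points in the virtual scene; the threshold values are placeholders.

```python
import math

FIRST_PRESET_DISTANCE = 0.05   # placeholder threshold (scene units)
SECOND_PRESET_DISTANCE = 0.10  # placeholder threshold (scene units)

def should_show_collision_effect(effect_pos, target_pos, region_center):
    """True if the effect element is close enough to the target element or
    to the center of the target control region to trigger the effect."""
    near_target = math.dist(effect_pos, target_pos) < FIRST_PRESET_DISTANCE
    near_center = math.dist(effect_pos, region_center) < SECOND_PRESET_DISTANCE
    return near_target or near_center

print(should_show_collision_effect(
    (0.0, 0.0, 0.0), (0.02, 0.0, 0.0), (0.5, 0.5, 0.0)))  # -> True
```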
The special collision effect includes any one or more of the following:
In some alternative examples of the disclosure, a display mode of the special collision effect may be as shown in
Alternatively, the special collision effect may be a special stereoscopic effect. This solution may achieve multi-directional display of the special collision effect in a virtual reality space, thereby making the scene more realistic, further improving immersion of the user, and improving the user experience.
Alternatively, after the effect element is controlled to move at a preset rate along the target ray element for a second preset duration, if it is not detected that the distance between the position information of the effect element and the position information of the target element is less than the first preset distance, no response is made, and the target ray element may stop being displayed.
Alternatively, after the effect element is controlled to move at a preset rate along the target ray element for a second preset duration, if it is not detected that the distance between the position information of the effect element and the position information of the center of the target control region is less than the second preset distance, no response is made, and the target ray element may stop being displayed.
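The move-then-give-up behavior of the two paragraphs above might be sketched as follows; the rate, duration, time step, and hit distance are placeholder values.

```python
import math

def animate_effect(start, direction, target_pos,
                   rate=1.0, duration=2.0, dt=0.02, hit_dist=0.05):
    """Move the effect element along the target ray element at a preset
    rate; report a collision if it comes within hit_dist of the target
    element within the preset duration, otherwise make no response."""
    pos, t = list(start), 0.0
    while t < duration:
        pos = [p + d * rate * dt for p, d in zip(pos, direction)]
        if math.dist(pos, target_pos) < hit_dist:
            return "show_collision_effect"  # display the special effect
        t += dt
    return "stop_displaying_ray"            # no response; hide the ray

print(animate_effect((0, 0, 0), (1, 0, 0), (1.5, 0, 0)))
# -> show_collision_effect
```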
Specifically, the regions in the virtual scene involved in the disclosure are display regions in the virtual scene. Each display region corresponds to a coordinate range relative to the virtual scene.
Alternatively, the display region in the virtual scene in the disclosure may include a screen region and an operating region. The operating region may be at least a part of the screen region, cover the screen region, partially overlap the screen region, or be independent of the screen region.
Alternatively, an angle between a display surface corresponding to the operating region and a display surface corresponding to the screen region may be a preset value.
Alternatively, one operating region may be provided. The target control region determined according to the solution of the disclosure may be the operating region or correspond to the operating region.
Alternatively, a plurality of operating regions may be provided. Display surfaces corresponding to the plurality of operating regions may be a same surface. The target control region determined according to the solution of the disclosure may be one of the plurality of operating regions or correspond to one of the operating regions.
Alternatively, a relation between the target control region and the corresponding operating region satisfies the following preset conditions: a center of the target control region coincides with a center of the operating region, an area of the target control region is a preset multiple of an area of the operating region, and the operating region is located within the target control region, as shown in
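A minimal sketch of the preset geometric relation above, assuming axis-aligned rectangular regions and a hypothetical area multiple:

```python
from dataclasses import dataclass

@dataclass
class Rect:
    cx: float  # center x
    cy: float  # center y
    w: float   # width
    h: float   # height

def target_control_region(op_region, area_multiple=1.5):
    """Build a target control region that shares the operating region's
    center and whose area is a preset multiple of the operating region's
    area, so that the operating region lies inside it."""
    side_scale = area_multiple ** 0.5  # scale each side by sqrt(multiple)
    return Rect(op_region.cx, op_region.cy,
                op_region.w * side_scale, op_region.h * side_scale)

op = Rect(cx=0.0, cy=1.2, w=0.4, h=0.3)
print(target_control_region(op))
```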
Alternatively, the target control region may also be determined by detecting a target ray element displayed in the virtual scene, for example, by obtaining an intersection of the target ray element and a display surface in the virtual scene, and determining the target control region according to position information of the intersection.
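A minimal sketch of obtaining that intersection, modeling the display surface as a plane given by a point and a normal; the names and the plane representation are assumptions.

```python
def ray_plane_intersection(ray_origin, ray_dir, plane_point, plane_normal):
    """Intersection point of the target ray element with a display-surface
    plane, or None if the ray is parallel to or points away from it."""
    denom = sum(d * n for d, n in zip(ray_dir, plane_normal))
    if abs(denom) < 1e-9:
        return None  # ray parallel to the display surface
    diff = [p - o for o, p in zip(ray_origin, plane_point)]
    t = sum(c * n for c, n in zip(diff, plane_normal)) / denom
    if t < 0:
        return None  # display surface is behind the ray origin
    return [o + t * d for o, d in zip(ray_origin, ray_dir)]

# Ray from near the user toward a screen plane at z = -2.
hit = ray_plane_intersection((0, 1.5, 0), (0, 0, -1), (0, 0, -2), (0, 0, 1))
print(hit)  # -> [0.0, 1.5, -2.0]; its position picks the control region
```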
The solution is further described below in combination with an application scene of the disclosure:
After a user wears a headset and enters a virtual reality (VR) scene, the user can open a corresponding application program. A display region of the application program may include a screen region and an operating region. The operating region may be located at a side of the screen region. The user can control the operating region by operating a gamepad, so as to interact with the virtual reality scene.
After a user wears a headset and enters a VR scene, a screen region and an operating region are displayed in a virtual scene, and a ray determined according to an operation by the user on a gamepad is displayed. When the user moves the gamepad, the ray moves correspondingly. When the ray intersects a target control region corresponding to an operating region in the virtual scene and the user triggers a control instruction through the gamepad, the headset responds to the control instruction and executes a task corresponding to the control instruction, such as “like”, “forward”, etc.
The disclosure provides a solution including: obtaining a target control instruction corresponding to a target object; determining, according to the target control instruction, a target control region corresponding to the target control instruction in a virtual scene, where the target control region displays at least one element; determining a target element according to the target control instruction and the target control region, where the target element is included in the at least one element; and responding to the target control instruction based on the target element. With this solution, the target element can be determined in combination with recognition of the region where the element is located, so that the anti-interference capability is strong and the user experience is improved.
According to one or more examples of the disclosure, when the apparatus is configured to determine, according to the target control instruction, the target control region corresponding to the target control instruction in a virtual scene, the apparatus is specifically configured to:
According to one or more examples of the disclosure, the apparatus is further configured to:
According to one or more examples of the disclosure, when the apparatus is configured to obtain the pose information corresponding to the target object, the apparatus is specifically configured to:
According to one or more examples of the disclosure, when the apparatus is configured to determine, according to the pose information, the region corresponding to the pose information in the virtual scene, the apparatus is specifically configured to:
According to one or more examples of the disclosure, when the apparatus is configured to determine the target element according to the target control instruction and the target control region, the apparatus is specifically configured to:
According to one or more examples of the disclosure, when the apparatus is configured to respond to the target control instruction according to the target element, the apparatus is specifically configured to:
According to one or more examples of the disclosure, the apparatus is further configured to:
According to one or more examples of the disclosure, the apparatus is further configured to:
It should be understood that the apparatus example and the method example may correspond to each other, and reference may be made to the method example for similar descriptions, which will not be repeated herein to avoid repetition. Specifically, the apparatus may execute the above method example, and the foregoing and other operations and/or functions of each module in the apparatus respectively implement the corresponding flows in the methods of the above method example, and are not repeated here for brevity.
An apparatus of the example of the disclosure is described above in conjunction with the accompanying drawings from the perspective of functional modules. It should be understood that the functional modules may be implemented in a form of hardware, in a form of software instructions, or in a form of a combination of hardware and software modules. Specifically, each step of the method example in the examples of the disclosure may be completed by an integrated logic circuit of hardware in a processor and/or instructions in a software form. The steps of the methods disclosed in conjunction with the examples of the disclosure may be directly performed by a hardware decoding processor, or performed by a combination of hardware and software modules in a decoding processor. Alternatively, the software module may be located in a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, a register, or another storage medium well known in the art. The storage medium is located in the memory, and the processor reads information from the memory and completes the steps of the method example described above in conjunction with its hardware.
For example, the processor 402 may be configured to execute the method example described above according to an instruction in the computer program.
In some examples of the disclosure, the processor 402 may include, but is not limited to:
In some examples of the disclosure, the memory 401 includes, but is not limited to:
In some examples of the disclosure, the computer program may be divided into one or more modules. The one or more modules are stored in the memory 401 and executed by the processor 402 to complete the method provided in the disclosure. The one or more modules may be a series of computer program instruction segments capable of implementing specific functions, where the instruction segments are used to describe an execution process of the computer program in the electronic device.
As shown in
The processor 402 may control the transceiver 403 to communicate with other devices, specifically, to transmit information or data to other devices, or to receive information or data sent from other devices. The transceiver 403 may include a transmitter and a receiver. The transceiver 403 may further include antennas, and one or more antennas may be provided.
It should be understood that the various components in the electronic device are connected via a bus system. In addition to a data bus, the bus system further includes a power bus, a control bus, and a status signal bus.
The disclosure further provides a computer storage medium storing a computer program. When executed by a computer, the computer program causes the computer to execute the method of the above method example. Alternatively, an example of the disclosure further provides a computer program product containing instructions. When executed by a computer, the instructions cause the computer to execute the method of the above method example.
When implemented by software, the examples can be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When loaded and executed on a computer, the computer instructions generate in whole or in part the flows or functions described in accordance with the examples of the disclosure. The computer may be a general purpose computer, a special purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium. For example, the computer instructions can be transmitted from one website, computer, server, or data center to another website, computer, server, or data center in a wired manner (for example, coaxial cable, optical fiber, or digital subscriber line (DSL)) or a wireless manner (for example, infrared, radio, or microwave). The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device, such as a server or a data center, integrating one or more available media. The available medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, a digital video disc (DVD)), a semiconductor medium (for example, a solid state disk (SSD)), or the like.
According to one or more examples of the disclosure, a method for interaction is provided. The method includes:
According to one or more examples of the disclosure, the step of determining a target control region corresponding to the target control instruction in a virtual scene according to the target control instruction includes:
According to one or more examples of the disclosure, the method further includes:
According to one or more examples of the disclosure, the step of obtaining pose information corresponding to the target object includes:
According to one or more examples of the disclosure, the step of determining a region corresponding to the pose information in the virtual scene according to the pose information includes:
According to one or more examples of the disclosure, the step of determining a target element according to the target control instruction and the target control region includes:
According to one or more examples of the disclosure, the step of responding to the target control instruction according to the target element includes:
According to one or more examples of the disclosure, the method further includes:
According to one or more examples of the disclosure, the method further includes:
According to one or more examples of the disclosure, an apparatus for interaction is provided. The apparatus includes:
According to one or more examples of the disclosure, when the apparatus is configured to determine, according to the target control instruction, the target control region corresponding to the target control instruction in a virtual scene, the apparatus is specifically configured to:
According to one or more examples of the disclosure, the apparatus is further configured to:
According to one or more examples of the disclosure, when the apparatus is configured to obtain the pose information corresponding to the target object, the apparatus is specifically configured to:
According to one or more examples of the disclosure, when the apparatus is configured to determine, according to the pose information, the region corresponding to the pose information in the virtual scene, the apparatus is specifically configured to:
According to one or more examples of the disclosure, when the apparatus is configured to determine the target element according to the target control instruction and the target control region, the apparatus is specifically configured to:
According to one or more examples of the disclosure, when the apparatus is configured to respond to the target control instruction according to the target element, the apparatus is specifically configured to:
According to one or more examples of the disclosure, the apparatus is further configured to:
According to one or more examples of the disclosure, the apparatus is further configured to:
According to one or more examples of the disclosure, an electronic device is provided. The electronic device includes:
According to one or more examples of the disclosure, a computer-readable storage medium is provided. The computer-readable storage medium stores a computer program. The computer program implements the steps of the above method when executed by a processor.
Those of ordinary skill in the art may appreciate that the modules and algorithm steps of the instances described in conjunction with the examples disclosed herein may be implemented in electronic hardware, or in a combination of computer software and electronic hardware. Whether these functions are performed in hardware or software depends on the specific application and design constraints of the technical solution. Those skilled in the art can implement the described functions using different methods for each particular application, but such implementations should not be considered to fall beyond the scope of the disclosure.
In the several examples provided in the disclosure, it should be understood that the disclosed systems, apparatuses, and methods can be implemented in other ways. For example, the apparatus examples described above are merely illustrative. For example, the division of the modules is merely a division of logical functions, and in practice there can be other ways of division. For example, a plurality of modules or assemblies can be combined or integrated into another system, or some features can be omitted or not executed. Furthermore, the mutual coupling or direct coupling or communication connection shown or discussed can be achieved through some interfaces, and the indirect coupling or communication connection between apparatuses or modules can be in an electrical form, a mechanical form, or other forms.
The modules illustrated as separate components can be physically separated or not, and the components shown as modules can be physical modules or not, that is, can be located in one place, or can also be distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solutions of the examples. For example, the functional modules in the examples of the disclosure can be integrated into one processing module, or each module can be physically present separately, or two or more modules can be integrated into one module.
What is described above is merely particular embodiments of the disclosure, and is not intended to limit the scope of protection of the disclosure. Any changes or substitutions that would readily occur to those skilled in the art within the technical scope disclosed in the disclosure shall fall within the scope of protection of the disclosure. Therefore, the scope of protection of the disclosure shall be subject to the scope of protection of the claims.
Number | Date | Country | Kind
---|---|---|---
202210253972.6 | Mar. 2022 | CN | national

Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/CN2023/080020 | Mar. 7, 2023 | WO |