The present disclosure relates to the technical field of games, and in particular, to a method and apparatus for controlling an AR game, an electronic device and a storage medium.
With the development of game technology, many types of games (for example, shooting games, racing games, and battle games) begin to incorporate augmented reality (AR) technology to realize game interaction.
At present, the control over AR games is usually triggered by a hardware device, such as a keyboard, a mouse, a gamepad or a touch screen. In particular, when an AR game is played on a mobile phone, the AR game is usually controlled by triggering controls laid out on a display interface of the phone's touch screen.
However, the AR game needs a large area to display a real interface or a virtual interface. Controlling the game through trigger controls laid out in the display interface occupies part of the screen's display area, which affects the display effect of the AR game. Moreover, the user needs to memorize the locations and menus of these trigger controls, which also degrades the user's interactive experience.
The present disclosure provides a method and apparatus for controlling an AR game, an electronic device and a storage medium, to solve the technical problem that, when a game is controlled through trigger controls, the trigger controls occupy the display area of the screen and thereby affect the display effect and user experience of AR games.
In a first aspect, embodiments of the present disclosure provide a method for controlling an AR game, and the method includes:
In a second aspect, the embodiments of the present disclosure provide an apparatus for controlling an AR game, and the apparatus includes:
In a third aspect, the embodiments of the present disclosure provide an electronic device, and the electronic device includes:
In a fourth aspect, the embodiments of the present disclosure provide a computer-readable storage medium storing computer-executable instructions thereon. When a processor executes the computer-executable instructions, the method for controlling an AR game described in the first aspect and various possible designs of the first aspect above is implemented.
In a fifth aspect, the embodiments of the present disclosure provide a computer program product, including a computer program carried on a non-transitory computer-readable medium. The computer program, when being executed by a processor, causes the method for controlling an AR game described in the first aspect and various possible designs of the first aspect above to be implemented.
In a sixth aspect, the embodiments of the present disclosure provide a computer program. The computer program, when being executed by a processor, causes the method for controlling an AR game described in the first aspect and various possible designs of the first aspect above to be implemented.
In the method and apparatus for controlling an AR game, the electronic device and the storage medium provided by the embodiments of the present disclosure, a game control instruction is determined according to a voice instruction acquired during running of the AR game and a preset instruction mapping relationship; and then a virtual object in the AR game is controlled according to the game control instruction. It can be seen that, during the running of the AR game, the game operation instruction can be rapidly input through voice technology, and a virtual object in the game can be operated and controlled without triggering a game control. Therefore, the user does not need to memorize the placement location of a specific interactive control corresponding to the virtual object. For an instant AR game, the operation efficiency can be greatly improved. Furthermore, there is no need to display a specific interactive control on the main screen, and more game contents can be displayed in the limited display space of the screen.
In order to explain the technical solutions of the embodiments of the present disclosure or the prior art more clearly, the drawings that need to be used in the description of the embodiments or the prior art will be briefly introduced in the following. Obviously, the drawings in the following description are only some embodiments of the present disclosure; and for those of ordinary skill in the art, other drawings can be obtained according to these drawings without any creative effort.
The embodiments of the present disclosure will be described in more detail below with reference to the drawings. Although some embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be implemented in various forms and should not be construed as being limited to the embodiments set forth here. On the contrary, these embodiments are provided for more thorough and comprehensive understanding of the present disclosure. It should be understood that the drawings and embodiments of the present disclosure are only for illustrative purposes, and are not intended to limit the protection scope of the present disclosure.
It should be understood that steps described in the method embodiments of the present disclosure may be performed in a different order and/or in parallel. In addition, the method embodiments may include additional steps and/or omit a step shown. The scope of the present disclosure is not limited in this respect.
As used herein, the term “include” and its variations are intended for an open inclusion, that is, “including but not limited to”. The term “based on” means “based at least in part on”. The term “one embodiment” means “at least one embodiment”; the term “another embodiment” means “at least one other embodiment”; and the term “some embodiments” means “at least some embodiments”. Relevant definitions of other terms will be given in the following description.
It should be noted that the modifiers of “one” and “a plurality of” mentioned in the present disclosure are illustrative rather than restrictive, and those skilled in the art should understand that they should be understood as “one or more”, unless explicitly indicated in the context otherwise.
AR is a technology in which information of the real world is acquired through a camera and processed, so as to link the virtual world to the real world, thereby enabling the user to interact with the real world through the virtual world. Most AR games are played in such a way that an image of the real world is acquired by a camera of a terminal device, fused with game elements of a virtual game, and then displayed on the screen of the terminal device for interaction. Taking an AR shooting game as an example, the AR shooting game is a kind of AR game with high requirements for real-time performance: it uses the AR technology to generate obstacles or enemies in the virtual world according to position information in the real world, and allows the user to shoot at them.
For the AR shooting game, it is usually necessary to provide a large area for displaying the real or virtual world. However, the screen size of mainstream terminal devices available on the market is usually limited. For example, on a mobile phone with a screen of about 6 inches, the area available for interactive operations is relatively limited. With the increasing demands of users and the pursuit of greater interest, the gameplay of AR shooting games is becoming more diverse. Accordingly, more and more interactive operations need to be performed, and they are becoming increasingly complicated.
At present, the mainstream solution is to place commonly used interactive controls on the main screen and place the other, infrequently used interactive controls in a secondary menu. However, on one hand, this greatly reduces the immediacy of interactive operations, which degrades the user experience for an AR shooting game that has high requirements for immediacy; on the other hand, it also requires the user to memorize the specific placement position of each interactive control, which increases the operation difficulty for the user.
In order to solve the above problems, the present disclosure aims to provide a solution for controlling an AR game, by which an input game operation instruction can be rapidly executed through voice recognition technology, and the game can be operated and controlled without triggering a game control. Therefore, the user does not need to memorize a placement location of a specific interactive control. For an instant AR game, the operation efficiency can be greatly improved. Furthermore, there is no need to display a specific interactive control on the main screen, and more game contents can be displayed in a limited display space of the screen.
The present disclosure aims to provide a way of controlling a virtual object in any AR game through a voice instruction, without any specific limitation on the specific types of the AR games or on the game scenes on which the AR games are based. The specific types of AR games involved in the following embodiments are only for the convenience of understanding the implementations of the related technical solutions, and for other types of AR games that are not specifically described, there is no doubt that the virtual objects thereof can also be controlled according to the specific description in the present disclosure.
In one scenario, it is illustrated by taking a case where the AR game is an AR shooting game as an example. The user may switch to a corresponding weapon by inputting a weapon name through voice. For example, by inputting “rifle” through voice, the weapon of a target object in the game is switched to “rifle”. A corresponding weapon may also be triggered to attack by inputting an onomatopoeic word. For example, by inputting “bang bang bang” through voice, the “rifle” is triggered to shoot.
In addition, the AR game may also be an AR tower defense game. The user may switch to a corresponding weapon by inputting a weapon name through voice. For example, by inputting “fort” through voice, the attack weapon in the game may be switched to “fort”. The corresponding weapon may also be triggered to attack, by inputting an onomatopoeic word. For example, by inputting “boom boom boom” through voice, the “fort” is triggered to shell.
It is worth explaining that, in the embodiment, the target object controlled through voice in the above-mentioned AR game is a virtual object in the AR game, that is, a virtual game element superimposed and displayed on the image of the real environment, which may be a virtual game character or a virtual game prop. In addition, the above-mentioned exemplary types of AR games are only to illustrate the control effect of the embodiment, but not to limit the specific types of AR games involved in the present disclosure. The voice control instructions for different types of AR games may be configured adaptively according to the specific characteristics of the games.
However, for the above-mentioned scenes, in the prior art, the way of triggering through a specific interactive control requires complex clicks on the main screen. For example, when a target object is equipped with a plurality of props (e.g., weapons), the user usually cannot directly switch the current weapon to a desired target weapon. Rather, the user needs to enter a prop setting interface (e.g., an arsenal interface) first, and then select a weapon. Alternatively, the current weapon can be switched to the target weapon only after switching is performed through a weapon switching control, where the switching process is related to the order in which the weapons are arranged. If the weapon ranked immediately behind the current weapon is not the target weapon, the switching operation needs to be triggered many times to achieve the purpose of weapon switching. In addition, for controlling a weapon to attack, at present it is necessary to constantly trigger a shooting control to realize the attack, and such frequent click operations tend to degrade the user's game experience.
In addition, because the AR shooting game may involve gameplay such as dodging enemy attacks and finding a suitable shooting angle, the user needs to hold the mobile phone for a long time while moving or dodging. Therefore, the holding mode of the mobile phone is also very important for the game experience of the AR shooting game.
At step 101, a voice instruction is acquired during running of an AR game.
When a user plays an AR game, real objects and scenes, such as a real desktop scene, may be captured through a camera on a terminal device. Then, game elements are fused on an image of the real desktop through a processor in the terminal device, and an AR game interface is displayed on a display screen of the terminal device. During the running of the AR game, the user may input corresponding voice instructions according to the control requirements for the game.
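For illustrative purposes only, the capture-fuse-display flow described above may be sketched roughly as follows (Python with OpenCV); the render_game_elements callback is a hypothetical placeholder for the game engine's own rendering of virtual elements, not part of any real library.

```python
# Rough per-frame sketch of the capture-fuse-display loop described above.
# render_game_elements is a hypothetical callback that must return an overlay
# image of the same size and type as the captured frame.
import cv2

def run_ar_frame_loop(render_game_elements):
    capture = cv2.VideoCapture(0)  # camera capturing the real scene
    try:
        while True:
            ok, real_frame = capture.read()
            if not ok:
                break
            # Fuse virtual game elements onto the image of the real scene.
            overlay = render_game_elements(real_frame)
            ar_frame = cv2.addWeighted(real_frame, 1.0, overlay, 1.0, 0)
            cv2.imshow("AR game", ar_frame)  # display the AR game interface
            if cv2.waitKey(1) & 0xFF == ord("q"):
                break
    finally:
        capture.release()
        cv2.destroyAllWindows()
```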
At step 102, a game control instruction is determined, according to the voice instruction and a preset instruction mapping relationship.
After the voice instruction is acquired, the game control instruction may be determined according to the voice instruction and the preset instruction mapping relationship. For example, based on a predefined keyword set, the voice instruction may be recognized through voice recognition technology; and when a valid keyword is obtained from the input, a game control instruction corresponding to the keyword may be obtained through an established mapping relationship between keywords and the game control instructions. As such, the obtained game control instruction may be used to control the AR game. In this way, the AR game can be quickly controlled by inputting the voice instruction.
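As a minimal, non-limiting sketch of such a mapping relationship (in Python), assuming an illustrative keyword set and instruction names that are not taken from any actual game configuration:

```python
# Illustrative keyword-to-instruction mapping; the keywords and instruction
# names are assumptions, not the configuration of any particular game.
INSTRUCTION_MAPPING = {
    "rifle": "SWITCH_WEAPON_RIFLE",
    "bang bang bang": "FIRE_CURRENT_WEAPON",
    "first-aid kit": "RESTORE_HEALTH",
}

def resolve_control_instruction(recognized_text):
    """Return the game control instruction matching a recognized voice keyword."""
    for keyword, instruction in INSTRUCTION_MAPPING.items():
        if keyword in recognized_text:
            return instruction
    return None  # no valid keyword was found in the input
```

Simple substring matching is used here only to keep the sketch short; a longest-match or priority rule could equally be applied.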
At step 103, a virtual object in the AR game is controlled according to the game control instruction.
It is worth explaining that the AR game synthesizes the image of the real environment with the virtual objects, and outputs the generated AR game interface to the display of the terminal device. The user sees the final enhanced AR game interface on the display. Game interaction in the AR game mainly involves controlling the virtual objects in the interface, which usually requires high real-time interactivity. Specifically, the control over the virtual objects in the AR game may be control over virtual game props or virtual game characters. For example, it may be controlling a game character in the AR shooting game, controlling an attack weapon in the AR tower defense game, controlling a racing vehicle in an AR racing game, or controlling a musical instrument in an AR music game. The types of games and the types of virtual objects are not limited in the present disclosure.
In the embodiment, the game control instruction is determined according to the voice instruction acquired during the running of the AR game and the preset instruction mapping relationship, and then the AR game is controlled according to the game control instruction. It can be seen that, during the running of the AR game, the game operation instruction can be rapidly input through voice technology, and the virtual object in the game can be operated and controlled without triggering a game control. Therefore, the user does not need to memorize the placement location of a specific interactive control corresponding to a game element. For an instant AR game, the operation efficiency can be greatly improved. Furthermore, there is no need to display a specific interactive control on the main screen, and more game contents can be displayed in the limited display space of the screen.
As for the above-mentioned voice instruction, it may not only be a noun or verb voice instruction corresponding to the game prop, but may also be an onomatopoeic word-type voice instruction corresponding to a target action to be performed by the game prop. When using the onomatopoeic word as the voice instruction to control the game prop, a frequency of voice input may be first determined according to the voice instruction including the onomatopoeic word, that is, a frequency at which the game prop executes the target action may be determined by determining an input speed of the onomatopoeic word. For example, taking the shooting game as an example, if the game prop (for example, an attack weapon) is controlled through an onomatopoeic word, the frequency of voice input may be determined according to the voice instruction including the onomatopoeic word, that is, an attack frequency of the attack weapon may be determined by determining the input speed of the onomatopoeic word. Referring to
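A minimal sketch of deriving such a frequency (in Python), assuming the voice recognizer provides one timestamp per recognized repetition of the onomatopoeic word; the timestamps and the mapping to a firing rate are illustrative assumptions:

```python
# Derive an action frequency from how fast the onomatopoeic word is spoken.
# The timestamps (in seconds) are assumed to come from the voice recognizer,
# one per recognized repetition of the word.
def action_frequency(onomatopoeia_timestamps):
    """Return repetitions per second of the onomatopoeic word."""
    if len(onomatopoeia_timestamps) < 2:
        return 0.0
    duration = onomatopoeia_timestamps[-1] - onomatopoeia_timestamps[0]
    if duration <= 0:
        return 0.0
    return (len(onomatopoeia_timestamps) - 1) / duration

# "bang bang bang" spoken over one second -> about 2 repetitions per second,
# which the game may map to the attack weapon's firing rate.
print(action_frequency([0.0, 0.5, 1.0]))  # 2.0
```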
In the embodiment, in addition to controlling the game prop to execute the target action through the voice instruction, a new game prop in the game may also be awakened or triggered through the voice instruction. It is still illustrated by taking the shooting game as an example. During the game, through a voice instruction, a drone may be summoned to attack, or a character may be triggered to throw a grenade.
In addition, in a case where the awakened or triggered game prop has an area of effect or an intensity attribute, a volume of voice input may be first determined according to the voice instruction, where the volume of voice input is used to represent a sound intensity of a target audio in the voice instruction. Then, the area of effect and/or intensity of the target action of the game prop may be determined according to the volume of voice input. It is still illustrated by taking the shooting game as an example: the volume of voice input may be first determined according to the voice instruction, where the volume of voice input is used to represent the sound intensity of the target audio in the voice instruction; and then, the attack intensity and/or attack range of the attack weapon may be determined according to the volume of voice input. For example, when the grenade is triggered to be thrown, the attack intensity and/or attack range of the grenade may be determined according to the volume of the input voice “boom boom boom”; and when the drone is awakened to bomb the ground, the intensity and/or attack range of the ground bombing may also be determined according to the volume of the input voice “boom boom boom”. It is worth noting that the above onomatopoeic word is only for illustrative purposes and does not limit the specific form of the voice input.
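A rough sketch of mapping input volume to an attack intensity and area of effect (in Python); the RMS-based loudness measure and the scaling constants are illustrative assumptions rather than values prescribed by the present disclosure:

```python
# Map the loudness of the target audio to an attack intensity and blast radius.
# The RMS loudness measure and the scaling constants are illustrative only.
import numpy as np

def attack_parameters(audio_samples):
    """Return (intensity, radius) derived from the volume of the voice input."""
    samples = np.asarray(audio_samples, dtype=np.float64)
    rms = float(np.sqrt(np.mean(np.square(samples))))
    intensity = min(1.0, rms / 0.3)   # normalize to a 0..1 damage multiplier
    radius = 1.0 + 4.0 * intensity    # louder input -> larger area of effect
    return intensity, radius

print(attack_parameters([0.1, -0.2, 0.25, -0.15]))
```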
In a possible implementation, besides controlling a corresponding attack weapon through voice, a related auxiliary prop may also be triggered by the voice instruction in the embodiment. For example, taking the shooting game as an example, a blood replenishing operation may be triggered by inputting “first-aid kit” through voice. In addition, a new weapon may be added through the voice instruction. For example, a firing point of a machine gun may be set by inputting “machine guns” through voice.
It can be seen that, in the embodiment, before the AR game is controlled according to the game control instruction, the target object may be first determined according to the voice instruction, where the target object has been displayed in the combat interface before the voice instruction is input. Referring to
Furthermore, for the case where the target object needs to be generated and displayed in the combat interface, position information of the target object may be determined according to the voice instruction, and then the target object may be generated and displayed at a position corresponding to the position information in the combat interface. For example, when the user's voice input is “boom boom boom at lower left”, the grenade may be generated in the lower left area of the combat interface and the generated grenade may be exploded for attack.
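A minimal sketch of extracting such coarse position information from the recognized text (in Python); the position keywords and their mapping to screen coordinates are illustrative assumptions:

```python
# Extract a coarse spawn position from the recognized text. The position
# keywords and their mapping to relative screen coordinates are assumptions.
POSITION_KEYWORDS = {
    "lower left": (0.25, 0.75),
    "lower right": (0.75, 0.75),
    "upper left": (0.25, 0.25),
    "upper right": (0.75, 0.25),
}

def spawn_position(recognized_text, screen_w, screen_h):
    """Return pixel coordinates for the spawned object, or None if no position is given."""
    for keyword, (rx, ry) in POSITION_KEYWORDS.items():
        if keyword in recognized_text:
            return int(rx * screen_w), int(ry * screen_h)
    return None

# "boom boom boom at lower left" -> generate the grenade in the lower-left area.
print(spawn_position("boom boom boom at lower left", 1920, 1080))  # (480, 810)
```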
An interface trigger control corresponding to the controlled target object may be located in a secondary interface of the current interface, where the secondary interface is an interface that is invoked and displayed after a specific control is triggered in the current interface. It can be understood that, in the related art, if the interface trigger control corresponding to the controlled target object is not in the current interface, the target object cannot be directly triggered; rather, the user usually needs to trigger a relevant interface jump control in the current interface to enter another interface linked with the current interface (i.e., the secondary interface of the current interface), and then continue to trigger the required interface trigger control in the secondary interface. For this case, by controlling the interface trigger control located in the secondary interface through the voice instruction, the user does not need to perform the complicated interface trigger operations, which may greatly improve the trigger efficiency. It is still illustrated by taking the shooting game as an example. For example, when a target object is equipped with multiple weapons, in the related art, the user usually cannot directly switch the current weapon to a required target weapon; rather, the user needs to enter an arsenal interface first, and then select the required weapon. For this case, the operation efficiency can be greatly improved by performing the control through the voice instruction in the embodiment.
At step 201, a voice instruction is acquired during running of an AR game.
When a user plays an AR game, real objects and scenes, such as a real desktop, may be captured through a camera on a terminal device. Then, game elements are fused on an image of the real desktop through a processor in the terminal device, and an AR game interface is displayed on a display screen of the terminal device. During the running of the AR game, the user may input corresponding voice instructions according to the control requirements for the game.
At step 202, the voice instruction is converted into a text instruction.
After the voice instruction is acquired, the voice input may be converted into text input through a voice recognition function module, and then a game control instruction may be determined according to the text obtained through voice recognition and a preset instruction mapping relationship. Specifically, the voice recognition module may be queried at intervals as to whether it has recognized a text instruction corresponding to the user's voice instruction. Once it is detected that the voice recognition module has recognized the text instruction, a registered listener may be notified to compare the instruction, where the registered listener is a program module that compares the voice recognition result obtained from the voice recognition function module with the preset instructions.
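A rough sketch of this polling and listener arrangement (in Python); the recognizer's fetch_result method and the dispatch callback are hypothetical stand-ins for whatever voice recognition module and game dispatch mechanism are actually used:

```python
# Polling/listener arrangement: the recognizer is queried at intervals, and a
# registered listener compares any recognized text with the preset instructions.
# recognizer.fetch_result and the dispatch callback are hypothetical stand-ins.
import time

class InstructionListener:
    def __init__(self, preset_instructions):
        self.preset_instructions = preset_instructions  # keyword -> instruction

    def on_text_recognized(self, text):
        """Compare the recognition result with the preset instructions."""
        for keyword, instruction in self.preset_instructions.items():
            if keyword in text:
                return instruction
        return None

def poll_recognizer(recognizer, listener, dispatch, interval_s=0.1):
    """Query the recognizer at intervals and dispatch any matched instruction."""
    while True:
        text = recognizer.fetch_result()  # hypothetical recognizer API
        if text is not None:
            instruction = listener.on_text_recognized(text)
            if instruction is not None:
                dispatch(instruction)
        time.sleep(interval_s)
```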
In addition, on the basis of converting the voice instruction into the text instruction, a voice characteristic may also be determined according to the voice instruction, where the voice characteristic may be used to distinguish users from each other (for example, the user's identity characteristic). For example, the user's identity characteristic may be determined according to a voiceprint characteristic of a target audio in the voice instruction, and then according to the determined user's identity characteristic, a game character corresponding to the identity characteristic may be controlled. For example, when the AR game is a multiplayer game, multiple game characters are usually needed in the game interface, and each game character corresponds to a control user. In this embodiment, the voice characteristic may be first determined according to the voice instruction, and then the target game character may be determined according to the voice characteristic, to control the target game character according to the game control instruction.
It is illustrated by taking the AR shooting game as an example, where this type of game may enable multiple persons to control multiple different game characters on a same terminal device. For example, user A controls character A and user B controls character B. In this case, after user A and/or user B issues a voice instruction, voiceprint recognition may be first performed on the voice instruction, and then the virtual object to be controlled may be determined according to the recognized voiceprint characteristic, so as to control the corresponding game character or attack weapon to attack. Alternatively, multiple persons may control multiple different game characters on different terminal devices. In this case, the distances between the users are usually small, and the voice instructions issued by individual users tend to interfere with each other. For example, user A controls terminal A and user B controls terminal B, but the voice instruction issued by user A is easily executed by terminal B by mistake due to the close distance between them. Therefore, after user A and/or user B issues a voice instruction, it is necessary to first perform voiceprint recognition on the voice instruction, and then control the game character or attack weapon in the corresponding terminal to attack.
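A minimal sketch of routing a voice instruction to the speaker's own game character by voiceprint matching (in Python); the embedding-based speaker identification, the enrolled voiceprints and the similarity threshold are illustrative assumptions:

```python
# Route a voice instruction to the speaker's own game character. The
# embedding-based speaker identification and the threshold are assumptions.
import numpy as np

def identify_speaker(voice_embedding, enrolled_voiceprints, threshold=0.8):
    """Return the user id whose enrolled voiceprint best matches the input."""
    query = np.asarray(voice_embedding, dtype=np.float64)
    best_user, best_score = None, threshold
    for user_id, reference in enrolled_voiceprints.items():
        ref = np.asarray(reference, dtype=np.float64)
        score = float(np.dot(query, ref) /
                      (np.linalg.norm(query) * np.linalg.norm(ref)))
        if score > best_score:
            best_user, best_score = user_id, score
    return best_user

# The matched user id is then mapped to the character that user controls, so
# user A's "bang bang bang" does not fire user B's weapon by mistake.
```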
At step 203, a target keyword matching the text instruction is determined from a preset keyword set.
Specifically, the text instruction may be first determined according to the voice instruction, then the target keyword matching the text instruction is determined from the preset keyword set, and finally a game control instruction is determined according to the preset instruction mapping relationship and the target keyword.
At step 204, a game control instruction is determined according to the preset instruction mapping relationship and the target keyword.
The preset keyword set may be defined in advance. When a valid keyword is obtained from the input, the corresponding game control instruction may be obtained through the established mapping relationship between keywords and game control instructions. In this way, the AR game can be quickly controlled by inputting the voice instruction.
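Putting steps 202 to 204 together, a rough end-to-end sketch (in Python) may look as follows, again with an illustrative keyword set and mapping:

```python
# End-to-end sketch of steps 202-204: recognized text -> target keyword ->
# game control instruction. Keyword set and mapping are illustrative.
PRESET_KEYWORDS = {"rifle", "fort", "bang bang bang", "boom boom boom"}
KEYWORD_TO_INSTRUCTION = {
    "rifle": "SWITCH_WEAPON_RIFLE",
    "fort": "SWITCH_WEAPON_FORT",
    "bang bang bang": "FIRE_RIFLE",
    "boom boom boom": "FIRE_FORT",
}

def control_instruction_from_text(text_instruction):
    # Step 203: determine the target keyword matching the text instruction.
    target_keyword = next((k for k in PRESET_KEYWORDS if k in text_instruction), None)
    if target_keyword is None:
        return None
    # Step 204: determine the game control instruction from the mapping.
    return KEYWORD_TO_INSTRUCTION.get(target_keyword)

print(control_instruction_from_text("switch to rifle"))  # SWITCH_WEAPON_RIFLE
```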
As shown in
During the running of the AR game, video information may be acquired through a front camera of the device, and then a current holding mode may be determined according to the video information. If the holding mode is inconsistent with a target holding mode, a prompt message is displayed, where the prompt message is used to prompt the user to adjust the holding mode of the device. It is worth noting that, when the mobile phone is held in the holding mode as shown in
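A minimal sketch of such a holding-mode check (in Python); detect_holding_mode is a hypothetical classifier standing in for whatever vision-based detection the device uses, and the mode names are illustrative:

```python
# Check the holding mode from front-camera frames and prompt the user to adjust
# it when it differs from the target mode. detect_holding_mode is a hypothetical
# classifier, and the mode names are illustrative.
def monitor_holding_mode(frames, detect_holding_mode, target_mode="two_hands_landscape"):
    """Yield a prompt message whenever the detected holding mode is not the target."""
    for frame in frames:
        current_mode = detect_holding_mode(frame)  # e.g. "one_hand_portrait"
        if current_mode != target_mode:
            yield "Please adjust how you hold the device to the recommended mode."
```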
According to one or more embodiments of the present disclosure, the acquiring module 301 is specifically configured to:
According to one or more embodiments of the present disclosure, the processing module 302 is further configured to determine the target object according to the voice instruction, where the target object has been displayed in the combat interface before the voice instruction is input.
According to one or more embodiments of the present disclosure, the processing module 302 is further configured to: determine the target object according to the voice instruction, and generate and display the target object in the combat interface.
According to one or more embodiments of the present disclosure, the processing module 302 is further configured to: determine position information of the target object according to the voice instruction; and generate and display the target object at a position corresponding to the position information in the combat interface.
According to one or more embodiments of the present disclosure, an interface trigger control corresponding to the target object is located in a secondary interface of the combat interface, and the secondary interface is an interface that is invoked after a specific control is triggered in the combat interface.
According to one or more embodiments of the present disclosure, the target object includes a game prop, and the controlling module 303 is specifically configured to:
According to one or more embodiments of the present disclosure, the target audio includes an onomatopoeic word corresponding to execution of the target action by the game prop.
According to one or more embodiments of the present disclosure, the target object includes a game prop, and the controlling module 303 is specifically configured to:
According to one or more embodiments of the present disclosure, the controlling module 303 is specifically configured to:
According to one or more embodiments of the present disclosure, the processing module 302 is specifically configured to:
According to one or more embodiments of the present disclosure, the controlling module 303 is specifically configured to:
It is worth noting that the apparatus for controlling an AR game provided by the embodiment shown in
As shown in
Generally, the following means may be connected to the I/O interface 405: an input means 406 including, for example, a touch screen, a touch panel, a keyboard, a mouse, a camera, a microphone, an accelerometer, a gyroscope, and the like; an output means 407 including a liquid crystal display (LCD), a speaker, a vibrator and the like; the storage means 408 including a magnetic tape, a hard disk and the like; and a communication means 409. The communication means 409 may allow the electronic device 400 to perform wireless or wired communication with other devices for data exchange. Although the electronic device 400 shown in
Particularly, according to the embodiments of the present disclosure, the process described above with reference to the flowchart may be implemented as a computer software program. For example, the embodiments of the present disclosure include a computer program product, which includes a computer program carried on a non-transitory computer readable medium, and the computer program contains program code for implementing the method shown in the flowchart of the embodiments of the present disclosure. In such an embodiment, the computer program may be downloaded and installed from the network through the communication means 409, or installed from the storage means 408 or the ROM 402. When the computer program is executed by the processor 401, the above functions defined in the method of the embodiments of the present disclosure are implemented.
It should be noted that the above-mentioned computer-readable medium of the present disclosure may be a computer-readable signal medium or a computer-readable storage medium or any combination of the above two. The computer-readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus or device, or a combination of any of the above. More specific examples of the computer-readable storage medium may include, but are not limited to, an electrical connection with one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read only memory (ROM), an erasable programmable read only memory (EPROM), a flash memory, an optical fiber, a portable compact disk read only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combinations of the above. In the present disclosure, the computer-readable storage medium may be any tangible medium that contains or stores a program, where the program may be used by or in connection with an instruction execution system, apparatus or device. And in this disclosure, the computer-readable signal medium may include a data signal propagated in a baseband or as part of a carrier wave, in which computer-readable program codes are carried. This propagated data signal may be in various forms, including but not limited to an electromagnetic signal, an optical signal or any suitable combination of the above. The computer-readable signal medium may also be any computer-readable medium other than the computer-readable storage medium, and the computer-readable signal medium may send, propagate or transport a program for use by or in connection with the instruction execution system, apparatus or device. The program codes contained on the computer readable medium may be transmitted by any suitable medium, including but not limited to an electric wire, an optical cable, a radio frequency (RF) and the like, or any suitable combination of the above.
The computer readable medium may be included in the electronic device, or it may exist separately without being assembled into the electronic device.
The computer-readable medium carries one or more programs, and the one or more programs, when being executed by the electronic device, cause the electronic device to: acquire a voice instruction during running of an AR game; determine a game control instruction, according to the voice instruction and a preset instruction mapping relationship; and control a virtual object in the AR game according to the game control instruction.
Computer program codes for implementing the operations of the present disclosure may be written in one or more programming languages or their combinations, where the programming languages include but are not limited to: object-oriented programming languages, such as Java, Smalltalk, and C++; and conventional procedural programming languages, such as “C” language or similar programming languages. The program codes may be executed completely on the user's computer, partially on the user's computer, as an independent software package, partially on the user's computer and partially on a remote computer, or completely on a remote computer or a server. In the case where a remote computer is involved, the remote computer may be connected to the user computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it may be connected to an external computer (for example, using an internet service provider to connect through the internet).
In some embodiments, the client and the server may communicate by using any currently known or future developed network protocol such as hypertext transfer protocol (HTTP), and may be interconnected with any form or medium of digital data communication (e.g., a communication network). Examples of the communication networks include a local area network (“LAN”), a wide area network (“WAN”), an internetwork (e.g., the Internet) and a peer-to-peer network (e.g., an ad hoc peer-to-peer network), as well as any currently known or future developed networks.
The flowchart and block diagram in the drawings illustrate the architecture, functions and operations of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagram may represent a module, program segment, or part of codes, where the module, program segment, or part of codes contains one or more executable instructions for implementing the specified logical functions. It should also be noted that, in some alternative implementations, the functions marked in the blocks may also occur in a different order from those marked in the drawings. For example, two consecutive blocks may actually be executed substantially in parallel, and sometimes they may be executed in a reverse order, depending on the functions involved. It should also be noted that each block in the block diagram and/or flowchart, and the combination of blocks in the block diagram and/or flowchart, may be implemented by a dedicated hardware-based system that performs specified functions or operations, or may be implemented by a combination of dedicated hardware and computer instructions.
The modules involved in the embodiments described in the present disclosure may be implemented in software or hardware. In some cases, the name of the module does not limit the unit itself. For example, a display module may also be described as “a unit that displays a target face and a face mask sequence”.
The functions described above in the context may be at least partially performed by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field programmable gate arrays (FPGA), application specific integrated circuits (ASIC), application specific standard products (ASSP), system on chips (SOC), complex programmable logic devices (CPLD), etc.
In the context of the present disclosure, a machine-readable medium may be a tangible medium that may contain or store a program for use by or in connection with an instruction execution system, apparatus or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus or device, or any suitable combination of the above. More specific examples of the machine-readable storage medium may include an electrical connection based on one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM), a flash memory, an optical fiber, a compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
The embodiments of the present disclosure also provide a computer program which, when being executed by a processor, causes the method for controlling an AR game provided by any of the above embodiments to be implemented.
In a first aspect, according to one or more embodiments of the present disclosure, a method for controlling an AR game is provided, and the method includes:
According to one or more embodiments of the present disclosure, the acquiring a voice instruction during running of the AR game includes:
According to one or more embodiments of the present disclosure, before controlling the AR game according to the game control instruction, it further includes:
According to one or more embodiments of the present disclosure, before controlling the AR game according to the game control instruction, it further includes:
According to one or more embodiments of the present disclosure, the generating and displaying the target object in the interface includes:
According to one or more embodiments of the present disclosure, an interface trigger control corresponding to the target object is located in a secondary interface of the interface, and the secondary interface is an interface that is invoked and displayed after a specific control is triggered in the interface.
According to one or more embodiments of the present disclosure, the target object includes a game prop in the interface, and the controlling the virtual object in the AR game according to the game control instruction includes:
According to one or more embodiments of the present disclosure, the target audio includes an onomatopoeic word corresponding to execution of the target action by the game prop.
According to one or more embodiments of the present disclosure, the target object includes a game prop in the interface, and the controlling the virtual object in the AR game according to the game control instruction includes:
According to one or more embodiments of the present disclosure, the determining a game control instruction according to the voice instruction and a preset instruction mapping relationship includes:
According to one or more embodiments of the present disclosure, the method for controlling an AR game further includes:
According to one or more embodiments of the present disclosure, the target object includes a plurality of game characters in the interface, and the controlling a virtual object in the AR game according to the game control instruction includes:
In a second aspect, according to one or more embodiments of the present disclosure, an apparatus for controlling an AR game is provided, and the apparatus includes:
According to one or more embodiments of the present disclosure, the acquiring module is specifically configured to:
According to one or more embodiments of the present disclosure, the processing module is further configured to determine the target object according to the voice instruction, where the target object has been displayed in the combat interface before the voice instruction is input.
According to one or more embodiments of the present disclosure, the processing module is further configured to: determine the target object according to the voice instruction, and generate and display the target object in the combat interface.
According to one or more embodiments of the present disclosure, the processing module is further configured to: determine position information of the target object according to the voice instruction; and generate and display the target object at a position corresponding to the position information in the combat interface.
According to one or more embodiments of the present disclosure, an interface trigger control corresponding to the target object is located in a secondary interface of the combat interface, where the secondary interface is an interface that is invoked after a specific control is triggered in the combat interface.
According to one or more embodiments of the present disclosure, the target object includes a game prop, and the controlling module is specifically configured to:
According to one or more embodiments of the present disclosure, the target audio includes an onomatopoeic word corresponding to execution of the target action by the game prop.
According to one or more embodiments of the present disclosure, the target object includes a game prop, and the controlling module is specifically configured to:
According to one or more embodiments of the present disclosure, the controlling module is specifically configured to:
According to one or more embodiments of the present disclosure, the processing module is further configured to:
According to one or more embodiments of the present disclosure, the controlling module is specifically configured to:
In a third aspect, the embodiments of the present disclosure provide an electronic device, including:
In a fourth aspect, the embodiments of the present disclosure provide a computer-readable storage medium storing computer-executable instructions thereon. When a processor executes the computer-executable instructions, the method for controlling an AR game described in the first aspect and various possible designs of the first aspect above is implemented.
In a fifth aspect, the embodiments of the present disclosure provide a computer program product, including a computer program carried on a computer-readable medium. When the computer program is executed by a processor, the method for controlling an AR game described in the first aspect and various possible designs of the first aspect above is implemented.
In a sixth aspect, the embodiments of the present disclosure provide a computer program which, when being executed by a processor, causes the method for controlling an AR game described in the first aspect and various possible designs of the first aspect above to be implemented.
The above description merely illustrates preferred embodiments of the present disclosure and the technical principles applied. It should be understood by those skilled in the art that the scope involved in the present disclosure is not limited to the technical solutions formed by specific combinations of the above technical features, but should also cover other technical solutions formed by any combination of the above technical features or their equivalent features without departing from the above disclosed concept, for example, a technical solution formed by replacing the above features with (but not limited to) technical features with similar functions disclosed in the present disclosure.
In addition, although the operations are depicted in a specific order, this should not be understood that these operations are required to be performed in the specific order shown or in a sequential order. Under certain circumstances, multitasking and parallel processing may be beneficial. Similarly, although several specific implementation details are included in the above discussion, these should not be interpreted as limiting the scope of the present disclosure. Some features described in the context of separate embodiments may also be implemented in a single embodiment in combination. On the contrary, various features described in the context of a single embodiment may also be implemented in multiple embodiments alone or in any suitable sub-combination.
Although the subject matter has been described in language specific to structural features and/or logical acts of methods, it should be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. On the contrary, the specific features and actions described above are only example forms for realizing the claims.
This application is a national stage of the International application PCT/CN2021/111872, filed on Aug. 10, 2021. This International application claims priority to Chinese Patent Application No. 202011182612.9, filed on Oct. 29, 2020, and the contents of these applications are hereby incorporated by reference in their entireties.