The invention relates to a method of recognizing objects for a video game system.
Such a system is known from document WO 01/95988 A1. That document describes a video hunter game involving two remotely-guided vehicles with on-board video cameras. One of the two remotely-guided vehicles is the hunter and the other is the prey. The video images from the video camera of the hunter vehicle are transmitted to a control unit where they are displayed. Those video images are scanned in order to detect the image of the adversary vehicle. If the adversary vehicle is detected in the video image, it is replaced in the image by a character of the virtual game. Thus, the player using the control unit to drive the hunter vehicle sees on the video image not an image of the adversary vehicle, but a virtual image of a game character that the player is chasing with the vehicle.
The object recognition method disclosed in document WO 01/95988 A1 is nevertheless not applicable to a shooter game. The aim of the present invention is thus to propose an object recognition method for a video shooter game.
According to the invention, this aim is achieved by a method of recognizing objects for a video shooter game system, the system comprising:
a first remote-controlled vehicle including an on-board video camera;
a second remote-controlled vehicle; and
an electronic video display entity serving to remotely control the first vehicle;
the method comprising the following steps:
displaying the image delivered by the on-board camera on the video display of the electronic entity;
displaying virtual cross-hairs that are movable in the video image;
detecting a command to fire a virtual shot being input into the electronic entity;
acquiring the position A of the virtual cross-hairs in the video image;
recognizing the second remote-controlled vehicle in the video image; and
if recognition fails, invalidating the virtual shot; or
if recognition succeeds: acquiring the position B of the second vehicle in the video image, comparing the position B with the position A, and validating the virtual shot if the two positions coincide, or invalidating it otherwise. A schematic sketch of these steps is given below.
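The sketch below is purely illustrative: it restates the steps above in Python, with hypothetical function and type names that do not designate any actual implementation of the method.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

Point = Tuple[float, float]  # pixel coordinates in the video image

@dataclass
class Detection:
    position: Point  # estimated position B of the second vehicle

def positions_coincide(a: Point, b: Point, tol: float = 10.0) -> bool:
    # In practice "identical" positions are compared with a small tolerance.
    return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2 <= tol ** 2

def validate_shot(pos_a: Point, detection: Optional[Detection]) -> bool:
    """Invalidate the virtual shot if recognition failed; otherwise compare
    the cross-hairs position A with the vehicle position B."""
    if detection is None:       # recognition of the second vehicle failed
        return False            # virtual shot invalidated
    return positions_coincide(pos_a, detection.position)

print(validate_shot((320.0, 240.0), Detection(position=(322.0, 238.0))))  # True
```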
The electronic entity of the system is preferably a portable console with a video screen.
By means of the method of the invention, it is possible to provide a video shooter game with a plurality of real remote-controlled toys. Each participant drives a remote-controlled vehicle and can make use of a real environment or setting in which the remote-controlled vehicles move.
In particular, the players may make use of real obstacles in order to attempt to protect their remote-controlled vehicles from shots fired by other players. In the invention, the shots fired by the vehicles are fictitious only, and they are simulated by the game system. The invention thus provides a novel combination between the aspect of a conventional video shooter game that is entirely virtual, and a conventional remote-controlled vehicle game that is entirely real.
By means of the invention, the players driving the remote-controlled vehicles can make use of elements of the real setting as elements of the game.
Preferably, the second vehicle is recognized by recognizing distinctive elements arranged on the second vehicle, namely light-emitting diodes (LEDs).
In a preferred application, the LEDs are arranged in the form of a specific shape. The LEDs may also flash at a determined frequency, have specific colors, and/or have colors that vary over time, so as to make it easier to recognize the second vehicle.
Recognizing the LEDs may also include measuring their total brightness in order to make it possible to estimate the fraction of the second vehicle that is visible as a function of the measured brightness value.
By measuring brightness in this way, it is possible to estimate whether a remote-controlled vehicle is partially hidden by an obstacle of the real setting. Such recognition of part of the vehicle can then influence the decision on whether or not a fictitious shot should be validated. In particular, the quantity of fictitious damage caused by the fictitious shot may be a function of the size of the detected fraction of the second vehicle.
Preferably, the method of the invention also makes use of speed and/or acceleration and/or position sensors on the target vehicle and/or on the shooter vehicle. The sensors enable the two vehicles to determine their own three-dimensional coordinates in real time. These coordinates may then be transmitted by radio means.
Preferably, the method of the invention further comprises a step of predicting the movement of the second vehicle in the video image, on the basis:
of a measurement of the movement of the first vehicle; and/or
of the earlier movement of the second vehicle in the video image.
The two above characteristics, i.e. using sensors on board the vehicles and predicting the movement of the second vehicle in the video image, may be coupled together. Software combines position information so as to obtain pertinent information that is unaffected by the drift to which the inertial units on board each of the two vehicles may be subject.
In the preferred version of the invention, the vehicles obtain fine estimates of their positions. The fact that an enemy vehicle disappears suddenly from the estimated position indicates that it is very probably hidden behind a real obstacle.
The feature whereby a virtual shooter game takes account of obstacles that are real is the preferred aspect of the invention.
Implementations of the invention are described below with reference to the accompanying drawings.
FIG. 1a shows the operation of a first system for simulating fictitious events in accordance with the invention;
FIG. 1b shows a second system enabling imaginary objects, e.g. obstacles, to be added;
FIG. 1c shows imaginary elements added by the game on the display of the game console;
FIG. 1d shows a third system of the invention with an on-board autopilot;
FIG. 2a shows the operation of a second simulation system of the invention associated with an autopilot;
FIG. 2b shows the complete system: fictional events, imaginary objects, and an autopilot interacting in a very complete game;
FIGS. 4a and 4b show the video display of a first shooter game of the invention;
FIGS. 6a, 6b, and 6c show various video images taken from the second shooter game of the invention; and
FIGS. 7a and 7b show the video image of the second shooter game while a player is firing a fictitious shot.
FIG. 1a shows the concept of the invention, which is that of an "open-loop" video game.
A simulator 1 supervises the operation of a radio-controlled toy 3.
The simulator 1 of the video game modifies the driving instructions 5 coming from the player 7. The toy 3 receives driving directives 9 from the simulator 1. These directives 9 are generated by the simulator 1 while taking account of the driving instructions 5. The behavior of the toy 3 depends not only on the received directives 9, but also on physical events that are external to the game.
Sensors 13 arranged on the toy 3 send information about the environment of the toy 3 to display means 15 of the video game. The information coming from the sensors 13 enables the video game system to estimate the changes of state of the toy 3 in its real environment. The display means 15 use the information from the sensors 13 to generate a display on a screen 21 of a control unit 23 handled by the player 7.
The sensors 13 comprise in particular a video camera 25 on board the remote-controlled toy 3. This video camera 25 delivers video images that are displayed by the display means 15 on the screen 21 used by the player 7. The video camera 25 thus gives the player 7 a perspective as “perceived” by the remote-controlled vehicle 3.
The toy may also be provided with other additional sensors. These may be very simple sensors, such as accelerometers, or extremely sophisticated ones, such as an inertial unit. By means of the sensors 13, the video game can construct additional display elements. For example, with a gyro and/or accelerometers and display software, the video game can reconstitute an artificial horizon if the remote-controlled vehicle is a remotely-guided airplane.
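By way of a purely illustrative sketch of how such an artificial horizon might be reconstituted, the following assumes quasi-static flight so that gravity dominates the accelerometer reading; a real system would also fuse gyro data to handle dynamic flight.

```python
import math

def attitude_from_accelerometer(ax: float, ay: float, az: float):
    """Estimate roll and pitch (in radians) from a 3-axis accelerometer,
    assuming gravity dominates the measurement (quasi-static flight).
    Axes follow the aeronautical convention: x forward, y right, z down."""
    roll = math.atan2(ay, az)
    pitch = math.atan2(-ax, math.hypot(ay, az))
    return roll, pitch

# Level flight: gravity lies entirely along the z (down) axis.
print(attitude_from_accelerometer(0.0, 0.0, 9.81))  # -> (0.0, 0.0)
```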
There follows a description in detail of the role and the operation of the simulator 1. The simulator 1 is situated between the player 7 and the radio-controlled toy 3. It receives driving instructions 5 from the player 7. These driving commands or actions 5 represent changes that the player 7 seeks to impart to the propulsion elements (such as the engine of the toy 3) and/or guidance elements (such as the control surface of the toy 3), e.g. for the purpose of directing the toy 3 in a certain direction.
These driving actions 5 are not transmitted directly, as is, to the remote-controlled toy. The toy 3 is decoupled from driving by the player 7 by the simulator 1. It is the simulator 1 that directly controls the toy 3 by sending it directives 9. These directives 9 are created by the simulator 1 while taking account of the instructions 5.
The particularly advantageous aspect is that the simulator 1 generates the directives 9 not only on the basis of the instructions 5, but also, and above all, on the basis of “handling characteristics” that are automatically generated by the simulator 1. These handling characteristics are created as a function of the video game that has been selected by the player 7. Depending on the selected video game, the simulator 1 simulates novel events that are absent from the physical world and that have an impact on the toy 3. These fictional events are “translated” into handling characteristics that modify the directives 9 sent to the toy 3, so that the behavior of the toy 3 is modified thereby.
For example, the simulator 1 may simulate a breakdown of the engine of the toy 3. If the toy 3 is an airplane, the simulator 1 may make the airplane artificially “heavier” by creating handling characteristics that give the player 7 the impression that the airplane 3 is responding more slowly to commands 5 than usual. The simulator 1 may also create complete fictitious scenarios such as the airplane 3 flying through a storm. In this example, the simulator 1 generates handling characteristics that give rise to directives 9 that have the effects of causing the remote-controlled airplane 3 to shake as though it were being subjected to gusts of wind.
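A minimal sketch of how such "handling characteristics" might transform the player's instructions 5 into the directives 9 is given below; the names and the form of the transformation are illustrative assumptions, not the actual mechanism of the invention.

```python
import math
from dataclasses import dataclass

@dataclass
class Handling:
    """Hypothetical handling characteristics generated by the simulator.
    gain < 1.0 makes the toy feel heavier or less responsive; bias models
    a permanent deviation; shake_amplitude models storm-like shaking."""
    gain: float = 1.0
    bias: float = 0.0
    shake_amplitude: float = 0.0

def directive(instruction: float, h: Handling, t: float) -> float:
    """Turn the player's instruction 5 into the directive 9 sent to the toy."""
    shake = h.shake_amplitude * math.sin(20.0 * t)  # fast pseudo-gusts
    return h.gain * instruction + h.bias + shake

storm = Handling(gain=0.6, shake_amplitude=0.3)  # "storm" scenario
print(directive(1.0, storm, t=0.05))             # sluggish and shaken
```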
Thus, the simulator 1 acts in a complex manner and in real time on the driving of the toy 3 so as to give the player 7 a very rich game experience. This is not merely a question of monitoring the player's driving in order to take action in the event of errors or danger: the simulator 1 exerts an active and deliberate influence on the driving of the toy 3 so as to give it behavior that is more varied and more entertaining.
The main difference compared with a conventional video game, taking place entirely on a computer without any real remote-controlled vehicle, is that the change of state is not produced solely by the simulation: it results from the open loop acting on the real toy, and that change of state is measured by the sensors.
FIG. 1b shows a more complete version of the system of the invention. To further enrich the game scenarios, the simulator adds imaginary elements in addition to the simulated events. These imaginary elements may for example be obstacles, or, more interestingly, virtual objects that exhibit behavior of their own, for example virtual enemies. The imaginary elements may also be virtual elements of the radio-controlled toy itself, such as a weapons system having a sight and a virtual weapon that fires virtual projectiles.
Under such circumstances, two additional feedback loops are added to the system. In the first loop, information from the sensors of the toy is used by the simulator so as to estimate the position of the radio-controlled toy, e.g. to determine whether a shot from a virtual enemy has hit the toy.
The second feedback loop is defined between the simulator and the display software. The display software is informed about the movement of the virtual objects so as to be able to produce a composite display. For example, it adds virtual elements to the video image: obstacles; virtual enemies; or indeed elements of the shooter system, such as the virtual cross-hairs 43 described below.
FIG. 1c shows an example of augmented reality. Virtual elements are added, namely an enemy, projectiles, and a landing zone. The image is a composite of the real world plus imaginary objects.
FIG. 1d shows the loop that exists when there is an on-board autopilot. A feedback loop takes place within the drone itself, using the same sensors as those used for the display loop.
FIG. 2a shows an even more complete version of the system of the invention.
The system has an autopilot 27 added thereto. This serves to servo-control the operation of the airplane 3. By virtue of the autopilot 27, the airplane 3 can be made more stable and predictable in its behavior. Thus, the sources of interaction between the video game 29 and the toy 3 are more numerous. In a system with an autopilot 27, the toy 3 may be said to be a drone since it has the ability to move autonomously in the real setting without needing to be driven by the player 7.
The autopilot 27 has a command envelope. If the vehicle 3 is a tank, then the envelope may for example define its maximum speed, its maximum acceleration, its turning speed, etc.
If the vehicle 3 is a quadricopter, the command envelope of the autopilot 27 may define its maximum rate of climb, its maximum angular speed, and a description of the main transition between hovering flight and forward flight.
The command envelope of the autopilot 27 is thus a data set defining constraints on the movement that the vehicle 3 is capable of performing. The command envelope of the autopilot 27 thus limits the capabilities of the vehicle 3.
By manipulating the envelope of the autopilot, the video game can simulate several physical quantities. For example, by changing the data of the envelope so as to limit the acceleration of the vehicle, it is possible to simulate a heavier vehicle having greater inertia. Such an envelope has the effect of delivering less power to the engine of the vehicle 3, under the control of the simulator 1.
Thus, the simulator 1 can create a wide variety of fictitious scenarios. For example, at the beginning of the game sequence, the simulator 1 may simulate a vehicle 3 that is heavier because it has a full load of fuel. As the game progresses, the simulator 1 then simulates a lightening of the vehicle 3. Alternatively, the simulator 1 may simulate a mission that consists in unloading fictitious equipment being transported by the vehicle 3. Once more, the simulator 1 generates handling characteristics that cause the directives 9 to give the player 7 the impression that the weight of the vehicle 3 changes during the game as the fictitious equipment is unloaded. The directive sent by the simulator to the autopilot in this example serves to modify the virtual weight of the drone. At the beginning of the game, the handling characteristics may impose a maximum speed and a maximum acceleration on the toy 3 that are relatively low. The player 7 is therefore unable to make the toy 3 go at high speed, even when the driving commands 5 request it. As the game progresses, the handling characteristics change, thereby progressively raising the speed and acceleration limits that the toy 3 is capable of achieving. At a late stage in the game, the player can thus reach higher speeds with the toy 3. In this way, the player genuinely has the impression of driving a toy that becomes lighter as time advances, even though this is achieved solely by simulation.
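A minimal sketch of such a command envelope is given below, assuming for illustration that it bounds only speed and acceleration; the type and field names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class CommandEnvelope:
    """Hypothetical data set bounding the movements the vehicle may perform."""
    max_speed: float         # m/s
    max_acceleration: float  # m/s^2

def constrain(requested: float, current: float,
              env: CommandEnvelope, dt: float) -> float:
    """Clamp a requested speed to the envelope: never exceed max_speed, and
    never change speed faster than max_acceleration allows over one step dt."""
    target = max(-env.max_speed, min(env.max_speed, requested))
    max_delta = env.max_acceleration * dt
    delta = max(-max_delta, min(max_delta, target - current))
    return current + delta

# Fully fuelled (heavy) vehicle versus the same vehicle once lightened:
heavy = CommandEnvelope(max_speed=2.0, max_acceleration=0.5)
light = CommandEnvelope(max_speed=5.0, max_acceleration=2.0)
print(constrain(10.0, 0.0, heavy, dt=0.1))  # 0.05 m/s: sluggish response
print(constrain(10.0, 0.0, light, dt=0.1))  # 0.20 m/s: livelier response
```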
In theory, the controls of the autopilot could be implemented directly by the simulator in a "remote autopilot" mode. In terms of system design, however, it is much more efficient for the autopilot to be on board the toy 3. Because the autopilot 27 handles low-level control, both the amount of information to be exchanged and its real-time criticality are reduced. The simulator sends higher-level directives to the autopilot, such as fictitious damage, a change in the weight of the vehicle, or instructions to carry out an emergency landing as a result of virtual events arising in the video game.
The simulator 1 may modify the instructions 5 from the player 7 to a certain extent, in particular by superposing on them new events of the same kind as the instructions. This superposition may be addition, subtraction, division, multiplication, setting limits, etc., and may be carried out using any combination of arithmetic and/or logical operations.
Such superposition, e.g. by adding or subtracting signals generated by the simulator 1 to or from the command signals 5 given by the player 7 is very useful for simulating events that seek to bias the driving of the vehicle 3.
For example, if the vehicle 3 is a tank, superposition makes it possible to simulate the tank receiving a hit from an adversary. The hit is simulated as being non-fatal but as damaging the tank. The simulator 1 thus superposes, on the actions 5 of the player, signals that simulate a tendency for the tank to turn slowly. The player 7 then needs to adapt the way the tank is driven, giving opposite-direction commands so as to compensate for the bias created by the simulator 1.
With a toy 3 in the form of a quadricopter, driving can be made more complex by simulating wind, i.e. by adding a drift component. It is possible to simulate in an even more complex manner, e.g. by simulating gusts of wind. The player then needs to compensate for these events as they follow on from one another.
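The following sketch illustrates, under purely hypothetical names, how several superposition operations (a constant bias, periodic gusts, actuator limits) might be chained into a single transform applied to the player's command signal.

```python
import math
from typing import Callable, Sequence

Superposition = Callable[[float, float], float]  # (command, time) -> command

def compose(ops: Sequence[Superposition]) -> Superposition:
    """Chain several superposition operations (addition, limits, ...) into
    one transform applied to the player's command signal."""
    def chained(cmd: float, t: float) -> float:
        for op in ops:
            cmd = op(cmd, t)
        return cmd
    return chained

hit_bias = lambda cmd, t: cmd + 0.15                     # damaged tank turns
gusts    = lambda cmd, t: cmd + 0.3 * math.sin(3.0 * t)  # successive gusts
limiter  = lambda cmd, t: max(-1.0, min(1.0, cmd))       # actuator limits

steering = compose([hit_bias, gusts, limiter])
print(steering(0.2, t=1.0))  # the player must steer against the result
```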
In another mode of interaction, the video game takes complete control of the toy 3, either temporarily or permanently. For example, if the vehicle 3 is a tank, it is possible to simulate that it has been hit. The simulator 1 then takes exclusive control and causes the vehicle to spin round and shake. Thereafter, control is returned to the player.
Another example: the vehicle receives a fatal hit, which is simulated in such a manner that the vehicle performs a final lurch before stopping completely. The game is then over.
FIG. 2b shows a complete simulation system in which the feedback loops combine at three different levels to create a complete augmented reality game.
The first element is the device that transforms the "instructions" from the player into directives, e.g. for the purpose of adding imaginary events.
The first feedback loop is the loop in which virtual objects are added to the system by the simulator. The display software then combines the measurements from the sensors and the information from the simulator to produce a composite display comprising the real image plus the virtual objects.
The second feedback loop is that in which the measurements from the sensors of the radio-controlled toy are used by the simulator. They enable it to simulate virtual objects, for example to verify whether the toy is colliding with a virtual obstacle, or to inform a virtual enemy seeking to chase the radio-controlled toy.
The third loop is that of the autopilot, which serves to match the behavior envelope of the vehicle to high-level directives from the simulator, for example to make the vehicle virtually heavier so that it behaves differently.
The tanks 31 and 33 and the consoles 35 and 37, shown in the accompanying drawings, can be used by two players to engage in a shooter game.
The players play against each other, each via a game console 35, 37 and a video toy 31, 33. The game consists mainly in causing the tanks 31, 33 to move, in ordering movements of the tanks' turrets and guns, in moving virtual cross-hairs 43 over the screen 39 of the game console 35, 37, in superposing the virtual cross-hairs 43 on the image 45 of the adversary, and in firing a virtual shot at the adversary.
These various steps are shown in FIGS. 4a and 4b.
With the controls 41 on the console, the player moves the virtual cross-hairs 43 over the video image 47 until they are superposed on the image 45 of the adversary, as shown in FIGS. 4a and 4b.
In this variant, the game takes place not between two remotely-controlled tanks, but between a toy in the form of an anti-aircraft vehicle 51 having an on-board video camera 25, and a toy in the form of a quadricopter 53.
FIGS. 6a to 6c show the viewpoint delivered to the driver of the vehicle 51 by the video camera 25 on board that vehicle.
The game system of the invention is capable of estimating whether the quadricopter 53 is visible, partially visible, or hidden in the video images, as shown in FIGS. 6a to 6c.
By such recognition of objects in the image, the game system is capable of validating a fictitious shot at the quadricopter 53 if the quadricopter is recognized as being visible in the video image (and is properly aimed at). If the quadricopter 53 is recognized as being hidden, the shot in its direction is invalidated.
In this way, the players of the video game can make use of elements in the real setting as elements of the game.
FIGS. 7a and 7b show two video images as delivered by the camera 25 on board the anti-aircraft vehicle 51 described above.
Once the position A of the virtual cross-hairs at the moment of firing is known, image recognition software scans the video image in order to recognize the presence of the adversary vehicle 53, in a step 104.
The adversary may be recognized in the image in various ways. For example, the recognition software may merely scan the video image while attempting to find the known shape of the adversary vehicle 53. Nevertheless, in order to facilitate recognition, it is preferable for the adversary vehicle 53 to have recognition elements placed on its surface. By way of example, these may be reflecting elements.
In the preferred implementation of the invention, the adversary vehicle 53 has numerous flashing LEDs all around it. These LEDs are in a known configuration, they flash at a known frequency, and they are of known colors. The recognition software can thus detect the adversary vehicle 53 more easily in the video image.
It is also possible to envisage using multi-color LEDs that pass from green to red to orange so as to facilitate recognition. The LEDs may also emit in the infrared, in which case the recognition elements of the game system are no longer visible to the human eye.
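One possible way of exploiting a known flashing frequency is sketched below; this is an assumption for illustration (a per-pixel temporal FFT over grayscale frames stacked in a NumPy array), not the recognition method actually used.

```python
import numpy as np

def flashing_led_mask(frames: np.ndarray, fps: float,
                      led_hz: float, tol_hz: float = 1.0) -> np.ndarray:
    """Mark pixels whose brightness oscillates at the known LED frequency.

    frames: grayscale video stacked as an array of shape (T, H, W).
    A per-pixel temporal FFT finds the dominant frequency; pixels whose
    dominant component lies near led_hz are kept as LED candidates."""
    t = frames.shape[0]
    spectrum = np.abs(np.fft.rfft(frames - frames.mean(axis=0), axis=0))
    freqs = np.fft.rfftfreq(t, d=1.0 / fps)
    dominant = freqs[np.argmax(spectrum, axis=0)]
    return np.abs(dominant - led_hz) < tol_hz

# Synthetic check: one pixel flashing at 5 Hz in a 30 fps sequence.
times = np.arange(60) / 30.0
frames = np.zeros((60, 4, 4))
frames[:, 2, 2] = 0.5 + 0.5 * np.sin(2 * np.pi * 5.0 * times)
print(flashing_led_mask(frames, fps=30.0, led_hz=5.0)[2, 2])  # True
```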
The image recognition software may be located either in the remote-controlled vehicle 51, or in the control unit, i.e. the game console.
In order to further improve recognition, the recognition software may include an algorithm for tracking the recognition elements of the adversary vehicle 53. Image after image, this algorithm measures the movements of the recognition elements. For this purpose, it can rely on the measured movement of the shooter vehicle 51, combined with the previously-observed movement of the adversary vehicle 53, so as to predict where the adversary vehicle 53 ought to be found in the image.
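A minimal sketch of such a prediction is given below, assuming for illustration a constant-velocity model in image coordinates and a known image shift due to the shooter vehicle's own movement; all names are hypothetical.

```python
from typing import Tuple

Point = Tuple[float, float]

def predict_position(previous: Point, current: Point,
                     camera_shift: Point = (0.0, 0.0)) -> Point:
    """Constant-velocity prediction of where the adversary vehicle should
    appear in the next image: extrapolate its previously observed motion,
    then compensate for the shooter vehicle's own movement, which shifts
    the whole image by camera_shift pixels."""
    vx = current[0] - previous[0]
    vy = current[1] - previous[1]
    return (current[0] + vx - camera_shift[0],
            current[1] + vy - camera_shift[1])

# The prediction seeds a small search window for the recognition software.
print(predict_position(previous=(100, 80), current=(110, 82),
                       camera_shift=(5, 0)))  # -> (115, 84)
```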
In order to further improve recognition of position between radio-controlled toys, the sensors 13 on each radio-controlled toy are used. The information is sent not only to the game console used for driving the vehicle, but also to all the other game consoles. In this way, the simulator knows the estimated position of the enemy vehicle. The video recognition means then no longer serves to find the enemy vehicle in the image, but to verify whether or not there is a real object between the two vehicles. This principle is of great interest in the context of a video game: it makes it easy to take account of real obstacles, as in the video images of FIGS. 6a to 6c.
Returning to the steps of the recognition method described above:
If recognition is negative, the shot is invalidated in a step 105. In contrast, if the recognition software has recognized the adversary vehicle 53 in the video image, the next step 106 consists in acquiring the instantaneous position B of the adversary vehicle 53 in the video image. Thereafter, the position B of the adversary vehicle 53 is compared in a step 107 with the position A of the virtual cross-hairs 43 when the shot was fired. If the positions are identical, the shot is validated in a step 108, i.e. the fictitious shot has reached its target. If the positions are different, the shot is invalidated, in a step 109.
To make the recognition of objects even more effective, the adversary vehicle 53 may also transmit its instantaneous position to the shooter vehicle 51.
As described above, the vehicles have sensors, in particular inertial units. Throughout the duration of the game, the vehicles transmit their positions. When a vehicle comes within the field of view of the video camera, its position as transmitted by radio is therefore known, and it is used as an initial position by the detection algorithms, enabling them to converge more easily and more quickly.
The two methods of sensing and predicting video movements can be coupled together. Software makes use of the position information to obtain pertinent information that is unaffected by drift in the inertial unit.
Thus, in the preferred version of the invention, the vehicles produce fine estimates of their positions. The fact that the detection algorithm indicates that a vehicle has suddenly disappeared from the estimated position indicates that there is very probably a real obstacle between the two vehicles.
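The decision logic described here might be sketched as follows; the gating distance and the three outcomes are illustrative assumptions rather than the actual implementation.

```python
from typing import Optional, Tuple

Point = Tuple[float, float]

def interpret_detection(radio_estimate: Point,
                        visual_detection: Optional[Point],
                        gate: float = 30.0) -> str:
    """Combine the position transmitted by radio (subject to inertial drift)
    with the video detection. No detection at a position where the radio
    data says the vehicle should be is read as a real obstacle masking it."""
    if visual_detection is None:
        return "vehicle masked by a real obstacle"
    dx = visual_detection[0] - radio_estimate[0]
    dy = visual_detection[1] - radio_estimate[1]
    if dx * dx + dy * dy <= gate * gate:
        return "vehicle visible; drift-corrected position available"
    return "detection inconsistent; re-initialize from the radio position"

print(interpret_detection((120.0, 90.0), None))
```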
To summarize, by using image analysis software and position sensors, the game system knows the position of the adversary vehicle 53 in the image and can determine whether the adversary vehicle 53 is masked or out of sight: either the system detects the LEDs at a precise position in the image, or it detects nothing and concludes that the adversary vehicle 53 is masked.
It is also possible to imagine a game stage in which the shooter fires indirectly at a hidden adversary vehicle. The fictitious shot is nevertheless validated, since the position of the target is known from the inertial unit, on the assumption that the game involves firing a parabolic shot around the obstacle behind which the adversary vehicle is hiding.
It is also possible to imagine a game stage that is even more realistic, in which firing is indirect and the shooter aims the cross-hairs in front of the adversary vehicle. The simulation software then defines the position of the virtual projectile step by step, while the recognition and movement-prediction software tracks the movement of the adversary up to the point of virtual impact. The simulation software can simulate complex firing parameters: for example, the initial speed of a shell may be very high on firing and may decrease rapidly. The software can also simulate complex behaviors of munitions, such as a projectile that explodes close to the target, guided missiles, or homing missiles. This type of projectile simulation makes the game even more interesting: a player may see an adversary projectile being launched and then maneuver very quickly in order to avoid it. It also makes it possible to simulate situations that are extremely complex, such as a shooter game between fighter planes in which the trajectories of the projectiles depend on the trajectory of the firing airplane, and the position of the point of impact depends on the position of the adversary airplane several seconds after the shot was fired.
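The step-by-step definition of the virtual projectile's position might look like the following sketch, which assumes a simple linear drag model and a two-dimensional trajectory purely for illustration.

```python
def simulate_shell(p0, v0, drag=0.4, gravity=9.81, dt=0.02, t_max=5.0):
    """Step-by-step integration of a fictitious shell: a very fast initial
    speed that decays rapidly (drag term), plus gravity, yielding the
    parabolic trajectories needed for indirect fire around obstacles."""
    (x, y), (vx, vy) = p0, v0
    t, trajectory = 0.0, [(x, y)]
    while y >= 0.0 and t < t_max:
        vx -= drag * vx * dt                 # horizontal speed decays
        vy -= (drag * vy + gravity) * dt     # drag plus gravity
        x, y, t = x + vx * dt, y + vy * dt, t + dt
        trajectory.append((x, y))
    return trajectory  # each point is checked against the target's position

path = simulate_shell(p0=(0.0, 1.0), v0=(30.0, 5.0))
print(len(path), path[-1])  # point of virtual impact (y just below 0)
```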
Finally, the procedure for recognizing objects may also proceed with recognition that is partial, i.e. it may estimate whether or not the object is partially hidden. Such partial recognition may be performed by measuring variation in the brightness of the LEDs from one image to another. In order for this partial recognition to be effective, it is preferable to place numerous LEDs on the surfaces of the remote-controlled vehicle in question. By way of example, the system may assume that when brightness is halved, then half of the LEDs are masked, which means that the adversary vehicle is half hidden.
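The brightness-based estimate described above can be expressed directly; the sketch below simply applies the stated assumption that brightness scales with the visible fraction of the LEDs, with hypothetical names.

```python
def visible_fraction(measured_brightness: float,
                     reference_brightness: float) -> float:
    """Estimate the visible fraction of the adversary vehicle by comparing
    the total LED brightness in the current image with the reference
    brightness of the fully visible vehicle: halved brightness is read as
    half of the LEDs, hence half of the vehicle, being masked."""
    if reference_brightness <= 0.0:
        return 0.0
    return max(0.0, min(1.0, measured_brightness / reference_brightness))

# The fictitious damage of a validated shot can be scaled accordingly:
fraction = visible_fraction(420.0, 840.0)  # -> 0.5: vehicle half hidden
print(f"damage multiplier: {fraction:.2f}")
```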
Naturally, even if the video games described herein involve only two remote-controlled vehicles, the invention is not limited to a game having only one or two of them. The games described may be played with more than two players and more than two remote-controlled toys.
Priority application: FR 07 00998, filed February 2007 (national).
PCT filing: PCT/FR08/00180, filed Feb. 13, 2008 (WO), § 371(c) date Mar. 2, 2010.