METHOD AND DEVICE FOR DISPLAYING DATA FOR MONITORING EVENT

Information

  • Patent Application
  • Publication Number: 20220180570
  • Date Filed: January 29, 2020
  • Date Published: June 09, 2022
Abstract
An augmented reality method includes acquisition of a plurality of images by an image acquisition device that at least partially cover a space that has at least two landmarks. A three-dimensional position and orientation of the space in relation to the image acquisition device is determined. The instantaneous position, within the reference frame of the space, of a mobile element moving in the space is received. At least one acquired image is displayed on the screen. An overlay is superposed on the displayed image at a predetermined distance in relation to the position of the mobile element in the image. Also, a portable electronic device implements the method.
Description
TECHNICAL FIELD OF THE INVENTION

The field of the invention is that of digital data processing.


More precisely, the invention relates to a method and device for displaying data for monitoring an event.


The invention has in particular applications for live monitoring of a sports event, such as a soccer, rugby, basketball, tennis, etc. game, in a grandstand of a sports facility such as a stadium or a hall. The invention also has applications in the field of entertainment, for example for monitoring a game, a live performance or a concert.


BACKGROUND OF THE INVENTION

Techniques are known from the prior art that make it possible to monitor an event by displaying in particular in real time data, such as statistics linked to an individual or to a group of individuals participating in the event, such as for example the number of goals scored by a player during a soccer game, the number of aces or direct faults of a player during a tennis match, the rate of success of the 3-point shots of a player during a basketball game, etc.


Such data is generally displayed on a screen of the facility wherein the event is unfolding.


The major disadvantage of these techniques is that they are hardly intuitive for an individual monitoring the event from a grandstand of the facility. Furthermore, these techniques tend to divert the attention of the individual who has to turn their head to look at a screen of the facility.


None of the current systems meets all of these needs simultaneously, namely proposing a technique that allows an individual to monitor an event by intuitively displaying data associated with actors of the event without diverting the attention of the individual.


OBJECT AND SUMMARY OF THE INVENTION

The present invention aims to overcome all or a part of the disadvantages of the prior art mentioned hereinabove.


For this purpose, the invention relates to an augmented reality method in real time, comprising steps of:

    • acquiring a plurality of images by an image acquisition device that at least partially cover a space, the space having at least two landmarks, the image acquisition device being associated with a two-dimensional reference frame, referred to as the reference frame of the image, the image acquisition device being comprised in a portable electronic device also comprising a screen;
    • detecting at least two landmarks of the space in at least one image, the space being associated with a three-dimensional reference frame, referred to as the reference frame of the space;
    • determining a three-dimensional position and orientation of the space in relation to the image acquisition device thanks to the landmarks detected;
    • receiving the instantaneous position, within the reference frame of the space, of a mobile element moving in the space;
    • calculating the position of the mobile element in the reference frame of the image from transformation parameters between the reference frame of the space and the reference frame of the image, said transformation parameters being calculated from the three-dimensional position and orientation of the space in relation to the image acquisition device;
    • displaying at least one acquired image on the screen; and
    • superimposing on the image displayed on the screen at least one overlay at a predetermined distance in relation to the position of the mobile element in the reference frame of the image.
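
Taken together, these steps amount to a rigid transform followed by a perspective projection. The following minimal pure-Python sketch illustrates this chain under an assumed pinhole model; the rotation `R`, translation `t`, focal length and principal point are illustrative values, not parameters prescribed by the method:

```python
def project_to_image(p_space, R, t, f, cx, cy):
    """Map a 3-D point from the reference frame of the space to the 2-D
    reference frame of the image (pinhole model, no lens distortion).
    R (3x3 rotation, nested lists) and t (3-vector) are the transformation
    parameters from the space frame to the camera frame; f is the focal
    length in pixels and (cx, cy) the principal point."""
    # Rigid transform: express the point in the camera frame.
    pc = [sum(R[i][j] * p_space[j] for j in range(3)) + t[i] for i in range(3)]
    # Perspective projection onto the image plane.
    return (f * pc[0] / pc[2] + cx, f * pc[1] / pc[2] + cy)

# Example: a camera 10 m away, looking straight at the origin of the space.
R = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
t = [0.0, 0.0, 10.0]
u, v = project_to_image([0.0, 0.0, 0.0], R, t, f=1000.0, cx=640.0, cy=360.0)
print(u, v)  # the origin of the space lands on the principal point: 640.0 360.0
```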


Thus, by knowing the transformation between the reference frame of the space and the reference frame of the image, it is possible to display data associated with said mobile element participating in a live event, such as a sports game, a show or a concert, in the overlay that is displayed in the vicinity of the image of the mobile element on the screen retransmitting the image acquired by the image acquisition device.


The space can comprise for example a field of a match or a stage of a show hall. The space is generally delimited so as to allow spectators to monitor the live event unfolding in the space, for example from at least one grandstand in the vicinity of the field or stage.


The mobile element is generally an actor participating in the live event, such as a player participating in a sports game unfolding on the field, an actor in a show or a musician in a concert. The mobile element can also be an accessory such as a game accessory played with by players during a match. A game accessory is generally a ball, a puck or a shuttlecock.


It should be emphasized that the overlay makes it possible to display a piece of information, a static or animated image, a video, or any other element that makes it possible to embellish the event displayed on the screen.


The transformation parameters between the reference frame of the space, which is three-dimensional, and the reference frame of the image, which is two-dimensional, are generally calculated from the three-dimensional position of the space in relation to the image acquisition device and from the three-dimensional orientation of the space in relation to the image acquisition device, which can be a camera.


It should be emphasized that the image acquisition device can be represented in the reference frame of the space. This representation generally comprises the three-dimensional coordinates of the image acquisition device in the reference frame of the space and the three angles that make it possible to orient the image acquisition device in the reference frame of the space.


The transformation between the reference frame of the space and the reference space of the image generally comprises at least one translation, at least one rotation and a projection.


It should be emphasized that the determining of the three-dimensional position and orientation of the space in relation to the image acquisition device, in a reference frame associated with the image acquisition device or directly in the reference frame of the space, is carried out by detecting landmarks in an image at least partially covering the space, not by using a depth camera. Indeed, in light of the generally very substantial distance between the image acquisition device and the space, a depth camera would not be suitable, as it would induce excessive imprecision in the position and the orientation of the space.


Advantageously, a landmark is chosen from:

    • a line of a marking of a field or of a stage comprised in the space;
    • a semi-circle of a marking of a field or of a stage comprised in the space;
    • an intersection between two lines of a marking of a field or of a stage comprised in the space;
    • an element standing substantially perpendicularly in relation to the surface of a field or of a stage comprised in the space;
    • an element characteristic of a structure surrounding the surface of a field or of a stage comprised in the space;
    • a logo; and
    • a marker.


Preferably, four landmarks are detected and used to determine the three-dimensional position and orientation of the space in relation to the image acquisition device.


In particular embodiments of the invention, the augmented reality method also comprises a step of automatic recognition of the type of field comprised in the space.


This step of automatically recognizing the type of field is generally based on detecting characteristic points linked to the shape of the recognized sports field, which can be of any type: soccer, basketball, handball, rugby, tennis, hockey, baseball, etc. These characteristic points may coincide with the detected landmarks. It should be emphasized that this recognizing step is not specific to one sport in particular but makes it possible to recognize any sports field of which the characteristic points are known. The characteristic points are generally the general shape of the field, the relative position of the lines in relation to the field, the presence and the relative position of a semi-circle in relation to the field, etc.


Advantageously, the automatic recognition of the type of a field is carried out via a method of deep learning trained on a plurality of field images.


Thus, it is possible to quickly recognize any type of field, regardless of its orientation or viewing angle.


Furthermore, thanks to this method of deep learning, it is possible to recognize a field from a partial image of the field, i.e. without needing to see the entire field.


In particular embodiments of the invention, the augmented reality method also comprises steps of:

    • acquiring an instantaneous movement of the image acquisition device, in rotation and in translation in relation to the space;
    • updating the position and the orientation of the space in relation to the image acquisition device from the preceding position and orientation of the space in relation to the image acquisition device and from the instantaneous movement of the image acquisition device.


Thus, the data overlays in the images displayed on the screen are more stable. Furthermore, these additional steps make it possible to obtain an augmented reality method that uses less calculation time and therefore less electrical energy. Indeed, once the three-dimensional position and orientation of the space in relation to the image acquisition device are known, it is easy to update them by knowing the movements of the image acquisition device. A method that can be used to evaluate these movements is for example of the SLAM (Simultaneous Localization And Mapping) type.
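
As a sketch of this incremental update, assuming poses are expressed as a rotation matrix and translation vector and that the measured camera motion left-composes with the previous pose (one common convention, not the only one):

```python
def update_pose(R, t, dR, dt):
    """Update the pose (R, t) of the space relative to the camera from an
    incremental camera motion (dR, dt), instead of re-detecting landmarks.
    Rotations are 3x3 nested lists; the new pose left-composes the motion."""
    R_new = [[sum(dR[i][k] * R[k][j] for k in range(3)) for j in range(3)]
             for i in range(3)]
    t_new = [sum(dR[i][k] * t[k] for k in range(3)) + dt[i] for i in range(3)]
    return R_new, t_new

# Example: the camera rolls 90 degrees about its optical axis without moving.
R0 = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
t0 = [1.0, 0.0, 10.0]
dR = [[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]]
R1, t1 = update_pose(R0, t0, dR, [0.0, 0.0, 0.0])
print(t1)  # [0.0, 1.0, 10.0]
```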


In particular embodiments of the invention, the step of determining the three-dimensional position and orientation of the space in relation to the image acquisition device comprises a substep of generating parameters of a computer machine learning algorithm from a plurality of images recorded in a database, each image of the database representing all or a portion of a space of which the position and the orientation in relation to the image acquisition device that acquired said image are known.


Thus, determining the three-dimensional position and orientation of the field in relation to the image acquisition device can be carried out quickly and precisely by applying the generated parameters to the acquired images.


In particular embodiments of the invention, the step of determining a three-dimensional position and orientation of the space in relation to the image acquisition device comprises a substep of superimposing a three-dimensional model of the space on at least one of the images acquired by the image acquisition device.


Thus, the reference frame of the space can be positioned and oriented in the virtual space in relation to the image acquisition device.


In particular embodiments of the invention, the augmented reality method also comprises a step of correcting the instantaneous position of the mobile element according to an instantaneous speed of the mobile element and/or an instantaneous acceleration of the mobile element.


Thus, it is possible to improve the position of the overlay in the images, in particular when there is a substantial latency between the acquisition of the image and its display. Indeed, using the instantaneous speed of the mobile element and/or the instantaneous acceleration of the latter makes it possible to predict a position of the mobile element in a close interval of time.
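
A constant-acceleration predictor is one simple way to realize this correction; the sketch below, with an assumed latency of 100 ms, is illustrative rather than prescribed by the method:

```python
def predict_position(p, v, a, dt):
    """Predict the position of the mobile element dt seconds ahead from its
    instantaneous speed v and acceleration a (constant-acceleration model)."""
    return [p[i] + v[i] * dt + 0.5 * a[i] * dt * dt for i in range(3)]

# A player at (2, 3, 0) m moving at 4 m/s along x, accelerating 2 m/s^2
# along y, predicted 100 ms ahead to compensate for display latency.
p = predict_position([2.0, 3.0, 0.0], [4.0, 0.0, 0.0], [0.0, 2.0, 0.0], 0.1)
print([round(x, 6) for x in p])  # [2.4, 3.01, 0.0]
```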


In particular embodiments of the invention, the superimposing on the image displayed on the screen of at least one overlay comprising at least one piece of data associated with said mobile element is carried out in real time.


It should be emphasized that the data or piece of data overlaid in the image is generally transmitted by a data provider.


Advantageously, the overlay comprises at least one piece of data chosen from:

    • a name of a player;
    • a statistic associated with the player, such as a number of goals, a number of tries, a number of baskets, a number of points scored, a number of successful passes;
    • a name of a team;
    • a positioning of a group of players in relation to other players;
    • a formation of the team or of a group of players;
    • a distance between a point of the field and a player;
    • a distance between two points of the field;
    • a graphic element such as a line, a circle, an ellipse, a curve, a square or a triangle;
    • a fixed or animated image; and/or
    • a video.


An animation can for example be linked to the celebration of scoring points during a game.


In particular embodiments of the invention, the augmented reality method comprises a step of determining a clipping of a mobile element, the clipping generating an occlusion for at least one overlay superimposed on the image displayed on the screen.


Thus, it is possible to have a rendering that is much more realistic by masking a portion of the overlay displayed at the location of a mobile element, and in particular at the location of a player. This is in particular the case when the overlay comprises a graphic element such as a virtual line on the field, corresponding for example to an off-side line in soccer or in rugby.


In particular embodiments of the invention, the augmented reality method also comprises a step of selecting a mobile element and of displaying a piece of information relating to the mobile element in an overlay in the vicinity of the mobile element.


The invention also relates to a portable electronic device comprising a camera and a screen, implementing the augmented reality method according to any of the preceding embodiments.


The portable electronic device also generally comprises a processor and a computer memory storing the instructions of a computer program implementing the augmented reality method.


Preferably, the portable electronic device is a smartphone, augmented reality glasses or an augmented reality headset.


The portable electronic device can comprise a frame and a screen mounted on the frame, intended to be worn on the face of an individual.


In other terms, the portable electronic device can comprise any means of reproducing an image that can be displayed in front of an eye of an individual, including a contact lens making it possible to reproduce an image.


In particular embodiments of the invention, the portable electronic device also comprises at least one accelerometer and/or a gyroscope.


Thus, the device comprises means for evaluating the movements of translation and of rotation of the camera in relation to the space.


It should be emphasized that a part of the method could be implemented by a remote server, in particular the steps of:

    • detecting at least two landmarks of the space in at least one image, the space being associated with a three-dimensional reference frame, referred to as the reference frame of the space; and
    • determining a three-dimensional position and orientation of the space in relation to the image acquisition device thanks to the landmarks detected.


In which case, the step of updating the three-dimensional position and orientation of the space according to the evaluation of the movements of the camera in relation to the space is implemented by the portable electronic device.





BRIEF DESCRIPTION OF THE FIGURES

Other advantages, purposes and particular characteristics of the present invention shall appear in the following non-limiting description of at least one particular embodiment of the devices and methods object of the present invention, in reference to the accompanying drawings, wherein:



FIG. 1 is a block diagram of an augmented reality method according to the invention; and



FIG. 2 is a view of a portable electronic device implementing the augmented reality method of FIG. 1.





DETAILED DESCRIPTION OF EMBODIMENTS OF THE INVENTION

The present description is given in a non-limiting way, with each characteristic of an embodiment able to be combined advantageously with any other characteristic of any other embodiment.


Note that the figures are not to scale.


Example of a Particular Embodiment of the Invention


FIG. 1 shows a block diagram of an augmented reality method 100 according to the invention implemented by a portable electronic device 200 shown in FIG. 2.


The portable electronic device 200 is here a smartphone held by an individual (not shown) located in a grandstand of a space 202 which is a hall where a basketball game is unfolding between two teams each comprising five players 230. The two teams play with a game accessory which is a basketball (not shown). The game unfolds on a basketball court 220, comprising a marking 221. The hall 202 corresponds in the present case to a space, the space comprising the court 220 and structures such as the grandstand and two basketball hoops 203 (only one shown in FIG. 2).


The individual uses in the present example the portable telephone 200 and sees on a screen 250 of the portable telephone 200 the image acquired in real time by an image acquisition device 210 which is a camera. The camera 210 here captures a portion of the space 202, comprising in particular a portion of the basketball court 220. In the situation shown in FIG. 2, the players 230 of the two teams are located in one half of the field; one of the two teams, represented by the horizontal stripes, is on offense, while the other team, represented by the vertical stripes, is on defense, i.e., preventing the players 2301 of the offense team from sending the basketball into the basketball hoop 203.


The method 100 thus comprises a first step 110 of acquiring a plurality of images by the camera. It should be emphasized that the images acquired generally form a video stream.


In the field of view of the camera 210, four landmarks 222 are detected during the second step 120 of the method. The four landmarks 222 are a corner 2221 of the field, the basketball hoop 203, a semi-circle 2223 representing the three-point line and a semi-circle 2224 surrounding a free throw line. It should be emphasized that the corner 2221, the semi-circle 2223 and the semi-circle 2224 are part of the marking 221 of the basketball court.


Optionally, the method 100 can comprise a step 115 prior to or simultaneously with step 120 of detecting landmarks 222, during which the type of field is recognized via a field recognition algorithm based on deep learning trained on a plurality of sports field images. The algorithm in particular makes it possible to recognize whether it is a basketball court, a soccer field, a rugby field, a hockey field, a tennis court or any other field that comprises a plurality of landmarks. It should be emphasized that the landmarks detected during step 120 generally depend on the type of field detected.


Thanks to the landmarks 222 detected in the space 202, a three-dimensional position and orientation of the space 202 in relation to the camera 210 are determined during a third step 130 of the augmented reality method.


Determining the three-dimensional position and orientation can be carried out either by superimposing on the image a model of the space 202 thanks to the landmarks 222, or by using parameters of a computer machine learning algorithm.


The model of the space 202 generally comprises a model of the field 220 and of the marking 221, or even a model of singular elements that can act as landmarks, such as for example the basketball hoops 203. To superimpose the landmarks 222 detected in the image onto the landmarks present in the model, a homographic method or a method of the “Perspective-n-Point” type can be used.
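
As an illustration of the homographic variant, the following pure-Python sketch estimates, by the standard four-point direct linear transform, the homography between planar model landmarks (e.g. corners of the court marking, here with assumed coordinates) and their detections in the image; a production implementation would typically rely on a computer vision library instead:

```python
def solve_homography(src, dst):
    """Estimate the 3x3 homography H mapping 4 planar model landmarks (src)
    to their image detections (dst), by solving the 8x8 DLT system A h = b
    with Gaussian elimination (the element h22 is fixed to 1)."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    n = 8
    M = [row + [bi] for row, bi in zip(A, b)]
    for col in range(n):                      # forward elimination, partial pivoting
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            fac = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= fac * M[col][c]
    h = [0.0] * n
    for r in range(n - 1, -1, -1):            # back substitution
        h[r] = (M[r][n] - sum(M[r][c] * h[c] for c in range(r + 1, n))) / M[r][r]
    return [h[0:3], h[3:6], h[6:8] + [1.0]]

def apply_homography(H, pt):
    """Map a model point through H into image coordinates."""
    x, y = pt
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)

# Four corners of a 15 m x 28 m court model and their assumed pixel detections.
src = [(0.0, 0.0), (15.0, 0.0), (15.0, 28.0), (0.0, 28.0)]
dst = [(100.0, 500.0), (900.0, 480.0), (800.0, 120.0), (150.0, 100.0)]
H = solve_homography(src, dst)
print(apply_homography(H, (7.5, 14.0)))  # centre of the court, in pixels
```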


When a machine learning algorithm is used, the method generally comprises an additional step of generating parameters of the algorithm from images of the space 202 recorded beforehand in a database, with each image being recorded with the three-dimensional position and orientation of an image acquisition device having acquired said image. This learning step, which requires substantial calculation time, is generally carried out by a remote server. It should be emphasized that this learning step is generally carried out only once.


Moreover, the position of the space 202 in relation to the camera 210 is generally calculated in a reference frame that is associated either with the camera 210 or with the space 202. The passing from the reference frame of the camera to the reference frame of the space is generally carried out easily by a translation and a rotation, these two reference frames being three-dimensional.


Transformation parameters between the reference frame of the space and a two-dimensional reference frame associated with the camera 210, referred to as the reference frame of the image, are calculated to transform the coordinates obtained in the reference frame of the space into the reference frame of the images obtained by the camera 210. It should be emphasized that the reference frame of the image is separate from the reference frame of the camera in that the reference frame of the image is two-dimensional and the reference frame of the camera three-dimensional. Generally, a projection makes it possible to pass from the reference frame of the camera to the reference frame of the image.


The instantaneous position of a mobile element 235 in the reference frame of the space is then received during the fourth step 140 of the augmented reality method 100. The mobile element 235 is here one of the five players 230 of one of the two teams confronting each other during a basketball game. The mobile element can also be the game accessory played with by the two teams, namely the basketball (not shown in FIG. 2).


The position of the mobile element 235 in the reference frame of the image is calculated during the fifth step 150 of the augmented reality method 100.


When the position of the mobile element 235 is known in the reference frame of the image, it is possible to superimpose on the image displayed on the screen 250 an overlay 240 in the vicinity of the position of the mobile element during a sixth step 160 of the method 100. Generally, the overlay 240 is positioned at a predetermined distance from the instantaneous position of the mobile element 235 in the image. In the present non-limiting example of the invention, the overlay 240 is superimposed vertically in relation to the position of the mobile element 235 in such a way that it appears above the mobile element 235 on the image displayed.
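
This placement can be sketched as a simple pixel offset, here assumed vertical and clamped to the screen bounds (the offset and screen size are illustrative):

```python
def overlay_anchor(player_px, offset_px=40, screen=(1280, 720)):
    """Anchor the overlay a predetermined distance above the position of the
    mobile element in the image, clamped so it stays on screen."""
    x, y = player_px
    return (min(max(x, 0), screen[0]), min(max(y - offset_px, 0), screen[1]))

print(overlay_anchor((640, 30)))   # near the top edge: clamped to (640, 0)
print(overlay_anchor((640, 360)))  # usual case: (640, 320)
```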


It should be emphasized that it is possible, as in FIG. 2, to repeat steps 140 to 160 in order to display overlays 240 for a plurality of mobile elements present in the space 202, here for the five players 2301 of the team which is on offense.


The overlay 240 for each player 2301 can comprise one or more pieces of data, such as the name of the player and the number of points scored since the beginning of the game. Any other statistical data useful for monitoring the game can be displayed with this method 100. The overlay 240 can also comprise an animation that is displayed as soon as the mobile element 235 has scored a point, by sending the ball into the basketball hoop of the opposing team.


Thus, the individual located in a grandstand can see the basketball game while still consulting the data of the players 2301 on the screen of their telephone 200. It should be emphasized that the image displayed on the screen 250 of their telephone 200 can advantageously be superimposed on the field seen by the individual, in such a way that the individual can monitor the game without loss of attention. Furthermore, the individual is not obligated to turn their head to look at a screen (not shown in FIG. 2) present in the hall 202.


Furthermore, as the data is displayed directly in the vicinity of the players 230, the monitoring is more intuitive. By using the screen 250, generally touch sensitive, the individual can also select the type of data that they wish to view, such as a particular statistic of a player.


It should be emphasized that the data overlay during the method 100 is advantageously carried out in real time in relation to the image acquired. In other terms, the calculation between the acquisition of the image and the displaying of the latter with the overlay or overlays 240 is carried out in a very short lapse of time, generally less than a millisecond, in such a way that the individual can see the images acquired of the event practically simultaneously with a direct view of the event.


To this effect, steps 120 and 130, which are costly in terms of computing time, can advantageously be carried out on a remote server (not shown in FIG. 2). At least one acquired image is thus transmitted to the remote server via means of telecommunication (not shown in FIG. 2) included in the telephone 200.


So as to reduce latency, the transmission of the data between the telephone 200 and the remote server can be carried out by using a telecommunication network configured according to the 5G standard. Furthermore, latency can also be reduced by using a computer server close to an antenna of the telecommunication network, the computer server then playing the role of the remote server performing the calculations of steps 120 and 130. This type of architecture is known as edge computing.


From the three-dimensional position and orientation of the field in relation to the camera 210 calculated by the remote server for a given image, the method 100 updates this position and this orientation according to the movements of the camera 210 in the three-dimensional space during a step 170 that replaces steps 120 and 130, by using for example a SLAM (Simultaneous Localization And Mapping) method.


These movements are for example acquired by a three-axis accelerometer 260 comprised in the telephone 200 during a step 175 carried out prior to step 170.


Thus, as the calculation is faster, the display of the data associated with the players is more stable in relation to the instantaneous position of the mobile elements 230.


Also for the purpose of improving the position of the data overlay 240, in particular so as to give the impression that the mobile element 235 is tracked by the method 100, the method 100 can include an optional step 180, performed before step 160, in order to correct the instantaneous position of the mobile element 235 in the reference frame of the space according to an instantaneous speed and/or an instantaneous acceleration of the mobile element 235. This instantaneous speed and this instantaneous acceleration, provided for example with the data associated with the mobile elements or calculated from successive instantaneous positions, make it possible to predict the position of the mobile element 235 in a short interval of time.


This step 180 makes it possible in particular to overcome the latencies that can occur in the transmission of the data to the telephone 200.


In order to improve the realism of the display, in particular when at least one graphic element such as a line, a circle or a triangle is displayed, the method can comprise a step 190 during which an occlusion is calculated from a clipping of a mobile element, such as a player. This occlusion makes it possible to suppress a portion of the graphic element overlaid on the screen or to prevent this portion of the graphic element from being superimposed on the mobile element. The occlusion step 190 can be carried out before or after the overlay step 160.


Knowing the position of a mobile element at a given instant in the image, the clipping can be carried out by detecting a contour of this element or by any other technique known to a person skilled in the art. Another clipping technique consists of a pose estimation of a model that represents a mobile element, such as a player. Thus, knowing the usual size of a player, an estimation of the overall posture of the player can be carried out by analyzing the visible portion of this player in an image or in a sequence of images, by detecting in particular characteristic points of the structure of the player, generally articulation points of the skeleton of the player. By estimating the overall posture of the player, it is then possible to define an estimation of the total volume occupied by the player as well as their position in the space.
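
For a horizontal graphic element such as a virtual off-side line, the occlusion can be sketched as interval subtraction: the x-spans covered by the clipped players (here assumed to be given as simple intervals) are removed from the drawn segment:

```python
def occlude_interval(x0, x1, masks):
    """Split the horizontal overlay segment [x0, x1] into the pieces that
    remain visible after subtracting occluding x-intervals (clipped players)."""
    visible = [(x0, x1)]
    for m0, m1 in masks:
        nxt = []
        for a, b in visible:
            if m1 <= a or m0 >= b:        # mask does not touch this piece
                nxt.append((a, b))
            else:                         # keep the uncovered ends, if any
                if a < m0:
                    nxt.append((a, m0))
                if m1 < b:
                    nxt.append((m1, b))
        visible = nxt
    return visible

# An off-side line from x=0 to x=100, with two players masking parts of it.
print(occlude_interval(0, 100, [(30, 40), (60, 80)]))
# [(0, 30), (40, 60), (80, 100)]
```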


From the position of the field and of the mobile elements in the image, it is possible to select, during an optional step 195, a mobile element, in particular by clicking, or by touching, a zone of the screen, referred to as the interaction zone, in the vicinity of the image of the mobile element. From the coordinates of the interaction zone of the screen, it is possible to display in an overlay the statistics of the player who is the closest to the coordinates of the interaction zone.
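
The selection itself can be sketched as a nearest-neighbour query over the on-screen player positions (the data layout below is an assumption for illustration):

```python
def select_player(touch, players):
    """Return the player whose image position is closest to the coordinates
    of the interaction zone touched on the screen."""
    return min(players, key=lambda pl: (pl["pos"][0] - touch[0]) ** 2
                                       + (pl["pos"][1] - touch[1]) ** 2)

players = [{"name": "Player A", "pos": (100, 200)},
           {"name": "Player B", "pos": (400, 180)}]
print(select_player((390, 190), players)["name"])  # Player B
```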

Claims
  • 1-15. (canceled)
  • 16. An augmented reality method in real time, comprising: acquiring a plurality of images by an image acquisition device that at least partially cover a space, the space comprising at least two landmarks, the image acquisition device being associated with a two-dimensional reference frame, referred to as a reference frame of the image, the image acquisition device being comprised in a portable electronic device comprising a screen; detecting said at least two landmarks of the space in at least one image, the space being associated with a three-dimensional reference frame, referred to as a reference frame of the space; determining a three-dimensional position and orientation of the space in relation to the image acquisition device based on said at least two landmarks detected; receiving an instantaneous position, within the reference frame of the space, of a mobile element moving in the space; calculating a position of the mobile element in the reference frame of the image from transformation parameters between the reference frame of the space and the reference frame of the image, the transformation parameters being calculated from the three-dimensional position and orientation of the space in relation to the image acquisition device; displaying at least one acquired image on the screen; and superimposing at least one overlay on said at least one acquired image displayed on the screen at a predetermined distance in relation to the position of the mobile element in the reference frame of the image.
  • 17. The augmented reality method of claim 16, wherein a landmark is one of the following: a line of a marking of a field or of a stage comprised in the space; a semi-circle of the marking of the field or of the stage comprised in the space; an intersection between two lines of the marking of the field or of the stage comprised in the space; an element standing substantially perpendicularly in relation to a surface of the field or of the stage comprised in the space; an element characteristic of a structure surrounding the surface of the field or of the stage comprised in the space; a logo; and a marker.
  • 18. The augmented reality method of claim 16, further comprising an automatic recognition of a type of a field comprised in the space.
  • 19. The augmented reality method of claim 18, wherein the automatic recognition of the type of the field is performed by a method of deep learning trained on a plurality of field images.
  • 20. The augmented reality method of claim 16, further comprising:
    acquiring an instantaneous movement of the image acquisition device, in rotation and in translation in relation to the space; and
    updating the three-dimensional position and the orientation of the space in relation to the image acquisition device from a preceding three-dimensional position and orientation of the space in relation to the image acquisition device and from the instantaneous movement of the image acquisition device.
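The update in claim 20 can be sketched as composing the preceding pose with the device's instantaneous motion. A minimal illustration under the assumption that the motion has already been integrated into a rotation increment dR and a translation increment dt_dev expressed in the previous camera frame; the function name and conventions are hypothetical:

```python
import numpy as np

def update_pose(R_prev, t_prev, dR, dt_dev):
    """Update the pose (R, t) of the space relative to the camera from the
    preceding pose and the camera's own instantaneous motion (dR, dt_dev).
    When the camera moves by (dR, dt_dev), fixed points of the space move
    by the inverse transform in the camera frame."""
    R_new = dR.T @ R_prev
    t_new = dR.T @ (t_prev - dt_dev)
    return R_new, t_new

# Camera translates 1 m along its x axis without rotating: the space
# appears shifted by -1 m along x in the new camera frame.
R0, t0 = np.eye(3), np.array([0.0, 0.0, 2.0])
R1, t1 = update_pose(R0, t0, np.eye(3), np.array([1.0, 0.0, 0.0]))
print(t1)  # [-1.  0.  2.]
```

In practice dR and dt_dev would come from integrating gyroscope and accelerometer readings, as suggested by claim 32.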
  • 21. The augmented reality method of claim 16, wherein the determination of the three-dimensional position and orientation of the space in relation to the image acquisition device comprises generating parameters of a computer machine learning algorithm from a plurality of images recorded in a database, each image of the database representing all or a portion of the space of which the three-dimensional position and the orientation in relation to the image acquisition device are known.
  • 22. The augmented reality method of claim 16, wherein the determination of the three-dimensional position and orientation of the space in relation to the image acquisition device comprises superimposing a model of the space on at least one of the images acquired by the image acquisition device.
  • 23. The augmented reality method of claim 16, further comprising correcting the instantaneous position of the mobile element according to at least one of: an instantaneous speed of the mobile element and an instantaneous acceleration of the mobile element.
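One plausible reading of the correction in claim 23 is latency compensation: the last received position of the mobile element is extrapolated forward using its instantaneous speed and, when available, its instantaneous acceleration. A sketch under that assumption; the function name and the 0.1 s latency figure are illustrative:

```python
def correct_position(p, v, a, latency):
    """Extrapolate the last received position p of the mobile element
    forward by `latency` seconds, using its instantaneous velocity v
    and acceleration a (per-axis kinematic extrapolation)."""
    return tuple(p_i + v_i * latency + 0.5 * a_i * latency ** 2
                 for p_i, v_i, a_i in zip(p, v, a))

# A player at x = 10 m running at 8 m/s, with 0.1 s of tracking latency:
corrected = correct_position((10.0, 5.0), (8.0, 0.0), (0.0, 0.0), 0.1)
print(corrected)  # (10.8, 5.0)
```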
  • 24. The augmented reality method of claim 16, wherein the superimposing of said at least one overlay on said at least one acquired image displayed on the screen is performed in real time.
  • 25. The augmented reality method of claim 16, wherein said at least one overlay comprises at least one of the following pieces of data:
    a name of a player;
    a statistic associated with the player;
    a name of a team;
    a positioning of a group of players in relation to other players;
    a formation of the team or of a group of players;
    a distance between a point of the field and the player;
    a difference between two points of the field;
    a graphic element;
    a fixed or animated image; and
    a video.
  • 26. The augmented reality method of claim 25, wherein the statistic associated with the player is at least one of: a number of goals, a number of tries, a number of baskets, a number of points scored and a number of successful passes.
  • 27. The augmented reality method of claim 25, wherein the graphic element is one of the following: a line, a circle, a square or a triangle.
  • 28. The augmented reality method of claim 16, further comprising determining a clipping of a mobile element, the clipping generating an occlusion for said at least one overlay superimposed on said at least one acquired image displayed on the screen.
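The occlusion of claim 28 can be sketched as alpha compositing in which the clipped silhouette of the mobile element forces the overlay's opacity to zero, so the player appears in front of the superimposed graphics. A minimal numpy illustration; the function name and the binary-mask representation of the clipping are assumptions:

```python
import numpy as np

def composite_with_occlusion(frame, overlay, alpha, player_mask):
    """Blend `overlay` onto `frame` with per-pixel opacity `alpha`,
    except where `player_mask` is True: there the clipped silhouette
    of the mobile element occludes the overlay entirely."""
    a = np.where(player_mask, 0.0, alpha)[..., None]  # zero opacity on the silhouette
    return frame * (1.0 - a) + overlay * a

# Tiny 2x2 example: one pixel is covered by the player's silhouette.
frame = np.zeros((2, 2, 3))
overlay = np.full((2, 2, 3), 255.0)
alpha = np.ones((2, 2))
mask = np.array([[True, False], [False, False]])
out = composite_with_occlusion(frame, overlay, alpha, mask)
print(out[0, 0], out[0, 1])  # occluded pixel stays [0. 0. 0.]; others become [255. 255. 255.]
```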
  • 29. The augmented reality method of claim 16, further comprising selecting a second mobile element and displaying a piece of information relating to the second mobile element in an overlay in the vicinity of the mobile element.
  • 30. A portable electronic device comprising a camera and a screen, the portable electronic device implementing the augmented reality method of claim 16.
  • 31. The portable electronic device of claim 30, wherein the portable electronic device is a smartphone, augmented reality glasses or an augmented reality headset.
  • 32. The portable electronic device of claim 30, further comprising at least one of an accelerometer and a gyroscope.
Priority Claims (1)
Number Date Country Kind
FR1900794 Jan 2019 FR national
RELATED APPLICATIONS

This application is a § 371 application of PCT/EP2020/052137 filed Jan. 29, 2020, which claims priority from French Patent Application No. 19 00794 filed Jan. 29, 2019, each of which is incorporated herein by reference in its entirety.

PCT Information
Filing Document Filing Date Country Kind
PCT/EP2020/052137 1/29/2020 WO 00