The present invention relates to a system for providing a simulation-based virtual reality to a user of the system.
Head Mounted Displays (HMD) or VR glasses are common systems used to provide the user with an interactive simulated reality. The simulation provided is created by a software developer (e.g. the manufacturer of a video game) and then displayed to the user via the HMD so that the user has the feeling of being in the simulation. The HMD therefore creates a virtual reality based on the simulation, which is played to the user, e.g. via the display in the HMD, or with which the user can interact, for example using their own avatar.
The user can move his avatar, for example, by moving a controller, a joystick or by moving his body in an environment detected by sensors within a simulated environment. During this movement of the avatar in the simulated environment, the avatar is illuminated by various light sources and the direction from which these light sources shine on the avatar changes as the movement progresses.
As a result, the illumination of the avatar in the simulation is also constantly changing, in particular the direction of the illumination and the intensity of the illumination. For example, weather influences in the simulation (e.g. rainy weather or sunshine) can cause the brightness of a simulated sun in the simulation to decrease or increase.
Accordingly, these changes to the lighting conditions in the simulation affect the illumination of the avatar's face, wherein changes to the lighting conditions of the avatar's face in prior art HMD systems are communicated to the user of the system via the display built into the HMD, in particular by increasing or decreasing the brightness of the display.
However, since the display is designed as a larger surface with a certain distance to the user's face and thus emits light corresponding to the representation of the simulation, it is not possible in prior art HMD systems to selectively illuminate certain parts of the user's face in the display area of the HMD with light in different colors and intensities in such a way that the illumination of the user's face in the display area of the HMD corresponds to the illumination of this facial area of the avatar in the simulation.
Particularly noteworthy here is the user's nose, which on the one hand is in the user's field of vision and on the other hand is particularly easily illuminated by the display due to its protruding shape. If, for example, a rainy, dark environment is shown in the simulation in which the avatar is moving, the display illuminates the nose in order to show the simulation, even though the avatar's nose is not directly illuminated in the simulation. Conversely, if a very sunny, bright area is displayed in the simulation, the brightness of the display may not be sufficient to illuminate the user's nose in the HMD according to the brightness of the illumination of the avatar's nose in the simulation. Since the nose is in the user's field of vision, this inadequate illumination may cause the user to subconsciously perceive the mismatch, somewhat disrupting their experience of virtual reality or causing them to perceive the virtual reality as less realistic.
It is therefore an object of the invention to provide a system that overcomes the disadvantages of the prior art.
Another object is to provide a system that enables the user to have an improved experience of virtual reality based on a simulation.
It is a further object to provide a system that provides the user with virtual reality in such a way that the user perceives the virtual reality as more realistic, in particular with regard to the provision of illumination of parts of the user's face.
These objects are solved by the realization of at least some of the characterizing features of the independent claims. Features which further develop the invention in an alternative or advantageous manner can be taken from the further features of the independent claims and from the dependent claims.
The invention relates to a system for providing a simulation-based virtual reality to a user of the system, wherein the system comprises:
The term “corresponding” means that the illumination of the face by the illumination unit is related to the illumination of the model of the face in the simulation, in particular such that the two illuminations converge, i.e. the illumination of the face by the illumination unit approaches the illumination of the model of the face in the simulation so that the two illuminations are similar. In the optimum case, this relationship is such that the illumination of the face by the illumination unit matches the illumination of the model of the face in the simulation.
The system attempts to control the illumination unit in such a way that the illumination of the face by the illumination unit corresponds to the illumination of the model of the face in the simulation. However, exact equality of these two illuminations is very difficult or sometimes impossible to achieve in reality: parameters such as the light intensity, light temperature, light color and/or illuminance of the illumination in the simulation, in particular of the illumination of the model of the face, can vary greatly depending on how the simulation is created/programmed, while these parameters can be varied much less in the illumination of the face in reality (for example by the illumination unit within VR glasses) due to technical limitations. Such technical limitations can be caused, for example, by the fact that the light source (e.g. one or more LEDs) of the illumination unit can only emit light in a certain wavelength range (which in turn influences the light color) or in a certain intensity range. The control unit receives the information regarding the illumination of the model of the face in the simulation from the computing unit and then controls the illumination unit in such a way that the illumination of the face by the illumination unit corresponds to the illumination of the model of the face in the simulation within the technical constraints imposed, for example, by the design of the light source and/or the structure of the illumination chamber within the VR goggles. In particular, this correspondence is formed such that the two illuminations converge, i.e. the illumination of the face by the illumination unit approaches, as closely as possible, the illumination of the model of the face in the simulation.
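The mapping described above, from freely programmable simulation parameters onto what the light source can physically emit, can be sketched as a simple clamping step. The following Python example is purely illustrative: the function names, the normalized intensity range and the white-LED color-temperature span are assumptions, not taken from the patent.

```python
# Illustrative sketch: clamping simulated illumination parameters to the
# technical limits of a hypothetical LED before the control unit drives
# the illumination unit. All names and limit values are assumptions.

def clamp(value, lo, hi):
    """Limit a simulation parameter to the range the hardware supports."""
    return max(lo, min(hi, value))

# Assumed hardware limits of the LED (hypothetical example values):
LED_INTENSITY_RANGE = (0.0, 1.0)       # normalized drive level
LED_COLOR_TEMP_RANGE = (2700, 6500)    # Kelvin, a typical white-LED span

def to_led_command(sim_intensity, sim_color_temp_k):
    """Map the simulated illumination onto the closest achievable LED output."""
    return {
        "intensity": clamp(sim_intensity, *LED_INTENSITY_RANGE),
        "color_temp_k": clamp(sim_color_temp_k, *LED_COLOR_TEMP_RANGE),
    }

# A very bright, very warm simulated light is reduced to the closest
# output the LED can actually emit:
cmd = to_led_command(sim_intensity=3.5, sim_color_temp_k=1800)
```

In this sketch the two illuminations "converge" in the sense that the command sent to the illumination unit is the nearest point, per parameter, within the hardware's achievable range.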
In the best case, the two illuminations are identical, or so similar (or converge in such a way) that the user's vision and subconscious cannot perceive any differences in the illumination, thus enabling a more realistic provision of virtual reality by the system according to the invention.
Because the system according to the invention ensures that the illumination of the user's face converges with (or at least approximately coincides with) the illumination of the model of the face in the simulation, the previously described differences in illumination that occur in prior art VR systems, and that are at least subconsciously perceived by the user, cannot arise. As a result, the user perceives the virtual reality conveyed to him as more realistic than with VR systems of the prior art.
In addition to an (artificially created) computer-animated environment/world, as used for computer games, for example, the term simulation also refers to a (3D) film or film sequence, a (3D) image, etc., which can be displayed to the user of the system and thus enables them to immerse themselves in a virtual reality. By means of the computing unit, the model of the face can be used, for example, to create the avatar's face in a computer game and/or the model of the face can be inserted into a movie, for example.
The control unit is designed, for example, as an electronic control unit (as used in the automotive industry, for example) or as a microcontroller (MCU). The control unit receives information/data from the computing unit, processes this data and sends corresponding commands to the illumination unit. As microcontrollers in particular are small in size, the control unit can also be integrated into the illumination unit, allowing the system comprising the illumination unit and control unit to be compact and in one piece, which in turn facilitates installation in visual output devices such as VR glasses for the user.
The computing unit is designed, for example, as a processor, in particular as a microprocessor, which can receive, process and transmit data (e.g. the three-dimensional information relating to the face available as data).
The three-dimensional information relating to the face is, for example, information relating to the spatial extension of the face, such as the width, height and depth of the face, the facial geometry or the geometry and arrangement of parts of the face, but also information relating to the presence of elevations, such as the nose, or depressions, such as scars. This three-dimensional information about the face can be extracted from photos of the face taken by a camera, for example, using photogrammetry. Alternatively or in addition to this, several (at least two) images of the face (taken from different positions, but with a certain degree of overlap) can first be stitched together into one image using a “stitching” process and then the three-dimensional information relating to the face (as 3D data) can be extracted from this stitched image using photogrammetry.
The three-dimensional information relating to the face can be derived/obtained, for example, from (2D and/or 3D) image data recorded by a camera, from light detection and ranging data (LIDAR data), or from 3D point cloud data recorded, for example, by a laser scanner.
The model of the face can, for example, be a three-dimensional surface mesh (wireframe model).
Information regarding the illumination of the model of the face in the simulation can be, for example: which parts of the face and from which direction they are illuminated; with which intensity (how strong) the respective parts of the face are illuminated; and/or with which light temperature/light color the respective parts of the face are illuminated.
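The items of information listed above could be represented in software, for example, as a simple per-face-part record. The following Python sketch is purely illustrative; all class names, field names and values are assumptions, not taken from the patent.

```python
# Hypothetical data structure for the per-face-part illumination
# information the computing unit could transmit to the control unit.
# Names and example values are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class FacePartIllumination:
    part: str            # which part of the face, e.g. "nose"
    direction: tuple     # unit vector (x, y, z) of the incoming light
    intensity: float     # how strongly the part is illuminated, 0.0 .. 1.0
    color_temp_k: float  # light temperature in Kelvin (light color)

# One frame's worth of illumination information for two face parts:
frame_illumination = [
    FacePartIllumination("nose", (0.0, -0.7, -0.7), 0.9, 5600),
    FacePartIllumination("left_cheek", (1.0, 0.0, 0.0), 0.3, 5600),
]
```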
The system according to the invention can, for example, be designed only as an illumination unit with a control unit, wherein this system can then be attached to, for example, VR glasses that display the virtual reality based on the simulation for the user. The system is arranged, for example, in the inner area of the VR glasses designed as a kind of chamber, i.e. the part located between the display unit (e.g. a screen) and the user's face (e.g. at arrangement/fixing points specially provided for such a system in the VR glasses), and can illuminate the part of the user's face located inside this chamber from there. The control unit exchanges information with the computing unit. In this embodiment, this computing unit is not part of the system, but is designed, for example, as a processor in a computer or in the VR glasses. The computing unit transmits information/data regarding the illumination of the model of the face in the simulation to the control unit, for example by means of a cable or wireless data transmission, whereupon the control unit then controls the illumination unit in order to illuminate the face in the VR glasses accordingly. The advantage of this design of the system according to the invention is that existing VR systems without an illumination unit can easily be retrofitted with the system according to the invention.
In an exemplary embodiment of the system according to the invention, the system has the computing unit, a first recording unit, in particular a camera, wherein the first recording unit is designed to record the three-dimensional information relating to the face, in particular optically, and/or a visual output device or head-mounted display (HMD), in particular glasses for displaying the virtual reality or virtual reality glasses (VR glasses), wherein the visual output device is designed to display the virtual reality to the user, and/or a display unit, wherein the display unit is designed as a screen, wherein the simulation can be displayed on the screen.
The system according to the invention can, for example, also be designed as a type of overall system in which the illumination unit, the control unit and the computing unit are permanently installed in the visual output device, e.g. the VR glasses. The first recording unit can, for example, also be permanently installed in this overall system or be designed as a separate recording unit, e.g. a smartphone, which records images of the user's face and sends them to the computing unit. The advantage of this design of the system according to the invention is that the individual components can be matched to each other in their external shape and their technical modification during production or at the latest during assembly, so that on the one hand the design and/or accuracy of fit and on the other hand the technical interaction, in particular of the electronics, is less susceptible to faults.
In a further exemplary embodiment of the system according to the invention, the computing unit uses setting parameters, in particular setting parameters such as skin color, accessories such as glasses and/or jewelry, light preferences, in particular light intensity, light temperature, light color and/or illuminance, of the illumination in the simulation, in particular the illumination of the model of the face in the simulation, and/or the illumination of the face by the illumination unit, which can be provided by a manual input of the user and/or by an input of a provider/manufacturer of the simulation (e.g. a software developer), in particular via a programming interface or application programming interface (API), for.
This embodiment has the advantage that the user and/or the provider/manufacturer of the simulation (e.g. the developer of a computer game) can provide information, e.g. settings with regard to the model of the face, settings with regard to the avatar, settings with regard to the properties of the light in the simulation and/or settings with regard to the properties of the light emitted by the illumination unit, to the computing unit and thus the illumination by the illumination unit, the illumination in the simulation and/or the illumination of the model of the face in the simulation can be (individually) adjusted according to the corresponding wishes.
In another exemplary embodiment, the first recording unit is arranged at the visual output device, and/or captures movements of the face as updated three-dimensional information relating to the face and the computing unit adjusts the model of the face based on the updated three-dimensional information relating to the face.
This embodiment has the advantage that the model of the face created by the computing unit can be constantly updated, whereby the illumination of the updated model of the face in the simulation is also updated. If there is a change in the illumination of the model of the face in the simulation, the system according to the invention is able to control the illumination unit by means of the control unit in such a way that the illumination of the face by the illumination unit corresponds to the illumination of the updated model of the face in the simulation (or is approximated to the illumination of the updated model of the face in the simulation or corresponds to the illumination of the updated model). In this way, the user perceives the virtual reality conveyed as very realistic even after moving his face.
In a further exemplary embodiment, the visual output device has a first motion sensor which is designed to detect a movement of the user's head.
In this way, movements of the visual output device and thus head movements of the user are detected and the orientation of the face relative to the rest of the body and/or relative to the orientation of the face before the head movement is derived from this by means of the computing unit. This information about the orientation of the face can be further processed by the computing unit, whereby, among other things, the orientation of the model of the face in the simulation (relative to the orientation of the model of the face before the head movement) can be determined. From this, the computing unit can, for example, derive changes in the illumination of the model of the face in the simulation (e.g. the model of the face in the simulation is less strongly illuminated when the head and thus the face turns away from the sun and vice versa). This detected change in the illumination of the model of the face in the simulation is transmitted as information from the computing unit to the control unit, wherein the control unit then controls the illumination unit accordingly in order to equalize the illumination conditions of the model of the face in the simulation and the face within the visual output device.
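The reasoning above (the model of the face is less strongly illuminated when the head, and thus the face, turns away from the sun) can be sketched with Lambert's cosine law: the received intensity falls off with the cosine of the angle between the face and the light direction. The following Python example is illustrative only; the function name and the numbers are assumptions, not from the patent.

```python
# Illustrative sketch of the head-rotation reasoning above, using
# Lambert's cosine law. Names and values are hypothetical.
import math

def illumination_after_rotation(sun_intensity, angle_to_sun_deg):
    """Intensity reaching the face model after the head turns by the given
    angle away from the light direction; clamped so it is never negative."""
    cosine = math.cos(math.radians(angle_to_sun_deg))
    return sun_intensity * max(0.0, cosine)

facing_sun = illumination_after_rotation(1.0, 0.0)    # head facing the sun
turned_away = illumination_after_rotation(1.0, 90.0)  # turned fully away
```

The computing unit would transmit such a recomputed intensity to the control unit, which then dims or brightens the illumination unit accordingly.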
In a further exemplary embodiment, the system comprises at least one movement unit (e.g. a controller, a joystick, etc.), wherein the at least one movement unit
In a further exemplary embodiment, the visual output device and the at least one movement unit each have at least one position sensor, wherein the computing unit determines a position of the visual output device and, in this way, of the user's face relative to a position of the at least one movement unit.
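In the simplest case, the relative position described above can be obtained by subtracting the two position-sensor readings. The following Python sketch is illustrative; the function name and coordinates are assumptions, not taken from the patent.

```python
# Illustrative sketch: the computing unit derives the position of the
# visual output device (and thus the user's face) relative to a movement
# unit by subtracting the two position-sensor readings component-wise.

def relative_position(hmd_pos, controller_pos):
    """Vector from the movement unit (e.g. a controller) to the visual
    output device, given both positions in a common coordinate frame."""
    return tuple(h - c for h, c in zip(hmd_pos, controller_pos))

# Hypothetical sensor readings in meters (x, y, z):
offset = relative_position((0.0, 2.0, 0.0), (0.5, 1.0, 0.25))
# offset points from the controller toward the HMD
```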
In a further exemplary embodiment, the computing unit uses the data provided by the first motion sensor, the second motion sensor and/or the respective position sensors for
In a further exemplary embodiment, for creating the simulation, and/or for calculating the illumination of the model of the face in the simulation, in particular taking into account the provided setting parameters, the computing unit calculates.
This embodiment has the advantage that the computing unit constantly recalculates the illumination of the model of the face in the simulation. If there are changes in the illumination of the model of the face in the simulation, the illumination unit adjusts the illumination of the face in the visual output device accordingly, whereby the realistic experience of the virtual reality conveyed is maintained for the user even if the aforementioned aspects of the simulation change.
A further exemplary embodiment of the system according to the invention, wherein
This embodiment has the advantage that in particular the parts of the face that are in the user's field of vision and are therefore perceived at least subconsciously can be illuminated particularly well or the illumination of these parts of the face by the illumination unit comes particularly close to the illumination of these parts of the face in the simulation, which makes the virtual reality feel particularly real for the user of the system.
In a further exemplary embodiment, the illumination unit has a light source, in particular a light-emitting diode (LED).
This embodiment has the advantage that the illumination unit can be obtained easily and cheaply.
In a further exemplary embodiment, the illumination unit has two light sources, in particular LEDs, wherein
This embodiment has the advantage that the illumination unit can illuminate different areas of the nose differently and independently of each other in order to be able to simulate the presence of differently illuminated nose areas of the facial model in the simulation. In this way, the virtual reality feels particularly real to the user of the system.
In a further exemplary embodiment, the illumination unit has a light directing element, in particular a lens, and/or an aperture, each designed to focus light from the illumination unit onto a part of the user's face, in particular the user's nose.
This embodiment has the advantage that the illumination of the face by the illumination unit can be individually adapted to the user's wishes.
In a further exemplary embodiment, an arrangement of the light directing element and/or the aperture on the illumination unit can be changed in order to make the illumination unit adaptable to faces of different users.
This embodiment has the advantage that the system according to the invention can be used by several users and still ensures optimum illumination of the face by the illumination unit and optimum wearing comfort for all users, even with different face shapes.
In a further exemplary embodiment, the system has a second recording unit, in particular a laser tracking system, wherein
In a further exemplary embodiment, the illumination unit has a plurality of light sources, in particular a plurality of LEDs, wherein the plurality of light sources are arranged in a defined arrangement on the illumination unit, in particular wherein the arrangement of the plurality of light sources corresponds to
This embodiment has the advantage that the parts of the face can be illuminated from different directions and/or each part of the face to be illuminated can be illuminated by its “own” light source thanks to the multiple light sources and a corresponding arrangement of these light sources.
In a further exemplary embodiment, the illumination unit is connected to the visual output device or to a computer, in particular one having the computing unit, for the supply of energy and/or for the exchange of data/information, in particular by means of a cable, and/or is designed for wireless data transmission, in particular by means of Bluetooth, and/or has a battery unit for the wireless energy supply.
In a further exemplary embodiment, the illumination unit is arranged on the visual output device in such a way that parts of the user's face, in particular eyelashes or eyebrows, do not cast any unwanted shadows on the user's face when the face is illuminated by the illumination unit. As a result, the user's experience of virtual reality is not disturbed by unwanted shadows.
In a further exemplary embodiment, the illumination unit (or the system comprising illumination unit and control unit) is arranged on the visual output device, in particular at a predefined arrangement point.
This embodiment has the advantage that visual output devices, e.g. VR glasses, can be upgraded with an illumination unit (or the system consisting of illumination unit and control unit) very easily and without great effort for the user, and the attached illumination unit is then also directly positioned/aligned correctly.
In another exemplary embodiment, a computer, a game console, the visual output device and/or the illumination unit comprises the computing unit.
This embodiment has the advantage that the computing unit can be provided either by an external device such as a computer and/or the illumination unit (or the system comprising the illumination unit and the control unit), whereby the structure of the system according to the invention can be designed very flexibly and the system can thus be adapted to the needs of the user. Furthermore, it is advantageous if the computing unit is contained in an external device such as a computer, since this external device can normally provide a greater computing capacity than, for example, a microprocessor in the illumination unit (or the system comprising the illumination unit and the control unit).
The system according to the invention is described in more detail below by way of embodiment examples shown schematically by way of examples in the figures. Identical elements are marked with the same reference signs in the figures. The embodiments described are generally not shown to scale and are not to be understood as a limitation, wherein the figures show in detail:
Furthermore, the first recording unit 6 is arranged in the chamber 10 of the VR glasses 7, wherein the first recording unit 6 records three-dimensional information (data) 5 relating to the face, from which a three-dimensional surface mesh (also: 3D surface mesh) is obtained of the part of the user's face 3 that is enclosed by the VR glasses 7 and can therefore be recorded by the first recording unit 6. The (wireframe) model of the face, in particular a three-dimensional model, is created from this three-dimensional surface mesh 5 by means of the computing unit.
In a further embodiment, the first recording unit 6 is designed in such a way that movements of the face 3 are recorded by the first recording unit 6 and the three-dimensional information 5 relating to the face is updated accordingly. The computing unit then adapts the model of the face based on the updated three-dimensional information relating to the face.
In a further embodiment, the system 1 has a memory unit which is designed to store the captured and/or updated three-dimensional information (data) 5 relating to the face. The three-dimensional information (data) 5 recorded and/or updated by the first recording unit 6 can be transferred directly from the first recording unit 6 to the computing unit and/or transferred from the first recording unit 6 to the storage unit for creating the model of the face by the computing unit, wherein the computing unit can retrieve and process the three-dimensional information (data) 5 stored there with respect to the face as required for creating the model of the face, for creating the simulation, for integrating the model of the face into the simulation, and/or for calculating the illumination of the model of the face in the simulation.
The illumination unit 2 is designed as an oval ring, wherein the oval ring consists of a plurality of sections 12 arranged next to each other. Light sources 9 are arranged in these sections, wherein these light sources 9 can be, for example, light-emitting diodes (LEDs) connected in series. The respective sections 12 of the illumination unit 2 can be of different sizes, with the sections 12 being larger in the outer areas of the oval, i.e. in the areas next to the eyes, and becoming progressively smaller towards the center of the oval, i.e. the areas around the nose. In this way, the light sources 9 in the large sections 12 can be arranged further apart from each other, whereby the outer areas of the face 3 are less strongly illuminated. The illumination of the outer side of the face 3 does not have to be as strong for several reasons: light sources 9 located further out are no longer in the user's field of vision; light sources 9 located on one outer side are shielded from the other side by the face 3, as a result of which the shielded outer side is not illuminated by these light sources 9 and is therefore darker; and there are no elevations such as the nose 4 in these outer areas which could be easily illuminated from many sides. For the reasons mentioned, the nose area 4 is preferably illuminated very brightly, which is why the smaller sections 12 around the nose area 4 allow many light sources 9 to be arranged close together to provide this bright illumination.
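The relationship between section size and illumination strength described above can be illustrated with a small sketch: narrower sections near the nose give a higher LED density (LEDs per unit of ring length) than the wide sections next to the eyes. The function name and all dimensions below are hypothetical, not taken from the patent.

```python
# Illustrative sketch of the section layout described above: sections of
# the oval ring shrink toward the nose, so the LED density rises toward
# the center. All numbers are hypothetical example values.

def led_density(section_width_mm, leds_per_section=1):
    """LEDs per millimeter of ring length for a section of given width."""
    return leds_per_section / section_width_mm

outer = led_density(section_width_mm=12.0)  # wide section next to an eye
inner = led_density(section_width_mm=3.0)   # narrow section at the nose
# inner > outer: the nose area is lit by more closely spaced LEDs
```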
Because the illumination unit 2 is designed as an oval ring, the user can look through the illumination unit 2 to the display unit 8, which allows the simulation to be displayed to the user at the same time.
The illumination unit 2 is not necessarily arranged on the visual output device 7, but can also be designed as a separate unit which, together with the control unit, forms the system 1 according to the invention. Accordingly, prior art visual output devices, e.g. VR glasses, can be upgraded with the system 1 according to the invention (illumination unit 2 and control unit). The computing unit can be designed as part of this system 1 or as part of the visual output device 7 and/or as part of a separate device, e.g. a computer. The power supply, communication and/or data transmission between the system 1 according to the invention and the computing unit can be carried out by means of cables and/or wireless data transmission, in particular by means of Bluetooth. In a further embodiment, the system 1 according to the invention has the illumination unit 2, the control unit, the computing unit, the first recording unit 6 and the visual output device 7, in particular designed as HMD or VR glasses.
In
In
In
In
In
Whether small parts of the face 3, such as in
It should be understood that the figures shown are only schematic representations of possible embodiment examples. The various approaches can also be combined with each other and with prior art methods.
Number | Date | Country | Kind
---|---|---|---
23189568.1 | Aug 2023 | EP | regional