Many attempts at translating real-time events (e.g., sporting events) to augmented reality (AR)-based, extended or cross reality (XR)-based, or virtual reality (VR)-based experiences and environments rely upon mapping captured surface image data (such as video, pictures, etc.) of objects (e.g., balls, players, etc.) onto computer-modeled environments. This surface mapping results in imperfect and unsatisfactory virtual reality experiences for the viewer because the images and sounds do not perfectly correlate to the motion and states of the real-time objects and players. There is a need for a method and system to recreate real-time events in a manner that provides the virtual spectator a more seamless and realistic VR, AR, or XR experience of the real-time event.
To solve this problem, and create an improved experience for the virtual spectator, a more accurate and immersive virtual, extended, or augmented reality environment can be created by relying on data from a network system of sensors embedded throughout the real-time environment during the event in question. This network system would capture data that would otherwise be difficult and/or impossible to determine solely from surface data.
Additionally, by layering and correlating surface data (e.g., video, images, sound, etc.) with the sensor-based data, the verisimilitude of the virtual, extended, or augmented environment will be increased, and the virtual, extended, or augmented reality experience of the user will be enhanced and improved. In some embodiments, certain sensor measurements are used to create calibration curves for other sensor measurements. These calibration curves allow the sensor array to be calibrated as a whole, helping to ensure that the sensor data collected is as accurate as possible.
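As an illustration only, the following is a minimal sketch of fitting a calibration curve that maps one sensor's raw output onto readings from a trusted reference sensor; the function names, polynomial degree, and sample values are assumptions made for the example, not part of the system described above.

```python
# Minimal sketch, assuming paired readings from a reference sensor and a
# second sensor observing the same quantity; names and values are illustrative.
import numpy as np

def fit_calibration_curve(raw_readings, reference_readings, degree=2):
    """Fit a polynomial calibration curve mapping raw sensor output
    to the values reported by a trusted reference sensor."""
    coeffs = np.polyfit(raw_readings, reference_readings, degree)
    return np.poly1d(coeffs)

def calibrate(raw_readings, curve):
    """Apply the fitted curve to correct subsequent raw readings."""
    return curve(np.asarray(raw_readings))

# Example: a sensor that reads slightly high and non-linearly.
raw = np.array([10.0, 20.0, 30.0, 40.0, 50.0])
ref = np.array([9.6, 19.1, 28.4, 37.5, 46.3])
curve = fit_calibration_curve(raw, ref)
print(calibrate([25.0, 35.0], curve))
```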
The system may be broken down into the following core components:
Participant Sensor Array
During an event, such as a sporting game, there are a number of individuals whose participation is essential to bring the event to life. From coaches to players, to referees—even to spectators—each individual participant brings an important aspect to the event in question and helps complete the event experience. These individuals will be referred to as “event participants” herein.
Event participants are a source of data with the potential to enhance the experience of a virtual spectator. Event participant data is unique to each individual participant and must be captured using a sensor array system that can create a complete picture of that participant's contribution to the event in question.
The participant sensor array system is composed of sensors that are located on the participant themselves. As depicted in
In some example embodiments, the sensors in the sensor array system may be attached at specific points on the event participant's body to capture specific and unique movements at those points. As such, measurements between those points would be required not only to properly calibrate the sensor array system but also to increase the accuracy of the data collected, all helping to create a complete model of the event participant's data contribution to the overall data set comprising the event experience.
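By way of a hypothetical example, the sketch below compares distances between body-mounted sensors as reported by the array against distances physically measured at setup, and derives a scale correction; the joint names, coordinates, and measured distance are illustrative assumptions.

```python
# Minimal sketch, assuming each body-mounted sensor reports a 3-D position
# and that inter-sensor distances (e.g., wrist to elbow) are measured once
# at setup; all names and values are illustrative assumptions.
import numpy as np

def scale_correction(reported_positions, known_distances):
    """Compare sensor-pair distances reported by the array against physically
    measured distances, returning a scale factor that brings the reported
    skeleton into agreement with the real body."""
    ratios = []
    for (a, b), measured in known_distances.items():
        reported = np.linalg.norm(reported_positions[a] - reported_positions[b])
        ratios.append(measured / reported)
    return float(np.mean(ratios))

positions = {
    "wrist_r": np.array([0.82, 1.05, 0.10]),
    "elbow_r": np.array([0.55, 1.10, 0.12]),
}
measured = {("wrist_r", "elbow_r"): 0.26}  # metres, taped at setup
print(scale_correction(positions, measured))
```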
In one example embodiment, as depicted in
Event Object Sensor Array
Like the individuals who participate during an event, there is often an associated physical object that is a major participant in, or even a focus of, the event's activities. As depicted in
Like the event participant individuals, the physical objects are a source of data that can help enhance the experience of a virtual spectator. Data collected from a physical object is unique to that object and may be captured using a sensor array system that aids in compiling a complete picture of the object's specific contributions to the event in question.
As depicted in
Like the modular sensor array depicted in
The object sensor array system may also establish a mesh-style network between sensor-embedded objects where data is shared, used, analyzed and interpreted to help both calibrate the system of sensors and correlate the data in order to improve the overall quality of data being collected from any participating individual objects. This mesh-style network may be further extended to integrate modular sensor arrays incorporated into event participant suits.
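One way such cross-calibration over a mesh of sensor-embedded objects could be sketched is shown below, where each node compares its reading of a shared ambient quantity against its neighbours to flag drift; the node identifiers, topology, readings, and drift check are illustrative assumptions rather than a prescribed protocol.

```python
# Minimal sketch, assuming each sensor-embedded object can exchange a shared
# ambient reading (here, temperature) with its mesh neighbours.
from statistics import median

class MeshNode:
    def __init__(self, node_id):
        self.node_id = node_id
        self.neighbours = []
        self.last_reading = None

    def link(self, other):
        self.neighbours.append(other)
        other.neighbours.append(self)

    def report(self, reading):
        self.last_reading = reading

    def drift(self):
        """Difference between this node's reading and the neighbourhood
        median; a large value suggests the node needs recalibration."""
        peers = [n.last_reading for n in self.neighbours
                 if n.last_reading is not None]
        if not peers:
            return 0.0
        return self.last_reading - median(peers)

puck, net, stick = MeshNode("puck"), MeshNode("net"), MeshNode("stick")
puck.link(net); puck.link(stick); net.link(stick)
puck.report(21.4); net.report(18.1); stick.report(18.3)
print(puck.drift())  # puck reads about 3 degrees above its neighbours
```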
As depicted in
Event Facility Sensor Array
The facility at which the event occurs may also play an important part in the overall event experience. The surface upon which the event happens (e.g., grass, ice, wood floor, pavement, etc.) and the lights, acoustics, location of stands, and even the shape of the building will all play an important role in contributing to a virtual spectator's overall experience.
In many respects, the event facility may be treated as just another item within the event object list noted above (e.g., the stands could be thought of in the same context as a net on the field). However, the event facility is also unique in that it may define the boundaries of the event and the data collected therein. These boundaries provide a frame of reference and present a unique data capture opportunity that is quite difficult to accomplish solely with sensors mounted on the event objects and participants: tracking the object and participant sensors themselves relative to the facility.
As depicted in an example embodiment of
These object and participant positions are almost impossible to track solely at the object/participant level because there is no discernible frame of reference. By fixing and locating sensors within the facility itself, triangulation and algorithmic work may be done to determine the exact location of event objects and event participants, thus improving and enhancing the VR/AR/XR data set used to create the virtual spectator's experience.
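As a purely illustrative sketch of such positioning, the following estimates an object's location by least squares from range measurements to fixed facility anchors; the anchor coordinates, ranges, and linearization approach are assumptions for the example and not the specific algorithm of the system.

```python
# Minimal sketch of locating a tagged object from ranges to fixed facility
# sensors by least squares; anchor coordinates and ranges are illustrative.
import numpy as np

def trilaterate(anchors, ranges):
    """Estimate a 3-D position from distances to known fixed anchors by
    linearising the range equations against the first anchor and solving
    the resulting over-determined system."""
    anchors = np.asarray(anchors, dtype=float)
    ranges = np.asarray(ranges, dtype=float)
    x0, r0 = anchors[0], ranges[0]
    A = 2.0 * (anchors[1:] - x0)
    b = (r0 ** 2 - ranges[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1) - np.sum(x0 ** 2))
    position, *_ = np.linalg.lstsq(A, b, rcond=None)
    return position

# Four fixed facility sensors at known, non-coplanar positions (metres).
anchors = [(0, 0, 8), (60, 0, 10), (0, 30, 12), (60, 30, 6)]
true_pos = np.array([22.0, 14.0, 0.5])
ranges = [np.linalg.norm(true_pos - np.array(a)) for a in anchors]
print(trilaterate(anchors, ranges))  # approximately [22.0, 14.0, 0.5]
```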
The facility sensor array system may also be used to capture, relay, process, and manipulate data from event object and event participant sensor arrays in order to not only further enhance the VR/AR/XR experience, but also to calibrate and correlate data collected from event object and event participant sensor arrays located within the event facility boundaries.
The facility sensor array, as with the object sensor array, may comprise camera and microphone sensors and sensor arrays for capturing data in order to provide a three-dimensional view of the overall facility. Additionally, sensors within the facility may capture data including, but not limited to, temperature, pressure, light, sound, and vibration.
Data Processing Service and Technology
The combination of data collected from the event facility sensor system, the event object sensor systems, and the event participant sensor systems during an event can provide a complete picture of the event in raw data form subject to subsequent processing and distribution.
The data processing service may feature databases, software, hardware, and other technology to allow for specific uses of the data collected by the above described sensor array systems. Once the sensor data is collected, processed and manipulated, it can be distributed through various channels to implement the virtual, augmented, or extended or cross reality-based experience of the event for a spectator.
The data processing service may utilize algorithms to properly analyze, process, and correlate sensor data in near real-time so that the data can be used by external services in rendering the virtual, augmented, or extended or cross reality experience for a spectator.
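For illustration, a minimal sketch of time-aligning readings from several sensor streams into fixed windows that a downstream rendering service could consume is shown below; the stream names, timestamps, and 50 ms window size are assumptions made for the example.

```python
# Minimal sketch of grouping (timestamp_ms, value) readings from several
# sensor streams into fixed windows; keeps the latest reading per stream
# in each window. Stream names and the 50 ms window are assumptions.
from collections import defaultdict

def correlate(streams, window_ms=50):
    """Bucket readings by window start time so each window holds one
    correlated frame of data across all streams."""
    frames = defaultdict(dict)
    for name, readings in streams.items():
        for ts, value in readings:
            frames[ts - ts % window_ms][name] = value
    return dict(sorted(frames.items()))

streams = {
    "player_7_imu": [(1000, 3.2), (1051, 3.4)],
    "puck_tracker": [(1003, (10.2, 4.1)), (1049, (10.9, 4.0))],
    "rink_mic_3":   [(1002, 0.61), (1052, 0.64)],
}
for window, frame in correlate(streams).items():
    print(window, frame)
```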
The data processing service may also feature advanced security and encryption technology to protect collected sensor data and prevent interception and/or manipulation that may corrupt or change the virtual, augmented, or extended or cross reality experience and/or the results of the processed data.
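One hedged illustration of such protection is authenticated encryption of sensor payloads in transit, sketched below with AES-GCM from the Python cryptography package; the key handling, payload format, and sensor identifier are assumptions, not the system's actual security design.

```python
# Minimal sketch of authenticated encryption for sensor payloads in transit,
# using AES-GCM from the third-party "cryptography" package. Key management,
# message framing, and the sample payload are illustrative assumptions.
import os, json
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # provisioned out of band in practice
aesgcm = AESGCM(key)

def seal(payload: dict, sensor_id: str) -> tuple[bytes, bytes]:
    """Encrypt and authenticate a sensor reading; tampering with either the
    ciphertext or the sensor_id causes decryption to fail."""
    nonce = os.urandom(12)
    data = json.dumps(payload).encode()
    return nonce, aesgcm.encrypt(nonce, data, sensor_id.encode())

def open_sealed(nonce: bytes, blob: bytes, sensor_id: str) -> dict:
    """Verify and decrypt a sealed reading."""
    return json.loads(aesgcm.decrypt(nonce, blob, sensor_id.encode()))

nonce, blob = seal({"t_ms": 1051, "accel": [0.1, 9.8, 0.3]}, "player_7_imu")
print(open_sealed(nonce, blob, "player_7_imu"))
```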
Integrated Solution for Real-Time Event Spectatorship
Coordinating and integrating the above-described components will allow a real-time event to be experienced remotely and recreated for an event spectator in an augmented, virtual, or extended reality space. In one embodiment, this augmented, virtual, or extended reality space may be presented or displayed to a spectator through virtual reality hardware, such as virtual reality goggles and gloves. In another embodiment, this space may be presented or displayed to a spectator through mobile phone or tablet technology.
The sensor-based data allows for the creation of a more accurate virtual, augmented, or extended reality-based representation of a participant's body in three dimensions during the event than, for example, a system based solely on captured images, sound, or other surface data. For example, the data collected from the participant sensor array allows an accurate three-dimensional model of the player's physique and associated movements to be rendered. Superimposed over this sensor-based model of the player is a “skin,” or three-dimensional surface scan of the player's likeness, that completes the three-dimensional representation comprising a sensor data-based player avatar.
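As a simplified, assumed illustration of driving such a scanned skin from sensor-derived joint positions, the sketch below binds each surface vertex to its nearest joint at rest and moves it with that joint; production systems would use full skinning rigs, and all names and coordinates here are hypothetical.

```python
# Minimal sketch of moving a scanned "skin" with sensor-derived joint
# positions: each vertex is bound to its nearest joint in the rest pose and
# then follows that joint's translation. All coordinates are illustrative.
import numpy as np

def bind_vertices(vertices, rest_joints):
    """Record, for each scanned vertex, the nearest joint and the offset
    from that joint in the rest pose."""
    names = list(rest_joints)
    joints = np.array([rest_joints[n] for n in names])
    bindings = []
    for v in vertices:
        j = int(np.argmin(np.linalg.norm(joints - v, axis=1)))
        bindings.append((names[j], v - joints[j]))
    return bindings

def pose_vertices(bindings, live_joints):
    """Re-position every bound vertex using the latest joint positions
    reported by the participant sensor array."""
    return np.array([live_joints[name] + offset for name, offset in bindings])

rest = {"elbow_r": np.array([0.5, 1.1, 0.0]), "wrist_r": np.array([0.8, 1.0, 0.0])}
scan = np.array([[0.55, 1.12, 0.03], [0.78, 1.02, -0.02]])
bindings = bind_vertices(scan, rest)
live = {"elbow_r": np.array([0.5, 1.2, 0.1]), "wrist_r": np.array([0.7, 1.3, 0.2])}
print(pose_vertices(bindings, live))
```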
The sensor data-based player avatar can then be merged with incoming data captured by the event object sensor array and facility sensor array (e.g., audio-video capture) that would then be processed to provide a realistic real-time (or near real-time) representation of the event. This real-time representation could allow a viewer to place themselves anywhere in the virtual field of play so that they can experience and view the event from any available perspective.
In some embodiments, the viewer will also be able to rewind gameplay and watch it from different perspectives within the event field. In other embodiments, a viewer may be able to accelerate or slow the motion of the event to experience the event from different temporal viewpoints and perspectives.
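A minimal sketch of how rewinding and time-scaled playback might be supported is shown below, using a timestamp-indexed frame buffer; the frame contents, capture interval, and playback speeds are illustrative assumptions.

```python
# Minimal sketch of a replay buffer that lets a viewer rewind to any point
# and play frames back at an arbitrary speed.
import bisect

class ReplayBuffer:
    def __init__(self):
        self.timestamps, self.frames = [], []

    def record(self, t_ms, frame):
        self.timestamps.append(t_ms)
        self.frames.append(frame)

    def frame_at(self, t_ms):
        """Most recent frame at or before the requested event time."""
        i = bisect.bisect_right(self.timestamps, t_ms) - 1
        return self.frames[max(i, 0)]

    def play(self, start_ms, end_ms, speed=1.0, step_ms=20):
        """Yield frames between two event times; speed > 1 advances event
        time faster than render time (fast forward), speed < 1 advances it
        more slowly (slow motion)."""
        t = start_ms
        while t <= end_ms:
            yield self.frame_at(t)
            t += step_ms * speed

buf = ReplayBuffer()
for t in range(0, 200, 20):
    buf.record(t, {"t_ms": t, "puck": (t * 0.1, 4.0)})
print([f["t_ms"] for f in buf.play(40, 160, speed=2.0)])
```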
In some embodiments, the addition of the microphone arrays within the participant, object, and facility sensor arrays allows for the capture of sound data that will facilitate the creation of a three-dimensional sound environment. This sound data can then be correlated to the rest of the sensor-based data and video image data to create a virtual soundscape experience that allows the viewer to experience the sound during the event from any position they choose.
In this arrangement, the viewer could move their position and the soundscape would change based on where they choose to observe the virtual event. For example, if a viewer observing a hockey match positions themselves close to a net, that viewer may experience the sound of the puck approaching the net and being saved by a goalie more intensely, or loudly, than a viewer that observes the game from a position mid-rink.
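To illustrate the position-dependent soundscape in the simplest possible terms, the sketch below weights each captured sound source by the inverse of its distance to the viewer's chosen position; real spatial audio would also apply panning and head-related filtering, and the source positions and names here are assumptions.

```python
# Minimal sketch of distance-based attenuation for the virtual soundscape:
# each captured source is weighted by 1/distance to the viewer's chosen
# position, so moving toward the net makes the puck-on-post sound louder.
import numpy as np

def mix_for_listener(sources, listener_pos, min_dist=1.0):
    """Return a per-source gain for a listener at listener_pos."""
    gains = {}
    for name, pos in sources.items():
        d = max(np.linalg.norm(np.asarray(pos) - np.asarray(listener_pos)), min_dist)
        gains[name] = 1.0 / d
    return gains

sources = {"puck_on_post": (10.0, 4.0, 0.0), "crowd_end_zone": (0.0, 15.0, 3.0)}
print(mix_for_listener(sources, listener_pos=(11.0, 4.5, 1.7)))   # near the net
print(mix_for_listener(sources, listener_pos=(30.0, 15.0, 1.7)))  # mid-rink
```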
Fully processing, correlating, and integrating a real-time three-dimensional soundscape, real-time sensor-based data from participants, objects and the facility, three-dimensional image scans, and real-time video data allows for the creation of a truly immersive and realistic virtual, augmented, or extended reality-based recreation of an event happening in real time in the real-world that is far superior to a virtual experience based solely on captured and mapped surface data.
| Filing Document | Filing Date | Country | Kind |
|---|---|---|---|
| PCT/CA2020/000019 | 2/28/2020 | WO | 00 |

| Number | Date | Country |
|---|---|---|
| 62812744 | Mar 2019 | US |