Some of the world's most popular spectator sports involve racing events. Viewers enjoy seeing dramatic, high-speed passing of runners, skaters, bicyclists, race cars, and more. With improved video technology, cameras are often placed on or within the moving racers to provide an in-race view from the vantage point of each racer. In addition, when watching events, knowing the additional data behind the races, such as lead changes, speeds and speed differentials, g-forces felt by the racers, and elapsed and remaining time, can contribute to keeping the viewers engaged in the outcome of the race.
Gathering footage and data from multiple vantage points, and editing it into an integrated and compelling video, can be a manual, time-consuming, and expensive process. It requires camera systems on each racer, sensors on each racer, manual tracking of each pass or significant event (which can be extensive for large races and hard to observe over a large race track), manual post-production video splicing, manual calculations of sensor values, manually superimposing sensor information onto the video, and manual uploading or transferring of the completed video to a service for viewing. For live events, production teams must monitor video and manually switch a viewer display to a selected video source. For non-professional events or spontaneous or ad-hoc activities, there may be racers that do not know each other, making it difficult or impossible to share video footage and data.
For race car events, racers are outfitted with cameras to record the racers' individual views of the event. That footage is either recorded on the actual camera itself and/or wirelessly transmitted to a central recording facility. A person manually manages and manipulates the recordings to create highlight reels showing passes, adds in graphics showing any additional data or metrics, and then uploads the final video for viewing. This process is time-consuming, expensive, and requires dedicated trained resources to do the work. For smaller scale or informal race events, this video editing process can be cost prohibitive.
Embodiments of a system and corresponding method for automated editing of multiple video streams include obtaining multiple video data streams from a plurality of video sources, each video source associated with a moving device from a plurality of moving devices. The locations of the moving devices relative to each other are monitored. The video data streams are processed into a single integrated video file. The processing includes determining a preferred video source for viewing based on a location of a moving device relative to other moving devices, and switching the preferred video source based on a triggering event relative to the locations of the moving devices relative to each other. The single integrated video file can then be output to be provided to a viewer or automatically posted to a cloud-based service for storage and distribution.
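The source-selection and lead-change trigger described above can be sketched as follows. This is a minimal illustration, not the actual implementation; the class, field names, and the use of along-track position as the lead metric are assumptions.

```python
from dataclasses import dataclass

@dataclass
class MovingDevice:
    uuid: str
    track_position: float  # distance along the course; larger = farther ahead

def preferred_source(devices, last_leader_uuid):
    """Pick the video source to show and flag a lead-change triggering event."""
    leader = max(devices, key=lambda d: d.track_position)
    # Triggering event: the race lead has changed since the last check.
    triggered = leader.uuid != last_leader_uuid
    return leader.uuid, triggered

devices = [MovingDevice("uuid-a", 120.0), MovingDevice("uuid-b", 125.5)]
source, switch = preferred_source(devices, last_leader_uuid="uuid-a")
```

A variant could instead return the device just behind the leader, matching the embodiment where the trailing racer's view is preferred.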
In some embodiments, the single integrated video file is provided to a viewer through a live video stream. In other embodiments, the triggering event is a change in a race lead between moving devices, and may further include determining that the preferred video source is that of the leading moving device. In yet other embodiments, the system may determine that the preferred video source is that of the moving device behind a leading moving device.
In some embodiments, other moving devices may be detected within a field of view of each moving device and the preferred video source may be switched based on whether any moving devices are within the field of view of each moving device.
The foregoing will be apparent from the following more particular description of example embodiments, as illustrated in the accompanying drawings in which like reference characters refer to the same parts throughout the different views. The drawings are not necessarily to scale, emphasis instead being placed upon illustrating embodiments.
A description of example embodiments follows.
As illustrated in
Each device 115a-c has a pre-programmed universally unique identifier (UUID). This UUID may be mapped to the racer's name, demographic, and privacy information stored in a cloud-based server 170.
In an embodiment consistent with principles of the disclosure, prior to the start of a race, each racer activates the recording mode by pressing a button or telling the “app” (e.g., through voice activation) to start recording. For example, in
In some embodiments, activation may occur based on the racer arriving at, or passing by, a predetermined position as detected by the GPS. In yet other embodiments, the video recording devices 115a-c may begin recording prior to activation, but activation will tell the system that the event has started for purposes of processing the video data streams.
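The GPS-based activation could be implemented as a simple geofence check against the predetermined position. The sketch below is illustrative; the 25 m activation radius and the flat-earth distance approximation are assumptions.

```python
import math

def within_activation_zone(lat, lon, zone_lat, zone_lon, radius_m=25.0):
    """Return True if the device's GPS fix is within radius_m of the
    predetermined activation position (equirectangular approximation,
    adequate over short distances)."""
    dlat = math.radians(lat - zone_lat)
    dlon = math.radians(lon - zone_lon) * math.cos(math.radians(zone_lat))
    dist_m = 6371000.0 * math.hypot(dlat, dlon)  # mean Earth radius in meters
    return dist_m <= radius_m
```

A device already recording would call this each GPS update and, on the first True result, mark the event start for stream-processing purposes.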
In an alternate embodiment consistent with principles of the disclosure, as shown in
Once a video recording device (video recording device 115a-c or video cameras 215A and 215B) begins recording video and audio, it may use a wireless radio transceiver to send an "is anyone there?" message encoded with its own UUID, and listen for responses from other racers. All responses are cached on the local device.
The video recording device listens for "is anyone there?" messages on the wireless radio transceiver and answers with its own UUID. The video recording device caches the host's UUID. For each cached UUID, the video recording device is programmed to triangulate the relative positional information, which includes distance and direction. With this positional information, the other device is classified as being relatively in front of, next to, or behind the racer. The relative calculation takes into account the historical position of the UUID, as a curvy road course could give the false impression that another device has moved ahead or behind because of a path that circles back on itself. When a UUID changes its relative position, that is, it moves from being in front/next to/behind to a new state, the device logs the exact time, UUID, and state change of the pass to be used later for automated video creation.
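The pass-logging just described can be sketched as below. The state classification averages the along-track offset over recent samples, so a course that loops back on itself does not produce a false pass from a single noisy reading; the 3 m side-by-side threshold, the window size, and all names are assumptions for illustration.

```python
from collections import deque

IN_FRONT, NEXT_TO, BEHIND = "in_front", "next_to", "behind"

class PeerTracker:
    """Tracks one cached UUID's relative state and logs pass events."""

    def __init__(self, uuid, window=5, threshold_m=3.0):
        self.uuid = uuid
        self.threshold_m = threshold_m
        self.offsets = deque(maxlen=window)  # +ahead / -behind, in meters
        self.state = NEXT_TO
        self.pass_log = []  # (time, uuid, old_state, new_state)

    def update(self, t, along_track_offset_m):
        self.offsets.append(along_track_offset_m)
        avg = sum(self.offsets) / len(self.offsets)  # historical smoothing
        if avg > self.threshold_m:
            new = IN_FRONT
        elif avg < -self.threshold_m:
            new = BEHIND
        else:
            new = NEXT_TO
        if new != self.state:
            # Log the exact time, UUID, and state change of the pass.
            self.pass_log.append((t, self.uuid, self.state, new))
            self.state = new

tracker = PeerTracker("uuid-x")
for t, off in [(0, 10.0), (1, 8.0), (2, -12.0), (3, -15.0), (4, -20.0)]:
    tracker.update(t, off)
```

After this sequence the peer has moved from in front to behind, and the log ends with the transition used later for automated video creation.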
When the race is over, each device stops recording by the racer pressing a button in the app, or the app may automatically stop recording when it determines the event is over because other conditions are met: no other participating racers have been observed for a certain amount of time, the device is about to turn off from lack of battery charge, the GPS position of the device has not changed for a certain amount of time, etc.
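The auto-stop conditions above reduce to a small heuristic like the following. The timeout and battery values are illustrative assumptions, not specified in the text.

```python
def should_stop_recording(now, last_peer_seen, last_gps_change,
                          battery_pct, idle_timeout_s=300, min_battery_pct=5):
    """Stop when no participating racers have been observed for the
    timeout, the GPS position has been static for the timeout, or the
    battery is nearly drained (all timestamps in seconds)."""
    no_peers = (now - last_peer_seen) > idle_timeout_s
    stationary = (now - last_gps_change) > idle_timeout_s
    low_battery = battery_pct <= min_battery_pct
    return no_peers or stationary or low_battery
```

For example, a device that last heard a peer 500 seconds ago would stop, while one with recent peers and recent movement would keep recording.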
In the embodiment described with respect to
In some embodiments, a single controller can track the racers within the system to identify and recognize which racers are participating in the race for video recording purposes, and then receive the corresponding video. A racetrack could have its own physical controller on premises (e.g., a physical device, or software executed on a cellphone or computer). In this scenario the controller broadcasts a wireless signal to tell all the racers to activate. This can be linked to the official timing of the race. The controller can also be given racers' emails and/or cell phone numbers beforehand to invite them to the event.
In yet other embodiments, the single controller can be a processor on a device of a particular racer for activating other racer devices within the system for purposes of tracking them as part of the system and sending messages to activate the recording functions of each racer as a race begins. Recognizing the racers that are participating in the race may be possible based on a pre-registration of the racer to a particular event, or based on devices within an established social network, or based on acceptance of an invitation to participate in a race. For example, a controller racer can send an invitation through a social network or through targeted emails or messages stating a time or place for a racing event. Recordings from invited racers can then be monitored and processed as part of the larger system. In yet other embodiments, each racer participant may be capable of obtaining and processing the video streams within the system of racers to create a single integrated video file based on that user's particular processing preferences (e.g., perspective from the last place racer, or switching perspectives to a racer behind racers passing each other, etc.). This individualized processing may occur in the cloud, with the individualized processed video made available to one or more racer participants.
As shown in
In some embodiments, the system may also detect other racers or other objects using traditional image processing techniques, such as object segmentation and object recognition, within a field of view of each racer. In an example using computer vision, a system can operate as follows:
1. The software app records footage from racer #1.
2. During the race the software app records two other cars (racers #2 and #3) in the video that are not using the software app.
3. After the race the software app asks racer #1 for the email address or cell phone number of racers #2 and #3.
4. The system automatically emails and/or texts racers #2 and #3 asking them if they would like a copy of the footage and if they could upload their own videos, should they have any.
5. Anything that racers #2 and #3 upload is analyzed as if the software app had been running live during the race, and clips are integrated with the other users' footage as determined.
Artificial Intelligence (AI) processes may use image recognition in the system as an additional trigger for switching the preferred video source for the integrated video based on whether any moving devices (e.g., other racers) are within the field of view of each moving device. For example, if a car spins off the track or has some other sort of crash, the software agent could detect a crashed car and use that information to automatically cut to that footage from any of the participants as well. Another variation may be to use the field of view of the racer crashing into something. As another example, if the racer passes by a particular object that has been identified as an object of interest (e.g., a local landmark, a type of animal, or a type of vehicle) the system may switch to that racer's video feed. As each device records the video data, markers of these detected objects may be flagged in the video data associated with each UUID.
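The object-of-interest flagging described above might look like the sketch below. The detector itself is a stand-in for a real vision model (none is specified here), and the marker format and label set are assumptions.

```python
# Labels the system treats as switch triggers (illustrative set).
OBJECTS_OF_INTEREST = {"crashed_car", "landmark", "animal"}

def flag_markers(uuid, detections):
    """detections: iterable of (time_index_s, label) pairs emitted by an
    image-recognition model running over one racer's video stream.
    Returns markers to attach to that UUID's video data; any marker may
    later trigger a switch of the preferred video source to this feed."""
    return [
        {"uuid": uuid, "t": t, "label": label, "switch_trigger": True}
        for t, label in detections
        if label in OBJECTS_OF_INTEREST
    ]

markers = flag_markers("uuid-a", [(12.5, "crashed_car"), (14.0, "tree")])
```

Here the crash at 12.5 s becomes a marker, while the unrecognized object is ignored.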
For each UUID, if it is still in wireless range the video recording device sends the appropriate video clips wirelessly to each UUID via a peer-to-peer network method. If the UUID is not in range, or is unable to receive the clip at that time, the video recording device uploads the clips to a cloud-based server for the other UUID devices to download when able. This allows racers who do not know one another to automatically receive relevant clips in a privacy-protected manner, without needing to share the actual identity of other racers.
The video recording device checks with the cloud-based server to see if there are video clips available for downloading that it did not receive via the peer-to-peer network method. Available clips and metadata may be downloaded via the internet connection from the cloud-based server. In the event a clip becomes available at a later time, perhaps because another device had a delay in uploading to the cloud, a callback notification may be generated by the cloud-based server to alert the device of new footage, which will re-trigger this process. In some embodiments, the device may be a handheld device such as a mobile phone or laptop. In other embodiments, the device may be a processor on a cloud-based server.
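The delivery fallback in the preceding two paragraphs, peer-to-peer first, cloud-based server second, can be sketched as follows. The network operations are stubbed as injected callables rather than a real transport API, which is an assumption for illustration.

```python
def deliver_clip(clip, peer_uuid, send_p2p, upload_to_cloud, in_range):
    """Deliver a clip to one peer UUID: try the peer-to-peer path while
    the peer is in wireless range, falling back to a cloud upload the
    peer can download when able. Returns the path actually used."""
    if in_range(peer_uuid):
        try:
            send_p2p(peer_uuid, clip)
            return "p2p"
        except ConnectionError:
            pass  # transfer failed mid-send; fall through to the cloud path
    upload_to_cloud(peer_uuid, clip)
    return "cloud"

# Usage with trivial stubs standing in for the real radio and server:
route_near = deliver_clip("clip-1", "uuid-b",
                          send_p2p=lambda u, c: None,
                          upload_to_cloud=lambda u, c: None,
                          in_range=lambda u: True)
route_far = deliver_clip("clip-1", "uuid-b",
                         send_p2p=lambda u, c: None,
                         upload_to_cloud=lambda u, c: None,
                         in_range=lambda u: False)
```

Keying the cloud upload by UUID rather than identity is what preserves the privacy property noted above: peers exchange clips without learning who the other racers are.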
Referring again to
After a race, the user now has a video showing all of the passes from users of the system in a single integrated video file. This single integrated video file, complete with audio and metadata superimposed in, is readily available for sharing, broadcasting, and long-term storage. No manual processing was required to create it.
Users could manually record a race, collect all the footage from the participating devices, then manually search for passes, manually splice the videos, reassemble them, and then post the result. Given the difficulties of sharing footage amongst users, who may not know each other's identities or means of contact, this could be impossible or take a significant amount of time. With the approach of the present disclosure, these difficulties are avoided.
The term “system” is used throughout to refer to one or more of the video recording devices 115a-c or 215A, 215B, or other such video recording devices, as programmed to implement the features described herein for automated editing of multiple video streams based on one or more triggering events.
In some embodiments, the processor may be located locally on the same device as the video source. In other embodiments, it may be located on a remote general-purpose computer or cloud-computer. The interface may include a display on a networked computer, a display on a handheld device through an application, or a display on a wearable device.
Referring to the center portion of
At 406 the video recording device enters a repeated "race loop" wherein, if a triggering event has occurred, the time index and all the data of other relevant racers and non-racer devices within range are recorded for later processing at 412.
At 408 the video recording device starts and repeats a “find others” process. In this process, the device broadcasts the “is anyone there?” message. If another device responds, the video recording device caches the UUID and relative position of the responding device(s) at 416. The video recording device responds in turn by transmitting its UUID and relative position and any user-permissioned additional information (e.g., name, vehicle information, etc.) at 418.
At 410 the video recording device starts and repeats a “listen for others” process. In this process, the device listens for “is anyone there?” message broadcasts from other devices at 420. Upon detecting such a message, the video recording device responds by transmitting its UUID and relative position and any user-permissioned additional information (e.g., name, vehicle information, etc.) at 422. At 424 the device may cache the UUID of a responding device.
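The "find others" and "listen for others" processes above form a simple discovery handshake. The toy model below runs in-process, with the wireless radio transceiver modeled as a shared message list; that substitution, and all names, are assumptions for illustration.

```python
class Device:
    """Toy model of one video recording device's discovery behavior."""

    def __init__(self, uuid):
        self.uuid = uuid
        self.cache = set()  # cached UUIDs of discovered peers

    def broadcast_query(self, bus):
        # "Find others": broadcast the "is anyone there?" message
        # encoded with our own UUID.
        bus.append(("query", self.uuid))

    def handle_messages(self, bus):
        # "Listen for others": answer queries with our UUID and cache
        # every peer UUID we hear, whether query or reply.
        for kind, uuid in list(bus):
            if uuid == self.uuid:
                continue
            if kind == "query" and ("reply", self.uuid) not in bus:
                bus.append(("reply", self.uuid))
            self.cache.add(uuid)

bus = []
a, b = Device("uuid-a"), Device("uuid-b")
a.broadcast_query(bus)
b.handle_messages(bus)  # b hears a's query, replies, caches uuid-a
a.handle_messages(bus)  # a hears b's reply, caches uuid-b
```

After one exchange each device has cached the other's UUID, the precondition for the relative-position tracking described earlier.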
Referring to the right portion of
At 436 if a triggering event has occurred, the time index and all the data of relevant racers and non-racer devices within range are recorded for later processing. At 438 the fixed device uploads recorded video and data to the cloud-based server for use by the racers to download.
At 508 the video recording device listens for peer video clip and metadata information from other racers. At 510 the device downloads data locally for further processing. At 512 the device creates a “best-of” highlight reel, e.g., as a single integrated video file. This highlight reel may contain the best clips from each racer's perspective.
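Assembling the "best-of" reel from the logged triggering events can be sketched as building an edit list of clip segments. The ±5-second window around each event, the segment format, and the merge rule are assumptions; a real system would hand this list to a video tool for concatenation into the single integrated video file.

```python
def build_highlight_reel(events, pad_s=5.0):
    """events: list of (time_index_s, source_uuid) triggering events.
    Returns an ordered edit list of (source_uuid, start_s, end_s)
    segments, merging overlapping clips from the same source."""
    reel = []
    for t, uuid in sorted(events):
        start, end = max(0.0, t - pad_s), t + pad_s
        if reel and reel[-1][0] == uuid and start <= reel[-1][2]:
            # Two nearby events from the same racer become one clip.
            reel[-1] = (uuid, reel[-1][1], end)
        else:
            reel.append((uuid, start, end))
    return reel

reel = build_highlight_reel([(10.0, "uuid-a"), (12.0, "uuid-a"), (30.0, "uuid-b")])
```

The two close events from the first racer merge into a single 5-17 s clip, followed by a separate clip from the second racer's perspective.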
It should be understood that the example embodiments described above may be implemented in many different ways. In some instances, the various methods and machines described herein may each be implemented by a physical, virtual, or hybrid general-purpose computer having a central processor, memory, disk or other mass storage, communication interface(s), input/output (I/O) device(s), and other peripherals. The general-purpose computer is transformed into the machines that execute the methods described above, for example, by loading software instructions into a data processor, and then causing execution of the instructions to carry out the functions described herein.
Embodiments may typically be implemented in hardware, firmware, software, or any combination thereof.
In certain embodiments, the procedures, devices, and processes described herein constitute a computer program product, including a non-transitory computer-readable medium, e.g., a storage medium such as one or more high-speed random access memory devices, such as DRAM, SRAM, DDR RAM, or other random access solid state memory devices; and optionally includes non-volatile memory, such as one or more magnetic disk storage devices, optical disk storage devices, flash memory devices, or other non-volatile solid state storage devices, or any combination thereof. Such a computer program product can be installed by any suitable software installation procedure, as is well known in the art. In another embodiment, at least a portion of the software instructions may also be downloaded over a cable, communication, and/or wireless connection.
Further, firmware, software, routines, or instructions may be described herein as performing certain actions and/or functions of the data processors. However, it should be appreciated that such descriptions contained herein are merely for convenience and that such actions in fact result from computing devices, processors, controllers, or other devices executing the firmware, software, routines, instructions, etcetera.
It also should be understood that the flow diagrams, block diagrams, and network diagrams may include more or fewer elements, be arranged differently, or be represented differently. But it further should be understood that certain implementations may dictate the block and network diagrams and the number of block and network diagrams illustrating the execution of the embodiments be implemented in a particular way.
Accordingly, further embodiments may also be implemented in a variety of computer architectures, physical, virtual, cloud computers, and/or some combination thereof, and, thus, the data processors described herein are intended for purposes of illustration only and not as a limitation of the embodiments.
While this invention has been particularly shown and described with references to example embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the scope of the invention encompassed by the appended claims.
This application claims the benefit of U.S. Provisional Application No. 63/262,234, filed on Oct. 7, 2021. The entire teachings of the above application are incorporated herein by reference.