The invention relates to video, augmented video, and interaction with video in various media.
Video has changed the way in which we interact with events. Whether the event is an athletic event, a musical performance, theater, or some other event, the ability to capture and replay the event, typically with a smartphone or other small hand-held device, has dramatically altered the consumption of these performances.
Consumers today want more than just a replay, however. They desire to re-live the event or to capture it in a unique way that they did not perceive with their own eyes. This includes using slow-motion, changing the sounds or pitch of the audio feed, and participating in the event through modification of the video files.
The advent of high-speed video cameras, high-definition video, and other video and sensor-based imagery allows for further modification of a video feed. This is particularly noticeable in these event venues and especially as it relates to sports.
Instant replays in sporting events have become an integral part of the fan viewing experience. Video production for televised and internet streaming events provides multiple camera angle replays and sophisticated graphical analysis overlays to enhance the viewing experience. Furthermore, with current streaming and digital video recording technology, the viewer is able to rewind the live feed to access the instant replay multiple times, if necessary. Commentators frequently draw on a replay and identify action that occurred or identify what a player should have or could have done instead of what occurred. Furthermore, information such as speed, distance travelled, and other metrics that provide more in-depth viewing experiences can be overlaid onto the video as supporting information or analysis.
The live, in-venue experience lacks access to the production quality replay as fans are most likely limited to the instant replay disseminated over the venue's multimedia system such as a jumbotron or video monitors. Additionally, the live, in-venue experience does not allow fans to access graphical analysis overlays, analytics and telemetry that they would otherwise be provided in a video production. These are major setbacks for consumers who now expect and demand sophisticated video options related to the event they are enjoying.
The methods and systems described herein detail new and unique manners in which the systems can be utilized to generate augmented video feeds. Specifically, these feeds can be accessed through user devices such as mobile devices and can further modify the fan experience.
The embodiments herein are directed to augmented video and to access of said augmented video through the use of Machine-Readable Codes (“MRC”). Such embodiments allow for a unique approach towards accessing the augmented video, generating content within the augmented video, and accessing additional content within the system.
In a preferred embodiment, an augmented video system comprising: a server, a plurality of video cameras, a database, and software capable of being operated on said server; wherein said plurality of video cameras capture a moment in time, each of said plurality of video cameras capturing a video file of said moment in time from a different perspective and storing each of said video files in said database; wherein said software combines each of said video files into a single combined video file; said combined video file being directed back to the server and said server generating a link to said combined video file; and said combined video file being operably viewable as a video file on a user device. In a further preferred embodiment, the augmented video system wherein said combined video file can be modified by a user to change the perspective view of the video file.
In a further preferred embodiment, the augmented video system comprising a tag with a machine-readable code, wherein the augmented video is accessed by scanning the machine-readable code; wherein the action of scanning the machine-readable code on the tag generates a URL encoded to the tag; wherein the URL is connected to said server; and wherein opening the URL displays the augmented video.
In a further preferred embodiment, the augmented video system wherein the server further identifies the user device or a user within the system. In a further preferred embodiment, the augmented video system wherein the server further identifies user analytics.
In a further preferred embodiment, the augmented video system wherein the plurality of video cameras is selected from the group consisting of: high resolution, high frame rate video cameras, volumetric video capture hardware, depth sensing cameras, ultra-high FPS machine vision cameras, LIDAR sensors, LIDAR-enabled cameras, and combinations thereof.
In a further preferred embodiment, the augmented video system wherein the user is added into the video via an avatar. In a further preferred embodiment, the augmented video system wherein the avatar participates in the video. In a further preferred embodiment, the augmented video system wherein the avatar perspective modifies the video perspective to a first-person view of the video based upon the placement of the avatar within the video.
In a further embodiment, a method of viewing a video replay in augmented reality comprising: capturing a moment in time on a plurality of video cameras, said plurality of video cameras each capturing the same moment in time from a different perspective to create a plurality of video files; stitching the plurality of video files together from the plurality of video cameras to create an augmented video file; replaying the moment in time from the augmented video file on a computing device; said replay generated by scanning a tag with a machine-readable code; said scanning engaging with a server to generate a URL that comprises the augmented video file for viewing; displaying the augmented video file on a user device; and modifying the visual angle of view of the augmented video file by rotating the user device along the horizontal or vertical axis; wherein rotating along the vertical axis rotates the view of the augmented video file around the viewer in the vertical axis; and wherein rotating along the horizontal axis rotates the view along the horizontal axis.
In a preferred embodiment, an augmented video system comprising: a machine-readable code, a user device, a server, a plurality of video cameras, a database, and software capable of being operated on said server; wherein said plurality of video cameras capture a moment in time, each of said plurality of video cameras capturing a video file of said moment in time from a different perspective and storing each of said video files in said database; wherein said software combines each of said video files into a combined video file; and wherein said user device, upon accessing the augmented video system via the machine-readable code, generates a request to said server to view the combined video file, said combined video file being directed back to the server in an assembled form; said combined video file being operably viewable as a video file on said user device.
In a further embodiment, the augmented video system wherein said combined video file can be modified by a user to change the perspective of the combined video file. In a further embodiment, the augmented video system wherein the combined video file being modified is performed by a rotation of the user device along a horizontal axis or a vertical axis. In a further embodiment, the augmented video system wherein the combined video file being modified by a user is performed by touching a button operable to the user device. In a further embodiment, the augmented video system wherein the button operable to the user device is on a screen of said user device.
In a further embodiment, the augmented video system wherein said machine-readable code is defined on a tag, wherein the combined video file is accessed by scanning the tag; wherein scanning the tag generates a URL encoded to the tag; wherein the URL is connected to said server; and wherein opening the URL displays the combined video file. In a further embodiment, the augmented video system wherein the server further identifies the user device or a user within the augmented video system. In a further embodiment, the augmented video system wherein the server further identifies user analytics, said user analytics stored in a database and corresponding to a unique ID assigned to said user device.
In a further embodiment, the augmented video system wherein the plurality of video cameras is selected from the group consisting of: high resolution, high frame rate video cameras, volumetric video capture hardware, depth sensing cameras, ultra-high FPS machine vision cameras, LIDAR sensors, LIDAR-enabled cameras, and combinations thereof.
In a further embodiment, the augmented video system wherein an augmented video comprises an avatar added to the combined video file. In a further embodiment, the augmented video system wherein the avatar participates in the augmented video by replacing one or more elements within the augmented video. In a further embodiment, the augmented video system wherein the augmented video is displayed by an avatar perspective, wherein the avatar perspective modifies the perspective of the augmented video to a first-person view of the augmented video based upon placement of the avatar within the augmented video.
In a preferred embodiment, a method of viewing a video replay in augmented reality comprising: (a) capturing a moment in time on a plurality of video cameras, said plurality of video cameras each capturing the same moment in time from a different perspective to create a plurality of video files; (b) stitching the plurality of video files together from the plurality of video cameras to create a combined video file; (c) generating a replay from the combined video file on a user device by scanning a tag; (d) in response to scanning the tag, generating a URL by receiving a request for a video file at a server; (e) accessing a target of a redirect request; (f) identifying a content of said combined video file to be included in said request; (g) receiving data from a database including said content; (h) assembling the combined video file; (i) sending said combined video file to said user device; and (j) accessing said combined video file on said user device.
In a further embodiment, the method further comprising: modifying a visual angle of the combined video file by receiving at said user device a rotation along a horizontal axis or a vertical axis; wherein rotating along the vertical axis rotates the viewing of the combined video file around a viewer in the vertical axis; and wherein rotating along the horizontal axis rotates the viewing of the combined video file along the horizontal axis.
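By way of a non-limiting illustration only, the following minimal Python sketch shows how rotation of a user device might be mapped to the viewing angle of the combined video file, with rotation about the vertical axis panning the view and rotation about the horizontal axis tilting it. The class name, parameter names, and clamping limits are hypothetical and are not drawn from the embodiments above.

```python
# Minimal sketch: mapping user-device rotation to the viewing angle of a
# combined video file. Names and the clamping limits are illustrative only.

class ViewOrientation:
    def __init__(self, yaw_deg: float = 0.0, pitch_deg: float = 0.0):
        self.yaw_deg = yaw_deg      # rotation about the vertical axis
        self.pitch_deg = pitch_deg  # rotation about the horizontal axis

    def apply_device_rotation(self, delta_yaw_deg: float, delta_pitch_deg: float) -> None:
        """Rotate the view in response to a device rotation event.

        Rotating the device about its vertical axis pans the view around
        the viewer; rotating about its horizontal axis tilts the view.
        """
        self.yaw_deg = (self.yaw_deg + delta_yaw_deg) % 360.0
        # Clamp pitch so the view cannot flip upside down.
        self.pitch_deg = max(-89.0, min(89.0, self.pitch_deg + delta_pitch_deg))


if __name__ == "__main__":
    view = ViewOrientation()
    view.apply_device_rotation(delta_yaw_deg=30.0, delta_pitch_deg=-10.0)
    print(view.yaw_deg, view.pitch_deg)  # 30.0 -10.0
```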
In a preferred embodiment, a method of generating an avatar within a visual replay comprising: (a) uploading, within a database, a digital file defining an avatar; (b) in response to scanning a tag with a user device, generating a URL by receiving a request for a combined video file at a server; (c) accessing a target of a redirect request; (d) identifying a content of said combined video file to be included in said request; (e) inserting said avatar within said combined video file; (f) receiving data from a database including said content; (g) assembling the combined video file; (h) sending said combined video file to said user device; and (i) accessing said combined video file on said user device.
In a preferred embodiment, a method of overlaying information on a video file comprising: (a) generating a combined video file at a user device by scanning a tag; (b) in response to scanning the tag, generating a URL by receiving a request for a video file at a server; (c) accessing a target of a redirect request; (d) identifying a content of said combined video file to be included in said request; (e) receiving data from a database including said content, wherein said data includes a set of data to be superimposed over the video file; (f) assembling the combined video file; (g) sending said combined video file to said user device; and (h) accessing said combined video file on said user device.
In a further embodiment, the method wherein the set of data to be superimposed over the video file provides live statistics regarding one or more players viewed within the combined video file.
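By way of a non-limiting illustration, the sketch below shows one way the superimposable data, here hypothetical live player statistics, could be packaged alongside a combined video file before assembly; the field names and JSON layout are assumptions rather than a required schema.

```python
# Illustrative sketch: packaging overlay data (e.g., live player statistics)
# with a combined video file prior to assembly. Field names are hypothetical.

from dataclasses import dataclass, field, asdict
import json


@dataclass
class OverlayDatum:
    player_id: str
    label: str            # e.g., "Speed"
    value: str            # e.g., "21.3 mph"
    video_time_s: float   # when the overlay should appear in the replay


@dataclass
class CombinedVideoPayload:
    video_url: str
    overlays: list = field(default_factory=list)

    def add_overlay(self, datum: OverlayDatum) -> None:
        self.overlays.append(asdict(datum))

    def to_json(self) -> str:
        return json.dumps({"video_url": self.video_url, "overlays": self.overlays})


if __name__ == "__main__":
    payload = CombinedVideoPayload(video_url="https://example.com/replay/123")
    payload.add_overlay(OverlayDatum("player-22", "Speed", "21.3 mph", 4.5))
    print(payload.to_json())
```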
In a preferred embodiment, a system for generating automatic replays within a venue comprising: (a) capturing a moment in time on a plurality of video cameras, said plurality of video cameras each capturing the same moment in time from a different perspective to create a plurality of video files; (b) stitching the plurality of video files together from the plurality of video cameras to create a combined video file; (c) generating a replay from the combined video file on a user device by scanning a tag; (d) in response to scanning the tag, generating a URL by receiving a request for a video file at a server; (e) accessing a target of a redirect request; (f) identifying a content of said combined video file to be included in said request; (g) receiving data from a database including said content; (h) assembling the combined video file; (i) sending said combined video file to said user device; and (j) accessing said combined video file on said user device.
In a further embodiment, the system wherein a GUI defines a list of one or more video files to be viewed. The system wherein the combined video file to be viewed is selected from the GUI, wherein selection from the GUI sends a request to the server to access the combined video file, and wherein the combined video file is assembled and delivered to said user device.
In a preferred embodiment, a method for using a sensor on a user device to generate overlay information on a video feed of said sensor comprising: (a) capturing a live video feed from a camera selected from a user device camera or a second camera; (b) overlaying, within said video feed, data from a plurality of video cameras, said plurality of video cameras each capturing the same moment in time from a different perspective to create a plurality of video files; (c) stitching the plurality of video files together from the plurality of video cameras to create a combined video file; (d) generating a replay from the combined video file on a user device by scanning a tag; (e) in response to scanning the tag, generating a URL by receiving a request for a video file at a server; (f) accessing a target of a redirect request; (g) identifying a content of said combined video file to be included in said request; (h) receiving data from a database including said content; (i) assembling the combined video file; (j) sending said combined video file to said user device; and (k) accessing said combined video file on said user device.
Various embodiments are described more fully hereinafter with reference to the accompanying drawings, which form a part hereof, and which show, by way of illustration, specific exemplary embodiments by which the innovations may be practiced. The embodiments may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the embodiments to those skilled in the art. Among other things, the various embodiments may be methods, systems, media, devices, or any similar or equivalent arrangements known to those skilled in the art. Accordingly, the various embodiments may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. The following detailed description is, therefore, not to be taken in a limiting sense.
Video replay is a critical element in consumption of live events. A video replay can be as simple as replaying the entire live event or can be more nuanced, such as a replay of a key moment in the live event.
As used herein, a live event is an event that is captured on video by another. This is typically something like a sports game, a sporting practice, a visual performance, a visual performance practice, etc. In sports, for example, video of a game is frequently used to review the plays and to dissect the positive and the negative aspects of the play. Individual plays can then highlight specific elements, and a user, or a player in this instance, can practice certain elements to improve upon the negative and reinforce the positive.
In a sporting practice, a user may take a particular element and replay it to visualize the elements of a moment. Take a baseball swing as an example. Video from a game can be captured to record how a particular player swung at a pitch during a baseball game. Tendencies can be dissected and a plan to modify or improve tendencies can be enacted. A player can then swing during a practice, capture video of the practice, and then replay the practice video to reinforce the learning of the tendencies to be modified.
As used herein, the below terms will have the following meanings as may be supplemented elsewhere in this specification:
As used in this application, the words “a,” “an,” and “one” are defined to include one or more of the referenced items unless specifically stated otherwise. The terms “approximately” and “about” are defined to mean ±10%, unless otherwise stated. Also, the terms “have,” “include,” “contain,” and similar terms are defined to mean “comprising” unless specifically stated otherwise. Furthermore, the terminology used in the specification provided above is hereby defined to include similar and/or equivalent terms, and/or alternative embodiments that would be considered obvious to one skilled in the art given the teachings of the present patent application.
A high-level overview of an exemplary system (10), which captures content such as video or AR content from a recording device, catalogs that content, and delivers unique content such as video or AR to specific user devices at a venue during an event, or to remote user devices viewing a live event remote from the venue, for users who have scanned a tag with their user device, is shown in
A proprietor may use a network of encoded tags (16a, 16b) to identify points of interest (e.g., locations, objects, people, etc.). The number of tags (16a, 16b) in the network and placement of tags on, in, or near points of interest is at the discretion of the proprietor to fit its particular assets and needs. Further, a proprietor may add to or subtract from the number of tags (16a, 16b) in the network at will. Thus, the number of tags (16a, 16b) in a proprietor's network may be dynamic, either more or less than an original network of tags. Each tag (16a, 16b) in the network of tags has a unique identifier (tag ID), which may be used to identify a particular point of interest. For example, a tag (16a, 16b) may be situated on or near a seat in a stadium, and the user who purchased a ticket to sit in that seat is the “limited owner” or renter of that seat for a particular event. In certain embodiments, it may be possible to have multiple copies of the same tag, each with the same tag ID, in locations where multiple scans would be desirable at the same time by multiple users. Thus, at the entrance to a stadium, a plurality of tags could be located at different entrance points, each having the same tag ID.
As is implied in
The proprietor may also access platform (20), albeit via the administrator device (12) and one or more networks (18). The administrator device may be located at the venue, or it may be at a location remote from the venue. Generally, the proprietor may access a proprietor portal (
In addition to hosting the proprietor portal, platform (20) may host a variety of other services including, without limitation, event user and remote user access to content associated with the event, venue, proprietor, and the like. As such, platform (20) may include, or may include access to, one or more servers, databases, application programming interfaces (APIs), artificial intelligence/machine learning algorithms, other algorithms, code, blockchains, blockchain platforms, geofences, third-party integrations, time stamps, and more, which are detailed below, with reference to the accompanying figures.
As detailed in the preferred embodiments herein, by use of augmented reality, we can modify the way in which the video is captured and consumed. For example, by use of a plurality of recording devices (206a, 206b, 206c, 206d), a live event, such as a football play, or the practice swings of a baseball player can be captured by all the recording devices (206a, 206b, 206c, 206d) for a single live event, i.e., in this case a single play or swing. The plurality of recording devices (206a, 206b, 206c, 206d) is positioned to capture the play or swing in different visual planes, for example, a series of four recording devices (i.e., video cameras, etc.), each positioned at a relative “corner” of a hypothetical quadrilateral shape surrounding the player. The recordings, such as video, from the four recording devices thus capture the live media from a left rear (206d), a right rear (206b), a right front (206a), and a left front (206c). The four visual positions and their associated video are combined into a single augmented video file. This augmented video file allows for rotation of the video, because the four recording devices, generating four video files, once stitched together, allow for orientation of the video image based on the desired perspective. Thus, the video can be slowed down, rotated, stopped, rewound, provided with overlays of additional material and information, etc., and oriented as desired by the user.
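True volumetric stitching is beyond the scope of a short example, but the following Python sketch illustrates, in highly simplified form, the idea of orienting playback by perspective: four time-synchronized recordings, one per corner, are indexed by the azimuth from which they view the subject, and a requested viewing angle is served from the nearest camera. The file names, azimuth values, and nearest-camera selection are illustrative assumptions only.

```python
# Simplified sketch of perspective selection across four corner cameras.
# Real augmented video would interpolate between views; here we simply pick
# the time-synchronized recording whose azimuth is closest to the request.

CAMERA_AZIMUTHS = {
    "left_front":  45.0,    # 206c
    "right_front": 135.0,   # 206a
    "right_rear":  225.0,   # 206b
    "left_rear":   315.0,   # 206d
}

VIDEO_FILES = {
    "left_front":  "clip_206c.mp4",
    "right_front": "clip_206a.mp4",
    "right_rear":  "clip_206b.mp4",
    "left_rear":   "clip_206d.mp4",
}


def angular_distance(a: float, b: float) -> float:
    """Smallest absolute difference between two angles, in degrees."""
    d = abs(a - b) % 360.0
    return min(d, 360.0 - d)


def select_camera(view_azimuth_deg: float) -> str:
    """Return the video file best matching the requested viewing azimuth."""
    best = min(CAMERA_AZIMUTHS,
               key=lambda cam: angular_distance(CAMERA_AZIMUTHS[cam], view_azimuth_deg))
    return VIDEO_FILES[best]


if __name__ == "__main__":
    print(select_camera(300.0))  # clip_206d.mp4 (left rear view)
```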
The total number of recording devices may be more or less than four; however, to achieve proper augmented video playback, at least four recording devices are typically preferred. The recording devices (206d, 206c, 206b, and 206a) are oriented at different points so as to capture the recording (i.e., the video image) from different orientations. Notably, in this example they are placed at four corners surrounding the live action being performed and captured by the recording devices (206d, 206c, 206b, and 206a), though the embodiments are not limited to positions at the four corners. Because the live action is captured between the four recording devices (i.e., video cameras), each captures the same live action from a different perspective. This allows for combining a data video file from each of the four recording devices (corresponding to the same precise time of the live action) to create an augmented video file that can be manipulated.
In the example of
As was mentioned with respect to
In-venue tags (16a) may be physical (e.g., tangible), digital (e.g., virtual/intangible), or combinations of both forms. Physical tags may be constructed from diverse types of materials. In the case of tags having one or more graphical/matrix type codes such as QR codes, barcodes, and the like, the code may be printed, etched, fabricated, or the like on materials such as paper, glass, plastic, metal, fabric, and the like, as a few nonlimiting examples. In the case of NFC/RFID enabled tags, chips/antennae may be adhered to, attached to, embedded in, or fabricated on (or combinations thereof) a natural or manufactured material such as metal (e.g., aluminum, stainless steel), semiconductor, wood, polymer (e.g., plastic), film, glass, and combinations thereof, without limitation. The material may be incorporated into or affixed (e.g., adhesive, or other form of attachment) where desired. Digital tags may be displayed on a screen or communicated via radio waves. In the case of QR codes, barcodes, and the like, the graphical code may be displayed on a display screen such as the jumbo screen (204) or a display screen associated with the event user's seat (208), other locations/point of interest, or combinations thereof. Thus, the in-venue tag (16a) may be a video display, such as LCD, LED, e-ink, or other visual display and/or text accompanying the MRC (17a). In fact, most, if not all, remote tags (16b) will be a display screen such as on a television screen, computer screen, appliance screen, and the like, having the MRC (e.g., 17b) displayed thereon, or text on the display screen identifying the MRC (17b), although embodiments are not limited thereto.
Information encoded on or in each tag in the system (10) may include an address to direct a request (e.g., for a Web page) from the user device (14a, 14b) to a server or the like on the network (18) such as a server on platform (20). The address may be in the form of a uniform resource identifier (URI) such as a uniform resource locator (URL), according to a non-limiting embodiment. In this way, when the user scans the tag (16a, 16b) with the user device (14a, 14b), the user device (14a, 14b) sends a request to the appropriate network (18) location. In the example shown in
In a typical embodiment, each tag (16a, 16b) in the plurality has a unique tag identification number (i.e., “tag ID”), which may be appended to the URI/URL, although embodiments are not so limited. The tag ID may be used by the platform (20) for several reasons, one of which is to identify a point of interest/location associated with the tag (16a, 16b) via a tag ID lookup. For example, when a request comes from the event user device (14a), the platform (20) knows that the request came from within the venue (202) and is associated with the seat (208) in which the event user is sitting. And when the request comes from the remote user device (14b), the platform (20) knows that the request is in response to scanning a tag (e.g., 16b/MRC 17b) in transmission, on a Web page, or the like, and the platform (20) knows which transmission/Web page is associated with the scanned tag (16b). In an embodiment, the tag ID may be appended to the URL (or URI) such as by one or more parameters, pattern matching techniques, or other such mechanism for encoding information in a URI, URL and/or browser request.
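As a non-limiting illustration of this encoding, the sketch below parses a hypothetical encoded URL and extracts a tag ID appended as a query parameter; the URL and the parameter name are assumptions, since embodiments may instead use pattern matching or other mechanisms.

```python
# Illustrative sketch: extracting a tag ID appended to the encoded URL.
# The URL and the "tag_id" parameter name are hypothetical.

from typing import Optional
from urllib.parse import urlparse, parse_qs


def extract_tag_id(scanned_url: str) -> Optional[str]:
    """Return the tag ID appended to the scanned URL, if present."""
    query = parse_qs(urlparse(scanned_url).query)
    values = query.get("tag_id", [])
    return values[0] if values else None


if __name__ == "__main__":
    url = "https://platform.example.com/r?tag_id=SEC100-ROWA-SEAT1"
    print(extract_tag_id(url))  # SEC100-ROWA-SEAT1
```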
Referring to
In an embodiment, the redirect/identification server (302) may pass information needed to further method (400). For example, the tag ID may be passed to the interface server (306) for a tag ID lookup (step 412), such as in database (308), the administration server (310) and/or any other suitable database or server. In this instance, the redirect/identification server (302) obtained the tag ID from the request made by the event user device (14a). In an embodiment, the tag ID is appended to the URL, and thus the entire URL, or a portion thereof, may be passed to the interface server (306) for use in looking up the tag ID. Looking up the tag ID provides information about the venue (202) and/or event. To clarify, when a particular venue (202) installs tags (16a) and/or uses tags (16b), the tag IDs for the installed/used tags (16a, 16b) are associated with the point/location of interest and the particular venue (202). Thus, if a tag is installed proximate seat 1, row A, section 100, database (308) information associates the installed tag's (16a) tag ID with that particular seat (208), which is in that particular venue (202). Since the tag ID is known to belong to a particular venue (202), the interface server (306), the administration server (310) via the interface server (306), any other suitable server, or combinations thereof makes a series of determinations using the tag ID, which was received in response to a request from a user device (14a, 14b) prompted by scanning the tag (16a, 16b). One determination is whether the venue (202) is actively implementing platform (20) services (step 414). For example, the venue (202) may have tags (16a) installed but it is no longer using the tags (16a), or it is not using the tags for a particular event. If not, the event user device (14a) is redirected to a global default target (step 416) that may inform the event user that the services are no longer available or are temporarily out of service, to a generic homepage, or the like. If the venue (202) is actively implementing platform (20) services, the method (400) may make another determination. At step (418), the method (400) may determine if a particular event is currently (or soon to be) in progress, or recently ended. In an embodiment, an event may be determined to be in progress based on the time that the event is scheduled to begin. Since many venues (202) open before the actual event begins, and close after the actual event ends, the window set for an event to be in progress may encompass a given amount of time before and after the actual activity begins/ends. In an embodiment, the time that the “event in progress” determination is made (step 418) may be recorded to serve as a timestamp to approximate the time that the event user device (14a) scanned the tag (16a). In other words, the unique ID, tag ID, and time determination may be recorded for later use, in certain embodiments. If the event is not in progress, the event user device (14a) may be redirected to a venue default target (step 420) such as a Web page for the venue, or another Web page such as a page to identify that an incident has occurred at the venue (202) at the location/point of interest at which the tag (16a) was scanned. Incidents may encompass any sort of incident, ranging from a need for something to be cleaned up to a call for emergency services.
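The determinations described above could be expressed schematically as in the following Python sketch, which roughly mirrors steps 412 through 426; the record fields and target URLs are hypothetical stand-ins for the platform's actual data.

```python
# Schematic sketch of the redirect determinations (roughly steps 412-426):
# tag ID lookup -> venue active? -> event in progress? -> tag group? -> target.
# The record fields and target URLs are hypothetical.

GLOBAL_DEFAULT = "https://platform.example.com/unavailable"


def determine_target(tag_record: dict, now_is_event_window: bool) -> str:
    """Return the redirect target for a scanned tag."""
    if not tag_record.get("venue_active"):                                # step 414
        return GLOBAL_DEFAULT                                             # step 416
    if not now_is_event_window:                                           # step 418
        return tag_record.get("venue_default_url", GLOBAL_DEFAULT)        # step 420
    group = tag_record.get("group")                                       # step 422
    if group and group.get("target_url"):
        return group["target_url"]          # group-specific target (step 424)
    return tag_record.get("event_default_url", GLOBAL_DEFAULT)  # default (step 426)


if __name__ == "__main__":
    record = {
        "venue_active": True,
        "venue_default_url": "https://venue.example.com",
        "event_default_url": "https://venue.example.com/event",
        "group": {"name": "section-100", "target_url": "https://venue.example.com/sec100"},
    }
    print(determine_target(record, now_is_event_window=True))
```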
If the event is in progress, the method (400) may also determine if the tag ID belongs to a grouping of tag IDs (step 422). Tags (16a, 16b) may be grouped for many reasons and in many different ways. Tags (16a, 16b) may also belong to more than one group. As one non-limiting example, in the stadium of
Method (400) may simultaneously process other data such as looking up one or more records associated with the unique ID (step 428). In embodiments, the platform (20) may gather information relating to user activities via the user device and unique ID. For example, the platform (20) may gather data relating to tags that the user has scanned in the past (across a variety of different events, venues, or the like) and activities associated with those tag scans (e.g., purchases made, content looked at, coupons downloaded), although embodiments are not limited thereto. This data may be stored in association with the unique ID assigned to the event user device (14a). Thereafter, a controller may associate the unique ID, its record, its record location or the like with the tag ID, target ID, a URL, any other determined information, or combinations thereof (step 430). The event user device (14a) may then be redirected to the appropriate target that has been determined for the event user device (14a).
When a request comes from a remote user device (14b), the method (400) starts out essentially the same as with the event user device (14a). That is, the redirect/identification server (302) receives the request (step 402), checks for a manifest containing a unique ID (step 404), assigns a manifest with a unique ID if one has not yet been assigned (step 406), and sends it to the remote user device (14b, step 408) for secure storage thereon. If the remote user device (14b) has a manifest, then the redirect/identification server (302) obtains it (and other information such as a unique ID) from the remote user device (14b). Either way, the redirect/identification server (302) has the information that it needs such as unique ID, URL, tag ID, and the like, and forwards the information to the interface server (306) to continue the method (400). The interface server (306) may then look up, or cause to look up, the record associated with the unique ID (step 428) assigned to the remote user device (14b). At the same time, the interface server (306) may cause a determination to be made as to whether the venue exists (step 414). In this case the interface server (306), or other server, may look at the data associated with the tag ID to determine from where the tag (16b) that was scanned originated. For example, the MRC (17b) may have originated from a particular signal, transmission, etc., (e.g., network, regional network, etc.), Web site (e.g., for the venue, a streaming service, etc.) or the like. If the method (400) determines that the venue does not exist, for example, if the tag is tied to an unrelated element, then the remote user device (14b) is redirected to that unrelated element or, if the tag is related, to a global default target (step 416). Assuming that the venue in this case does exist, the interface server (306)/method (400) then determines whether the event is in progress (step 418). If the signal, transmission, Web page, or the like is transmitting an event as it is occurring in real time, then the event is in progress. Such can also be determined by a time stamp or time record set within the system. Either way, in an embodiment, the time the determination is made may be recorded by the platform (20). If the event is not occurring in real time (e.g., the user is watching a recording after the fact), then the remote user device (14b) will be redirected to an appropriate target such as a Web page relating to the event (step 420). However, the proprietor can set any time parameter to define “real time”. For example, a proprietor may desire to allow recordings watched within N number of days of a live event to constitute real time. The interface server (306) may then determine if the tag (16b), via the tag ID, belongs to a group (step 422). For instance, different tags (16b) may be associated with different signals, transmissions, Web sites, or the like. Some of these tags (16b) may form groups based on predetermined criteria. Thus, if the tag (16b) belongs to a group, the remote user device (14b) will be redirected to the target for the appropriate group, and if not, the remote user device (14b) will be redirected to the default target. The default target for remote users may or may not be the same as the default for event users. Either way, the information relating to the determined redirection target is obtained (step 424, 426).
At step (430), a controller may associate the unique ID, the record for the unique ID, a pointer to the record for the unique ID, the tag ID, and target information such as a URL, a target ID, or both. Thereafter, the remote user device (14b) is redirected to the appropriate target (step 432), as was described with respect to the event user. In certain embodiments, step (428) may be performed in parallel with or concurrently with the lookup of the tag ID (step 412), where the unique ID is necessary for determining any of the other elements. Furthermore, the unique ID may be stored, for example in local memory or cache, which is readily accessible or known to the system after step (410).
In an embodiment, the user device (14a, 14b) may receive a redirect URL from the redirect/identification server (302) at the end of method (400) to redirect the user device (14a, 14b) to the appropriate target. For instance, the method (400) may return a target ID to identify the particular target. The target ID, tag ID, unique ID (and/or information associated therewith), or combinations thereof may be appended to the redirect URL for the target, which is sent to the requesting user device (14a, 14b). The requesting user device (14a, 14b) then uses the redirect URL to send a new request, this time for the target, which is received by the redirect/identification server (302) and is forwarded to the interface server (306) for processing. Alternatively, the target ID, tag ID, and unique ID may be used by the platform (20) without sending a redirect URL to the requesting device at the end of method (400). Regardless of the foregoing, the requesting user device (14a and/or 14b) receives the target of the redirection, whatever that target may be.
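As a non-limiting illustration, a redirect URL of the kind described here might be assembled as in the sketch below, with the target ID, tag ID, and unique ID appended as query parameters; the base URL and parameter names are assumptions.

```python
# Illustrative sketch: building a redirect URL with the target ID, tag ID,
# and unique ID appended. The base URL and parameter names are hypothetical.

from urllib.parse import urlencode


def build_redirect_url(target_base_url: str, target_id: str,
                       tag_id: str, unique_id: str) -> str:
    """Append identifying parameters to the target's redirect URL."""
    params = urlencode({"target_id": target_id, "tag_id": tag_id, "uid": unique_id})
    return f"{target_base_url}?{params}"


if __name__ == "__main__":
    print(build_redirect_url(
        "https://platform.example.com/fan-portal",
        target_id="862", tag_id="SEC100-ROWA-SEAT1", unique_id="abc-123",
    ))
```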
Furthermore, targets are not necessarily static. In fact, the same tag (16a) may cause a user device (e.g., 14a) to be redirected to distinct targets depending upon when the tag (16a) is scanned. A proprietor or administrator may also change a target during the course of a particular event. One of ordinary skill in the art would understand a target of redirection as described herein may be a multitude of different targets with various purposes, designs, capabilities, and the like. Therefore, the target to which a particular tag (16a, 16b) is assigned, may be changed by simply changing the target ID associated therewith.
There may be instances where the content delivered via the target may need to be changed, updated, altered, released, opened, or subject to other such stipulations. Rules may be defined to force a modification of content already delivered, deliver additional content, information, or data, release content, and/or make other such changes as would be appreciated by one skilled in the art.
While the target of redirection (e.g., fan portal [218] or targets [862-865] from
The communication connection (504), which may be a socket connection or any other appropriate type of connection, may be used to allow communications between the user device (14a and/or 14b) and the platform (20), including pushing and pulling as described above. A controller (506) may be a set of software code that is utilized to manage, direct, or generally be in charge of one or more rules, enabling pushing and/or pulling per the rules. In this example, rules may be used to change content on the user device (14a and/or 14b). That is, rules may impact the particular target being displayed on the user device (14a and/or 14b). The rules can come in several different forms, and per this non-limiting example may be event rules or local rules. Generally, an event rule is monitored by the platform (20) and may cause data to be pushed, while a local rule comes from a user device (14a, 14b), which wants data (i.e., pulls data) from the platform (20). A rule for a sporting event may relate to points scored, or another occurrence in the game. As an illustration, the rule may be: if team “A” scores a touchdown, push an AR video to all user devices (14a, 14b) that have scanned tags (16a, 16b). Here, the metric or trigger of the rule can be monitored (step 516) such as by directly sending a request or query to a data source (at 512) via the interface server (at 510), receiving data from the data source (at 512) on a regular basis such as every 5 seconds, 5 minutes or the like (via the interface server [at 510]), or combinations of both. Another type of event rule may include more than one trigger/metric. For example, the rule may be that if team “A” scores a touchdown, push an AR video playback of the touchdown with advertising for an alcohol brand to all event users over the age of 21 who have used their user device (14a) to scan a tag (16a) in the venue (202). The first metric/trigger of whether a touchdown has been scored may be monitored as described above. The second metric/trigger may be monitored in the same or similar manner. For example, since the metric/trigger relates to age, a query may be sent to the database (at 512), via the interface server (at 510), to find all users who are over the age of 21. In this query, user records associated with unique IDs may be searched for age, tag ID, time, and/or other search parameters to determine users who have scanned a tag (16a) during the event, and who are at least 21 years of age. As a backup, alternative, confirmation, or if database data does not have the answers, another data source (at 514) may be consulted to determine if the age metric/trigger has been met. For example, one or more third-party integrations may have age information; thus, an API call or other query may be made to obtain ages. With either of the foregoing rule examples, if the first metric/trigger (step 520, no) is not met (i.e., no touchdown), then the platform (20) continues to monitor the metric/trigger (step 522). If the metric/trigger (step 520, yes) is met, and there is no second metric/trigger (518), then the content (e.g., AR video) is pushed (step 526) to the user devices (14a and/or 14b), such as via the controller (at 514, 506) via the connection (504). If there is a second metric/trigger (518), then upon receiving a yes at (520), a determination is made to see if the second trigger/metric has been met (step 524). If the second trigger/metric has not been met, then the target on the user device (14a) is not updated (step 528), such as with the digital offer.
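The two-trigger rule discussed above might be sketched as follows; the event feed, user records, and push callback are simplified, hypothetical stand-ins for the platform's monitoring and socket connections.

```python
# Simplified sketch of an event rule with two metrics/triggers:
# (1) team "A" scores a touchdown, and (2) the event user is at least 21,
# in which case an AR replay with age-restricted advertising is pushed.
# The data structures and push mechanism are hypothetical stand-ins.

def rule_touchdown_over_21(event: dict, users: list, push) -> None:
    # First metric/trigger: was a touchdown scored by team "A"? (step 520)
    if not (event.get("type") == "touchdown" and event.get("team") == "A"):
        return  # keep monitoring (step 522)
    # Second metric/trigger: only users 21 or older who scanned a tag (step 524)
    for user in users:
        if user.get("scanned_tag_in_venue") and user.get("age", 0) >= 21:
            push(user["unique_id"], content="ar_touchdown_replay_with_ad")  # step 526
        # otherwise the target on that user device is not updated (step 528)


if __name__ == "__main__":
    sent = []
    rule_touchdown_over_21(
        event={"type": "touchdown", "team": "A"},
        users=[{"unique_id": "u1", "age": 34, "scanned_tag_in_venue": True},
               {"unique_id": "u2", "age": 19, "scanned_tag_in_venue": True}],
        push=lambda uid, content: sent.append((uid, content)),
    )
    print(sent)  # [('u1', 'ar_touchdown_replay_with_ad')]
```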
Depending upon the rule, the second metric/trigger may or may not continue to be monitored. For example, if the digital offer were to be sent only one time, then the rule is satisfied, and no additional monitoring will continue. If, however, the rule sent the same AR video every time team “A” scored a touchdown, the second metric/trigger would not have to be redetermined, since a user who has already satisfied the age requirement would continue to satisfy it. Of course, if the event went past midnight, the rule could be structured to recheck ages after midnight. This does not mean that for a given rule a second (or third, or fourth, etc.) trigger/metric would never need to be monitored. Should an additional metric/trigger be defined by a rule that needs additional monitoring, the method (500) will be allowed to do so. If the determination made at step (524) is yes, the content may be pushed (526), such as via the controller (at step [514] or [506]). Pushed content may update an element on a Web page, cause a popup to show on the user device (14a, 14b), send content to a digital wallet (24a, 24b), or push content in any other way as is known in the art.
Further examples of rules may also be understood by those of ordinary skill in the art. For example, the interface server (306) may determine, or cause to be determined, if there are any rules associated with the selected template or other target. Generally, a rule may define criteria that must be met for an occurrence to happen. In an embodiment, the data associated with the unique ID may be pre-analyzed to see if the local rule has been satisfied. Alternatively, data associated with the unique ID may be gathered (e.g., from the database, from a third-party integration such as a ticketing service, or the like) and analyzed when the event user device (14a) makes the request. As yet another option, the data may be pre-analyzed and verified/checked for changes when the event user device (14a) makes a request. The interface server (306) may take all of the variables from the target application code, template, rules, and the like and send requests/queries to the appropriate data sources or links to the data sources (at 512). The data sources may include data from the database (308), blockchain (314), geofence (316), timestamp (318), third-party integrations (320) such as data servers/databases, analytics server (312), and administration server (310), and a counter (at 512), without limitation. A counter may be software on platform (20) that may be used as a counting mechanism for rules or other reasons. As such, the counting mechanism may be configured to meet the counting requirements of a rule or other counting need. As an illustration, a counter may count the number of tags (16a) scanned in a venue (202) during a particular event; count the number of tags (16a, 16b) scanned by a particular user device (14a, 14b) in a predetermined time window; count the tags (16a) scanned by a particular user during a particular event; count the number of times a user has interacted with the target delivered to that user device; or other such non-limiting illustrations.
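A counting mechanism of the kind described could be as simple as the following sketch, which tallies tag scans by venue, by user device (unique ID), and by tag ID; the key structure is an assumption made for illustration.

```python
# Minimal sketch of a counter for rule evaluation: tallies of tag scans
# keyed by venue, by user device (unique ID), and by tag ID. Illustrative only.

from collections import Counter


class ScanCounter:
    def __init__(self):
        self.counts = Counter()

    def record_scan(self, venue_id: str, unique_id: str, tag_id: str) -> None:
        self.counts[("venue", venue_id)] += 1
        self.counts[("device", unique_id)] += 1
        self.counts[("tag", tag_id)] += 1

    def scans_for(self, kind: str, key: str) -> int:
        return self.counts[(kind, key)]


if __name__ == "__main__":
    counter = ScanCounter()
    counter.record_scan("venue-202", "u1", "SEC100-ROWA-SEAT1")
    counter.record_scan("venue-202", "u2", "SEC100-ROWA-SEAT2")
    print(counter.scans_for("venue", "venue-202"))  # 2
```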
The platform (20) may also collect a large amount of data from multiple sources regarding users and/or their devices (14a, 14b). Collected data via user device (14a, 14b) may be used to determine and/or customize content. In addition to information obtained after scanning a tag (16a, 16b) such as date, time, and GPS or other location, the platform (20) may also obtain device (14a, 14b) information such as device orientation (i.e., landscape, portrait), type (e.g., iPhone, Android), operating system, which is shown in
The analytics server (312), in an embodiment, may be a server or other device allowing communication, capture, decision making, etc., in order to receive and analyze various input from user device (14a, 14b) (e.g., at [508] via the interface server at [510]). The analytics server (312) may also allow communication, capture, decision making, etc., to receive and analyze various data from third-party integrations (320), time/timestamp (318), geofence (316), blockchain (314), database (308), and even proprietor portal (322), as a few non-limiting examples, either alone or together with input, data, information, etc., from user devices (14a, 14b). As has been mentioned, the unique ID may enable collecting/storing significant data about the user/user device (14a, 14b) from multiple sources. As one non-limiting example, the unique ID may allow the platform (20) to collect information about the user via the user device (14a, 14b) from service providers, such as mobile/cellular service providers, that are used in association with the user device (14a, 14b). As another non-limiting example, information associated with the user device (14a, 14b)/unique ID may be collected from various third-party integrations such as in-venue/event metrics, third-party metrics, ticket brokerage, and other tools, without limitation to the forgoing. In-venue metrics may include data collected relating to the venue, event, or both. For example, information relating to user purchases such as tickets, food, merchandise, videos watched, and upgrades and the like may all be gathered and stored in association with the unique ID. Third-party metrics integrations (320) may enable collecting information about the user/user device (14a, 14b) from third parties who participate in a shared program or who sell or otherwise provide marketing information, demographics, and other data about the user. Similarly, ticket brokerage integrations (e.g., 320) may be used to gather information from ticket brokers who sell tickets for the venue (202), event, or both, and may include a wide range of marketing data, not only about ticket purchases made, but also related information about the user. User/user device (14a, 14b) data may also be obtained via tools such as cookies, widgets, plug-ins, and similar tools. Finally, certain metrics may be provided directly by the user; for example, information can be provided in order to access certain opportunities or offers, which may include personally identifiable information, unique information such as interests or responses to questions, as well as generic information such as age or sex. The foregoing mechanisms may be used to get information about the user/user device (14a, 14b), especially when the user is actively engaged with the platform (20). Information/data relating to the user/user device (14a, 14b), via the unique ID or otherwise, may be stored in database (308) or another such database or data store (e.g., blockchain [314]) and analyzed via analytics server (312), which in an embodiment may include artificial intelligence analysis such as machine learning/pattern recognition/deep learning as is now known or will be known in the art.
User/user device (14a, 14b) information, data, etc., may also be obtained as the user engages with a target (e.g., fan portal [218] at [508]), other Web sites, the Internet, and the like. This information/data may be collected and analyzed within the analytics server (312) and coupled with other information relating to the user/user device (14a, 14b), including the unique ID associated with the user device (14a, 14b). For example, the platform (20) and methods (e.g., 400, 500) may be configured to collect and aggregate analytical data relating to, without limitation, total screen time, Internet browsing (times, sites/pages accessed, software used), updates to Web pages, digital offers presented, digital offers downloaded, products viewed, purchases made, IP addresses, personal information input by the user, and the like, whether or not the activities are through the target displayed on the user device (14a, 14b). Such data is of high value to, for example, advertisers and proprietors (e.g., team, venue, and/or event owners), as it provides significant insight into consumer purchasing and Web browsing habits.
Thus, when the interface server (306) sends (or causes to be sent) requests/queries to data sources (at 512), the unique ID, tag ID/target information, or combinations thereof may be used or provided to the third-party integrations (320) when such requests/queries are made. In this way, content provided to a particular user may be customized or modified as was described above with respect to
If data shows that a user has particular preferences, the platform (20) can modify content, such as the advertisements that are delivered to that user or the set of videos provided for augmented video, as nonlimiting examples. Additionally, since the platform (20) may ascertain the features of the fan portal (218) or other aspects of the platform (20) that a user or multiple users interact with the most or spend the most time viewing, the proprietor may charge a premium to advertisers wishing to purchase the ability to place content, such as advertisements or digital offers, on the pages or features of the fan portal that receive the most traffic. The unique ID of the system (10), with known information associated therewith, can be used to access and utilize third-party advertising services to deliver unique advertising to the user. For example, where available, the platform (20) has the ability to interface with advertising platforms to deliver a customized experience based on the user's search history or user information as a whole.
Furthermore, in embodiments, the proprietor may be able to generate rules specific to a user, or send the user custom e-mails, push/socket notifications or other messaging based upon the user's interaction with the platform (20) or otherwise (e.g., at 514, 508). In an embodiment, a socket connection (e.g., at 504) between the user device (14a, 14b) and the platform (20) may be used for communications including pushing content, notifications, and the like, and dynamically updating content while the event is in progress, for example through push and pull features. Indeed, this provides for multiple opportunities for interaction and communication between the proprietor and the user to continue building relationships that can then be mined for longer-term relationships.
While a target is displayed on a particular device (14a, 14b), dynamic content may be seamlessly and dynamically updated/changed per coding/interactions between the user device (14a, 14b) and the platform (20). Certain dynamic changes are occurring through push and pull, as detailed by
The foregoing has been described largely with reference to a sports environment, where event users can scan tags (16a) located proximate each seat (208) or other point of interest, or remote users can scan MRCs (17b) that appear on a screen such as a television or computer display. Other environments may utilize the same sort of tag (16a) placement strategy. However, sports environments provide a key opportunity to utilize various video files to provide fan engagement opportunities through augmented reality and video playbacks in augmented reality; such video files may be the target video.
Access to the target, such as an AR video file, can be performed in a number of ways. Certainly, access can be provided by a dedicated video system that is controlled by the user. This would enable direct visualization and modification of the stitched video files. A simple system would include the plurality of recording devices, with a series of recordings, such as video, feeding into a computer. Software running on the computer can modify and stitch together the video files to create the augmented video file, and a graphical user interface (“GUI”) can then allow for manipulation of the augmented video file. However, it is not always practical to have an individualized video system for private use.
By contrast, many public performances, such as sporting events at the professional and collegiate level, include a plurality of video feeds. Those of ordinary skill in the art recognize that there are commonly multiple video feeds of live action play in any broadcast of the live event. Access to these video feeds, and specifically to an augmented, or augmented reality, video feed, however, is lacking. In certain embodiments, the system, accessed by a user device (14a or 14b) after scanning of a tag (16a or 16b), directs a user to a fan portal (218), wherein the fan portal contains a button or link (220) to access interactive content such as video or augmented video playback of the live event occurring in venue (202).
Thus, in further detail, one embodiment is performed by the scan of a tag with a user device. The user device (14a) is directed to a URL that is uniquely encoded to the tag, which allows the user device (14a) to access the redirect/identification server (302), which verifies the location of the tag (16a) and ties it to a specific venue (202) by performing a tag look-up as more fully described starting at (428 from
Taking the above into consideration with regard to certain functionalities of the system, the redirect/identification server (302) is informed of the incoming URL. The server then determines the location of the tag (16a) and the identification of the user device (14a). For example, if Joe is attending a Sunday football game at a stadium in Philadelphia and scans the tag positioned on the back of his seat, by performing a tag look-up as more fully described starting at (428 from
Next, within the system, the redirect/identification server (302) informs the interface server (306) of the URL request and provides the unique ID and the location of the tag (16a) that the user device (14a) scanned. The target determination process (844) executed by the interface server (306) determines what content to display to the user based upon the data received. In this example, the target determination process (844) would direct the interface server (306) to deliver to Joe only targets, such as content in the form of video playback and augmented reality playback, related to the Eagles game. This content may be all available content from that game, or it may be limited to a particular quarter, a particular play, or a particular down. Likewise, the target determination process (844) would direct the interface server (306) to deliver to Bob only content related to the Cowboys game. Furthermore, because the user device (14a) is used to scan tags in the system (10) located at various locations outside of the venue, the redirect/identification server (302) will be able to determine the predominant geographic location of the user device (14a). For example, because Bob frequently scans tags linked to the system (10) in the Dallas area, the redirect/identification server (302) will make the determination that Bob is a Cowboys fan, and the target determination process (844) will deliver targets in the form of video and augmented reality content to Bob that is related to the home team, in this instance, the Cowboys. However, if Joe, who predominantly scans tags in the Philadelphia area, travels to Dallas to watch a football game and scans a tag in the stadium (202), the redirect/identification server (302) will identify that Joe is not from the Dallas area, and the target determination process (844) will therefore deliver targets in the form of video and augmented reality content to Joe that is related to the away team. However, users may modify the selection of content based on their interactions with the system (10). Thus, information tied to the unique ID can determine what content is delivered. Similarly, the use of a geofence, or a global position determined by the scan, can also provide context to the content to be delivered. Finally, the user may provide input, for example into a GUI provided on the user device, regarding the selection of desired content. By selecting some content on the GUI, the system can provide the appropriate targets for content to be delivered to the user device.
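The home-team/away-team determination in this example might reduce to something like the following sketch, which infers a user's predominant scan region from prior scan records and selects home or away content accordingly; the region labels and record layout are assumptions.

```python
# Illustrative sketch of the Joe/Bob example: infer the predominant region in
# which a user device has scanned tags, then deliver home- or away-team
# content for the venue being visited. Record fields are hypothetical.

from collections import Counter
from typing import Optional


def predominant_region(scan_history: list) -> Optional[str]:
    """Return the region in which the most prior scans occurred, if any."""
    regions = Counter(scan["region"] for scan in scan_history if "region" in scan)
    return regions.most_common(1)[0][0] if regions else None


def select_team_content(scan_history: list, venue_region: str) -> str:
    """Pick home- or away-team content based on the user's predominant region."""
    region = predominant_region(scan_history)
    return "home_team_content" if region == venue_region else "away_team_content"


if __name__ == "__main__":
    bob_history = [{"region": "Dallas"}, {"region": "Dallas"}, {"region": "Austin"}]
    joe_history = [{"region": "Philadelphia"}, {"region": "Philadelphia"}]
    print(select_team_content(bob_history, venue_region="Dallas"))  # home_team_content
    print(select_team_content(joe_history, venue_region="Dallas"))  # away_team_content
```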
The target determination process (844) thus selects from the database (308) what content to show, with that content queried from “N” number of content sources. The database (308) contains unique redirect URLs to target 1 (862), target 2 (863), target 3 (864), and target “N” (865), as each video is assigned a target ID. Video “N” here represents the fourth and any greater number of recording devices, such as video cameras or video feeds, being provided into the system at any given time. The video can be a live feed, a file with video playback from a specific event that happened in the live event, an augmented reality file of a specific event that happened in the live event, or a pre-recorded video or augmented reality file.
Thereafter, the interface server (306) pulls a target (862-865), in this example a video file or augmented reality file stored in the database (308), as determined by the target determination process (844). The interface server (306) then delivers the target (862-865), such as a video file or augmented reality file, to the redirect/identification server (302). Finally, the redirect/identification server (302) delivers the redirect URL for the target (862-865), such as a video file or augmented reality file, to the user device (14a).
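The delivery step described above amounts, in web terms, to answering the original scan with a redirect to the selected target's URL. The following is a minimal sketch of that step only, assuming a Flask-style HTTP server and placeholder routes and URLs; it is not a description of the actual servers (302, 306).

# Minimal sketch: answer a scan request with an HTTP 302 redirect to the
# selected target's URL (Flask, route names, and URLs are assumptions).
from flask import Flask, redirect, request

app = Flask(__name__)

# Stand-in for the database (308) of targets keyed by target ID.
TARGETS = {
    "target-1": "https://example.invalid/r/abc123",   # placeholder redirect URL
    "target-2": "https://example.invalid/r/def456",
}

def target_determination(unique_id: str, tag_id: str) -> str:
    """Stand-in for process (844): choose a target ID for this scan."""
    return "target-1"

@app.route("/scan")
def handle_scan():
    unique_id = request.args.get("uid", "anonymous")
    tag_id = request.args.get("tag", "unknown")
    target_id = target_determination(unique_id, tag_id)
    # The 302 redirect sends the user device (14a) to the target's redirect URL.
    return redirect(TARGETS[target_id], code=302)

if __name__ == "__main__":
    app.run(port=8080)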
To view the augmented video, the software may scan for a flat surface on which to display the augmented video. Those of skill in the art will recognize the present limitations of certain augmented video and that improvements are being made to reduce such limitations or eliminate the requirement for a flat surface. Any suitable flat surface may be utilized.
In view of
Thus, as depicted by
Thus,
The system (10) can utilize the tag to position the original orientation of the augmented video. Because of the known location of the tag, down to the exact seat via the tag look-up as more fully described starting at (step [428] from
As detailed in
Thus, in
The augmented video can also attach the avatar (a digital file, 301a, 301b) to a player, literally putting the avatar's face on the player. This allows the person to add their face, or some facsimile of their face, into the play via the system. This creates unique viewing opportunities and new fan engagement platforms.
Notably, the video files are loaded into the database as understood by one of ordinary skill in the art.
It is understood that the system (10) as more fully described above and in
One embodiment is a system for delivering instant replays of live events to a user's device by scanning a tag. In this example, the live event is being recorded by a video production company. The video production company uses its crew and cameras to record an event such as a football game. The video production company films a play using multiple cameras. Once the play is complete, the video production company loads video files of that play from each camera angle to the interface server and assigns each video file a target ID. Alternatively, the video files from the several different camera angles are combined into a single video file. The system then creates a unique redirect URL for each target, such as a video file, on the interface server.
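A non-limiting sketch of the per-play ingestion described in this embodiment follows; the helper names and the grouping of camera-angle files under a play are assumptions for illustration.

# Illustrative sketch of loading each camera angle of a completed play and
# assigning each file its own target ID (names and structure are assumed).
from dataclasses import dataclass, field

@dataclass
class Play:
    play_id: str
    targets: dict = field(default_factory=dict)   # target_id -> camera-angle file

def ingest_play(play_id: str, camera_files: list[str]) -> Play:
    """Assign each camera-angle file of a completed play its own target ID."""
    play = Play(play_id)
    for index, path in enumerate(camera_files, start=1):
        play.targets[f"{play_id}-cam{index}"] = path
    return play

play = ingest_play("q2-play-07", ["cam1.mp4", "cam2.mp4", "cam3.mp4"])
print(play.targets)   # one target ID per camera angle of the play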
In this embodiment, a user device (14a) may be used to scan or otherwise detect one of the tags (16a), which directs the user to a fan portal, such as a Web app, a URL, or a GUI, as non-limiting examples. As depicted in
For example, Joe is attending a football game at a stadium. The game is being produced by XYZ production company for consumption on live television and live internet stream. XYZ has ten cameras in the stadium recording the game. Once a play has concluded, XYZ will create a separate combined video file, which incorporates the video files as recorded by each camera, and upload the combined video file and/or the individual video files to the interface server, where they are assigned a target ID. When Joe uses his user device to scan the tag on the armrest of his seat, he is directed to one of two options: either a Web app populated with videos to select for viewing, which, when a video is selected, retrieves a redirect URL to the target on the interface server, such as a video file of one of the various camera angles from the recently completed play; or a Web app that pushes such a video file to the page. Joe clicks on a particular video target, in this instance the video he wishes to view, and watches that video on his phone. Alternatively, the video of the last play can automatically be pushed to Joe's user device for viewing. After the next play is completed, XYZ production company repeats the process of saving the video files of the play from each camera, as targets with a unique target ID, to the interface server. Now, the Web app re-populates, from a push or a pull on the system, to show the updated and new video, or simply plays the new video automatically. Joe selects the video he wishes to view for the second play and watches that video on his phone.
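The "pull" alternative described above, in which the Web app re-populates after each play, can be sketched as a simple polling loop; the endpoint, payload shape, and interval are assumptions for illustration only.

# Minimal polling ("pull") sketch for refreshing the list of available targets
# after each play; the endpoint and payload shape are assumed.
import time
import requests

FAN_PORTAL_API = "https://example.invalid/api/targets"   # placeholder endpoint

def poll_for_new_targets(seen: set, interval_s: float = 5.0):
    """Yield newly published targets as the production company uploads plays."""
    while True:
        payload = requests.get(FAN_PORTAL_API, timeout=5).json()
        for target in payload.get("targets", []):
            if target["target_id"] not in seen:
                seen.add(target["target_id"])
                yield target
        time.sleep(interval_s)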
In another embodiment, a venue is equipped with a plurality of recording devices to capture live action. These could be cameras such as high-resolution, high-frame-rate video cameras, volumetric video capture hardware, depth-sensing cameras, ultra-high-FPS machine vision cameras, LIDAR sensors, and LIDAR-enabled cameras. The images from the cameras are stitched together to create a three-dimensional rendering of the play that was captured by the cameras. Stitching together or combining the footage allows the video footage of the live action sports play to be turned into an augmented reality file that is stored on the interface server.
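Actual volumetric reconstruction requires dedicated capture and photogrammetry pipelines; the following is only a high-level orchestration sketch of grouping synchronized frames from the plurality of recording devices ahead of a stitching step, with all names assumed for illustration.

# Orchestration sketch only: group frames from all recording devices into
# synchronized time slices ahead of a (placeholder) stitching step.
from dataclasses import dataclass

@dataclass
class CameraFrame:
    camera_id: str
    timestamp_ms: int
    path: str                 # captured frame, depth sample, or LIDAR sample

def group_synchronized_frames(frames: list[CameraFrame], window_ms: int = 16):
    """Bucket frames from all devices into synchronized time slices."""
    slices = {}
    for frame in frames:
        slices.setdefault(frame.timestamp_ms // window_ms, []).append(frame)
    return [slices[key] for key in sorted(slices)]

def stitch_to_ar_file(time_slices, output_path: str) -> str:
    """Placeholder for the stitching step that would produce the AR file."""
    # A real implementation would fuse RGB, depth, and LIDAR data per slice
    # into a three-dimensional rendering; here we only record what would be combined.
    with open(output_path, "w") as out:
        for index, group in enumerate(time_slices):
            out.write(f"slice {index}: {[frame.camera_id for frame in group]}\n")
    return output_path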
In this embodiment, a user device (14a) is used to scan one of the MRCs (17a), which directs the user to a Web app via a unique URL. The Web app provides the user with a GUI of certain video files or directly pushes the prior replay to the user device. Once the video is playing, the user places the augmented reality file by aiming the user device at a surface such as the seat, the ground, or the playing field and replays the three-dimensional rendering of the live action play, allowing the user to view the players from multiple angles. In this example, Joe is attending a football game at a venue. Once the play on the field is complete, Joe uses his user device to scan the tag on his armrest, which launches the Web app. The interface server populates that fan portal with the augmented reality, three-dimensional instant replay of the preceding play.
In a further embodiment, instead of viewing the video for the preceding play, Joe clicks on the appropriate video from the GUI, which launches the target, in this instance an augmented reality, three-dimensional instant replay, on Joe's user device. Via the user device's camera, Joe is able to point his user device at a display surface to launch the augmented video, which places Joe within the field of view for the replay from multiple angles. Typically, the view launches on a flat surface; however, this can be accomplished by any generally accepted mechanism known in the art for viewing such files.
In a further embodiment, the user's device is used to provide an augmented reality graphical overlay that is superimposed on the real time event happening in the venue, as detailed in
A tag can also be placed remote from a given venue, for example on a video feed of the particular game. Thus, if the given video feed of the game does not provide the desired information, a user can scan a tag on the video feed to provide unique overlay opportunities, as well as to view replays from the game on the described system, by accessing the system outside of the venue of the game via the tag on a video display, such as from a broadcast of the game.
In a further embodiment, the LIDAR sensor on the user device is used to create an augmented reality experience. For example, as in
Machine learning can be utilized in any of the embodiments herein, wherein a series of cameras capture the same or similar event or location at different times. Thus, if the LIDAR sensor is placed on the first hole as golfers A-Z play, there are likely at least 26 different putts that occurred on the green, allowing an improved distance estimation based on the prior events. On the green (1203), prior ball locations are displayed (1204, 1205, 1206, 1207, 1208, 1209) as non-limiting examples of prior actions. Using the prior speed of the ball, the slope of the ground, the distance to the hole, wind conditions, etc., it can be calculated, for example, that the ball should be struck with a particular force and aimed at a particular location to make the putt. Thus, a superimposed image may recommend a ball speed off the putter of x and aiming at position y to make the putt. Indeed, Joe may not need the LIDAR sensor on his device, as the prior footage from each of the prior putts can be viewed and aggregated into a video file, which can be accessed by scanning the tag on the system. In this way, the current position of the ball (1201) can be visualized as to its expected path and the speed or distance necessary to get the ball into the hole. Certainly, such a system can be used to train and practice putting as well.
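The kind of recommendation contemplated above can be approximated with a simple constant-deceleration rolling model; the friction coefficient, slope handling, and arrival speed below are illustrative assumptions and not values used by the system.

# Hedged sketch of a putt-speed recommendation using a constant-deceleration
# rolling model; all coefficients are illustrative assumptions.
import math

G = 9.81   # gravitational acceleration, m/s^2

def recommended_putt_speed(distance_m: float,
                           slope_percent: float = 0.0,
                           rolling_friction: float = 0.10,
                           arrival_speed: float = 0.5) -> float:
    """Initial ball speed (m/s) needed to reach the hole at arrival_speed.

    A positive slope_percent means an uphill putt, which adds to the
    effective deceleration; a negative value (downhill) reduces it.
    """
    deceleration = G * (rolling_friction + slope_percent / 100.0)
    if deceleration <= 0:
        raise ValueError("model only covers putts that decelerate")
    return math.sqrt(arrival_speed ** 2 + 2.0 * deceleration * distance_m)

# Example: a 6 m putt up a 2% slope.
print(f"{recommended_putt_speed(6.0, slope_percent=2.0):.2f} m/s")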
In a further embodiment, the system is able to track the user's interactions with the target such as video playback and augmented reality files to determine user preferences and provide targeted marketing and digital offers. In this example, Bill is an avid football fan and frequently attends Cowboys games. When Bill uses his user device to scan or otherwise detect one of the tags to enable the system (10), the identification/redirect server tracks the interactions from Bill's user device and associates them with the database records corresponding to Bill's unique ID. Because Bill frequently views video playback and augmented reality files for one particular player on the Cowboys team, the identification/redirect server is able to determine that this player is likely one of Bill's favorite players and the identification/redirect server directs the interface server, via the process of
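One simplified, non-limiting way such an inference could be drawn from the interaction records tied to a unique ID is sketched below; the record fields and the viewing threshold are assumptions for illustration only.

# Illustrative sketch of inferring a likely favorite player from interaction
# records tied to a unique ID; field names and the threshold are assumed.
from collections import Counter

def likely_favorite_player(interactions: list, min_views: int = 5):
    """Return the most-viewed player for this unique ID, if viewed often enough."""
    views = Counter(event["player"] for event in interactions
                    if event.get("type") in {"video_playback", "ar_playback"})
    if not views:
        return None
    player, count = views.most_common(1)[0]
    return player if count >= min_views else None

history = [{"type": "video_playback", "player": "player_88"} for _ in range(7)]
print(likely_favorite_player(history))   # -> "player_88"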
In certain embodiments, a user may be remote from the live action she is watching. Thus, a user may scan a tag displayed on a video feed, such as on a broadcast or cable broadcast of a baseball game. The system, through the tag ID from the video feed, will recognize that the user is remote and determine the unique ID and other factors within the system. Furthermore, geofencing can be utilized to determine location. Thus, using the unique ID, the tag ID, and other features of the system, the user can seamlessly be provided with a target URL (namely a video, and most particularly an augmented reality video), which can then load on the user device, launching an augmented reality video. The user is then able to provide certain further information and control the video as detailed herein.
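A geofence check of the kind mentioned above can be sketched with a great-circle distance calculation; the venue coordinates and radius below are placeholder assumptions.

# Hedged sketch of a geofence check used to decide whether a scan came from
# inside the venue or from a remote tag (e.g., on a broadcast video feed).
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two latitude/longitude points."""
    earth_radius_m = 6_371_000.0
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * earth_radius_m * math.asin(math.sqrt(a))

def is_in_venue(device_lat, device_lon, venue_lat, venue_lon, radius_m=400):
    """True when the scanning device falls inside the venue's geofence."""
    return haversine_m(device_lat, device_lon, venue_lat, venue_lon) <= radius_m

# A device scanning a broadcast tag from across town falls outside the geofence.
print(is_in_venue(40.10, -75.00, 39.9008, -75.1675))   # placeholder coordinates -> False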
In a further embodiment, because the system tracks a user's interactions with the video playback and augmented reality files, the identification/redirect server is able to determine the user's preferences such as favorite team or favorite player. This allows the identification/redirect server to direct the interface server via the process of
Referring back to
Administrator device (12), which is shown in
Administrator device (12), user devices (14a, 14b), and servers (e.g., 302, 306, 310, 312, 320, 322, and 324) may each be a general-purpose computer. Thus, each computer includes the appropriate hardware, firmware, and software to enable the computer to function as intended and as needed to implement the features detailed herein. For example, a general-purpose computer may include, without limitation, a chipset, processor, memory, storage, graphics subsystem, and applications. The chipset may provide communication among the processor, memory, storage, graphics subsystem, and applications. The processor may be any processing unit or instruction set computer or processor known in the art. For example, the processor may be an instruction set based computer or processor (e.g., an x86 instruction set compatible processor), a dual/multicore processor, a dual/multicore mobile processor, or any other microprocessing or central processing unit (CPU). Likewise, the memory may be any suitable memory device such as Random Access Memory (RAM), Dynamic Random-Access Memory (DRAM), or Static RAM (SRAM), without limitation. The processor together with at least the memory may implement system and application software, including the instructions and methods disclosed herein. Examples of suitable storage include magnetic disk drives, optical disk drives, tape drives, an internal storage device, an attached storage device, flash memory, hard drives, and/or solid-state drives (SSD), although embodiments are not so limited.
In an embodiment, servers (e.g., 302, 306, 310, 312, 320, 322, and/or 324) may include database server functionality to manage database (308) or another database. Although not shown, infrastructure variations may allow for database (308) to have a dedicated database server machine. Database (308) and any other database may be any suitable database, such as hierarchical, network, relational, object-oriented, multimodal, nonrelational, self-driving, intelligent, and/or cloud-based, to name a few examples. Although a single database (308) is shown in
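By way of non-limiting illustration, the kinds of records such a database might keep can be sketched with SQLite as an example engine; the table and column names below are assumptions and not a prescribed schema.

# Illustrative sketch only: example tables for tags, targets, and interactions,
# using SQLite as a stand-in; the schema is assumed, not prescribed.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tags (
    tag_id   TEXT PRIMARY KEY,
    venue    TEXT,
    location TEXT              -- e.g. the section/row/seat the tag is fixed to
);
CREATE TABLE targets (
    target_id    TEXT PRIMARY KEY,
    kind         TEXT,         -- live feed, replay, AR file, pre-recorded
    redirect_url TEXT
);
CREATE TABLE interactions (
    unique_id  TEXT,           -- identifier tied to the user device
    tag_id     TEXT REFERENCES tags(tag_id),
    target_id  TEXT REFERENCES targets(target_id),
    scanned_at TEXT
);
""")
conn.execute("INSERT INTO tags VALUES ('tag-16a', 'stadium-202', 'sec101-row4-seat7')")
print(conn.execute("SELECT COUNT(*) FROM tags").fetchone()[0])   # -> 1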
It will be appreciated that the embodiments and illustrations described herein are provided by way of example, and that the present invention is not limited to what has been particularly disclosed. Rather, the scope of the present invention includes both combinations and subcombinations of the various features described above, as well as variations and modifications thereof that would occur to persons skilled in the art upon reading the foregoing description and that are not disclosed in the prior art. Therefore, the various systems and methods may include one or all of the limitations of an embodiment, be performed in any order, or may combine limitations from different embodiments, as would be understood by those implementing the various methods and systems detailed herein.
This application claims the benefit of U.S. Provisional Patent Application No. 63/201,374 filed on Apr. 27, 2021, U.S. Provisional Patent Application No. 63/201,373 filed on Apr. 27, 2021, U.S. Provisional Patent Application No. 63/201,376 filed on Apr. 27, 2021, U.S. Provisional Patent Application No. 63/269,015 filed on Mar. 8, 2022, all with the United States Patent and Trademark Office, the contents of which are incorporated herein by reference in their entirety.