Systems and methods for controlling camera movements between storylines in a video game

Information

  • Patent Grant
  • Patent Number
    12,005,357
  • Date Filed
    Tuesday, March 15, 2022
  • Date Issued
    Tuesday, June 11, 2024
  • Inventors
    • Tieger; Matthew (Valley Center, CA, US)
    • Duncan; Matthew (Palomar Mountain, CA, US)
    • Wetmiller; John Philippe (Oceanside, CA, US)
    • Krystek; Matthew John (San Diego, CA, US)
  • Original Assignees
  • Examiners
    • Myhr; Justin L
  • Agents
    • Novel IP
Abstract
In a multiplayer video game, virtual cameras are controlled by continually assessing and ranking storylines relating to the interactions of various players during a game session. A ranking for each storyline may be based on various factors such as, but not limited to, what the players can see, the distance between certain players, nearby game items, or in-game events. Subsequently, at least one virtual camera is navigated to provide a view of the highest-ranking storyline, subject to certain limitations on how the camera can move, transition, or otherwise display the interactions between players.
Description
FIELD

The present specification is related generally to the field of multiplayer online gaming. More specifically, the present specification is related to systems and methods for selectively controlling camera perspectives, movements and/or displays of video game gameplay, thereby enabling the distribution of interesting game play for managed viewing by a plurality of spectators.


BACKGROUND

Multiplayer online gaming has seen explosive proliferation across the globe among a wide range of age groups. Similar to popular competitive sports, such as soccer, football, card games and basketball, multiplayer online games also have a large fan following who relish watching competitive online games and highly skilled players in action.


Consequently, the continued evolution and growth of online gaming, together with an ever-increasing fan base, have led to the rise in popularity of video games as an in-person spectator sport or a virtual spectator sport. As with other sports, such fans enjoy being spectators of highly competitive games or games that popular players are participating in, either online or live, as is the case with organized tournaments. In a multiplayer online game, spectators may watch one or more players or teams of players involved in combat or otherwise participating in game play. Multiplayer online games may involve a large number of players in a game session.


For example, games may support hundreds or thousands of active players in a game session; such games include, but are not limited to, simple two-dimensional shooting games, multiplayer online battle arena games, and massively multiplayer online role-playing games.


Existing video games enable spectators to view interesting moments of game play across a plurality of game events which develop as a result of a large number of player interactions or actions. Conventionally, however, only one or more virtual cameras are configured in a video game to capture game play action. A virtual camera is an in-game object that, when executed in a video game, generates a visual, displayable viewpoint of the game that is a function of a) the programmed field of view of the virtual camera and b) the location of the in-game object, as defined by a three-dimensional coordinate in a virtual game map. Those cameras are limited in how they follow, track, and display game play action, often missing the most interesting interactions between players and failing to present a smooth transition between different camera angles.


Accordingly, there is a need for video game systems and methods that more effectively control camera perspectives, and manage camera transitions and displays, to ensure spectators have access to game events that are likely to be of high interest and/or entertainment value to the spectators. There is also a need for systems and methods for capturing and broadcasting interesting gameplay events in a realistic, real-world manner without detracting from the viewing experience of the spectators.


SUMMARY

The following embodiments and aspects thereof are described and illustrated in conjunction with systems, tools and methods, which are meant to be exemplary and illustrative, and not limiting in scope. The present application discloses numerous embodiments.


The following specification discloses a computer-implemented method of controlling at least one of a navigation, positioning or orientation of a virtual camera configured to capture a plurality of storylines in a multiplayer video game for broadcasting to at least one computing device, said method comprising: defining, in at least one server remote from the at least one computing device, a function to identify the plurality of storylines; defining, in the at least one server, at least one base criterion for ranking each of the plurality of storylines; determining, in the at least one server, a base rank for each of the plurality of storylines, the base rank being a function of said at least one base criterion; defining, in the at least one server, at least one factor to weight the base rank, said at least one factor having a predetermined value; determining, in the at least one server, an overall rank for each of the plurality of storylines by associating said at least one factor with the base rank, wherein the overall rank is determined at a predetermined frequency in the game; subject to one or more rules, programmatically moving, in the at least one server, the virtual camera to capture one of the plurality of storylines determined to have a highest overall ranking among the plurality of storylines; and broadcasting, from the at least one server to the at least one computing device, the one of the plurality of storylines determined to have a highest overall ranking among the plurality of storylines.
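The ranking-and-selection steps of the method above can be illustrated with a minimal sketch; all names (`overall_rank`, `select_storyline`), data shapes, and coefficient values here are hypothetical illustrations, not part of the claimed implementation.

```python
# Illustrative sketch of the claimed ranking loop; all names and
# values are hypothetical, not the patented implementation.

def overall_rank(base_rank, factors):
    """Weight a base rank by multiplying in each factor's predetermined value."""
    rank = base_rank
    for factor in factors:
        rank *= factor
    return rank

def select_storyline(storylines):
    """storylines: list of (name, base_rank, [factor, ...]) tuples.

    Returns the storyline with the highest overall rank, which the
    virtual camera would then be moved to capture (subject to rules).
    """
    return max(storylines, key=lambda s: overall_rank(s[1], s[2]))

# Example: storyline "AB" has a higher base rank, but "CD" is boosted
# by an enemy-relationship factor (5) and a facing-each-other factor (2).
storylines = [
    ("AB", 0.50, [1.0]),       # teammates, no boost
    ("CD", 0.25, [5.0, 2.0]),  # enemies, facing each other
]
best = select_storyline(storylines)
```

In this sketch the weighting factors simply multiply the base rank, matching the claim language that each factor "has an increasing or decreasing effect on the overall ranking"; a coefficient above 1 accentuates a storyline and a coefficient below 1 diminishes it.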


Optionally, the function is dependent on at least a genre of said multiplayer video game.


Optionally, the at least one server is configured to concurrently broadcast to at least 20 computing devices.


Optionally, the at least one base criterion comprises a virtual distance of each player with respect to each other player in the multiplayer video game.


Optionally, the base rank for each of the plurality of storylines is inversely related to the virtual distances of each player with respect to each of the other players.


Optionally, the at least one factor is a function of at least one of: a first value representative of a relationship between each player and each other player in the multiplayer video game; a second value representative of an orientation of each player relative to each other player in the multiplayer video game; or a third value representative of a degree of an unobstructed view between each player and each other player in the multiplayer video game.


Optionally, the one or more rules limit a time which the virtual camera must take to travel to capture the storyline determined to have the highest overall ranking and wherein the time ranges from 0.25 to 1 second.


Optionally, the one or more rules limit at least one of rotation, tilt or pan required for the virtual camera to capture the storyline determined to have said highest overall ranking.


Optionally, the one or more rules require the highest overall ranking of the storyline to exceed an overall ranking of an immediately preceding storyline by a predefined value.


Optionally, the one or more rules limit at least one of a speed of movement of the virtual camera or an angular speed of rotation of the virtual camera.
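The optional rules above can be sketched as a single gate the camera controller consults before switching storylines; the function name, parameter names, and default values below are illustrative assumptions, with the travel-time window mirroring the 0.25-to-1-second embodiment.

```python
# Hypothetical sketch of the camera-transition rules described above;
# names and default thresholds are illustrative, not claimed values.

def may_switch(new_rank, current_rank, travel_time_s,
               margin=0.1, min_time=0.25, max_time=1.0):
    """Return True only if every rule permits moving the camera."""
    # Rule: the new storyline's overall ranking must exceed the
    # immediately preceding storyline's ranking by a predefined value.
    if new_rank <= current_rank + margin:
        return False
    # Rule: the camera's travel time must fall within the allowed
    # window (e.g. 0.25 to 1 second in one embodiment).
    if not (min_time <= travel_time_s <= max_time):
        return False
    return True
```

Analogous checks could cap rotation, tilt, pan, linear speed, or angular speed; they are omitted here for brevity. The ranking margin prevents the camera from thrashing between storylines whose ranks are nearly equal.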


The present specification also discloses a system for controlling at least one of a navigation, positioning or orientation of a virtual camera configured to capture a plurality of storylines in a multiplayer video game executing on a plurality of gaming devices for broadcasting to a plurality of spectator devices, said system comprising: at least one server configured to host a game session of the multiplayer video game and to broadcast said game session to each of a plurality of spectator devices through a network; a plurality of gaming modules stored on the plurality of gaming devices remote from the at least one server and configured to enable a plurality of human players to play in the game session of the multiplayer video game; a plurality of spectator modules stored on the plurality of spectator devices remote from the at least one server and configured to enable a plurality of human spectators to view the broadcast of the game session, wherein at least a portion of the plurality of spectator modules are executing on at least a portion of the plurality of gaming devices and wherein at least a portion of the plurality of gaming modules are executing on at least a portion of the plurality of spectator devices; and a processor in said at least one server, said processor executing a plurality of executable programmatic instructions to: define a function to identify the plurality of storylines; define at least one base criterion for ranking each of the plurality of storylines; determine a base rank for each of the plurality of storylines, the base rank being a function of the at least one base criterion; define at least one factor to weight the base rank; determine an overall rank for each of the plurality of storylines by associating the at least one factor with the base rank, wherein a value of the at least one factor has an increasing or decreasing effect on the overall ranking; subject to one or more rules, programmatically move the virtual camera to capture one of the plurality of storylines determined to have a highest overall ranking among the plurality of storylines; and broadcast the one of the plurality of storylines determined to have a highest overall ranking among the plurality of storylines to the plurality of spectator modules.


Optionally, the at least one base criterion comprises a virtual distance of each player with respect to each other player participating in the game session.


Optionally, the base rank for each of the plurality of storylines is inversely related to the virtual distances of each player with respect to each other player.


Optionally, the at least one factor comprises at least one of: a first value representative of a relationship between each player with respect to each other player in the game session; a second value representative of an orientation of each player with respect to each other player in the game session; and a third value representative of a degree of an unobstructed view between each player and each other player in the game session.


Optionally, the one or more rules limit a time which the virtual camera must take to travel to capture the storyline determined to have the highest overall ranking and wherein the time ranges from 0.25 to 1 second.


Optionally, the one or more rules limit at least one of rotation, tilt or pan required for the virtual camera to capture the storyline determined to have the highest overall ranking.


Optionally, the one or more rules require the highest overall ranking of the storyline to exceed an overall ranking of an immediately preceding storyline by a predefined value.


Optionally, the one or more rules limit at least one of a speed of movement of the virtual camera or an angular speed of rotation of the virtual camera.


The present specification also discloses a computer readable non-transitory medium comprising a plurality of executable programmatic instructions wherein, when said plurality of executable programmatic instructions are executed by a processor in at least one server, a process is performed for controlling navigation, positioning and orientation of a virtual camera configured to capture a plurality of storylines in a multiplayer video game for broadcasting to at least one spectator device, said plurality of executable programmatic instructions comprising: programmatic instructions, stored in said computer readable non-transitory medium, for determining a base rank for each of said plurality of storylines, said base rank being a function of at least one base criterion; programmatic instructions, stored in said computer readable non-transitory medium, for determining an overall rank for each of said plurality of storylines by associating at least one weighting factor with said base rank, wherein said overall rank is determined at a predetermined frequency in the game, and wherein said weighting factor has an accentuating or diminishing effect on the overall rank; and programmatic instructions, stored in said computer readable non-transitory medium, for switching the virtual camera to a second storyline of a second overall rank from a first storyline of a first overall rank, wherein said switching is subject to one or more rules.


Optionally, the at least one base criterion is a distance of each player with respect to every other player participating in a game match of said multiplayer video game, and wherein the base rank for each of the plurality of storylines is inversely related to the distance of each player with respect to every other player.


The aforementioned and other embodiments of the present specification shall be described in greater depth in the drawings and detailed description provided below.





BRIEF DESCRIPTION OF THE DRAWINGS

These and other features and advantages of the present specification will be further appreciated, as they become better understood by reference to the following detailed description when considered in connection with the accompanying drawings:



FIG. 1 illustrates an embodiment of a multiplayer online gaming system or environment in which a plurality of spectating modalities may be enabled, implemented or executed, in accordance with some embodiments of the present specification;



FIG. 2 is a computer-implemented method of tracking and ranking one or more storylines in a game match of a multiplayer online video game, in accordance with some embodiments of the present specification;



FIG. 3 is a computer-implemented method of controlling navigation, positioning and orientation of a virtual camera configured to capture a plurality of storylines in a multiplayer video game for broadcasting to at least one spectator device, in accordance with some embodiments of the present specification; and



FIG. 4 is a computer-implemented method of enabling at least one spectator to view video data associated with one or more storylines in a game match of a multiplayer video game, in accordance with some embodiments of the present specification.





DETAILED DESCRIPTION

The present specification discloses systems and methods wherein virtual cameras are controlled by continually assessing and ranking storylines relating to various players during a multiplayer game session. In various embodiments, a ranking for each storyline may be based on a plurality of factors such as, but not limited to, what the players can see, the distance between certain players, nearby game items or in-game events in an exemplary multiplayer first person shooter (FPS) game. Subsequently, at least one virtual camera is navigated to provide a view of the highest ranking storyline, subject to certain limitations on how the camera can move, transition, or otherwise display the interactions between players.


In various embodiments, a computing device includes an input/output controller, at least one communications interface and system memory. The system memory includes at least one random access memory (RAM) and at least one read-only memory (ROM). These elements are in communication with a central processing unit (CPU) to enable operation of the computing device. In various embodiments, the computing device may be a conventional standalone computer or alternatively, the functions of the computing device may be distributed across multiple computer systems and architectures.


In some embodiments, execution of a plurality of sequences of programmatic instructions or code enables or causes the CPU of the computing device to perform various functions and processes. In alternate embodiments, hard-wired circuitry may be used in place of, or in combination with, software instructions for implementation of the processes of systems and methods described in this application. Thus, the systems and methods described are not limited to any specific combination of hardware and software.


The term “module” used in this disclosure may refer to computer logic utilized to provide a desired functionality, service or operation by programming or controlling a general purpose processor. In various embodiments, a module can be implemented in hardware, firmware, software or any combination thereof. The module may be interchangeably used with unit, logic, logical block, component, or circuit, for example. The module may be the minimum unit, or part thereof, which performs one or more particular functions.


The term “virtual camera” or “camera” refers to at least one viewing element configured to capture or provide a view of game play in the virtual world associated with a video game. It should be appreciated that akin to a physical camera, the at least one virtual camera may have associated position coordinates to uniquely identify its location within a gaming world. Also, in various embodiments, a virtual camera may be characterized by a plurality of customizable parameters such as, but not limited to, orientation, viewing angle, focal length, zoom factor, tilt and pan.
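The definition above can be summarized as a simple data structure holding a virtual camera's position coordinates and customizable parameters; the class and field names below are illustrative assumptions rather than any claimed representation.

```python
from dataclasses import dataclass

# Minimal illustrative representation of a virtual camera, holding the
# position coordinates and customizable parameters listed in the
# definition above; all field names are hypothetical.
@dataclass
class VirtualCamera:
    x: float              # position coordinates uniquely identifying
    y: float              # the camera's location within the game world
    z: float
    orientation: float    # facing direction, in degrees
    viewing_angle: float  # field of view, in degrees
    focal_length: float
    zoom: float
    tilt: float
    pan: float
```

A camera controller could then animate transitions by interpolating these fields between a current and a target configuration, subject to the movement rules discussed elsewhere in the specification.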


The term “storyline” is defined as an aggregated set of data defining a player's visual and auditory in-game interactions, e.g. with one or more other players. Storyline data may refer to all, or a portion, of the programmatic data defining the visual storyline, auditory storyline and associated metadata pertaining or incidental thereto which, if processed and rendered, would visually and aurally display the storyline to one or more human spectators. Storyline data may be indicative of one or more preferred positions, angles, foci, fields of view, or perspectives of one or more cameras.


The term “player” refers to any human or virtual actor within a game, where the human or virtual actor may engage in play actions, social actions, administrative actions, or observation actions.


The present specification is directed towards multiple embodiments. For example, the systems and methods described herein may be applied to multiplayer video games in numerous genres, including first person shooter (FPS), fighting games, survival games, action-adventure games, role-playing games, simulation games, strategy games, sports, card games, and racing games. The following disclosure is provided in order to enable a person having ordinary skill in the art to practice the invention. Language used in this specification should not be interpreted as a general disavowal of any one specific embodiment or used to limit the claims beyond the meaning of the terms used therein. The general principles defined herein may be applied to other embodiments and applications without departing from the spirit and scope of the invention. Also, the terminology and phraseology used is for the purpose of describing exemplary embodiments and should not be considered limiting. Thus, the present invention is to be accorded the widest scope encompassing numerous alternatives, modifications and equivalents consistent with the principles and features disclosed. For purpose of clarity, details relating to technical material that is known in the technical fields related to the invention have not been described in detail so as not to unnecessarily obscure the present invention.


In the description and claims of the application, each of the words “comprise” “include” and “have”, and forms thereof, are not necessarily limited to members in a list with which the words may be associated. It should be noted herein that any feature or component described in association with a specific embodiment may be used and implemented with any other embodiment unless clearly indicated otherwise.


As used herein, the indefinite articles “a” and “an” mean “at least one” or “one or more” unless the context clearly dictates otherwise.



FIG. 1 illustrates an embodiment of a multiplayer online gaming system or environment 100 in which a plurality of spectating modalities may be enabled, implemented or executed, in accordance with some embodiments of the present specification. The system 100, in some embodiments, comprises a client-server architecture, where one or more game servers 105 are in communication with one or more player devices 110 and one or more spectator devices 145 over a network 115. Players may access the system 100 via the one or more player devices 110 while one or more spectators or viewers may access the system 100 using the one or more spectator devices 145. The player devices 110 and the spectator devices 145 comprise computing devices such as, but not limited to, personal or desktop computers, laptops, Netbooks, handheld devices such as smartphones, tablets, and PDAs, gaming consoles and/or any other computing platform known to persons of ordinary skill in the art. Although four player devices 110 and three spectator devices 145 are illustrated in FIG. 1, any number of player and spectator devices 110, 145 can be in communication with the one or more game servers 105 over the network 115. Also, in some embodiments, the spectator devices 145 include facilities similar to Esports Arena or locations/theaters deploying large display screens to enable a number of spectators to simultaneously view a game match.


The one or more game servers 105 can be any computing device having one or more processors and one or more computer-readable storage media such as RAM, hard disk or any other optical or magnetic media. The one or more game servers 105 include a plurality of modules operating to provide or implement a plurality of functional, operational or service-oriented methods of the present specification. In some embodiments, the one or more game servers 105 include or are in communication with at least one database system 150. The database system 150 stores a plurality of game data associated with at least one game that is served or provided to the player devices 110 for game-play and to the spectator devices 145 for viewing and engagement over the network 115. In embodiments, the database system 150 also stores a plurality of data such as, but not limited to, storyline data, storyline ranking data, storyline ranking rules or logic, and/or camera movement or intelligence rules or logic. In some embodiments, the one or more game servers 105 may be implemented by a cloud of computing platforms operating together as game servers 105.


In embodiments, the one or more game servers 105 provide or implement a plurality of server-side modules such as, but not limited to, a master game engine or module 120, a storyline ranking engine or module 130 and a camera intelligence engine or module 135. In embodiments, the one or more player devices 110 and spectator devices 145 respectively provide or implement client-side modules such as, but not limited to, a player game module 120′ and a spectator engagement module 140. In embodiments, the player game module 120′ is the same as or similar to the counterpart server-side module 120. In some embodiments, the client-side modules 120′ and 140 are configured to connect and communicate with the plurality of modules on the server 105 via the network 115.


The one or more game servers 105 are preferably configured to concurrently communicate with at least 20 spectator devices, and more preferably 20 to 1,000,000 spectator devices or any increment therein, such that each of said at least 20 spectator devices are permitted to concurrently receive a broadcast of the storylines selected and recorded by a virtual camera. In another embodiment, the one or more game servers are configured to concurrently host at least 5 storyline broadcasts per second, preferably 50-150 storyline broadcasts per second, with the plurality of spectator devices.


Master Game Module 120


In embodiments, the master game module 120 is configured to execute an instance of an online game to facilitate interaction of the players with the game. In embodiments, the instance of the game executed may be synchronous, asynchronous, and/or semi-synchronous. The master game module 120 controls aspects of the game for all players and receives and processes each player's input in the game. In other words, the master game module 120 hosts the online game for all players, receives game data from the player devices 110 and transmits updates to all player devices 110 based on the received game data so that the game, on each of the player devices 110, represents the most updated or current status with reference to interactions of all players with the game and/or with one another (depending upon the type or genre of game match being played). Thus, the master game module 120 transmits game data over the network 115 to the player devices 110 for use by the game module 120′ to provide local versions and current status of the game to the players.


Player Game Module 120′


On the client side, each of the one or more player devices 110 implements the game module 120′ that operates as a gaming application to provide a player with an interface between the player and the game. The game module 120′ generates the interface to render a virtual environment, virtual space or virtual world associated with the game and enables the player to interact in the virtual environment to perform a plurality of game tasks and objectives. The game module 120′ accesses game data received from the game server 105 to provide an accurate representation of the game to the player. The game module 120′ captures and processes player inputs and interactions within the virtual environment and provides updates to the game server 105 over the network 115.


Storyline and Storyline Data


As previously explained, the term “storyline” is defined as an in-game, virtual interaction between a first player and at least one other second player or actor in a multiplayer video game. A storyline therefore includes outcomes, occurrences, episodes or events, which occur in a game match and that are of value, significance, appeal, curiosity or interest to a spectator and, therefore, impact the viewership, engagement, learning and/or entertainment value of a video game.


As an example, in a multiplayer shooting or combat game a storyline may be defined as the interactions or combats between each player and every other player in the match. In a two-player versus two-player first person shooter (FPS) game involving players A, B, C and D, there may be potentially six storylines (AB, AC, AD, BC, BD, and CD). More generally, a first person shooter (FPS) game session may include any number of players, thereby creating a corresponding number of storylines, with any combination or permutation of players possible. In the example provided above, a three-player storyline may include ABC, ABD, ACD, or BCD.


More generally, a given multiplayer video game may have n(n−1)/2 storylines, where n represents the total number of players or actors who may interact with each other in the game and where each storyline represents an interaction between two distinct players. Notwithstanding the above, the number of storylines may be modified based on a plurality of rules. For example, the system may eliminate, ignore, or otherwise filter one or more of the storylines based on whether a player or actor falls into a certain category or has a predefined characteristic, such as having a rank that is below a threshold value, having a title, label or designation that is not preferred, being on the same team as the other player with whom the storyline would be defined, having a score, point total, health level, number of kills, or amount of treasure that is below a threshold level, or having weaponry types that are not preferred. Accordingly, storylines may be filtered such that the remaining set of storylines only pertains to interactions between players on opposing teams or only to key players (such as, in a football game, quarterback, running back, or wide receiver while ignoring linemen) or players ranked in the top 10.
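The enumeration and filtering just described can be sketched as follows; the player records, team/rank dictionaries, and filter rules are illustrative assumptions chosen to match the examples in the text.

```python
from itertools import combinations

# Sketch of pairwise storyline enumeration and rule-based filtering;
# the data shapes and filter rules here are illustrative assumptions.

def enumerate_storylines(players):
    """All n*(n-1)/2 two-player storylines for n players."""
    return list(combinations(players, 2))

def filter_storylines(storylines, teams, ranks, min_rank=0):
    """Keep only storylines between opposing-team players whose
    ranks meet a threshold, per the filtering rules above."""
    kept = []
    for a, b in storylines:
        if teams[a] == teams[b]:
            continue  # same team: eliminate the storyline
        if ranks[a] < min_rank or ranks[b] < min_rank:
            continue  # a below-threshold player: eliminate
        kept.append((a, b))
    return kept

players = ["A", "B", "C", "D"]
teams = {"A": "Alpha", "B": "Alpha", "C": "Beta", "D": "Beta"}
ranks = {"A": 10, "B": 3, "C": 8, "D": 9}
all_pairs = enumerate_storylines(players)  # the six storylines AB..CD
opposed = filter_storylines(all_pairs, teams, ranks, min_rank=5)
```

With four players this yields the six storylines from the text; filtering then drops the same-team pairs (AB, CD) and any pair involving below-threshold player B, leaving AC and AD.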


Storyline Ranking Engine or Module 130


The storyline ranking module 130 implements or executes a plurality of instructions or programmatic code to track one or more storylines as these are generated during a game match, acquire and store data (storyline data) associated with each of the storylines and rank each of the storylines in accordance with a plurality of pre-stored yet customizable storyline ranking logic, rules, guidelines, or imperatives. In various embodiments, the storyline data, storyline ranking data and the plurality of storyline ranking rules are stored in the database system 150.


For the purposes of illustrating the ranking functionalities and features of the module 130, the present specification refers to the shooter game genre and, specifically, to a first person shooter (FPS) game such as, for example, Call of Duty, in accordance with some embodiments. A first person shooter (FPS) game session is now considered, as an example, in which Team Alpha, comprising first and second players, is pitted in shooting combat against Team Beta, comprising third and fourth players.


As the gameplay ensues between Team Alpha and Team Beta, a plurality of events or interactions between members of Team Alpha and Team Beta unfold. In some embodiments, the plurality of storyline ranking logic, rules, guidelines or imperatives are applied such that they consider, establish or assign at least one base factor or criterion for ranking, followed by at least one multiplier factor or criterion that has an accentuating or diminishing effect on the overall ranking, depending upon the type or genre of the game.


In the first person shooter (FPS) game session, in some embodiments, a distance between two players in the game match is considered as the base criterion for ranking the storylines. In other words, the base criterion is a distance determined for each player with respect to every other player participating in the first person shooter (FPS) game session. For example, in a game map, the maximum distance between two players will be a known or fixed number, for example, 8000 units at the two farthest points in the game map. Therefore, at any given moment, any two players will be between 1 and 8000 units apart from each other. However, from the spectators' vantage points, valuable storylines are those where the two players are closer to each other because there is a greater chance of meaningful interaction.


Therefore, in some embodiments, the module 130 establishes proximity of two players as a base factor or criterion and consequently determines a base rank by inverting (that is, taking the reciprocal of) the distance between the two players in the game match. Thus, if the two players are 8000 units apart from each other the base ranking assigned to their storyline is the lowest (1/8000), whereas if the two players are 1 unit apart from each other the base ranking assigned to their storyline is the highest (1/1) in the game match. In some embodiments, the base ranking may be calculated as a percentage (1/8000 = 0.0125%, 1/1 = 100%). Therefore, a storyline may be weighted using a factor that is a function of the distance between the players defining the storyline, where the function provides for a larger factor, or weight, if the players are closer to each other and a smaller factor, or weight, if the players are farther away from each other.
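The reciprocal-distance base rank above can be sketched in a few lines; the function name and the clamping behavior at the map bounds are illustrative assumptions.

```python
# Base rank as the reciprocal of inter-player distance, per the
# example above (map distances range from 1 to 8000 units);
# the clamping to map bounds is an illustrative assumption.

def base_rank(distance_units):
    """Closer players yield a higher base rank: 1/1 = 1.0 (100%)
    down to 1/8000 = 0.000125 (0.0125%)."""
    distance_units = max(1, min(distance_units, 8000))  # clamp to map bounds
    return 1.0 / distance_units

far = base_rank(8000)  # lowest rank: players at opposite map corners
near = base_rank(1)    # highest rank: players adjacent to each other
```

Any monotonically decreasing function of distance would serve the same purpose; the reciprocal is simply the example the specification gives.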


In some embodiments, the module 130 further establishes at least one multiplier or weighting factor or criterion and assigns an accentuating or diminishing coefficient or value thereto, depending upon the value (an interest or value quotient) of the multiplier or weighting factor from the vantage point of the spectators. In the first person shooter (FPS) game, in some embodiments, the following exemplary multiplier factors are established by the module 130 and assigned a corresponding coefficient:

    • A first multiplier or weighting factor is representative of a relationship between the two players. Thus, a coefficient of, for example, “5” may be assigned if the two players are from opposing teams—that is, the two players are enemies. However, if the two players are from the same team then a coefficient of, for example, “1” or any increment therein in a range 0 to 1 may be assigned. In other words, the module 130 puts more emphasis and value on storylines between opponents or foes. Therefore, a storyline may be weighted using a factor that is a function of the relationship between the players defining the storyline, where the function provides for a larger factor, or weight, if the players are in opposition to each other and a smaller factor, or weight, if the players are in cooperation with each other. Note that this weighting approach, and all subsequent ones, may be used in place of eliminating or filtering out such storylines altogether, as described above.
    • A second multiplier or weighting factor is representative of an orientation of the two players with respect to each other. Thus, a coefficient of, for example, “2” may be assigned if the two players (from opposing teams) are facing each other; a coefficient of, for example, “1.5” may be assigned if only one of the two players (from opposing teams) is facing the other. However, if neither of the two players (from opposing teams) is facing the other then a coefficient of, for example, “1” or any increment therein in a range of 0 to 1 may be assigned. Therefore, a storyline may be weighted using a factor that is a function of the orientation between the players defining the storyline, where the function provides for a larger factor, or weight, if the players are fully or partially oriented toward each other and a smaller factor, or weight, if the players are oriented away from each other. Accordingly, the module 130 places more emphasis or weighted value on storylines between opponents/foes/enemies where at least one player is facing the other, thereby indicating a possible combat event. In some embodiments, the second multiplier or weighting factor is also indicative of whether the virtual camera can establish a position and/or orientation where both players are visible.
    • A third multiplier or weighting factor is representative of what the two players can “see” or have in their field of view (FOV). Thus, a coefficient of, for example “2” may be assigned if the player (that the virtual camera is closest to) can actually see the other player without any obstructions in the view (that is, an unobstructed view).


Thus, in some embodiments, a base ranking determined from a base criterion, such as the proximity of two players, when multiplied by one or more coefficients determined from one or more multiplier factors provides an overall ranking of a storyline between the two players, as follows:

R = k1×(br) + k2×(br) + . . . + kn×(br),

where R is the overall ranking of a storyline between two players, k1 to kn are coefficients corresponding to ‘n’ number of multiplier factors and br is the base ranking determined based on a base factor or criterion. In various embodiments, the value of the coefficients may be real numbers ranging between 0 and 1 (0≤coefficient≤1) corresponding to multiplier factors that have diminishing effects (that is, are low in terms of value for spectators) and may be real numbers greater than 1 (coefficient>1) corresponding to multiplier factors that have accentuating or emphasizing effects (that is, are high in terms of value for spectators).
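The overall ranking formula above can be sketched directly. In this illustrative Python sketch (the function name and example coefficient values are assumptions), each coefficient scales the base rank and the scaled terms are summed, matching R = k1×(br) + k2×(br) + . . . + kn×(br):

```python
def overall_rank(base_rank, coefficients):
    """Overall storyline ranking R = k1*br + k2*br + ... + kn*br.
    Coefficients greater than 1 accentuate the base rank; coefficients
    between 0 and 1 diminish it."""
    return sum(k * base_rank for k in coefficients)
```

For example, for two players 100 units apart (base rank 1/100 = 0.01) who are enemies (coefficient 5), with one facing the other (coefficient 1.5) with an unobstructed view (coefficient 2), the overall ranking would be (5 + 1.5 + 2) × 0.01 = 0.085.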


In the first person shooter (FPS) game example of Team Alpha and Team Beta, in accordance with the base and multiplier criteria, module 130 is programmed to value conflict and combat. Also, it should be appreciated that the base and multiplier factors or criteria are customizable by an administrator. That is, the plurality of rules or logic implemented by the module 130, to determine the storyline rankings, may be expanded or relaxed depending on factors such as, but not limited to, game mode, type, genre, and/or level. For example, in some embodiments, the module 130 generates the overall storyline ranking not only on the basis of player relationships (the first multiplier factor) but also other criteria such as, but not limited to: a) locations of interest on a game map (for example, storylines taking place in the center of the game map, a particular building, and/or on a bridge may be ranked higher or lower depending upon the perceived value for the spectators), and b) certain virtual items or combat gear of interest to spectators (such as vehicles, combat weapons, and positions of advantage in terms of first strike).


In some embodiments, the module 130 implements a plurality of instructions or programmatic code to acquire storyline data and generate storyline rankings (also referred to as ‘storyline ranking data’) in real-time or near real-time as the storylines develop between the players during the game match. In such embodiments, the game match is available for spectating in real-time as it unfolds. Accordingly, a first plurality of rules (such as those described above) are used by the module 130 to establish the base and multiplier factors in order to determine the overall ranking of the storylines in real time.


However, in some embodiments, the module 130 implements a plurality of instructions or programmatic code to acquire storyline data and generate storyline rankings when the game match is broadcasted for spectating with time delay or after the match is complete. Accordingly, a second plurality of rules are used by the module 130 to establish the base and multiplier factors in order to determine the overall ranking of the storylines. For example, if the match is already complete the second plurality of rules enable the module 130 to assess and rank storylines with the benefit of already knowing their outcome. For example, a particular interaction between two players of opposing teams that seemed mundane in the beginning may lead to a very exciting or game-defining battle towards the end of the match. With the known outcome, the second plurality of rules enable the module 130 to assign an overriding high ranking to this storyline. Such second plurality of rules includes assigning a greater weighting value to the player or players who achieved the highest score, number of kills, wins, or rank, thereby increasing the ranking of storylines involving those players; assigning a greater weighting value to the player or players who are positioned in a particular location which experienced interesting gameplay, thereby increasing the ranking of storylines involving that location; and/or assigning a greater weighting value to the player or players who participated in the most interesting gameplay action.


In embodiments, the module 130 continuously or pseudo-continuously evaluates all relevant storylines for rankings depending upon at least the complexity of the game. In some embodiments, the module evaluates or determines the rankings of all relevant storylines at a frequency or refresh-rate of ‘t’ times per second. In some embodiments, the value of ‘t’ varies from 1 to 60 times per second. In some embodiments, a refresh occurs on a frame-by-frame basis. In another embodiment, a refresh occurs every second. In an embodiment, the frequency of storyline evaluation for ranking generation is 4 times per second. It should be appreciated that in embodiments, the refresh-rate is also a function of the available processing power at the spectator devices 145. For example, a gaming console may be able to render storylines, commensurate with updated rankings, more frequently than a mobile device. Accordingly, the refresh-rate may be programmatically modified based upon the processing power of the client device where a lower refresh-rate is implemented with a lower processing power and a higher refresh-rate is implemented with a higher processing power.
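The refresh-rate selection described above might be sketched as a simple mapping. This is an assumption-laden illustration: the specification only states that a lower refresh-rate accompanies lower processing power, so the normalized power score, the linear mapping, and the function name below are all hypothetical:

```python
def refresh_rate(device_power_score, min_rate=1, max_rate=60):
    """Map a device's relative processing-power score (0.0 to 1.0) onto
    a storyline-ranking refresh rate between min_rate and max_rate Hz,
    so weaker spectator devices refresh rankings less often."""
    score = max(0.0, min(1.0, device_power_score))
    return round(min_rate + score * (max_rate - min_rate))
```

A mobile device scored near 0.0 would refresh roughly once per second, while a gaming console scored near 1.0 could refresh up to 60 times per second (frame-by-frame at 60 fps).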


Persons of ordinary skill in the art should appreciate that the base factor and multiplier factors may differ across game genres, modes and/or levels. For example, in multiplayer platform games (where a plurality of players controls their corresponding characters or avatars to jump between suspended platforms and avoid obstacles) the base factor may be an advanced level that has been reached in the game by a player and/or a threshold number of points amassed by a player. Again, in multiplayer adventure games (where a plurality of players assume the role of protagonists in an interactive story driven by exploration and puzzle-solving instead of physical challenge) the base factor may be an advanced level of difficulty that has been reached in the game by a player. Yet again, in strategy games such as multiplayer online battle arenas or MOBAs (that are a hybrid of action games, real-time strategy and role-playing video games where the objective is for the player's team to destroy the opposing side's main structure with the help of periodically spawned computer-controlled units that march towards the enemy's main structure) the base factor may be a team of players occupying or destroying key sub-structure(s) of the opposing team's main structure, hideout or fort.


It should be appreciated that the module 130 communicates the determined overall storyline rankings to the camera intelligence module 135 and also stores the rankings in the database system 150.



FIG. 2 is a computer-implemented method 200 of tracking and ranking one or more storylines in a game match of a multiplayer online video game, in accordance with some embodiments of the present specification. In embodiments, the method 200 is implemented by the storyline ranking module 130. Referring now to FIGS. 1 and 2, at step 205, a premise or principle, collectively referred to as a function, is defined or established to identify the one or more storylines, depending upon at least the genre, type or level of the multiplayer online video game. For example, in some embodiments, for a first person shooter (FPS) genre, the basic premise pertains to combative interactions amongst a plurality of players participating in the multiplayer video game.


At step 210, at least one base criterion is defined or established to enable ranking each of the one or more storylines. For example, in a first person shooter (FPS) game session, the at least one base criterion is a distance of each player with respect to every other player participating in said multiplayer video game. Accordingly, position coordinates (within the game map) of each player are acquired and distances of each player with respect to every other player are calculated based on the positional coordinates. At step 215, depending upon the at least one base criterion a base rank is determined for each of the one or more storylines. In various embodiments, the base rank is a function of the at least one base criterion. For example, if a distance (base criterion) between any two players in the first person shooter (FPS) game session is ‘D’ units, then the corresponding base rank for the storyline associated with the two players is determined to be a reciprocal of the distance—that is, the base rank is 1/D points.
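Steps 210 and 215 can be sketched together: acquire position coordinates, compute pairwise distances, and take the reciprocal 1/D as each storyline's base rank. This is an illustrative Python sketch; the dictionary input format, the function name, and the clamp that avoids division by zero for co-located players are assumptions:

```python
import math
from itertools import combinations

def storyline_base_ranks(positions):
    """Given player positions as {player_id: (x, y, z)} in the game map,
    return the base rank (1/D) for every pair of players, keyed by the
    sorted pair of player ids."""
    ranks = {}
    for a, b in combinations(sorted(positions), 2):
        d = math.dist(positions[a], positions[b])   # Euclidean distance D
        ranks[(a, b)] = 1.0 / max(d, 1.0)           # base rank = 1/D, clamped
    return ranks
```

For three players there are three pairwise storylines; the number of storylines grows as n(n-1)/2 for n players, which is why the ranking refresh-rate matters at scale.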


At step 220, one or more weighting or multiplier factors are defined or established. In embodiments, the one or more weighting factors have predetermined values. In some embodiments, the one or more weighting factors comprise a first factor representative of a relationship between each player with respect to every other player in the game match, a second factor representative of an orientation of each player with respect to every other player and a third factor representative of obstructed or unobstructed view between each player with respect to every other player.


Finally, at step 225, an overall ranking for each of the one or more storylines is determined by associating the one or more weighting factors with the base rank. In some embodiments, the base rank is multiplied by each of the one or more weighting factors to calculate a weighted sum representative of the overall ranking. In various embodiments, the weighting factors have an accentuating or diminishing effect on the overall ranking of the one or more storylines. In various embodiments, the overall rank is determined at a predetermined frequency in the game match.


It should be appreciated that while executing the method 200, the module 130 continuously stores and/or updates data related to the basic premise, base criterion, position coordinates and distances of each player with respect to every other player, base rank, one or more weighting factors and the overall ranking, for each of the one or more storylines, in the database system 150.


Camera Intelligence Engine or Module 135


The camera intelligence module 135 implements a plurality of instructions or programmatic code to control a virtual camera so as to enable adaptive or intelligent navigation, positioning and viewing orientation of the virtual camera in response to dynamic rankings of storylines determined in real-time during a game match or offline after completion of the game match. In embodiments, the virtual camera generates video output of at least one storyline in a game match for broadcasting over the network 115 for viewing or spectating at the spectator devices 145.


In accordance with an aspect of the present specification, the camera intelligence module 135 considers a spectator as a pseudo-player in the game match, assigns the pseudo-player's camera entity to an invisible game object (that is, the virtual camera) which is thereafter manipulated like any other game object through a plurality of instructions or programmatic code for navigation, positioning and orientation.


In some embodiments, the module 135 is programmed to receive overall rankings of storylines from the storyline ranking module 130 or access the overall rankings from the database system 150 and in response automatically navigate the virtual camera to capture and broadcast the storyline with the highest ranking at any given point in time. However, in embodiments, this navigation of the virtual camera is constrained, bound or governed by a plurality of pre-stored (in the database system 150) yet customizable camera intelligence rules, logic or imperatives. In accordance with aspects of the present specification, the plurality of camera intelligence rules or logic is configured to avoid situations where the virtual camera is too frequently alternating between storylines that have oscillating rankings. Therefore, the plurality of camera intelligence rules or logic are aimed at intelligently navigating the camera while setting certain limitations on how far and frequently the camera can change storylines. In some embodiments, the camera navigation limitations are set with reference to the following exemplary parameters.


A first parameter relates to a travel distance of the camera as the module 135 evaluates whether it should move to a new storyline (both in the horizontal and vertical directions). For example, in some embodiments, the camera is allowed to move from one storyline to another if the travel distance for switching the storyline can be traversed within a predefined time range of 0.25 to 1 second. In an embodiment, the camera is allowed to move from one storyline to another if the travel distance for switching the storyline can be traversed in 0.5 seconds or less. In some embodiments, the camera is allowed to switch from a first to a second storyline by traveling a distance that requires a traveling time of greater than 1 second if a ranking difference between the first and second storylines lies within a spectrum of values between a lower end point value and a higher end point value. In some embodiments, the lower end point value is a first sliding scale ranging from 0.9 to 2 (with incremental values of 0.1). In an embodiment, the lower end point value is preset at 1 point by default. In some embodiments, the higher end point value is a second sliding scale ranging from 2 to 5 (with incremental values of 0.1). In an embodiment, the higher end point value is preset at 3 points by default. Thus, in some embodiments, the camera is allowed to switch if the ranking difference between the first and second storylines ranges from 1 to 3 points or is otherwise confined to a predefined range of less than a predefined value, and is not allowed to switch if the ranking difference exceeds the predefined value. However, as mentioned earlier, the lower end of this range may vary in accordance with the first sliding scale while the upper end of this range may vary in accordance with the second sliding scale. It should be appreciated that lower values within the first sliding scale result in the camera switching more frequently between storylines.
On the other hand, higher values within the first sliding scale result in the camera being more “sticky” and preferring to remain focused on a storyline. Accordingly, the virtual camera is programmed to switch at a speed or rate that is dependent upon a first sliding scale, wherein the first sliding scale is indicative of a ranking difference between storylines. In some embodiments, a value for the lower end of the spectrum automatically resets towards higher values of the first sliding scale after the virtual camera changes focus or switches from the first to the second storyline, thereby ensuring the virtual camera does not switch storylines too rapidly. Therefore, after the virtual camera switches in accordance with a first value of the first sliding scale, that first value is reset to a second value that will cause the virtual camera to switch at a slower rate than the first value.
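The first parameter's switch decision can be sketched as follows. This sketch is a simplification under stated assumptions: the function name and parameter names are hypothetical, short hops use the 0.5-second embodiment, and the sliding-scale window uses the default end points of 1 and 3 points; the post-switch resetting of the lower end point is left to the caller:

```python
def should_switch(rank_current, rank_candidate, travel_time_s,
                  fast_travel_limit_s=0.5, lower_end=1.0, higher_end=3.0):
    """Decide whether the camera may switch to a candidate storyline.
    Short hops (reachable within fast_travel_limit_s) are allowed for
    any better-ranked storyline; longer hops are allowed only when the
    ranking difference lies within the [lower_end, higher_end] window."""
    diff = rank_candidate - rank_current
    if diff <= 0:
        return False                      # never switch to a worse storyline
    if travel_time_s <= fast_travel_limit_s:
        return True                       # close enough to move immediately
    return lower_end <= diff <= higher_end
```

Raising `lower_end` after each switch (toward the top of the first sliding scale) would make the camera progressively “stickier,” matching the anti-oscillation behavior described above.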


A second parameter relates to how much the camera can rotate, tilt and/or pan. For example, in some embodiments, the camera is allowed to rotate at a predetermined angular speed and acceleration. Accordingly, the virtual camera is programmed to rotate, tilt and/or pan at a scale or speed that is constrained to a predetermined set of threshold values, thereby ensuring the virtual camera does not present an excessively jarring view. For example, the virtual camera may be constrained to rotate, tilt or pan less than 45 degrees, 90 degrees, 135 degrees, 180 degrees, 225 degrees, 270 degrees, 315 degrees, 360 degrees, or some increment therein.
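A per-update rotation clamp along these lines might look as follows. This is an illustrative sketch (function and parameter names assumed); it limits a single heading axis to a maximum step, taking the shortest arc toward the target:

```python
def clamped_rotation(current_deg, target_deg, max_step_deg=45.0):
    """Rotate the camera heading toward target_deg, but never by more
    than max_step_deg in one update, avoiding a jarring snap."""
    # Shortest signed arc from current to target, in (-180, 180].
    delta = (target_deg - current_deg + 180.0) % 360.0 - 180.0
    step = max(-max_step_deg, min(max_step_deg, delta))
    return (current_deg + step) % 360.0
```

Applying the same clamp per frame to tilt and pan yields bounded angular speed; varying `max_step_deg` over time would additionally bound angular acceleration.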


A third parameter relates to a predetermined (yet customizable) ranking threshold. For example, a second storyline ranking must exceed a first storyline ranking by at least ‘x’ number of points for the camera to be switched from the first to the second storyline. That is, a new storyline must be ranked ‘x’ number of points better than the existing storyline for the module 135 to switch the camera. In various embodiments, ‘x’ is at least 1 point, although one of ordinary skill in the art would appreciate that the precise units used may vary depending on the scale being used.


A fourth parameter relates to establishing a predefined boundary or ‘sandbox’ beyond which the virtual camera is not permitted to be moved or relocated. In one embodiment, module 135 defines a boundary, defined in terms of three-dimensional coordinates of a virtual map, around the coordinate location of the virtual camera. If, as a result of selecting a new storyline to broadcast, module 135 determines a virtual camera's position should be redefined to a new coordinate location that is outside the boundary, the virtual camera will be positioned at a coordinate point on the boundary that is closest to the new coordinate location but not external to that boundary. Accordingly, upon determining a new storyline and therefore a new position for the virtual camera, module 135 compares the coordinates of the new position against the coordinates of the predefined boundary and determines if the coordinates of the new position are outside the coordinates of the predefined boundary. If the coordinates of the new position are outside the coordinates of the predefined boundary, the module 135 determines coordinates on the predefined boundary that are closest to, or representative of, the coordinates of the new position. If the coordinates of the new position are inside the coordinates of the predefined boundary, the module 135 moves the virtual camera to the coordinates of the new position.
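For an axis-aligned ‘sandbox,’ the boundary comparison described in the fourth parameter reduces to a per-axis clamp: the clamped point is the point on or inside the box closest to the requested position. The following Python sketch assumes the boundary is an axis-aligned box given by its minimum and maximum corner coordinates (names are illustrative):

```python
def clamp_to_sandbox(position, box_min, box_max):
    """Clamp a requested camera position to the nearest point on or
    inside an axis-aligned 'sandbox' boundary. Positions already inside
    the boundary are returned unchanged."""
    return tuple(max(lo, min(p, hi))
                 for p, lo, hi in zip(position, box_min, box_max))
```

For non-box boundaries the same rule applies conceptually, but finding the closest boundary point requires projecting onto the boundary geometry rather than a per-axis clamp.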


Before describing the fifth parameter, it should first be appreciated that, during the above described transitions from one virtual camera position to another virtual camera position, the virtual camera remains on, thereby continuously displaying the surrounding visual and auditory elements of the video game as it moves from the first virtual camera position, angle, tilt, field of view or focus to the second virtual camera position, angle, tilt, field of view or focus. However, there are situations where the second position is so far from the first position or situations where getting to the second position from the first position requires passing through such a contorted path that continuously displaying the surrounding visual and auditory elements of the video game as it moves from the first virtual camera position to the second virtual camera position would take too long, require moving too fast, and/or would require moving in a jarring manner.


In such situations, a fifth parameter, which may be implemented in place of the fourth parameter, establishes a predefined boundary or ‘sandbox’ beyond which the virtual camera would “teleport” from a first current position within the boundary to a second future position outside the boundary. The view of the virtual camera first momentarily fades to black (fade-out) upon leaving the first position and then fades to the new view (fade-in) when at the second position. Accordingly, module 135 may implement the teleport function when the module 135 determines the distance or pathway between a first position and a second position is too great, or too contorted, such that the speed the camera would have to adopt to get to the new viewpoint in a standard camera transition time would create a video sequence too jarring and/or disorienting for viewers, possibly causing motion sickness in viewers. In such a case, the teleport function fades the viewer's view to black (fade-out), instantly moves the camera to the new viewpoint, and then fades the viewer's view back in from black (fade-in), thereby not continuously displaying the surrounding visual and auditory elements of the video game as it moves from the first virtual camera position, angle, tilt, field of view or focus to the second virtual camera position, angle, tilt, field of view or focus. In some embodiments, the module 135 teleports the virtual camera when 1) the module 135 determines there is obstructive geometry or other visual hazards between a first position and a second position or the pathway is otherwise too contorted, 2) module 135 determines to jump-cut between multiple cameras responsible for covering different subset-volumes of a playing field, or 3) the module 135 determines to reset a poorly-performing camera.
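The choice between a smooth, continuous transition and a fade-out/teleport/fade-in can be sketched as a simple decision. This is a reduced illustration: the specification describes several triggers (contorted paths, jump-cuts between cameras, camera resets), of which this sketch models only travel time and path contortion; all names and the 1-second threshold are assumptions:

```python
def transition_mode(travel_time_s, path_is_contorted,
                    max_smooth_time_s=1.0):
    """Pick how the camera reaches its next position: a smooth move
    with the camera continuously displaying its surroundings, or a
    fade-out/teleport/fade-in when the trip would take too long or the
    path is too contorted to traverse without jarring the viewer."""
    if path_is_contorted or travel_time_s > max_smooth_time_s:
        return "teleport"   # fade to black, jump, fade back in
    return "smooth"
```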


It should be appreciated that the parameters are only exemplary and in no way exhaustive and therefore may be expanded or relaxed by the administrator as needed. Thus, the plurality of camera intelligence rules or logic implements the aforementioned exemplary parameters to determine ‘when’ the module 135 should navigate/move the camera between storylines in a game match.


In accordance with aspects of the present specification, the plurality of camera intelligence rules or logic also determines ‘how’ the module 135 should navigate/move the camera between storylines in a game match. In some embodiments, rather than a traditional overhead, third-person, or first-person view of the storylines, the module 135 enables navigation, positioning and viewing orientation of the virtual camera such that to the spectators it appears as if the virtual camera is being operated on a drone or by a live cameraman. The module 135 achieves this effect by enabling the virtual camera to navigate as if the camera is trying to move, position and orient itself to get an ideal shot of a storyline; however, the camera has programmatically imposed limitations that mimic real-world like conditions or constraints in how it can navigate, thereby preventing it from being able to always get into position to record an ideal shot (similar to a real drone or cameraman trying to capture the storyline). In some embodiments, an ideal shot corresponds to moving, positioning and orienting the camera such that two players (from opposing teams) in the first person shooter (FPS) game session are directly in view along an imaginary center vertical line of the game display area/screen. In some embodiments, an ideal shot corresponds to moving, positioning and orienting the camera for focusing on a center point of an imaginary line drawn between the two players. In some embodiments, an ideal viewing position and orientation of the camera is one that further includes as many other players as possible.


In embodiments, the module 135 acquires positions of the two players, defined in terms of three dimensional coordinates in a virtual map, as well as that of a point midway, also referred to as a center point, on an imaginary ray joining the positions of the two players. In some embodiments, the module 135 is programmed to cause the virtual camera to adopt a position close to the nearest player, with reference to a current position of the virtual camera at any given point in time, within the storyline associated with the two players. The module 135 is further programmed to cause the virtual camera to offset from the nearest player's position in a direction away from the other player and the center point, so that all three dimensional coordinates related to interactions occurring in the storyline are located on a single side of the virtual camera. This ensures that all coordinates or points, relevant to the storyline, are in frame. From that location, the virtual camera is allowed to rotate to point towards the center point.


For example, assuming two players are positioned at points A and B, defined by coordinates (X1, Y1, Z1) and (X2, Y2, Z2), respectively, in the virtual game map and further assuming a virtual camera is already positioned at a coordinate that is closer to player B (X2, Y2, Z2) than player A (X1, Y1, Z1), then the module 135 causes the virtual camera to move to, or be assigned to, a three dimensional coordinate such that 1) the virtual camera is closer to player B than player A, 2) both players A and B are located on one side (right, left, below, above) of the virtual camera and 3) the virtual camera is offset from the position of player B by a predefined orientation and/or distance.
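The placement described above can be sketched as follows. This illustrative Python sketch (function name, input format, and default offset distance are assumptions; the 196-unit default borrows from the follow-distance example given later) picks the player nearest the camera's current position, offsets the camera away from the other player along the line joining them so both players sit on one side of the camera, and returns the center point the camera should then rotate toward:

```python
import math

def camera_position(player_a, player_b, camera_pos, offset_units=196.0):
    """Place the camera behind the player nearest to its current
    position, offset away from the other player so both players (and
    the center point between them) stay in frame on one side."""
    center = tuple((a + b) / 2.0 for a, b in zip(player_a, player_b))
    # Pick the player nearest the camera's current position.
    near = min((player_a, player_b), key=lambda p: math.dist(p, camera_pos))
    far = player_b if near is player_a else player_a
    # Direction from the far player toward (and past) the near player.
    away = tuple(n - f for n, f in zip(near, far))
    length = math.dist(near, far) or 1.0
    new_pos = tuple(n + offset_units * a / length
                    for n, a in zip(near, away))
    return new_pos, center  # the camera then rotates to look at `center`
```

With players at (0, 0, 0) and (100, 0, 0) and the camera already near the second player, the camera is placed past that player on the far side, with both players and the center point in front of it.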


In embodiments, at least the following factors are considered when determining the orientation and distance of an offset of the virtual camera from the nearest player's position: a) the offset must be by a predefined distance ‘d’ of units that may vary based on a game mode and b) the offset position of the virtual camera will not cause a visible game object to be positioned between the virtual camera and one or more players, thereby avoiding an obstruction or blocking of the virtual camera from obtaining a shot from the offset position. In embodiments, the system determines if one or more visible game objects are virtually positioned within the field of view of the virtual camera and, if one or more visible game objects block the virtual camera's ability to see the nearest player from the predefined distance ‘d’, the virtual camera is re-positioned to reduce the predefined distance of units to a minimum possible distance (from the nearest player) required for an unobstructed shot.


For example, the module 135 is configured to cause the virtual camera to follow a storyline associated with the two players, A and B, such that the virtual camera is at a distance of 196 units behind and 64 units above the nearest player, and continues to follow as close as 128 units behind and 0 units above the nearest player. It should be appreciated that these distances may vary according to at least the different modes of viewing the game. For example, a longer-distance battle-royale mode may require the virtual camera to be programmed to follow the nearest player from a longer distance, while a sporting event, such as boxing, may require a shorter-distance mode in which the virtual camera is programmed to follow the nearest player from a shorter distance.


In some embodiments, the module 135 enables the virtual camera to move, position and/or orient itself to achieve an ideal shot less than 100% of the time. For example, an ideal shot, as defined above, occurs between 1% and 99% of the time, or any numerical increment therein, while, for the remainder of the time, the virtual camera is programmed to provide a shot that is subject to one or more artificially applied constraints to achieve the effect of a real drone or cameraman. Module 135 is programmed to apply such artificially applied constraints in the form of one or more rules or logic that embody one or more constraining or limiting navigation parameters such as, but not limited to:

    • Speed of camera movement—in some embodiments, a maximum speed and acceleration of the camera is a function of a maximum speed and acceleration of the players in the first person shooter (FPS) game session. For example, in some cases the maximum speed and acceleration of the camera is the same as that of the players whereas in other cases the maximum speed and acceleration of the camera are more or less than that of the players. Such variation, of the maximum speed and acceleration of the camera vis-à-vis the maximum speed and acceleration of the players, enables creation of an effect that when a player is moving slowly, the camera is able to stay right behind the action or storyline, but when the player suddenly takes off, the camera appears to lag slightly behind the action and eventually catches up when the player slows down.
    • Angular speed of camera rotation—in some embodiments, an angular speed or acceleration of rotation of the camera is constrained to predetermined (yet customizable) degrees per second. This limitation again enables creation of the effect that sometimes the camera is unable to capture a perfect or ideal shot, but eventually catches up to the action.
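The speed-capped follow behavior described in the first bullet can be sketched as a per-update step. This is an illustrative sketch (names and the snap-when-close behavior are assumptions): when the target outruns the camera's speed cap, the camera falls behind and naturally catches up once the target slows:

```python
def follow_step(camera_pos, target_pos, max_speed, dt):
    """Move the camera toward its target, capped at max_speed units per
    second, so the camera lags behind a fast-moving player and catches
    up when the player slows down."""
    delta = tuple(t - c for t, c in zip(target_pos, camera_pos))
    dist = sum(d * d for d in delta) ** 0.5
    max_step = max_speed * dt
    if dist <= max_step:
        return tuple(target_pos)          # within reach: arrive this update
    scale = max_step / dist
    return tuple(c + d * scale for c, d in zip(camera_pos, delta))
```

Calling `follow_step` once per frame with `dt` equal to the frame time produces the drone-like lag effect; the angular-speed constraint in the second bullet could be produced analogously by capping per-frame rotation.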


In some embodiments, the plurality of rules or logic implemented by the module 135 enable movement, positioning and/or orientation of the camera by considering a plurality of game factors such as, but not limited to, what weapon a player (in the first person shooter (FPS) game session), being followed by the camera, is holding. Thus, in embodiments, the camera is automatically navigated to stay close (to the player) if the player is holding a short range melee weapon while pulling back if the player is using a machine gun.


In some embodiments, the plurality of camera intelligence rules or logic implemented by the module 135 provides fail-safes in case the camera is no longer capturing the most interesting or highest-ranking storyline but it is undesirable to pan the camera to a new storyline (owing to the plurality of navigational limiting parameters, for example). In some embodiments, therefore, the video output of the camera is allowed to simply fade to black and the camera is moved, repositioned or reoriented. In some embodiments, the plurality of camera intelligence rules or logic implemented by the module 135 performs collision detection such that the camera avoids or otherwise navigates around solid structures and objects in the level.



FIG. 3 is a computer-implemented method 300 of controlling navigation, positioning and orientation of a virtual camera configured to capture a plurality of storylines in a multiplayer video game for broadcasting to at least one spectator device, in accordance with some embodiments of the present specification. In embodiments, the method 300 is implemented by the camera intelligence module 135. Referring now to FIGS. 1 and 3, at step 305, the module 135 either receives from the module 130 or accesses from the database system 150, overall ranking for the plurality of storylines in a game match of the multiplayer video game.


At step 310, it is determined if the virtual camera should be navigated, positioned and/or oriented to capture a storyline determined to have a highest overall ranking amongst the plurality of storylines, subject to a plurality of limiting factors. In some embodiments, these limiting factors, or camera navigation rules, are predefined with reference to: a first parameter that limits the distance which the virtual camera must travel to capture the storyline having the highest overall ranking; a second parameter that limits at least one of the rotation, tilt, and pan required for the virtual camera to capture the highest overall ranking storyline; and a third parameter that requires the highest overall ranking of the storyline to exceed an overall ranking of an immediately preceding storyline by a predetermined threshold number of points.


At step 315, the virtual camera is navigated, positioned and/or oriented to capture video data of a storyline determined to have a highest overall ranking amongst the plurality of storylines, subject to a plurality of camera navigation rules. In some embodiments, the plurality of camera navigation rules comprise at least one of a speed of movement, acceleration of movement, an angular speed of rotation and an angular acceleration of rotation of the virtual camera. In embodiments, the captured video data is broadcasted to one or more spectator devices 145 over the network 115.
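Steps 305 through 315 can be sketched as follows: take the per-storyline overall rankings and switch the camera to the top-ranked storyline only when the switch satisfies the three limiting parameters described above. The class, function, and threshold values here are assumptions for illustration.

```python
# Minimal sketch of steps 305-315: select the next storyline to capture,
# subject to a travel limit, a rotation limit, and a minimum ranking
# improvement over the currently captured storyline.

from dataclasses import dataclass

@dataclass
class Storyline:
    storyline_id: int
    overall_rank: float     # higher means more interesting (step 305 input)
    camera_travel: float    # distance the camera must travel to frame it
    rotation_needed: float  # degrees of rotation required to frame it

def pick_storyline(storylines, current, max_travel=50.0,
                   max_rotation=90.0, min_improvement=10.0):
    """Step 310: decide which storyline the camera should capture next."""
    best = max(storylines, key=lambda s: s.overall_rank)
    if best.storyline_id == current.storyline_id:
        return current
    if best.camera_travel > max_travel:            # first limiting parameter
        return current
    if best.rotation_needed > max_rotation:        # second limiting parameter
        return current
    if best.overall_rank - current.overall_rank < min_improvement:  # third
        return current
    return best                                    # step 315: capture it
```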


Spectator Engagement Module 140


In embodiments, each of the one or more spectator devices 145 implements the spectator engagement module 140 that operates as a game viewing application. The module 140 implements a plurality of instructions or programmatic code to provide a spectator with at least one interface between the spectator and the video data broadcast of a game match received from the camera intelligence module 135. The engagement module 140 generates at least one interface to receive the video data broadcasted by the camera intelligence module 135, render the video data associated with the game match, and enable the spectator to view a plurality of viewing options. The module 140 also captures and processes spectator inputs and interactions with respect to the plurality of viewing options and provides updates to the camera intelligence module 135 over the network 115 for subsequent implementation.


In various embodiments, the engagement module 140 provides a spectator with the following exemplary viewing options:

    • A GUI (Graphical User Interface) displaying a dashboard comprising information of the top ‘n’ ranking storylines in a game match, such as the first person shooter (FPS) game session involving Team Alpha and Team Beta. It should be appreciated that the ranking data is periodically generated by the storyline ranking module 130 and stored/updated in the database system 150. The engagement module 140 accesses the most recently updated ranking data from the database system 150 and displays it on the dashboard. In some embodiments, the module 140 enables the spectator to choose any one of the ‘n’ ranking storylines that he or she would like to view. The spectator's choice of the storyline is communicated to the camera intelligence module 135 that, as a result, broadcasts the chosen storyline to the spectator's device.
    • A GUI to enable a spectator to filter the storylines. For example, in some embodiments, the spectator can choose to view video data pertaining to storylines involving only x, y and z players or storylines related to only a particular team.
    • A GUI to enable a spectator to request kill-cam replays during or after completion of a game match. As known to persons of ordinary skill in the art, in first person shooter (FPS) games, killcam refers to a virtual camera that reveals the way that a player was killed, displaying an instant replay of the player's own death through another player's eyes.
    • A GUI that provides a spectator with a plurality of parameters to affect or control a positioning and/or viewing orientation of the virtual camera that records various storylines. In some embodiments, the plurality of parameters may include tilt and pan options of the virtual camera.
    • A GUI that enables a spectator to choose a picture-in-picture (PIP) mode of viewing. In PIP mode, a first game video is displayed on the full GUI screen at the same time as a second game video is displayed in a smaller inset window. Sound is usually from the game video displayed on the full GUI screen. In some embodiments, the first and second game videos may be two different storylines or two different views of the same storyline. For example, the second game video (rendered in the small inset window) could be a view from a sniper's scope while the first game video (rendered in the full GUI screen) could be the standard storyline view of the sniper. In another example, the first and second game videos could be the two highest ranking storylines. In some embodiments, the engagement module 140 enables the spectator to toggle between the first and second game videos.
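The storyline filter option above can be sketched as a small selection function. The dictionary fields (`"players"`, `"teams"`) are assumed data shapes for illustration, not structures defined in the specification.

```python
# Illustrative sketch of the storyline filter: the spectator selects players
# or a team, and only storylines involving those participants are offered.

def filter_storylines(storylines, players=None, team=None):
    """Return storylines involving any chosen player, or the chosen team."""
    result = []
    for s in storylines:
        if players and set(players) & set(s["players"]):
            result.append(s)   # storyline involves at least one chosen player
        elif team and team in s["teams"]:
            result.append(s)   # storyline involves the chosen team
    return result
```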


In alternate embodiments, each of the one or more spectator devices 145 implements the game module 120′ configurable to operate as a game viewing application for the spectator. In accordance with alternate embodiments, the game module 120′ can be operated in either a player-centric or a spectator-centric mode. When operated in the player-centric mode, the game module 120′ allows all functionalities relevant for a player to interact with and play the game match. However, when operated in the spectator-centric mode, the game module 120′ allows all functionalities relevant for a spectator to view the game match. Thus, in spectator-centric mode, the game module 120′ generates at least one interface to receive the video data broadcasted by the camera intelligence module 135, render the video data associated with the game match, and enable the spectator to view the plurality of viewing options; it also captures and processes spectator inputs and interactions with respect to the plurality of viewing options and provides updates to the camera intelligence module 135 over the network 115 for subsequent implementation.


In further alternate embodiments, each of the one or more spectator devices 145 implements a client software application (such as, for example, the Open Broadcaster Software) that receives the video data, broadcasted by the camera intelligence module 135, via a third party video game streaming service such as, for example, Twitch or Mixer.



FIG. 4 is a computer-implemented method 400 of enabling at least one spectator to view video data associated with one or more storylines in a game match of a multiplayer video game, in accordance with some embodiments of the present specification. In embodiments, the method 400 is implemented by the spectator engagement module 140 on at least one spectator device 145 corresponding to the at least one spectator. Referring now to FIGS. 1 and 4, at step 405, the module 140 generates at least one GUI to receive video data pertaining to the one or more storylines, broadcasted by the camera intelligence module 135. At step 410, the video data is rendered on the at least one GUI for viewing by the at least one spectator. It should be appreciated that in some embodiments, the received and rendered video data pertains to a default storyline determined, by the camera intelligence module 135, to have a highest overall ranking in the game match (subject to a plurality of rules).


At step 415, upon request by the at least one spectator (such as by clicking an icon on the at least one GUI generated in step 405), the module 140 generates at least one GUI to display a plurality of options for the at least one spectator to choose from in order to customize the video data being rendered.


If the at least one spectator chooses a first option then, at step 420, the module 140 accesses the most recently updated overall ranking data from the database system 150 and displays information about the top ‘n’ overall ranking storylines on at least one GUI. At step 425, the at least one spectator chooses any one of the ‘n’ ranking storylines. At step 430, the at least one spectator's choice of the storyline is communicated to the camera intelligence module 135 that, as a result, broadcasts video data related to the chosen storyline to the at least one spectator device.


If the at least one spectator chooses a second option then, at step 435, the module 140 displays identification credentials (such as, for example, names) of a plurality of players and/or teams participating in the game match on at least one GUI. At step 440, the at least one spectator chooses at least one of the plurality of players and/or teams. At step 445, the at least one spectator's choice is communicated to the camera intelligence module 135 that, as a result, broadcasts video data pertaining to the storylines of the chosen at least one player and/or team to the at least one spectator device.


If the at least one spectator chooses a third option then, at step 450, the module 140 generates at least one GUI and renders therein kill-cam replays of one or more storylines during or after completion of the game match.


If the at least one spectator chooses a fourth option then, at step 455, the module 140 generates at least one GUI to enable the at least one spectator to choose at least one of a customized positioning and/or viewing orientation of the virtual camera that records various storylines. At step 460, the at least one spectator's choice(s) is communicated to the camera intelligence module 135 that, as a result, broadcasts video data (to the at least one spectator device) of various storylines in accordance with the chosen positioning and/or viewing orientation of the virtual camera.


If the at least one spectator chooses a fifth option then, at step 465, the module 140 generates a picture-in-picture (PIP) mode of viewing video data related to at least two storylines. In PIP mode, a first video data is displayed on the full GUI screen at the same time as a second video data is displayed in a smaller inset window. In some embodiments, the engagement module 140 enables the at least one spectator to toggle between the first and second video data.
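The five branches of method 400 can be sketched as a simple dispatch from the spectator's choice to a handler. The handler names are illustrative placeholders, not functions defined by the specification.

```python
# Sketch of the option dispatch in steps 415-465: each spectator choice maps
# to a handler that customizes the broadcast; an unrecognized choice falls
# back to the default highest-ranked storyline of steps 405-410.

def handle_spectator_choice(option: int) -> str:
    handlers = {
        1: "show_top_n_storylines",      # steps 420-430
        2: "filter_by_player_or_team",   # steps 435-445
        3: "show_killcam_replays",       # step 450
        4: "custom_camera_controls",     # steps 455-460
        5: "picture_in_picture",         # step 465
    }
    return handlers.get(option, "default_highest_ranked_storyline")
```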


The above examples are merely illustrative of the many applications of the system and method of present specification. Although only a few embodiments of the present specification have been described herein, it should be understood that the present specification might be embodied in many other specific forms without departing from the spirit or scope of the specification. Therefore, the present examples and embodiments are to be considered as illustrative and not restrictive, and the specification may be modified within the scope of the appended claims.

Claims
  • 1. A computer-implemented method of controlling at least one of a navigation, positioning or orientation of a virtual camera configured to capture a plurality of storylines in a multiplayer video game for transmitting to at least one computing device, said method comprising: defining, in at least one server remote from the at least one computing device, a function to identify the plurality of storylines;defining, in the at least one server, at least one base criterion for ranking each of the plurality of storylines;determining, in the at least one server, a base rank for each of the plurality of storylines, wherein the base rank is a function of said at least one base criterion;defining, in the at least one server, at least one factor to weight the base rank, said at least one factor having a predetermined value;determining, in the at least one server, an overall rank for each of the plurality of storylines by associating said at least one factor with the base rank, wherein the overall rank is determined at a predetermined frequency in the game;subject to one or more rules, programmatically moving, in the at least one server, the virtual camera to capture one of the plurality of storylines determined to have a highest overall ranking among the plurality of storylines, wherein the one or more rules limit a time which the virtual camera must take to travel to capture the storyline determined to have the highest overall ranking and wherein the time ranges between 0.25 to 1 second;transmitting, from the at least one server to the at least one computing device, the one of the plurality of storylines determined to have a highest overall ranking among the plurality of storylines; andcausing said one of the plurality of storylines to be rendered on the at least one computing device.
  • 2. The computer-implemented method of claim 1, wherein the function is dependent on at least a genre of said multiplayer video game.
  • 3. The computer-implemented method of claim 1, wherein the at least one server is configured to concurrently transmit to at least 20 computing devices.
  • 4. The computer-implemented method of claim 1, wherein the at least one base criterion comprises a virtual distance of each player with respect to each other player in the multiplayer video game.
  • 5. The computer-implemented method of claim 4, wherein the base rank for each of the plurality of storylines is inversely related to the virtual distances of each player with respect to each of the other players.
  • 6. The computer-implemented method of claim 1, wherein the at least one factor is a function of at least one of: a first value representative of a relationship between each player and each other player in the multiplayer video game;a second value representative of an orientation of each player relative to each other player in the multiplayer video game; ora third factor representative of a degree of an unobstructed view between each player and each other player in the multiplayer video game.
  • 7. The computer-implemented method of claim 1, wherein the one or more rules further limit at least one of rotation, tilt or pan required for the virtual camera to capture the storyline determined to have said highest overall ranking.
  • 8. The computer-implemented method of claim 1, wherein the one or more rules further require the highest overall ranking of the storyline to exceed an overall ranking of an immediately preceding storyline by a predefined value.
  • 9. The computer-implemented method of claim 1, wherein the one or more rules further limit at least one of a speed of movement of the virtual camera or an angular speed of rotation of the virtual camera.
  • 10. A system for controlling at least one of a navigation, positioning or orientation of a virtual camera configured to capture a plurality of storylines in a multiplayer video game executing on a plurality of gaming devices for transmitting to a plurality of spectator devices, said system comprising: at least one server for hosting a game session of the multiplayer video game and to transmit said game session to each of a plurality of spectator devices through a network;a plurality of gaming modules stored on the plurality of gaming devices remote from the at least one server and configured to enable a plurality of human players to play in the game session of the multiplayer video game;a plurality of spectator modules stored on the plurality of spectator devices remote from the at least one server and configured to enable a plurality of human spectators to view the transmission of the game session, wherein at least a portion of the plurality of spectator modules are executing on at least a portion of the plurality of gaming devices and wherein at least a portion of the plurality of gaming modules are executing on at least a portion of the plurality of spectator devices;a processor in said at least one server, said processor executing a plurality of executable programmatic instructions to: define a function to identify the plurality of storylines;define at least one base criterion for ranking each of the plurality of storylines;determine a base rank for each of the plurality of storylines, wherein the base rank is a function of the at least one base criterion;define at least one factor to weight the base rank;determine an overall rank for each of the plurality of storylines by associating the at least one factor with the base rank, wherein a value of the at least one factor has an increasing or decreasing effect on the overall ranking; andsubject to one or more rules, programmatically moving the virtual camera to capture one of the plurality of storylines determined to have a highest overall ranking among the plurality of storylines, wherein the one or more rules limit a time which the virtual camera must take to travel to capture the storyline determined to have the highest overall ranking and wherein the time ranges between 0.25 to 1 second;transmitting the one of the plurality of storylines determined to have a highest overall ranking among the plurality of storylines to the plurality of spectator modules; andcausing said one of the plurality of storylines to be rendered on the at least one computing device.
  • 11. The system of claim 10, wherein the at least one base criterion comprises a virtual distance of each player with respect to each other player participating in the game session.
  • 12. The system of claim 11, wherein the base rank for each of the plurality of storylines is inversely related to the virtual distances of each player with respect to each other player.
  • 13. The system of claim 10, wherein the at least one factor comprises at least one of: a first value representative of a relationship between each player with respect to each other player in the game session;a second value representative of an orientation of each player with respect to each other player in the game session; anda third value representative of a degree of an unobstructed view between each player and each other player in the game session.
  • 14. The system of claim 10, wherein the one or more rules further limit at least one of rotation, tilt or pan required for the virtual camera to capture the storyline determined to have the highest overall ranking.
  • 15. The system of claim 10, wherein the one or more rules further require the highest overall ranking of the storyline to exceed an overall ranking of an immediately preceding storyline by a predefined value.
  • 16. The system of claim 10, wherein the one or more rules further limit at least one of a speed of movement of the virtual camera or an angular speed of rotation of the virtual camera.
  • 17. A computer readable non-transitory medium comprising a plurality of executable programmatic instructions wherein, when said plurality of executable programmatic instructions are executed by a processor in at least one server, a process for controlling navigation, positioning and orientation of a virtual camera configured to capture a plurality of storylines in a multiplayer video game for transmitting to at least one spectator device, said plurality of executable programmatic instructions comprising: programmatic instructions, stored in said computer readable non-transitory medium, for determining a base rank for each of said plurality of storylines, wherein the base rank is a function of at least one base criterion;programmatic instructions, stored in said computer readable non-transitory medium, for determining an overall rank for each of said plurality of storylines by associating at least one weighting factor with said base rank, wherein said overall rank is determined at a predetermined frequency in the game, and wherein said weighting factor has an accentuating or diminishing effect on the overall rank; andprogrammatic instructions, stored in said computer readable non-transitory medium, for switching the virtual camera to a second storyline of a second overall rank from a first storyline of a first overall rank, wherein said switching is subject to one or more rules and wherein the one or more rules limit a time which the virtual camera must take to travel to capture the storyline determined to have the highest overall ranking and wherein the time ranges between 0.25 to 1 second; andprogrammatic instructions, stored in said computer readable non-transitory medium, for causing said one of the plurality of storylines to be rendered on the at least one computing device.
  • 18. The computer readable non-transitory medium of claim 17, wherein the at least one base criterion is a distance of each player with respect to every other player participating in a game match of said multiplayer video game, and wherein the base rank for each of the plurality of storylines is inversely related to the distance of each player with respect to every other player.
CROSS-REFERENCE

The present application is a continuation application of U.S. patent application Ser. No. 16/284,234, entitled “Systems and Methods for Controlling Camera Perspectives, Movements, and Displays of Video Game Gameplay”, filed on Feb. 25, 2019, and issued as U.S. Pat. No. 11,305,191 on Apr. 19, 2022, which relies on U.S. Provisional Patent Application No. 62/783,147, of the same title and filed on Dec. 20, 2018, for priority. The above applications are hereby incorporated by reference in their entirety.

Related Publications (1)
Number Date Country
20220274016 A1 Sep 2022 US
Provisional Applications (1)
Number Date Country
62783147 Dec 2018 US
Continuations (1)
Number Date Country
Parent 16284234 Feb 2019 US
Child 17654839 US