The disclosure below relates generally to adaptive screen share pointers and reactions for computer gameplay.
As understood herein, some video gamers stream their gameplay to others so that those people can watch the gameplay. As also recognized herein, current systems are technologically limited in their ability to provide adequate viewer participation, which, as present principles recognize, could otherwise provide a more robust and engaging viewing experience from a technological perspective.
Accordingly, present principles are directed to enhancing the overall execution environment of the game itself as well as providing enriched technology-based interactivity features.
As such, in one aspect an apparatus includes a processor assembly programmed with instructions to execute a computer game in which a first person plays the computer game. The processor assembly is also programmed with instructions to receive input from a second person that is viewing a livestream of the computer game but that is not controlling a character of the game, where the input includes a cursor control command and/or a graphic-based reaction. Based on the input, the processor assembly is programmed with instructions to present an output on a display associated with the first person, where the output indicates the input in a game-specific context.
In certain example implementations, the output may include the cursor being presented in an appearance associated with an ongoing game context for the computer game. Additionally or alternatively, the output may include the cursor being presented in an appearance relevant to a current game event.
As another example, the cursor control command may be a first cursor control command and the processor assembly may be configured to execute the first cursor control command during a first portion of the computer game. Per this example, the processor assembly may also be configured to determine that a cutscene is occurring and, based on the determination, decline to process additional cursor control commands.
As yet another example, again the cursor control command may be a first cursor control command but here the processor assembly may be configured to rotate permission to issue cursor control commands between viewers of the computer game based on past viewer inputs related to the first person's gameplay of the computer game.
Also consistent with present principles, in some cases the processor assembly may be configured to present, on a display associated with the second person, a selectable graphical object. Here the processor assembly may also be configured to receive the cursor control command, where the cursor control command may select the selectable graphical object, and to control an output of the computer game as presented on the display associated with the first person based on the cursor control command. The output of the computer game may have an effect that changes the current game state of the computer game.
Still further, if desired the processor assembly may be configured to receive the cursor control command, where the cursor control command may direct placement of a graphical object associated with the second person at a persistent location within the field of view of the first person as presented on the display. Based on the cursor control command, the processor assembly may be configured to present the graphical object at the persistent location.
As another example implementation, the processor assembly may be configured to receive the cursor control command, where the command selects a game object, and then, based on the cursor control command, to highlight the selected game object. The highlighting may not include a tracing of the cursor control command itself.
Additionally, in some cases the processor assembly may be configured to receive the cursor control command and, based on the cursor control command, execute a predefined game action reserved for non-character-controlling people to instigate during the computer game.
As another example, the input may include eye movement input associated with the second person, and here the processor assembly may be configured to move the cursor on the display according to the eye movement input, determine that the second person has closed one eye, and alter the appearance of the cursor as presented on the display based on the determination.
If desired, the processor assembly may also be configured to present the cursor on the display in a visual appearance conveying a sentiment about terrain over which the first person's game character is moving.
As yet another example, the processor assembly may be configured to present the cursor on the display in a visual appearance indicating data regarding a first game character's current game statistics, where the first game character may be a game character not controlled by the first person.
In terms of graphic-based reactions, in some cases the processor assembly may be configured to receive respective graphic-based reactions from respective people that are viewing the livestream but not controlling a character of the game and, responsive to a predetermined graphic-based reaction threshold being met, alter playout of the computer game. The predetermined graphic-based reaction threshold may relate to more than one graphic-based reaction of a same type being received.
Additionally or alternatively, the processor assembly may be configured to present haptic feedback to the first person based on the graphic-based reaction, where the haptic feedback may vary based on a reaction type associated with the graphic-based reaction.
In another aspect, a method includes executing a computer game in which a first person plays the computer game and receiving input from a second person that is viewing a livestream of the computer game but that is not controlling a character of the game. The input includes a cursor control command and/or a sentiment-based reaction. The method also includes, based on the input, presenting a game output on a display associated with the first person.
In one example, the method may include aggregating sentiment-based reactions provided by plural viewers and, based on the aggregation, presenting a reaction summary to the first person.
In still another aspect, a system includes at least one computer storage that is not a transitory signal. The computer storage includes instructions executable by at least one processor to execute a computer game in which a first person plays the computer game and to receive input from a second person that is viewing a livestream of the computer game. The input includes a cursor control command and/or a sentiment-based reaction. The instructions are also executable to present an output on a display associated with the first person based on the input.
In one example, the instructions may be executable to award at least one trophy to the first person based on an amount of sentiment-based reactions that are received while playing one or more particular aspects of the computer game, and to reward at least one viewer of the livestream based on inputs provided by that viewer. The inputs may be established by one or more cursor control commands and/or sentiment-based reactions.
The details of the present application, both as to its structure and operation, can be best understood in reference to the accompanying drawings, in which like reference numerals refer to like parts, and in which:
This disclosure relates generally to computer ecosystems including aspects of consumer electronics (CE) device networks such as but not limited to computer game networks. A system herein may include server and client components which may be connected over a network such that data may be exchanged between the client and server components. The client components may include one or more computing devices including game consoles such as Sony PlayStation® or a game console made by Microsoft or Nintendo or other manufacturer, virtual reality (VR) headsets, augmented reality (AR) headsets, portable televisions (e.g., smart TVs, Internet-enabled TVs), portable computers such as laptops and tablet computers, and other mobile devices including smart phones and additional examples discussed below. These client devices may operate with a variety of operating environments. For example, some of the client computers may employ, as examples, Linux operating systems, operating systems from Microsoft, or a Unix operating system, or operating systems produced by Apple, Inc., or Google. These operating environments may be used to execute one or more browsing programs, such as a browser made by Microsoft or Google or Mozilla or other browser program that can access websites hosted by the Internet servers discussed below. Also, an operating environment according to present principles may be used to execute one or more computer game programs.
Servers and/or gateways may include one or more processors executing instructions that configure the servers to receive and transmit data over a network such as the Internet. Or a client and server can be connected over a local intranet or a virtual private network. A server or controller may be instantiated by a game console such as a Sony PlayStation®, a personal computer, etc.
Information may be exchanged over a network between the clients and servers. To this end and for security, servers and/or clients can include firewalls, load balancers, temporary storages, proxies, and other network infrastructure for reliability and security. One or more servers may form an apparatus that implements methods of providing a secure community such as an online social website to network members.
A processor may be a single- or multi-chip processor that can execute logic by means of various lines such as address lines, data lines, and control lines and registers and shift registers. A processor assembly may include one or more processors acting independently or in concert with each other to execute an algorithm, whether those processors are in one device or more than one device.
Components included in one embodiment can be used in other embodiments in any appropriate combination. For example, any of the various components described herein and/or depicted in the Figures may be combined, interchanged, or excluded from other embodiments.
“A system having at least one of A, B, and C” (likewise “a system having at least one of A, B, or C” and “a system having at least one of A, B, C”) includes systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.
Present principles may employ machine learning models, including deep learning models. Machine learning models use various algorithms trained in ways that include supervised learning, unsupervised learning, semi-supervised learning, reinforcement learning, feature learning, self-learning, and other forms of learning. Examples of such algorithms, which can be implemented by computer circuitry, include one or more neural networks, such as a convolutional neural network (CNN), a recurrent neural network (RNN), which may be appropriate to learn information from a series of images, and a type of RNN known as a long short-term memory (LSTM) network. Support vector machines (SVM) and Bayesian networks also may be considered to be examples of machine learning models.
As understood herein, performing machine learning involves accessing and then training a model on training data to enable the model to process further data to make predictions. A neural network may include an input layer, an output layer, and multiple hidden layers in between that are configured and weighted to make inferences about an appropriate output.
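For illustration only, the following minimal Python sketch shows the layered input/hidden/output structure just described; the sizes, weights, and activation function are arbitrary assumptions rather than any particular trained model:

    import numpy as np

    # Toy two-layer network: input layer -> hidden layer -> output layer.
    # Real weights would come from training on training data as described.
    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)    # input (4) -> hidden (8)
    W2, b2 = rng.normal(size=(3, 8)), np.zeros(3)    # hidden (8) -> output (3)

    def infer(x):
        h = np.tanh(W1 @ x + b1)       # hidden-layer activations
        logits = W2 @ h + b2           # output-layer scores
        return int(np.argmax(logits))  # index of the inferred class

    print(infer(np.array([0.1, 0.5, -0.2, 0.7])))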
Now specifically referring to
Accordingly, to undertake such principles the AVD 12 can be established by some or all of the components shown in
In addition to the foregoing, the AVD 12 may also include one or more input and/or output ports 26 such as a high-definition multimedia interface (HDMI) port or a USB port to physically connect to another CE device and/or a headphone port to connect headphones to the AVD 12 for presentation of audio from the AVD 12 to a user through the headphones. For example, the input port 26 may be connected via wire or wirelessly to a cable or satellite source 26a of audio video content. Thus, the source 26a may be a separate or integrated set top box, or a satellite receiver. Or the source 26a may be a game console or disk player containing content. The source 26a when implemented as a game console may include some or all of the components described below in relation to the CE device 48.
The AVD 12 may further include one or more computer memories 28 such as disk-based or solid-state storage that are not transitory signals, in some cases embodied in the chassis of the AVD as standalone devices, or as a personal video recording device (PVR) or video disk player either internal or external to the chassis of the AVD for playing back AV programs, or as removable memory media, or the below-described server. Also, in some embodiments, the AVD 12 can include a position or location receiver such as but not limited to a cellphone receiver, GPS receiver, and/or altimeter 30 that is configured to receive geographic position information from a satellite or cellphone base station and provide the information to the processor 24 and/or determine an altitude at which the AVD 12 is disposed in conjunction with the processor 24. The component 30 may also be implemented by an inertial measurement unit (IMU) that typically includes a combination of accelerometers, gyroscopes, and magnetometers to determine the location and orientation of the AVD 12 in three dimensions, or by an event-based sensor.
Continuing the description of the AVD 12, in some embodiments the AVD 12 may include one or more cameras 32 that may be a thermal imaging camera, a digital camera such as a webcam, an event-based sensor, and/or a camera integrated into the AVD 12 and controllable by the processor 24 to gather pictures/images and/or video in accordance with present principles. Also included on the AVD 12 may be a Bluetooth transceiver 34 and other Near Field Communication (NFC) element 36 for communication with other devices using Bluetooth and/or NFC technology, respectively. An example NFC element can be a radio frequency identification (RFID) element.
Further still, the AVD 12 may include one or more auxiliary sensors 38 (e.g., a motion sensor such as an accelerometer, gyroscope, cyclometer, or a magnetic sensor, an infrared (IR) sensor, an optical sensor, a speed and/or cadence sensor, an event-based sensor, or a gesture sensor (e.g., for sensing gesture commands)) providing input to the processor 24. The AVD 12 may include an over-the-air TV broadcast port 40 for receiving OTA TV broadcasts providing input to the processor 24. In addition to the foregoing, it is noted that the AVD 12 may also include an infrared (IR) transmitter and/or IR receiver and/or IR transceiver 42 such as an IR data association (IRDA) device. A battery (not shown) may be provided for powering the AVD 12, as may be a kinetic energy harvester that may turn kinetic energy into power to charge the battery and/or power the AVD 12. A graphics processing unit (GPU) 44 and field programmable gate array 46 also may be included. One or more haptics generators 47 may be provided for generating tactile signals that can be sensed by a person holding or in contact with the device.
Still referring to
Now in reference to the afore-mentioned at least one server 52, it includes at least one server processor 54, at least one tangible computer readable storage medium 56 such as disk-based or solid-state storage, and at least one network interface 58 that, under control of the server processor 54, allows for communication with the other devices of
Accordingly, in some embodiments the server 52 may be an Internet server or an entire server “farm” and may include and perform “cloud” functions such that the devices of the system 10 may access a “cloud” environment via the server 52 in example embodiments for, e.g., network gaming applications. Or the server 52 may be implemented by one or more game consoles or other computers in the same room as the other devices shown in
The components shown in the following figures may include some or all components shown in
With the foregoing in mind, present principles recognize that screen sharing of computer/video games may allow gamers and viewers to share any game moment and content with each other. Aspects below may be combined with game streaming services where the flows of communication may be more one-to-many, and the viewers need not necessarily even be friends of the gamer but may be unacquainted viewers. These aspects along with others discussed below may help enhance the social game play experience and the technological capabilities of computer-based gaming systems in particular.
Thus, according to the disclosure below, share screen functionality may fulfill the gamer's desire to easily and directly share real-time gaming moments with friends and others to enjoy the moments together. Gamers may therefore casually stream their gameplay to friends and small groups in real time, while chatting and playing together in parties and other scenarios, creating a sense of togetherness despite the distributed and sometimes detached nature of computer-based gaming in particular. Present principles may be employed on video game consoles, on mobile applications executable at smartphones and other mobile devices, and on other types of devices including laptop and desktop computers.
With the foregoing in mind, reference is made to
Turning to
Turning to
Back to
Also note that while positioning/hovering of the cursor over the game object may be one form of selection of the game object, another form of selection consistent with present principles may be left-click or right-click input to the game object with a mouse after the associated cursor controlled by the mouse has been placed over it, or even a touch-based selection of the game object if the viewer is using a touch-enabled display. Further note that, according to the specific example of
Turning to
Now in reference to
Two examples of using a pointer/cursor for highlighting will now be described. Beginning with
Another example in terms of challenging situations is then shown in
Turning to examples involving reactions rather than pointer/cursor action per se, reference is now made to
Continuing now in reference to
Then, during deployment of the ANN while a given game instance is being executed, the executing device may receive game state data and/or the current game context from the game engine itself, or may simply receive the A/V data from the game if it does not have access to the game engine itself (e.g., for older legacy games). The device may then determine a context from the game state data/A/V data if the context is not provided directly by the game engine, and either way may then provide the received/determined context and/or raw game state data to the ANN as input for the ANN to generate an inference output in response. The output may be a cursor image to present or a reference identifier for the type of cursor to look up and present. The device may then present the inferred cursor image that corresponds to the output.
Accordingly, note that in some examples the different cursors themselves may all be included in a reference library of cursors that is accessible to the system. The cursors may each have their own reference ID. Some or all of the cursors in the library may have been used as ground truth for the training discussed above, and therefore the cursors need not be dynamically generated on the fly during gameplay but rather the inference output from the ANN may indicate a certain cursor from the library to use. But in other examples, a cursor image may in fact be indicated in the output even if not tied to an existing reference cursor, and the system might therefore use a generative image output by the ANN as the cursor image.
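By way of non-limiting illustration, the following Python sketch shows one possible deployment flow consistent with the description above; the library contents, context derivation, and function names are hypothetical assumptions, and a direct lookup stands in for the trained ANN's inference:

    # Hypothetical reference library of cursors, keyed by reference ID.
    CURSOR_LIBRARY = {
        "combat": "cursor_sword.png",
        "racing": "cursor_flag.png",
        "hazard": "cursor_skull.png",
    }

    def derive_context(game_state):
        # Stand-in for context determination from game engine state or A/V data.
        return "combat" if game_state.get("enemies_nearby") else "racing"

    def present_cursor(game_state):
        context = derive_context(game_state)
        # A trained ANN would map the context to a reference ID from the library
        # or to a generated cursor image; the lookup below stands in for that.
        reference_id = context
        return CURSOR_LIBRARY.get(reference_id, "cursor_default.png")

    print(present_cursor({"enemies_nearby": True}))   # -> cursor_sword.png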
With the foregoing in mind, reference is made specifically to
Turning to
Now in reference to
As shown in
Section 2130 of
Thus, it is to be more generally understood that the system may receive a cursor control command directing placement of a graphical object (a sticker from the section 2130) that is uniquely associated with the viewer at a persistent location within the field of view/POV of the streamer as presented on the streamer's own display. So based on the cursor control command, the system may present the graphical object (sticker) at the persistent location.
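As a minimal sketch of this persistence, and assuming a simple two-dimensional world-to-screen mapping purely for illustration, the sticker may be pinned to a world coordinate and re-projected each frame:

    class PersistentSticker:
        def __init__(self, viewer_id, image, world_x, world_y):
            self.viewer_id = viewer_id    # uniquely associated with this viewer
            self.image = image
            self.world_x, self.world_y = world_x, world_y

        def screen_position(self, camera_x, camera_y):
            # Re-projected each frame so the sticker persists at its game
            # location within the streamer's field of view.
            return self.world_x - camera_x, self.world_y - camera_y

    sticker = PersistentSticker("viewer42", "star.png", world_x=350.0, world_y=120.0)
    print(sticker.screen_position(camera_x=300.0, camera_y=100.0))   # (50.0, 20.0)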
Moving on to
Turning to
This is shown in the example of
Responsive to the poll ending after a threshold period of time, streamer-designated period of time, or other trigger, an overlay 2400 as shown in
Referring now to
Beginning at block 2500, the system may execute a computer game in which a first person plays the computer game. The logic may then proceed to block 2505 where the system may stream the first person's gameplay to other viewers, e.g., over the Internet, a third-party streaming website, a dedicated console manufacturer's network, etc. Thereafter the logic may proceed to block 2510 where the system may receive inputs from viewers, including at least a second person that is viewing the livestream of the computer game but that is not controlling a character of the game. The input may include a cursor control command to move one of the cursors mentioned above within the first person's field of view/POV, as well as a graphic-based sentiment reaction as also discussed above (e.g., emojis).
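One illustrative way the viewer inputs received at block 2510 might be represented and dispatched is sketched below in Python; the message fields and overlay methods are assumptions for the example:

    from dataclasses import dataclass

    @dataclass
    class ViewerInput:
        viewer_id: str
        kind: str       # "cursor" for cursor control commands, "reaction" for emojis
        payload: dict

    class StreamerOverlay:
        def move_cursor(self, viewer_id, x, y):
            print(f"{viewer_id} cursor -> ({x}, {y})")
        def show_reaction(self, viewer_id, emoji):
            print(f"{viewer_id} reacted with {emoji}")

    def handle_viewer_input(msg, overlay):
        if msg.kind == "cursor":
            overlay.move_cursor(msg.viewer_id, msg.payload["x"], msg.payload["y"])
        elif msg.kind == "reaction":
            overlay.show_reaction(msg.viewer_id, msg.payload["emoji"])

    handle_viewer_input(ViewerInput("viewer7", "cursor", {"x": 120, "y": 80}),
                        StreamerOverlay())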
The logic of
From block 2520 the logic may then proceed to decision diamond 2525. At diamond 2525 the system may, possibly after executing at least a first cursor control command during a first portion of the computer game, determine that a cutscene is occurring (e.g., using an indication from the game engine). In various examples, the cutscene may be a non-interactive scene of the game where user inputs are not processed, with the cutscene beginning once the streamer reaches a certain point in the game (e.g., end of level or the streamer's virtual character dying). The cutscene may thus interrupt gameplay with things such as a conversational sequence between two game characters, a sequence of events relevant to the plotline of the game, an award ceremony where the streamer is provided with one or more rewards, etc.
Based on/responsive to a determination that a cutscene is not occurring, the logic may proceed directly to block 2535 as will be described in a moment. However, based on/responsive to a determination that the cutscene is occurring, the logic may instead proceed first to block 2530 where the system may decline to process additional cursor control commands and/or reactions during playout of the cutscene. Accordingly, a multi-viewer UI like in
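The gating at diamond 2525 and block 2530 might be sketched as follows, assuming for illustration that the game engine exposes a cutscene flag:

    class GameEngine:
        def __init__(self, cutscene=False):
            self.cutscene = cutscene
        def cutscene_active(self):
            return self.cutscene

    def process_commands(commands, engine):
        applied = []
        for cmd in commands:
            if engine.cutscene_active():
                continue      # decline to process viewer commands during cutscenes
            applied.append(cmd)
        return applied

    print(process_commands(["move", "select"], GameEngine(cutscene=True)))   # -> []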
From block 2530 the logic may then proceed to block 2535. At block 2535 the system may in some implementations rotate permission to issue cursor control commands (and even reactions) between viewers of the computer game. This helps avoid situations where multiple cursors are floating around the streamer's screen at the same time, which can be distracting and unduly obstruct the streamer's view of the game video. Thus, only one cursor might be presented at a given time during gameplay in certain implementations, where the viewer that is allowed to control the cursor may be determined based on past viewer inputs related to the first person's gameplay of the computer game.
Accordingly, this feature might be thought of as passing the virtual baton where cursor control operates on a rotational basis. The UI/overlay may, for example, track viewer engagement in terms of reaction amount. As viewers accumulate points through more interactions (cursor controls and/or reactions), the viewers progress higher in a queue and the viewer at the front/top of the queue gains control of the pointer. Control may then remain with that viewer until one or more conditions are met, such as the controlling viewer providing a select command with the cursor, a threshold amount of time ending (e.g., thirty seconds), or the controlling viewer relinquishing control.
Additionally or alternatively, the streamer may decide who gets to use the pointer at a given time by providing an audible command designating a particular user, or simply by audibly commanding the system to “rotate”, which might then instigate passing of control to the next viewer in the queue. This feature may thus encourage sustained and gamified engagement for the viewers themselves.
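For illustration, the "virtual baton" queue might be sketched in Python as below; the point values and rotation rule are assumptions for the example:

    class CursorRotation:
        def __init__(self):
            self.points = {}      # viewer_id -> accumulated engagement points
            self.holder = None    # viewer currently holding cursor control

        def engage(self, viewer_id, amount=1):
            # Called per interaction (cursor control and/or reaction).
            self.points[viewer_id] = self.points.get(viewer_id, 0) + amount

        def rotate(self):
            # Pass control to the most-engaged viewer not already holding it,
            # e.g., upon a select command, a timeout, or a "rotate" voice command.
            queue = sorted(self.points.items(), key=lambda kv: -kv[1])
            for viewer_id, _ in queue:
                if viewer_id != self.holder:
                    self.holder = viewer_id
                    break
            return self.holder

    rot = CursorRotation()
    rot.engage("alice", 5); rot.engage("bob", 3)
    print(rot.rotate())   # alice gains control first
    print(rot.rotate())   # control then passes to bob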
From block 2535 the logic may then proceed to block 2540. At block 2540 the system may continue to control outputs, game actions, and/or the game state based on viewer cursor commands and reactions. The logic may then proceed to block 2545 where the system may in some implementations execute an eye tracking algorithm to track the eyes of one or more viewers using images of the viewer(s) as provided by a camera imaging the viewers' eyes. The camera might be located on a headset being worn by the respective viewer or on a television being used by the respective viewer to view the game, for example.
Eye movement from the viewer may constitute input that may then be used at block 2550 to move the cursor on the streamer's display according to the eye movement input. For example, if the viewer looks left then the cursor goes left by a proportional amount, if the viewer looks right the cursor goes right by a proportional amount, and so on. Also at block 2550, in some examples the system may determine that the tracked viewer has closed one eye, such as to wink or blink with one eye (but not simultaneously with both eyes). Based on determining that the viewer has closed one eye, the system may alter the appearance of the cursor as presented on the streamer's display.
So as a specific example, if the viewer were wearing a virtual reality (VR) headset for VR gameplay viewing, eye tracking sensors on the headset can be utilized to establish a VR eye tracking pointer's movement and send pulses to the pointer as presented on the streamer's display (to pulsate) by the viewer blinking/winking with one eye. This may be an easy and intuitive way to draw the streamer's attention to a game object over which the cursor is moved and then hovers.
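A simplified sketch of the eye-driven cursor logic at blocks 2545 and 2550 follows; the gain constant and field names are illustrative assumptions:

    GAZE_GAIN = 40.0    # assumed screen pixels per unit of gaze movement

    def update_cursor(cursor, gaze_dx, gaze_dy, left_eye_open, right_eye_open):
        cursor["x"] += GAZE_GAIN * gaze_dx    # look left/right -> proportional move
        cursor["y"] += GAZE_GAIN * gaze_dy    # look up/down -> proportional move
        # One eye closed but not both (a wink): alter the cursor's appearance,
        # e.g., by pulsating it on the streamer's display.
        cursor["pulsing"] = left_eye_open != right_eye_open
        return cursor

    print(update_cursor({"x": 100.0, "y": 100.0, "pulsing": False},
                        gaze_dx=-0.5, gaze_dy=0.0,
                        left_eye_open=False, right_eye_open=True))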
After block 2550 the logic may proceed to decision diamond 2555. At diamond 2555 the system, based on receiving respective graphic-based reactions from respective people that are viewing the livestream but not controlling a character of the game (such as might occur at block 2540), may determine whether a predetermined graphic-based reaction threshold has been met. This might include a predetermined number of all or the same type of reactions being provided by viewers within a set period of time, such as twenty seconds (impliedly indicating that the reactions all pertain to a same game event while still eliminating false positives from outside the time window). Or this might include a predetermined number of all or the same type of reactions being provided by viewers during a particular game scene or game level more generally. Here, same type may relate to specific emojis each of which is considered its own type, or may relate to emojis grouped into different types based on overall sentiment (e.g., positive sentiment emojis like a happy face and laughter emojis being grouped as one type, and negative sentiment emojis like frown face and angry face emojis being grouped as another type).
A negative determination may cause the logic to proceed directly to block 2565, as will be described shortly. However, first note that an affirmative determination at diamond 2555 may instead cause the logic to proceed to block 2560 responsive to the predetermined graphic-based reaction threshold being met. At block 2560 the system may alter playout of the computer game based on the overall or majority/plurality sentiment establishing at least the threshold number. For example, for an overall positive sentiment reaction accumulation, playout of the game may be altered by presenting fireworks in the background of the game scene. For an overall negative sentiment reaction accumulation, playout of the game may be altered by presenting a red flash in the background of the game scene.
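One hypothetical way to implement the windowed threshold check at diamond 2555, grouping specific emojis into sentiment types as described, is sketched below; the window length and threshold count are example values:

    import time
    from collections import deque

    # Emojis grouped into sentiment types, as described above.
    SENTIMENT = {"happy": "positive", "laugh": "positive",
                 "frown": "negative", "angry": "negative"}
    WINDOW_S, THRESHOLD = 20.0, 10    # example window and count

    reactions = deque()               # (timestamp, sentiment type)

    def add_reaction(emoji, now=None):
        now = time.monotonic() if now is None else now
        reactions.append((now, SENTIMENT.get(emoji, "neutral")))
        while reactions and now - reactions[0][0] > WINDOW_S:
            reactions.popleft()       # drop reactions outside the time window
        counts = {}
        for _, kind in reactions:
            counts[kind] = counts.get(kind, 0) + 1
        for kind, n in counts.items():
            if n >= THRESHOLD:
                return kind           # threshold met; alter game playout accordingly
        return None

    for _ in range(10):
        hit = add_reaction("laugh", now=0.0)
    print(hit)                        # -> positive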
From block 2560 the logic may then proceed to block 2565. At this step, based on one or more graphic-based reactions the system may present haptic feedback to the first person (streamer). The haptic feedback may be presented using a vibrator on a game controller being used by the first person, using a vibrator on a VR headset or other headset type being worn by the first person, using a vibrator on the first person's smartphone or another connected device, etc. The haptic feedback may vary based on a reaction type associated with the graphic-based reaction that instigated the vibration.
So, for example, as viewers send reactions, the streamer may receive subtle haptic feedback corresponding to the type and intensity of reactions. Thus, if a smiling emoji were provided, a vibration of a relatively low intensity/amplitude of separate pulses may be provided, while if a laughing emoji were provided (designated as a more intense positive reaction) then a vibration of a higher intensity/amplitude of separate pulses may be provided. Conversely, if a frowning emoji were provided, a single short vibration of a relatively low intensity/amplitude may be provided, while if an angry-faced emoji were provided (designated as a more intense negative reaction) then a single short vibration of a higher intensity/amplitude may be provided.
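A minimal mapping from reaction type to haptic pattern, consistent with the examples just given, might look like the following (amplitudes on an assumed 0-to-1 scale):

    # Reaction type -> haptic pattern, per the examples above.
    HAPTIC_PATTERNS = {
        "smile": {"pulses": 3, "amplitude": 0.3},   # low-intensity pulse train
        "laugh": {"pulses": 3, "amplitude": 0.8},   # more intense positive reaction
        "frown": {"pulses": 1, "amplitude": 0.3},   # single short low vibration
        "angry": {"pulses": 1, "amplitude": 0.8},   # single short strong vibration
    }

    def haptic_for_reaction(reaction_type):
        return HAPTIC_PATTERNS.get(reaction_type, {"pulses": 1, "amplitude": 0.1})

    print(haptic_for_reaction("laugh"))   # {'pulses': 3, 'amplitude': 0.8}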
From block 2565 the logic may then proceed to block 2570. At block 2570, if desired the system may aggregate sentiment-based reactions provided by plural viewers. This may be done for the purpose of presenting, also at block 2570 and based on the aggregation, a reaction summary to the first person (streamer) that summarizes the amount of reactions received (as broken down and displayed visually by reaction type).
From block 2570 the logic may then proceed to block 2575. At this step the system may do one or more of the following. First, the system may award at least one virtual trophy to the first person (streamer) based on an amount of sentiment-based reactions that were received while the first person played one or more particular aspects of the computer game (e.g., a fight sequence, a game level, a game scene, etc.). Second, the system may reward at least one viewer of the livestream based on inputs provided by that viewer, where the inputs may be established by one or both of cursor control commands and sentiment-based reactions. Rewards may increase proportionally as inputs increase.
Accordingly, in terms of trophies and challenges, a new or unique type of trophy may be provided to the streamer based on the amount of viewer reactions they receive, with the trophy becoming incrementally bigger or more significant as more viewer inputs are received. Streamers may thus achieve trophies that are predefined as tied to viewer inputs, and the trophies may be achieved by sequence, timing, and/or amount of reactions for a certain challenge, game sequence, game level, etc. This concept can even be applied to games with daily/weekly in-game challenges.
In terms of reaction tiers and corresponding rewards for viewers, similar to the above, viewer reactions can be divided into tiers based on levels of engagement (e.g., total amount of reactions provided), such as bronze, silver, and gold levels. As viewers consistently engage with their own reactions, they progress through these tiers, accumulating engagement points and unlocking different rewards as they provide more reaction input. A progress bar within the UI overlay that the viewer sees (but possibly not the streamer) might even illustrate the viewer's journey toward the next tier, motivating sustained interaction to reach the next tier. This can also be applied to the streamer side (e.g., where the streamer gets more rewards, game points, etc. for more reactions from viewers).
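The tier progression might be sketched as below; the bronze/silver/gold point boundaries are assumptions for illustration:

    # Engagement tiers with assumed point boundaries.
    TIERS = [("bronze", 0), ("silver", 100), ("gold", 500)]

    def tier_and_progress(points):
        current, nxt = TIERS[0], None
        for name, floor in TIERS:
            if points >= floor:
                current = (name, floor)
            elif nxt is None:
                nxt = (name, floor)
        if nxt is None:
            return current[0], 1.0    # already at the top tier
        span = nxt[1] - current[1]
        return current[0], (points - current[1]) / span   # fraction for progress bar

    print(tier_and_progress(250))     # ('silver', 0.375)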
Continuing the detailed description in reference to
As shown in
The GUI 2600 may also include an option 2620 that may be selected to set or configure the system to allow viewer inputs to actually control the game itself rather than simply to present content to the streamer. So, for example, the option 2620 might be selected to command the system to subsequently execute an algorithm where the system presents selectable graphical objects on the viewer displays as part of the game itself. The system may then receive a cursor control command selecting the selectable graphical object to then, based on the cursor control command, control an output of the computer game as presented on the display associated with the streamer. The output itself may have an effect that changes the current game state of the computer game itself.
As a specific example, viewers may be permitted to interact with game layer objects. E.g., using a software development kit (SDK), designated objects in the game world can display markers for viewers to interact with. When a viewer clicks on one of these markers, the overlay/UI may send a command to the game to simulate the viewer's input. For instance, if a viewer clicks on a lever in the overlay, the game receives the command to trigger the lever's action and change something within the game itself. This two-way communication may bridge the virtual gap between the viewer, the streamer, and the game, allowing viewers to actively participate in the gameplay.
As another example, consider an SDK for game publishers. It may be a “general” SDK that allows game publishers to define specific triggers, events, objects and responses that occur when viewers interact through the overlay/UI. For instance, a developer could integrate a unique weapon drop or a special character animation triggered by viewer actions. From a store perspective, viewers could also “click” on skins and other digital add-ons to see content in the store available for purchase and/or to add to their wish list.
Additionally, in some cases reactive environments may be employed, where the pointer's actions influence the game environment in real-time. Viewers can use the pointer to activate switches, create temporary bridges, or manipulate objects within the game. These interactions impact the streamer's game progress, potentially leading to secret passages or “easter eggs” where hidden items in the game are revealed.
Thus, more generally, the system may receive a cursor control command and, based on the cursor control command, execute a predefined game action that is possibly reserved only for non-character-controlling people (viewers not playing the video game/controlling a game character) to instigate during the computer game.
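As a purely hypothetical sketch (this is not an actual SDK API), the marker registration and viewer-click relay described above might look like the following in Python:

    class GameLayerSDK:
        def __init__(self):
            self.markers = {}         # marker ID -> game-side callback

        def register_marker(self, marker_id, callback):
            # A publisher designates an in-game object (e.g., a lever) as interactable.
            self.markers[marker_id] = callback

        def on_viewer_click(self, marker_id, viewer_id):
            if marker_id in self.markers:
                # Relay the viewer's click to the game to simulate the input.
                self.markers[marker_id](viewer_id)

    sdk = GameLayerSDK()
    sdk.register_marker("lever_3", lambda viewer: print(f"lever pulled by {viewer}"))
    sdk.on_viewer_click("lever_3", "viewer42")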
Viewers may also mark game layer objects. So, for example, viewers can draw or place markers on the streamer's screen to highlight specific game objects, enemies, or tactical points. As an example, to draw, the user might left-click and hold down the left mouse button and then move the mouse about to draw on the UI. The cursor may also be controlled to place a visual marker in the streamer's POV (e.g., as anchored to a particular virtual location in the game) by positioning the cursor over a game object and providing a left-click selection up/down without moving the mouse. In some instances, this feature might be executed as part of an SDK too.
Additionally, in terms of viewer-generated tasks, the viewer might use the UI/overlay's drawing tools (some of which were just described) to sketch tasks, challenges, or suggestions directly onto the streamer's screen. The streamer can review the tasks, choose to accept or reject them, or negotiate with the viewers for variations.
Moving to reactions, further note that sentiment-based visual effects are also encompassed by present principles. Thus, the systems disclosed herein add functionality that monitors the cumulative sentiment of reactions of viewers by analyzing factors like the type of reaction, frequency, and velocity. When certain reaction thresholds are met/crossed, the UI/overlay may trigger corresponding visual effects for the streamer and/or viewers. For example, a surge of positive reactions might lead to a cascading animation of fireworks as discussed above, while an influx of negative reactions could prompt a subtle shadow to cast over the screen.
Avatar-based reactions may also be used. Thus, depending on the game's characters or avatars, the reactions' appearance for a particular game may change to match the in-game personas (e.g., the avatar used for a reaction may be in the image of one of the game's characters). Additionally, viewers can personalize their reactions by selecting from a range of animations, icons, or even short sound clips.
Additional visual overlays might be expanded by type as well. These additional overlays may include thematic filters that alter the game's visual palette to match different moods or settings, countdown timers that tick down to in-game events, and sound effects and images that overlay on specific sections of the screen. Viewers and the streamer can activate and deactivate these elements, providing a dynamic and customizable visual experience.
Cursor ghosting might also be used consistent with present principles. So a viewer might use the cursor to select and thus tag the player's character, and then the cursor may autonomously follow the character without further input from the viewer. The cursor itself may even create a visual trail (e.g., a comet tail) behind it showing where it has been.
Hysteresis of the cursor is encompassed as well. The cursor might fade away, or might disappear to signal something. For instance, it might disappear in flame to signal a sentiment change ("I was wrong", meaning the viewer was wrong about a suggestion they provided to the streamer).
Additionally, if a viewer says "go left" with the cursor (e.g., points the cursor to the left or writes "L" on screen for "go left"), the cursor/highlighting may stay where the input is directed. Then if the player does not perform the action the cursor might persist, and when the player does perform the action the cursor might disappear. This way viewers know the player responded to their request.
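A small Python sketch of this persistence rule follows, with the direction encoding and method names assumed purely for illustration:

    class DirectiveMarker:
        def __init__(self, direction):
            self.direction = direction    # e.g., "left" for a "go left" suggestion
            self.visible = True

        def on_player_moved(self, direction):
            if direction == self.direction:
                self.visible = False      # player responded; marker disappears

    marker = DirectiveMarker("left")
    marker.on_player_moved("right")
    print(marker.visible)    # True: suggestion not yet followed, marker persists
    marker.on_player_moved("left")
    print(marker.visible)    # False: viewers can see the player responded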
Still further, an artificial intelligence-based model may analyze game objects. So a viewer could point at an object or circle the object with the cursor, and then the object may be highlighted in the game in the color of the pointer itself. The system may thus modify in-game objects based on this feature.
While the particular embodiments are herein shown and described in detail, it is to be understood that the subject matter which is encompassed by the present invention is limited only by the claims.