Online collaboration tools used by organizations allow participants (e.g., employees or workers) to communicate with one another via messaging (e.g., text messages, video messages), and collaborate with one another using file sharing mechanisms or video conferencing platforms. For example, video conferencing platforms allow users in different locations to hold meetings over the internet. As more and more employees or workers choose to work remotely (e.g., from home or another workspace different from their physical workplace), use of such online collaboration tools has become ubiquitous.
Some embodiments provide for a method that comprises using at least one processor to perform: launching a session for each of a first computing device and a second computing device; receiving from the first computing device and during an interactive period of the session: a first input from a first user of the first computing device via a user interface of the first computing device, the first input including a first drawing corresponding to a prompt; and a second input from the first user of the first computing device, the second input indicating a sentiment of the first user of the first computing device to a story and/or drawing shared by a second user of the second computing device; receiving from the second computing device and during the interactive period of the session: a third input from the second user of the second computing device via a user interface of the second computing device, the third input including a second drawing corresponding to the prompt; and a fourth input from the second user of the second computing device, the fourth input indicating a sentiment of the second user of the second computing device to a story and/or drawing shared by the first user of the first computing device; and generating artwork using the first and second inputs received from the first user of the first computing device and the third and fourth inputs received from the second user of the second computing device.
In some embodiments, the method may include launching a session for each of a plurality of computing devices including the first and second computing devices; receiving a plurality of inputs from the plurality of computing devices during the interactive period of the session, the plurality of inputs comprising: a first set of inputs from users of the plurality of computing devices, the first set of inputs comprising drawings corresponding to the prompt; and a second set of inputs from the users of the plurality of computing devices, the second set of inputs comprising inputs indicating sentiments of users to each other's drawings and/or stories shared through the respective user interfaces of the plurality of computing devices; and generating the artwork using the first set of inputs and the second set of inputs.
In some embodiments, the method may include receiving requests from the first and second computing devices to join the session, wherein the requests are generated by scanning of a machine-readable code using respective cameras of the first and second computing devices.
In some embodiments, launching a session for each of a first computing device and a second computing device comprises launching a game session for each of the first and second computing devices.
In some embodiments, generating the artwork further comprises generating the artwork using one or more session parameters.
In some embodiments, the method may include receiving a fifth input from the first user of the first computing device indicating a sentiment of the first user prior to the interactive period of the session; receiving a sixth input from the first user of the first computing device indicating a sentiment of the first user after the interactive period of the session; receiving a seventh input from the second user of the second computing device indicating a sentiment of the second user prior to the interactive period of the session; receiving an eighth input from the second user of the second computing device indicating a sentiment of the second user after the interactive period of the session; and wherein generating the artwork comprises generating the artwork using the first, second, third, fourth, fifth, sixth, seventh, and eighth inputs.
In some embodiments, the method may include analyzing the second input from the first user of the first computing device and the fourth input from the second user of the second computing device; and generating a background image for the artwork based on the analysis.
In some embodiments, generating the background image for the artwork comprises generating the background image using a machine learning model or a statistical model.
In some embodiments, the method may include overlaying the first input from the first user of the first computing device and the third input from the second user of the second computing device onto the background image to generate the artwork.
In some embodiments, the method may include communicating the generated artwork to the first computing device and the second computing device.
Some embodiments provide for a system that comprises at least one processor; and at least one non-transitory computer-readable storage medium storing instructions that, when executed by the at least one processor, cause the at least one processor to perform a method comprising: launching a session for each of a first computing device and a second computing device; receiving from the first computing device and during an interactive period of the session: a first input from a first user of the first computing device via a user interface of the first computing device, the first input including a first drawing corresponding to a prompt; and a second input from the first user of the first computing device, the second input indicating a sentiment of the first user of the first computing device to a story and/or drawing shared by a second user of the second computing device; receiving from the second computing device and during the interactive period of the session: a third input from the second user of the second computing device via a user interface of the second computing device, the third input including a second drawing corresponding to the prompt; and a fourth input from the second user of the second computing device, the fourth input indicating a sentiment of the second user of the second computing device to a story and/or drawing shared by the first user of the first computing device; and generating artwork using the first and second inputs received from the first user of the first computing device and the third and fourth inputs received from the second user of the second computing device.
In some embodiments, the method performed by the at least one processor may include launching a session for each of a plurality of computing devices including the first and second computing devices; receiving a plurality of inputs from the plurality of computing devices during the interactive period of the session, the plurality of inputs comprising: a first set of inputs from users of the plurality of computing devices, the first set of inputs comprising drawings corresponding to the prompt; and a second set of inputs from the users of the plurality of computing devices, the second set of inputs comprising inputs indicating sentiments of users to each other's drawings and/or stories shared through the respective user interfaces of the plurality of computing devices; and generating the artwork using the first set of inputs and the second set of inputs.
In some embodiments, the method performed by the at least one processor may include receiving a fifth input from the first user of the first computing device indicating a sentiment of the first user prior to the interactive period of the session; receiving a sixth input from the first user of the first computing device indicating a sentiment of the first user after the interactive period of the session; receiving a seventh input from the second user of the second computing device indicating a sentiment of the second user prior to the interactive period of the session; receiving an eighth input from the second user of the second computing device indicating a sentiment of the second user after the interactive period of the session; and wherein generating the artwork comprises generating the artwork using the first, second, third, fourth, fifth, sixth, seventh, and eighth inputs.
In some embodiments, the method performed by the at least one processor may include analyzing the second input from the first user of the first computing device and the fourth input from the second user of the second computing device; and generating a background image for the artwork based on the analysis.
In some embodiments, generating the background image for the artwork comprises generating the background image using a machine learning model or a statistical model.
In some embodiments, the method performed by the at least one processor may include overlaying the first input from the first user of the first computing device and the third input from the second user of the second computing device onto the background image to generate the artwork.
In some embodiments, launching a session for each of a first computing device and a second computing device comprises launching a game session for each of the first and second computing devices.
In some embodiments, generating the artwork further comprises generating the artwork using one or more session parameters.
In some embodiments, generating the artwork further comprises generating the artwork using sentiment information received from the first and second computing devices prior to the interactive period of the session and after the interactive period of the session.
Some embodiments provide for at least one non-transitory computer-readable storage medium storing instructions that, when executed by at least one processor, cause the at least one processor to perform a method comprising: launching a session for each of a first computing device and a second computing device; receiving from the first computing device and during an interactive period of the session: a first input from a first user of the first computing device via a user interface of the first computing device, the first input including a first drawing corresponding to a prompt; and a second input from the first user of the first computing device, the second input indicating a sentiment of the first user of the first computing device to a story and/or drawing shared by a second user of the second computing device; receiving from the second computing device and during the interactive period of the session: a third input from the second user of the second computing device via a user interface of the second computing device, the third input including a second drawing corresponding to the prompt; and a fourth input from the second user of the second computing device, the fourth input indicating a sentiment of the second user of the second computing device to a story and/or drawing shared by the first user of the first computing device; and generating artwork using the first and second inputs received from the first user of the first computing device and the third and fourth inputs received from the second user of the second computing device.
In some embodiments, the method may include launching a session for each of a plurality of computing devices including the first and second computing devices; receiving a plurality of inputs from the plurality of computing devices during the interactive period of the session, the plurality of inputs comprising: a first set of inputs from users of the plurality of computing devices, the first set of inputs comprising drawings corresponding to the prompt; and a second set of inputs from the users of the plurality of computing devices, the second set of inputs comprising inputs indicating sentiments of users to each other's drawings and/or stories shared through the respective user interfaces of the plurality of computing devices; and generating the artwork using the first set of inputs and the second set of inputs.
In some embodiments, the method may include receiving a fifth input from the first user of the first computing device indicating a sentiment of the first user prior to the interactive period of the session; receiving a sixth input from the first user of the first computing device indicating a sentiment of the first user after the interactive period of the session; receiving a seventh input from the second user of the second computing device indicating a sentiment of the second user prior to the interactive period of the session; receiving an eighth input from the second user of the second computing device indicating a sentiment of the second user after the interactive period of the session; and wherein generating the artwork comprises generating the artwork using the first, second, third, fourth, fifth, sixth, seventh, and eighth inputs.
In some embodiments, the method may include analyzing the second input from the first user of the first computing device and the fourth input from the second user of the second computing device; and generating a background image for the artwork based on the analysis.
In some embodiments, generating the background image for the artwork comprises generating the background image using a machine learning model or a statistical model.
In some embodiments, the method may include overlaying the first input from the first user of the first computing device and the third input from the second user of the second computing device onto the background image to generate the artwork.
In some embodiments, launching a session for each of a first computing device and a second computing device comprises launching a game session for each of the first and second computing devices.
In some embodiments, generating the artwork further comprises generating the artwork using one or more session parameters.
In some embodiments, generating the artwork further comprises generating the artwork using sentiment information received from the first and second computing devices prior to the interactive period of the session and after the interactive period of the session.
Various aspects and embodiments will be described herein with reference to the following figures. It should be appreciated that the figures are not necessarily drawn to scale. Items appearing in multiple figures are indicated by the same or similar reference number in all the figures in which they appear.
Existing online or web-based collaboration tools, such as video conferencing platforms and other team collaboration platforms, are being used to support hybrid workplaces. These platforms allow teammates and colleagues to communicate with one another regarding work projects and/or for other reasons. The inventors have recognized that while these existing platforms allow participants in different locations to communicate with one another using audio and video, they lack mechanisms for building meaningful connections and inclusivity in the hybrid workplace. Building such meaningful connections can improve the overall morale of an organization's employees and can promote corporate and employee mental wellness.
The inventors have developed a collaborative platform that includes or works with other online collaboration tools (e.g., video conferencing tools) to allow users to interact or collaborate through a mobile drawing canvas, which leads to building deeper connections, trust, and belonging in the hybrid workplace. The collaborative platform may be a collaborative gaming platform that hosts a web-based game or other application type that allows a group of participants to interact with one another by sharing drawings and stories using their computing devices (e.g., mobile computing devices such as smartphones). In some embodiments, users utilize their mobile devices as input devices to create drawings, which can be displayed to other users within the collaborative environment. Sentiments regarding users' drawings can be collected and presented to the group of users as well as to an organizer or director of the collaborative experience.
Starting with a common prompt (e.g., a prompt question), each participant draws a drawing corresponding to that prompt on a drawing canvas on his/her respective mobile computing device. For example, if a common prompt question is “what is your favorite type of shoe?”, one participant may draw skiing boots as his favorite type of shoe while another participant may draw a particular brand of sneakers as his favorite type of shoe. Each of the participants may then share a story regarding their drawing. As a participant shares his/her story, each of the other participants may provide input indicating his/her sentiment (e.g., feeling of gratitude or connection) to the shared story. After all the participants have had a chance to share their story, the drawings and sentiments collected from the participants may be used to generate artwork for the group that can be shared with each participant of the group.
Such a collaborative platform 1) allows participants (e.g., employees of an organization) to share their stories and reflect on other's shared stories while supporting the social and emotional well-being of individuals and groups, 2) brings visibility to connections across users to enable community around shared or common experiences, 3) can be used to provide a gaming experience that is scalable and drives continuous engagement, and 4) promotes corporate and employee mental wellness (e.g., by providing a social impact by driving change in mental health and wellness at scale via a gaming experience).
In some embodiments, the collaborative platform may be a collaborative gaming platform that hosts or provides access to a web-based gaming application that allows a group of participants to interact with one another by sharing drawings and stories using their respective computing devices. For example, the participants and hosts may interact with the web-based gaming application through their mobile computing devices, such as smartphones.
Some embodiments may be described herein using a collaborative gaming platform for a workspace; however, the disclosure is not limited in this respect. The collaborative platform described herein can be used in different environments and scenarios to boost group connections. As one example, the collaborative platform may be used in a corporate environment. In the corporate environment, the platform may be used as part of a wellness program or a decision-making process for which employee reactions would be helpful. For example, employees may be asked what they would like to see in a new office space and office décor decisions may be made based on the employees' drawings and/or sentiments to each other's drawings. If multiple employees drew bean bag chairs and indicated a feeling of connection amongst each other's drawings of the bean bag chairs, the organization may consider buying a couple of bean bag chairs for the new office space. As another example, the collaborative platform may be used in an educational environment (e.g., in a classroom by teachers and students). As yet another example, the collaborative platform may be used in an event/conference environment (e.g., in an auditorium for small or large groups).
The computing devices may be used by various users to interact with the platform 150. For example, computing device 110 may be used by an admin user to interact with the platform 150. Computing device 110 may be a desktop or a mobile computing device, such as a laptop or smartphone. As another example, computing device 120 may be used by a host user or organizer to interact with the platform 150. Computing device 120 may be a mobile computing device such as a laptop or smartphone. In some embodiments, the host user may use a first computing device (e.g., a laptop) to initiate a meeting using a video conferencing tool and a second computing device (e.g., a smartphone) to participate in a collaborative session hosted by the platform 150. As yet another example, computing devices 130A, 130B, 130C may be used by other non-host participants to interact with the platform 150. Computing devices 130A, 130B, 130C may be mobile computing devices, such as smartphones. In some embodiments, each non-host participant may also use a first computing device (e.g., a laptop) to participate in the meeting using the video conferencing tool and a second computing device (e.g., a smartphone) to participate in the collaborative session hosted by the platform 150.
In some embodiments, the communication network 140 of
In some embodiments, the platform 150 may be used by different organizations. For each organization, the platform 150 may invoke one or more cloud-based services (e.g., services offered by cloud service providers, such as Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform (GCP), etc.) to create, manage, and track collaborative sessions between groups of participants within the organization. An example system architecture of the platform 150 is shown in
In some embodiments, the platform 150 may collect and track data associated with the collaborative sessions across different groups within an organization or across different organizations. An example data model indicating the types of data collected, tracked, and stored in a database (e.g., database 220 shown in
In some embodiments, the platform 150 may be accessed by a global admin user (e.g., using a laptop or desktop computing device) to manage accounts for various organizations (e.g., create, delete, or modify accounts), to create prompts that can be used across organizations, to designate one or more employees of an organization as corporate admins for the organization, to obtain account information for the organizations (e.g., account status, time since the account became active, number of corporate admins for an organization and their user IDs and passwords), and/or to perform other tasks relating to account management for organizations. In some embodiments, the global admin may manage account permissions for corporate admins of the various organizations (e.g., by indicating at least one corporate admin for an organization who is permitted to access the organization's account). In some embodiments, a corporate admin of an organization may manage account permissions for host users of the organization (e.g., by indicating one or more host users who are permitted to initiate collaborative sessions with participants). Example user personas managed by the platform 150 may include, but are not limited to, global admin, corporate admin, host user, and participant/player. A corporate admin may be a key account holder of the game. For example, the corporate admin may be a director or a department head who is continually focused on the overall well-being of their group. The collaborative platform 150 allows corporate admins to view sentiment analysis of participants, track participation outcomes, create and manage subset teams, and/or perform other activities for enhancing connections between their team members. Host users may be individuals responsible for setting up and launching game sessions. Host users may be tasked with inviting participants, getting players onboarded to the game, and facilitating the engagement.
The collaborative platform 150 allows host users to manage session flow, systematically track and select players during storytelling, provide clear transitions within the game to navigate onboarding, activity, and wrap-up, and/or perform other activities for controlling the game. Participants/players may be individuals who join the game by invitation for different reasons, such as team bonding, meeting breaks, going-away parties, etc. The collaborative platform 150 allows participants/players to interact with a drawing canvas and engage with other players' storytelling. A global admin may control the overall creation and management of game accounts and game settings.
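The persona hierarchy described above could be modeled as a simple role-to-permission mapping. The sketch below is illustrative only; the role names and permission strings are assumptions for this example rather than the platform's actual data model:

```python
from enum import Enum

class Role(Enum):
    GLOBAL_ADMIN = "global_admin"        # creates/manages organization accounts
    CORPORATE_ADMIN = "corporate_admin"  # key account holder for an organization
    HOST = "host"                        # sets up and launches game sessions
    PARTICIPANT = "participant"          # joins sessions by invitation

# Illustrative permission sets derived from the persona descriptions above.
PERMISSIONS = {
    Role.GLOBAL_ADMIN: {"manage_accounts", "create_prompts", "designate_corporate_admins"},
    Role.CORPORATE_ADMIN: {"view_sentiment_analysis", "track_outcomes",
                           "manage_teams", "designate_hosts"},
    Role.HOST: {"launch_session", "invite_participants", "manage_session_flow"},
    Role.PARTICIPANT: {"draw", "share_story", "react"},
}

def is_permitted(role: Role, action: str) -> bool:
    """Check whether a persona is allowed to perform an action."""
    return action in PERMISSIONS[role]
```

For instance, a host could launch a session (`is_permitted(Role.HOST, "launch_session")`) while a participant could not.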
In some embodiments, the collaborative platform 150 may launch a collaborative session between a group of participants (e.g., hosts and other participants) in response to a request from a host participant's computing device, for example, a laptop. Each participant (e.g., including the host and other participants) in the group may join the collaborative session using his/her respective computing device, for example, smartphone. For each computing device that joined the collaborative session, the platform 150 may cause a blank drawing canvas to be presented via a user interface of the computing device. Each participant may draw a drawing on the drawing canvas presented on his/her computing device. For example, the participant may draw on the drawing canvas by moving his/her finger or a drawing tool (e.g., a stylus) across the screen of the computing device. The drawing canvas may include various graphical user elements representing colors (e.g., red, green, blue, etc.) or drawing styles (e.g., line, brush, marker, etc.). A participant may select one or more of these elements when drawing on the drawing canvas. After each participant completes his/her drawing, each participant may be prompted to share his/her story associated with the drawing.
When a participant shares his/her story, the other participants may indicate one or more sentiments (e.g., feeling of gratitude and connection) to the shared story. As the participant shares the story, his/her drawing is presented on the other participants' devices along with multiple graphical user interface elements, where each graphical user interface element corresponds to a particular sentiment (e.g., gratitude, connection, strength, support, reflection, and/or other sentiments). Each other participant may indicate his/her sentiment to the story by selecting one or more of the multiple graphical user interface elements presented on his/her computing device. For example, a first participant may share his story associated with a drawing of a shoe he drew on his device's drawing canvas. A second participant feeling connection with the story may indicate this sentiment by selecting a graphical user interface element corresponding to the "connection" sentiment via his computing device. Similarly, a third participant feeling gratitude and connection with the story may indicate these sentiments by selecting graphical user interface elements corresponding to the "gratitude" and "connection" sentiments, respectively, via his computing device. Each participant gets a turn to share his/her story associated with his/her drawing, and the other participants indicate their sentiments to the story.
The collaborative platform 150 may receive, from each participant's computing device, an input including a drawing drawn by the participant and another input indicating sentiments to stories shared by other participants. These inputs are received during an interactive period of the collaborative session, where the interactive period of the collaborative session is the time period during which the participants are collaborating by drawing and sharing stories with one another. The collaborative platform 150 may generate artwork using the received drawings and sentiment information from all the participants of the collaborative session.
In some embodiments, the collaborative platform 150 may receive additional inputs indicating sentiments of the participants prior to the interactive period of the collaborative session and after the interactive period of the collaborative session. For example, both prior to and after the interactive period of the collaborative session, one or more graphical user interface elements representing a rating scale may be presented on each of the participants' devices. The rating scale may be any suitable rating scale, such as a scale from 1 to 5, where "1" represents a very sad sentiment or mood and "5" represents a very happy sentiment or mood. As another example, both prior to and after the interactive period of the collaborative session, one or more graphical user interface elements representing one or more emoticons may be presented on each of the participants' devices. Any suitable emoticons may be presented, ranging from very sad to very happy emoticons. Any type of rating mechanism (e.g., rating scales, emoticons, emojis, or other mechanisms) may be used to receive the additional inputs indicating sentiments of the participants prior to and after the interactive period of the collaborative session, as the disclosure is not limited in this respect. In some embodiments, the same set of graphical user interface elements is presented to prompt sentiment input both prior to and after the interactive period of the collaborative session. Examples of these sentiment inputs are shown in
In some embodiments, the collaborative platform 150 may analyze the received sentiment inputs from the group of participants and generate a background image for the artwork based on the analysis. For example, the background image for the artwork may be generated based on analysis of the sentiment inputs received prior to the interactive period, sentiment inputs received after the interactive period, and/or sentiment inputs during the interactive period. In some embodiments, sentiment information may be provided as input to a machine learning model or statistical model. The machine learning model or statistical model may process the received input to generate as output the background image for the artwork. In some embodiments, the background image for the artwork may be a heatmap image that visualizes the emotional journey of the participants during the collaborative session.
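As noted in the summary above, the participants' drawings may be overlaid onto the generated background image to produce the artwork. At the pixel level, such an overlay can use the standard "over" alpha-compositing operator; the pure-Python sketch below (RGBA channels as floats in [0, 1]) is a minimal illustration of that operator, and a real implementation would typically rely on an imaging library rather than per-pixel Python code:

```python
def over(fg, bg):
    """Composite one RGBA pixel over another using the 'over' operator.

    fg and bg are (r, g, b, a) tuples with channels as floats in [0, 1].
    A drawing pixel (fg) with alpha 0 leaves the background pixel visible;
    a fully opaque drawing pixel replaces it.
    """
    fr, fg_, fb, fa = fg
    br, bg_, bb, ba = bg
    a = fa + ba * (1.0 - fa)  # resulting alpha
    if a == 0:
        return (0.0, 0.0, 0.0, 0.0)
    blend = lambda f, b: (f * fa + b * ba * (1.0 - fa)) / a
    return (blend(fr, br), blend(fg_, bg_), blend(fb, bb), a)
```

Applying `over` to every pixel of a drawing layer and the background yields the overlaid artwork.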
In some embodiments, analyzing the sentiment inputs may include determining, for each sentiment type, a total number of times it was selected during the collaborative session. In some embodiments, analyzing the sentiment inputs may include determining, for each sentiment type, a number of times it was selected for a particular story shared during the collaborative session. In some embodiments, analyzing the sentiment inputs may include determining whether participants selected one or more common sentiment types for the same story.
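The analyses described above can be sketched as simple aggregations over reaction records; the record format below (reacting participant, story author, sentiment type) is an assumption for illustration:

```python
from collections import Counter, defaultdict

# Illustrative in-session reaction records.
reactions = [
    ("bob",   "alice", "connection"),
    ("carol", "alice", "connection"),
    ("carol", "alice", "gratitude"),
    ("alice", "bob",   "gratitude"),
]

# Total number of times each sentiment type was selected during the session.
totals = Counter(sentiment for _, _, sentiment in reactions)

# Number of times each sentiment type was selected for a particular story.
per_story = defaultdict(Counter)
for participant, author, sentiment in reactions:
    per_story[author][sentiment] += 1

# Sentiment types that multiple participants selected for the same story.
common = {
    author: [s for s, n in counts.items() if n > 1]
    for author, counts in per_story.items()
}
```

Here, for example, two participants selected "connection" for the same story, so it would be flagged as a common sentiment for that story.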
In some embodiments, the collaborative platform 150 may generate sentiment scores based on the sentiment inputs and/or analysis of the sentiment inputs. In some embodiments, a sentiment score may be generated for each sentiment type. For example, a first sentiment score may be generated by aggregating or averaging the number of times a first sentiment (e.g., gratitude) was selected during the collaborative session and a second sentiment score may be generated by aggregating or averaging the number of times a second sentiment (e.g., connection) was selected during the collaborative session. In some embodiments, a sentiment score may be generated for each participant that indicates an overall sentiment (e.g., sad, happy, etc.) of the participant during the collaborative session. In some embodiments, a sentiment score may be generated based on collective sentiment inputs received from all participants of the collaborative session.
In some embodiments, one or more sentiment scores may be generated based on the sentiment inputs. In some embodiments, sentiment scores may be generated in three parts: a pre_session_sentiment generated based on sentiment inputs received prior to the interactive period of the collaborative session, an in_session_sentiment generated based on sentiment inputs received during the interactive period of the collaborative session, and a post_session_sentiment generated based on sentiment inputs received after the interactive period of the collaborative session. The pre_session_sentiment may be an average of all emoji responses inputted by participants before the interactive period of the collaborative session, where each emoji is stored as a numeric value from 1 to 5, with 1 being very sad and 5 being very happy. The post_session_sentiment may likewise be an average of all emoji responses inputted by participants after the interactive period of the collaborative session, stored on the same 1-to-5 scale. The in_session_sentiment may be calculated with an algorithm that attempts to quantify engagement levels during the collaborative session. During the collaborative session, participants can input emoji reactions to all drawings completed by other participants. For example, there may be 5 (or another number of) emoji reactions to choose from, and participants can select from 0 to 5 emoji reactions per drawing. The maximum number of possible reactions in a collaborative session may be calculated based on the number of players in that collaborative session and the number of drawings for that collaborative session. An average in_session_sentiment may be calculated by dividing the actual number of reactions in the collaborative session by the maximum number of possible reactions and multiplying the result by 5. Thus, the pre_session_sentiment and post_session_sentiment are numeric values between 1 and 5, and the in_session_sentiment is a numeric value between 0 and 5.
Those numeric sentiment values are then converted into feeling descriptors such that values less than 2 may be equivalent to “very sad,” and values greater than 4 may equate to “very happy.”
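The score calculation and descriptor conversion described above can be sketched as follows. This is a non-limiting Python illustration: the function names are hypothetical, the maximum-reaction formula assumes each participant may react (with up to 5 reactions) to every drawing completed by another participant, and the middle descriptor bin is an assumption since only the end bins are specified above.

```python
def average_emoji_sentiment(emoji_values):
    """Average emoji responses stored as 1 (very sad) through 5 (very happy)."""
    return sum(emoji_values) / len(emoji_values)

def in_session_sentiment(actual_reactions, num_players, num_drawings,
                         reactions_per_drawing=5):
    """Engagement score: actual reactions divided by the maximum possible,
    scaled by 5. The maximum assumes up to 5 reactions per drawing from
    each participant other than the drawing's author."""
    max_reactions = (num_players - 1) * num_drawings * reactions_per_drawing
    return (actual_reactions / max_reactions) * 5

def to_feeling_descriptor(score):
    """Convert a numeric sentiment value to a feeling descriptor.
    Only the end bins are given in the text; "neutral" is an assumed
    label for intermediate values."""
    if score < 2:
        return "very sad"
    if score > 4:
        return "very happy"
    return "neutral"
```

For example, 20 reactions in a 5-player session with 5 drawings yields a maximum of 100 possible reactions and an in-session score of 1.0.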
In some embodiments, an artificial intelligence (AI) model may be used to analyze the participants' sentiment before and after multiple drawings by generating a dynamic visualization (for example, a heatmap or color gradient) that showcases the emotional journey. The visualization can evolve and change based on the sentiment data. In some embodiments, the generated sentiment scores may be provided as input to a generative AI model. The generated sentiment scores may be provided as a natural-language prompt to the AI model that generates a background image for the artwork based on the natural-language prompt. Example visualizations of background images generated based on sentiment analysis are shown in
In some embodiments, sentiment scores may be input into a machine learning model using a natural-language prompt. In some embodiments, the sentiment scores are inputted into the following example prompt where pre_session_sentiment, in_session_sentiment, and post_session_sentiment correspond to feeling descriptors calculated for each specific collaborative session: “Depict an abstract artistic representation of a heatmap, with gradients of color intensity representing a range of sentiments during a session. The initial sentiment is {pre_session_sentiment}. Then, a sentiment of {in_session_sentiment} and ends with a sentiment of {post_session_sentiment}. There should be no text, no numbers, no axes. The heatmap should cover the entire image.” In some embodiments, the machine learning model comprises a generative machine learning model described in Betker, James et al., “Improving Image Generation with Better Captions,” url=(https://cdn.openai.com/papers/dall-e-3.pdf), which is incorporated by reference herein in its entirety. In some embodiments, the machine learning model was trained using a large set of images with captions that described those images. The machine learning model learned how to generate realistic images based on a text prompt. The natural-language text prompt is input into the machine learning model. The machine learning model determines the colors to be used for the background heatmap image and generates and outputs the background heatmap image.
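Filling the example prompt with the per-session feeling descriptors can be sketched as simple template substitution (the template text is taken verbatim from the example prompt above; the helper name is hypothetical):

```python
# Template taken verbatim from the example prompt; the placeholder names
# correspond to the feeling descriptors computed for the session.
PROMPT_TEMPLATE = (
    "Depict an abstract artistic representation of a heatmap, with gradients "
    "of color intensity representing a range of sentiments during a session. "
    "The initial sentiment is {pre_session_sentiment}. Then, a sentiment of "
    "{in_session_sentiment} and ends with a sentiment of "
    "{post_session_sentiment}. There should be no text, no numbers, no axes. "
    "The heatmap should cover the entire image."
)

def build_background_prompt(pre, mid, post):
    """Substitute the three feeling descriptors into the prompt template."""
    return PROMPT_TEMPLATE.format(
        pre_session_sentiment=pre,
        in_session_sentiment=mid,
        post_session_sentiment=post,
    )
```

The resulting natural-language string is what would be passed to the generative model as its text prompt.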
In some embodiments, the collaborative platform 150 may generate the artwork by overlaying the participants' drawings onto the generated background image. In some embodiments, the platform 150 may determine a placement of the participants' drawings onto the generated background image. For example, the platform 150 may determine where each drawing may be placed relative to the colors depicted in the background image. In some embodiments, placement may be determined based on the sentiment inputs received for a shared drawing and colors mapped to the sentiment inputs. For example, if multiple selections of “gratitude” sentiment were received for a shared drawing, a determination may be made to place the drawing in an area of the background image with a color representative of the “gratitude” sentiment. In some embodiments, placement may be determined based on the drawing inputs. For example, drawings may be scaled based on the number of drawings received. As another example, drawings may be spread with even or random spacing between them. In some embodiments, placement may be determined based on the sentiment inputs and/or the drawings. For example, the sentiment inputs, the drawings, and/or the background image may be provided as input to one or more machine learning models that process these inputs and output a proposed placement of the drawings on the background image. For example, a machine learning model may process the inputs and determine which drawings can be placed together based on sentiment selections for each. If two drawings received similar sentiment selections, they could be placed closer to one another on the background image. As another example, a machine learning model may recognize what the drawings represent (e.g., a book, a stick figure, an airplane, etc.). 
If two drawings relate to one another, such as a drawing of a book and a drawing of a stick figure representing a person, the two drawings could be placed in such a way that the stick figure is holding the book.
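Two of the simpler placement strategies mentioned above (even spacing, and scaling drawings by their count) can be sketched as follows. This is an assumed illustration; the function names and the grid/scale formulas are not specified in the text.

```python
import math

def grid_layout(num_drawings, width, height):
    """Even-spacing placement: return center coordinates of the cells of a
    near-square grid covering the background image."""
    cols = math.ceil(math.sqrt(num_drawings))
    rows = math.ceil(num_drawings / cols)
    cell_w, cell_h = width / cols, height / rows
    positions = []
    for i in range(num_drawings):
        row, col = divmod(i, cols)
        positions.append((col * cell_w + cell_w / 2, row * cell_h + cell_h / 2))
    return positions

def drawing_scale(num_drawings, base=1.0):
    """Shrink drawings as their count grows so they still fit the canvas."""
    return base / math.sqrt(num_drawings)
```

Sentiment-driven placement (e.g., mapping a drawing to a background region whose color represents its dominant sentiment) would replace the grid coordinates with coordinates looked up from a color-to-sentiment map.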
In some embodiments, the collaborative platform 150 may generate the artwork by creating variations of the drawings, for example, by altering a style of the drawing or animating the drawings. Machine learning models may be used to generate such variations. In some embodiments, these variations may be placed on the background image. Example visualizations of such variations of the drawings are shown in
In some embodiments, the collaborative platform 150 may communicate the generated artwork to each of the participant computing devices. The generated artwork may be shared using different communication channels, such as via emails, chats, blogs, social media, video conferencing, and/or other communication channels. The generated artwork may be downloadable and include a title. The generated artwork may be printed, and hardcopies may be shared. Examples of artwork generated by the platform 150 are shown in
The session management module 202 manages collaborative sessions hosted by the platform 150. The session management module 202 may launch a collaborative session between a group of participants (e.g., hosts and other participants). Each participant may join the collaborative session using his/her computing device, for example, a smartphone. In some embodiments, the collaborative session may be a game session that each participant may join by scanning a QR code (encoded with a relevant website or app associated with the game) using the camera of the smartphone. Tapping the notification that appears on the screen when the QR code is scanned causes the participant computing device to join the game session. In some embodiments, the session management module 202 may end the game session in response to a request to end the game.
The GUI generation module 204 generates user interfaces for the participant computing devices. In some embodiments, the GUI generation module 204 generates user interfaces to obtain drawing inputs from the participants. For example, these interfaces may include a drawing canvas that allows participants to provide drawing inputs. In some embodiments, the GUI generation module 204 generates user interfaces to obtain sentiment inputs from the participants. For example, these interfaces may be presented prior to an interactive period of the session, during the interactive period of the session, and after the interactive period of the session. Example interfaces generated for participants as part of a game session are shown in
The GUI generation module 204 generates user interfaces for the host computing devices. In some embodiments, the GUI generation module 204 generates user interfaces to obtain input from the host participant. Example inputs obtained from a host participant device may be a request to launch the game session, a selection of a previously curated prompt or input of a new prompt for the game, a selection of the participants of the game, a request to start the game, a selection highlighting which participant is to share his/her story next (either by selecting a name of the participant in the list of participants or selecting a randomize button that randomly selects a participant), a request to end the game, and/or other inputs. Example interfaces generated for hosts as part of a game session are shown in
The GUI generation module 204 generates user interfaces for the admin computing devices. The user interfaces allow corporate admins of an organization to view statistics relating to the collaborative sessions (e.g., game sessions) hosted for the organization. An example user interface or dashboard 600 presented to a corporate admin is shown in
In some embodiments, the user interfaces generated for admin computing devices include user interfaces for a global admin of the platform 150. An example user interface or dashboard 700 presented to a global admin is shown in
Additional examples of various user interfaces generated by the GUI generation module 204 are shown in
The data collection module 206 collects and tracks data across multiple collaborative sessions (e.g., game sessions) within an organization and across different organizations. The collected data may be stored in database 220. The data collected and tracked may include session data, for example, game session id, information associated with a host participant who launched the game session, title of the game, participant information (e.g., participant id, drawing inputs, sentiment inputs), state of the game session (e.g., created state (e.g., when game session is created), lobby state (e.g., when waiting for all participants to join), drawing state (e.g., when participants are drawing), sharing state (e.g., when participants are sharing their stories), transition state (when transitioning between states), and ended state (e.g., when game session is ended)), information regarding the organization for which the game session is hosted (e.g., organization name, members of the organization allowed to host the game session), and/or prompts used during the game session.
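The game-session states listed above can be represented as an enumeration. The state names follow the text; the transition map below is an assumption, since the text lists the states but not the allowed moves between them.

```python
from enum import Enum

class GameSessionState(Enum):
    CREATED = "created"        # game session is created
    LOBBY = "lobby"            # waiting for all participants to join
    DRAWING = "drawing"        # participants are drawing
    SHARING = "sharing"        # participants are sharing their stories
    TRANSITION = "transition"  # transitioning between states
    ENDED = "ended"            # game session is ended

# Assumed transition map; not specified in the text.
_ALLOWED = {
    GameSessionState.CREATED: {GameSessionState.LOBBY},
    GameSessionState.LOBBY: {GameSessionState.DRAWING, GameSessionState.ENDED},
    GameSessionState.DRAWING: {GameSessionState.TRANSITION,
                               GameSessionState.ENDED},
    GameSessionState.TRANSITION: {GameSessionState.DRAWING,
                                  GameSessionState.SHARING,
                                  GameSessionState.ENDED},
    GameSessionState.SHARING: {GameSessionState.TRANSITION,
                               GameSessionState.ENDED},
    GameSessionState.ENDED: set(),
}

def can_transition(current, target):
    """Check whether a session may move from one state to another."""
    return target in _ALLOWED[current]
```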
The sentiment analysis module 208 analyzes the received sentiment inputs (e.g., the sentiment inputs received prior to and after the interactive period of session and the sentiment inputs received during the interactive period of the session) from the participants of the collaborative session and generates sentiment scores for the session. In some embodiments, one or more sentiment scores may be generated for each participant. In some embodiments, one or more sentiment scores may be generated for each sentiment type. In some embodiments, the sentiment scores may be generated based on the sentiment inputs and one or more session parameters (e.g., number of participants, the type of prompt used, and/or other session parameters).
The artwork generation module 210 generates the artwork using the drawing inputs and sentiment inputs (e.g., the sentiment inputs received prior to and after the interactive period of session and the sentiment inputs received during the interactive period of the session) from the participants of the collaboration session. In some embodiments, the sentiment inputs may be used to generate a background image of the artwork.
In some embodiments, sentiment information may be provided as input to a machine learning model or statistical model. The machine learning model or statistical model may process the received input to generate as output the background image for the artwork. In some embodiments, the background image for the artwork may be a heatmap image that visualizes the emotional journey of the participants during the collaborative session.
In some embodiments, one or more sentiment scores may be provided as input to a generative artificial intelligence (AI) model. The generated sentiment scores may be provided as a natural-language prompt to the AI model that generates a background image for the artwork based on the natural-language prompt.
In some embodiments, the artwork generation module 210 generates the artwork using the drawing inputs, the sentiment inputs, the background image, and/or one or more session parameters (e.g., number of participants, the type of prompt used, and/or other parameters). In some embodiments, the artwork generation module 210 generates the artwork by overlaying the participants' drawings onto the background image. In some embodiments, the sentiment inputs, the drawing inputs, the session parameters, and/or the background image may be provided as input to one or more machine learning models that process these inputs and output artwork with drawings placed on the background image.
The account management module 212 may manage accounts for various organizations using the platform 150. The account management module 212 may manage accounts for global admins, corporate admins, and hosts. The account management module 212 may manage permissions for account and data access for the different types of users of platform 150.
Process 300 begins at block 302, where the platform performing process 300 launches a collaborative session between participant computing devices. Participants joining the collaborative session using their respective computing devices, such as smartphones, may include the host who initiated the collaborative session and other participants invited to join the collaborative session.
At block 304, the platform performing the process 300 receives drawing inputs from each of the participant computing devices. The drawing inputs may include drawings drawn on a drawing canvas presented on each participant computing device. The drawing inputs may correspond to one or more prompts selected or entered by a host. At block 306, the platform performing the process 300 receives sentiment inputs from each of the participant computing devices. The sentiment inputs indicate participants' sentiments to each other's shared stories and/or drawings.
At block 308, the platform performing the process 300 analyzes the sentiment inputs and generates a background image for artwork based on the analysis. In some embodiments, the background image of the artwork is generated using a machine learning model or a statistical model. At block 310, the platform performing the process 300 generates the artwork by overlaying the drawing inputs onto the background image.
Scribl is a web-based game designed for video meeting/conferencing applications that uses a mobile drawing canvas to build deeper connections, trust, and belonging in the hybrid workplace. Scribl is designed to improve how people connect and collaborate as teams. By harnessing the power of creative expression, a web-based game is designed that is fun, easy to use, and integrates into existing workflows.
An example flow of the game is described with respect to
Selecting the “launch game” button 1616 in interface 1612 of
The “join game” or “lobby” user interface includes a machine-readable code, such as a QR code, for the game. As shown in
When a participant scans the QR code using his/her smartphone, a “welcome” user interface is generated and presented on the smartphone (e.g., user interface 1510 of
Once the game is started, a gameboard may be generated and presented on the host laptop's shared screen (i.e., the screen being shared with the other participants through the video conferencing tool). An example of this gameboard interface is shown in
When all the participants have completed their drawings, the host may initiate a sharing phase where participants share their stories about the drawings they drew. During the sharing phase, the gameboard is updated to include the drawings of all the participants. An example updated gameboard screen is shown in
In some embodiments, the reactions of participants to the shared stories may cause the drawings on the gameboard to move to indicate connections and relationships in real-time. For example, similar reactions to two drawings may cause the gameboard to be updated such that the two drawings are moved closer to one another. The drawings on the gameboard are moved or shuffled in real-time based on the reactions. In some embodiments, the gameboard may be updated by providing visualizations, such as addition of colors or changes in color or hue, lighting changes, and/or other visualizations. In some embodiments, the movement of drawings or the visualizations on the gameboard may inform the generation of the background image and/or the artwork.
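The real-time movement described above — drawings with similar reactions drifting closer together — can be sketched with a simple similarity measure and a position nudge. The function names, the use of Jaccard similarity, and the nudge strength are assumptions for illustration.

```python
def reaction_similarity(reactions_a, reactions_b):
    """Jaccard similarity of the reaction types two drawings received."""
    a, b = set(reactions_a), set(reactions_b)
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

def nudge_together(pos_a, pos_b, similarity, strength=0.1):
    """Move two drawing positions toward each other in proportion to how
    similar the reactions to them were."""
    dx = (pos_b[0] - pos_a[0]) * similarity * strength
    dy = (pos_b[1] - pos_a[1]) * similarity * strength
    return (pos_a[0] + dx, pos_a[1] + dy), (pos_b[0] - dx, pos_b[1] - dy)
```

Applying the nudge on each incoming reaction would produce the gradual, real-time shuffling of the gameboard described above.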
Once every participant has shared his/her story, the host may select a “next question” button 1672 if more than one prompt was selected or the “end game” button 1670 to end the game. When multiple prompts are selected, the drawing and sharing phases may be repeated for each selected prompt. The drawing and sharing phases may be referred to as an “interactive period” of the game as shown in
When the game ends (e.g., after the interactive period of the game), each participant may be prompted to select a sentiment indicating his/her current mood (i.e., mood after playing the game). The same set of emoticons that were presented to the participants prior to the interactive period of the game may again be presented to solicit the current mood. An example interface 1520 presented on each participant's smartphone to obtain sentiment after playing the game is shown in
An end-game readout screen may be generated and presented on the host laptop's shared screen (i.e., the screen being shared with the other participants through the video conferencing tool). An example of an end-game screen is shown in
As shown in
Example Collaborative Platform Functionality and Uses:
In some embodiments, the platform 150 may enable a single-player version of the game where a single player plays solo on his/her computing device. Features of the single-player version may include:
At step 510, each of the participants may, using their respective computing devices (e.g., smartphones) scan the QR code to join the game session. At step 512, the platform 150 may generate a drawing canvas for the participants who scanned the QR code. The participants who scanned the QR code may include the host and the other participants invited to the game session by the host. At step 514, the platform may communicate the drawing canvas to each of the participant computing devices. The participants may draw on the drawing canvas and the drawings may be received by the platform 150 at step 516.
At step 518, the host, using the host computing device, may initiate the story sharing phase. At step 520, the platform 150 may receive participants' sentiments to the shared stories and/or drawings. At step 522, the platform 150 may generate artwork using the participants' drawings and sentiments. The platform 150 may perform sentiment analysis and generate a background image for the artwork. The platform 150 may overlay the drawings onto the background image to generate the artwork. At step 524, the generated artwork may be shared with each of the participants.
A launch screen appears to all users after logging in, examples of which are shown as
In some embodiments, a game may be launched using an existing game pack and prompt by clicking option 2002 in interface 2000 of
In some embodiments, a game may be launched using a custom prompt by clicking option 2004 in interface 2000 of
In some embodiments, interfaces 2300, 2400, 2500, 2600, 2700 shown in
In some embodiments, users can view data in the insights dashboard based on their level of permission. Examples of insights dashboards 2802 and 2902 as part of interfaces 2800 and 2900 are shown in
In some embodiments, global and/or corporate admins may utilize the interfaces 3000, 3100, 3200, 3300, and 3400 shown in
In some embodiments, help menu 3500 shown in
In some embodiments, pre and post sentiment collection may be performed using interfaces 3800, 3802, 3900 and 3902 shown in
In some embodiments, participants may be prompted to answer one or more additional questions or provide one or more additional inputs during the game. For example, following post-sentiment collection screen 3920, participants may be prompted to answer a randomized exit question (as shown in interfaces 3921, 3922, 3923 of
In some embodiments, the emotions presented in the interfaces 3802, 3902 may be chosen from a predetermined list that corresponds to emojis initially selected by the participant (e.g., emojis selected in interfaces 3800, 3900). The list may be broken down into multiple categories (for example, the same categories as in-game reactions). One emotion may be chosen from each of the categories. Example emotions and categories are shown in
In some embodiments, participants may choose from multiple game reaction badges 4202, examples of which are shown in
In some embodiments, the gameboard may display a live tally of in-game reactions received in the top left of the screen, labeled "Session Badges" 4402, examples of which are shown in interface 4400 of
In some embodiments, an end game screen 4500 shown in
In some embodiments, an end game screen, such as screen 4600, 4602, 4604, or 4606 shown in
The computer system 4800 may be a portable computing device (e.g., a smartphone, a tablet computer, a laptop, or any other mobile device), a computer (e.g., a desktop, a rack-mounted computer, a server, etc.), or any other type of computing device.
The terms “program” or “software” are used herein in a generic sense to refer to any type of computer code or set of processor-executable instructions that can be employed to program a computer or other processor (physical or virtual) to implement various aspects of embodiments as discussed above. Additionally, according to one aspect, one or more computer programs that when executed perform methods of the disclosure provided herein need not reside on a single computer or processor but may be distributed in a modular fashion among different computers or processors to implement various aspects of the disclosure provided herein.
Processor-executable instructions may be in many forms, such as program modules, executed by one or more computers or other devices. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform tasks or implement abstract data types. Typically, the functionality of the program modules may be combined or distributed.
Various inventive concepts may be embodied as one or more processes, of which examples have been provided. The acts performed as part of each process may be ordered in any suitable way. Thus, embodiments may be constructed in which acts are performed in an order different than illustrated, which may include performing some acts simultaneously, even though shown as sequential acts in illustrative embodiments.
As used herein in the specification and in the claims, the phrase “at least one,” in reference to a list of one or more elements, should be understood to mean at least one element selected from any one or more of the elements in the list of elements, but not necessarily including at least one of each and every element specifically listed within the list of elements and not excluding any combinations of elements in the list of elements. This definition also allows that elements may optionally be present other than the elements specifically identified within the list of elements to which the phrase “at least one” refers, whether related or unrelated to those elements specifically identified. Thus, for example, “at least one of A and B” (or, equivalently, “at least one of A or B,” or, equivalently “at least one of A and/or B”) can refer, in one embodiment, to at least one, optionally including more than one, A, with no B present (and optionally including elements other than B); in another embodiment, to at least one, optionally including more than one, B, with no A present (and optionally including elements other than A); in yet another embodiment, to at least one, optionally including more than one, A, and at least one, optionally including more than one, B (and optionally including other elements); etc.
The phrase “and/or,” as used herein in the specification and in the claims, should be understood to mean “either or both” of the elements so conjoined, i.e., elements that are conjunctively present in some cases and disjunctively present in other cases. Multiple elements listed with “and/or” should be construed in the same fashion, i.e., “one or more” of the elements so conjoined. Other elements may optionally be present other than the elements specifically identified by the “and/or” clause, whether related or unrelated to those elements specifically identified. Thus, as a non-limiting example, a reference to “A and/or B”, when used in conjunction with open-ended language such as “comprising” can refer, in one embodiment, to A only (optionally including elements other than B); in another embodiment, to B only (optionally including elements other than A); in yet another embodiment, to both A and B (optionally including other elements); etc.
Use of ordinal terms such as “first,” “second,” “third,” etc., in the claims to modify a claim element does not by itself connote any priority, precedence, or order of one claim element over another or the temporal order in which acts of a method are performed. Such terms are used merely as labels to distinguish one claim element having a certain name from another element having a same name (but for use of the ordinal term). The phraseology and terminology used herein is for the purpose of description and should not be regarded as limiting. The use of “including,” “comprising,” “having,” “containing”, “involving”, and variations thereof, is meant to encompass the items listed thereafter and additional items.
Having described several embodiments of the techniques described herein in detail, various modifications, and improvements will readily occur to those skilled in the art. Such modifications and improvements are intended to be within the spirit and scope of the disclosure. Accordingly, the foregoing description is by way of example only, and is not intended as limiting. The techniques are limited only as defined by the following claims and the equivalents thereto.
This application claims the benefit of priority under 35 U.S.C. 119(e) to U.S. Provisional Patent Application Ser. No.: 63/602,131, filed on Nov. 22, 2023, titled “Collaborative Platform Utilizing Drawings and Sentiment Analysis”, and U.S. Provisional Patent Application Ser. No.: 63/637,083, filed on Apr. 22, 2024, titled “Collaborative Platform Utilizing Drawings and Sentiment Analysis,” which are hereby incorporated by reference herein in their entirety.
Number | Date | Country
---|---|---
63637083 | Apr 2024 | US
63602131 | Nov 2023 | US