Interpersonal communications, whether physical or virtual, are often affected by preconceived notions about the other party, whether those notions are based on experience or on prejudice. Put simply: how we see someone changes how we hear them. Most virtual communication platforms provide a myriad of uniform features by which participants can communicate, but those features do not focus on enhancing the interaction's collaborative and interpersonal nature and fruitfulness.
A communication application is implemented that establishes transmission routing protocols and other parameters by which subjects utilize remote agents during a virtual communication session. A remote host service may host the communication session, which each participant, including the subjects and agents, accesses using extensibility from a proprietary communication application, plugin, or a web browser application (e.g., Safari®, Firefox®, or Chrome®).
One of the user-participants may create the communication session, which, in typical implementations, would have four participants—two subjects and two agents. For clarity in exposition, the participants are referred to herein as Subject A, Subject B, Agent A, and Agent B. Upon creation, the session may enter an inactive state during which the host service monitors for the participants to join the session. Communications may be prohibited between the subjects during the inactive state. The host service may transition the session into an active state upon detecting that each participant, typically the two agents and two subjects, has entered the session.
Initiating the active state may trigger a set of communication protocols and parameters, whether standardized or user-customized. In typical implementations, the communication protocols prohibit any communications between Subject A and Subject B, including A/V (Audio/Video) and text transmissions. Subject A can have unrestricted A/V communications to and from Agent B, and Subject A can transmit outbound audio communications to Agent A. Likewise, Subject B can have unrestricted A/V communications to and from Agent A, and Subject B can transmit outbound audio communications to Agent B. The agents join the communication session using two distinct computing devices to receive the distinct communications from the subjects, or the locally-instantiated application can identify and disperse communications to distinct speakers or earbuds from a single device. The session may prevent any communications between the agents.
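The default protocol above can be summarized as a per-pair permission table: each sender-receiver pair is allowed unrestricted A/V, one-way outbound audio, or nothing. The following Python sketch is illustrative only; the participant labels, rule names, and helper function are assumptions, not part of any described implementation.

```python
# Hypothetical routing table for the default protocol described above.
AUDIO_VIDEO = "A/V"        # unrestricted two-way audio and video
AUDIO_OUT = "audio-out"    # outbound audio only (one-way relay)
NONE = "none"              # no transmissions permitted

# ALLOWED[sender][receiver] -> what the sender may transmit to that receiver
ALLOWED = {
    "Subject A": {"Subject B": NONE, "Agent A": AUDIO_OUT, "Agent B": AUDIO_VIDEO},
    "Subject B": {"Subject A": NONE, "Agent A": AUDIO_VIDEO, "Agent B": AUDIO_OUT},
    "Agent A":   {"Subject A": NONE, "Subject B": AUDIO_VIDEO, "Agent B": NONE},
    "Agent B":   {"Subject A": AUDIO_VIDEO, "Subject B": NONE, "Agent A": NONE},
}

def may_transmit(sender: str, receiver: str, kind: str) -> bool:
    """Return True if `sender` may send a stream of `kind`
    ('audio' or 'video') to `receiver` under the default rules."""
    rule = ALLOWED.get(sender, {}).get(receiver, NONE)
    if rule == AUDIO_VIDEO:
        return True
    if rule == AUDIO_OUT:
        return kind == "audio"
    return False
```

Note that the table is deliberately asymmetric: Subject A may send audio to Agent A, but Agent A may send nothing back, which is what makes the relay one-way.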
The communication application's configuration to permit outbound audio communications to a single agent enables that agent to act as a conduit through which communications travel to the other subject. The unrestricted A/V transmissions between a subject-agent pair create the semblance that the agent is the principal in the conversation instead of merely the other subject's proxy.
The technological implementation of communication protocols and parameters within the communication session facilitates the capability to use agents as conduits through which communications are passed between subjects. The regulation of A/V communications within the session can be performed remotely at the host service or locally on the respective participants' computing devices. Furthermore, the locally-instantiated communication application is configured with a user interface (UI) that propounds the semblance that agents are the actual principal actors with whom the respective subjects communicate, while simultaneously minimizing the appearance that each agent is, in fact, a detached and neutral proxy. The technological ecosystem of features leverages software and hardware controls to make agent-based interactions not only possible, but believable.
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter. Furthermore, the claimed subject matter is not limited to implementations that solve any or all disadvantages noted in any part of this disclosure. These and various other features will be apparent from a reading of the following Detailed Description and a review of the associated drawings.
Like reference numerals indicate like elements in the drawings. Elements are not drawn to scale unless otherwise indicated.
The computing devices can include a hardware layer 520, operating system (OS) layer 515, and application layer 510. The hardware layer 520 provides an abstraction of the various hardware used by the computing device (e.g., input and output devices, networking and radio hardware, etc.) to the layers above it. In this illustrative example, the hardware layer supports processor(s) 525, memory 530, input/output devices including a microphone 540, speakers 550, camera 552 (e.g., webcam), and headset 554, among other peripheral devices not shown. The computing device may likewise include a network interface 545, such as a network interface card (NIC) that enables wired (e.g., Ethernet) or wireless communications to a router or other computing device. For example, one or more network interface devices may enable the transmission of Wi-Fi signals to a router and be configured with Bluetooth® or NFC (Near Field Communication) capabilities.
The application layer 510 in this illustrative example supports various applications 570, a communication application 565 that facilitates the creation of virtual communication sessions and the exchange of communications between participants, and a web browser application 575 that can access a remotely-instantiated communication application. The locally-instantiated communication application and browser have extensibility 580 to and interoperate with the remote communication application hosted on a remote host service 505. Features implemented by the communication application discussed herein may be performed by the local computing device, the remote host service, or a combination of both. For example, while typically the host service may host a communication session among participants operating respective devices, a local instance of the session may be controlled by a computing device in some implementations. The specific operations may depend on a given implementation and use-scenario. Although the distinct applications or a browser are depicted in
Although only certain applications are depicted in
The OS layer 515 supports, among other operations, managing system 555 and operating applications/programs 560. The OS layer may interoperate with the application and hardware layers in order to perform various functions and features.
Subject A's primary communication channel is with Agent B, and Subject B's primary communication channel is with Agent A, as representatively shown by numerals 715 and 720. These primary communication channels permit two-way A/V (Audio/Video) 705 communications between the subjects and agents to facilitate and enrich the semblance that the agent is the principal in the conversation—and not a proxy. Further to this end, one-way audio relays 730, 735 are provided from Subject A to Agent A and Subject B to Agent B.
The audio relays 730, 735 are received at the respective agents and then spoken by the respective agents to the other subject. Thus, Subject A's comments, which are intently directed to Agent B, are audio-relayed to Agent A, and Agent A mirrors those comments to Subject B. Any response from Subject B, while also intently directed to Agent A, is audio-relayed to Agent B, and Agent B mirrors the response to Subject A.
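The relay pattern above amounts to a fixed two-hop chain per subject: speaker, relaying agent, ultimate listener. A minimal sketch, with the mapping and function name assumed for illustration:

```python
# Illustrative mapping of the relay described above: a subject's comments,
# though directed at their primary agent, are audio-relayed to the other
# agent, who mirrors them aloud to the other subject.
RELAY = {
    "Subject A": ("Agent A", "Subject B"),  # (relaying agent, ultimate listener)
    "Subject B": ("Agent B", "Subject A"),
}

def relay_chain(speaker: str) -> list[str]:
    """Return the hop sequence for a subject's relayed comment."""
    agent, listener = RELAY[speaker]
    return [speaker, agent, listener]
```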
The users may be identified at creation using an e-mail address, a phone number, or a previously set up account name (e.g., username and password) associated with each user. The users' credentials may be verified at start-up with a two-step authentication procedure. For example, upon entering the session, a user may be prompted to enter their e-mail address or phone number. The host service 505 (
Upon verification, the host service detects and associates the user to their role within the session, such as a subject, agent, administrator, or spectator. A spectator may be a person who can listen to communications from one, multiple, or all participants. The spectator may be given the option, via the session's UI (user interface), to select from which participant's vantage point they would like to listen to the session. The session may likewise limit to which participants the spectator can listen.
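Role association and spectator listening limits can be sketched as simple lookups keyed by the verified account. The accounts, roles, and helper names below are hypothetical examples only:

```python
# Hypothetical account-to-role bindings established upon verification.
ROLES = {
    "alice@example.com": "subject",
    "bob@example.com": "agent",
    "carol@example.com": "spectator",
}

# Per-spectator set of participants whose vantage point may be selected;
# the session may limit to which participants a spectator can listen.
SPECTATOR_VANTAGES = {"carol@example.com": {"Subject A", "Agent B"}}

def role_of(account: str) -> str:
    """Return the session role associated with a verified account."""
    return ROLES.get(account, "unknown")

def spectator_can_listen(account: str, vantage: str) -> bool:
    """A spectator may listen only from vantage points the session allows."""
    return (role_of(account) == "spectator"
            and vantage in SPECTATOR_VANTAGES.get(account, set()))
```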
During the creation process, the user may likewise select to use standardized parameters or customized parameters. Typically, the communication protocols executed during the session, such as that shown in
The configuration of a blind session may include subjects and agents being blind to each other or the subjects being blind to each other. In some implementations, the blind option may be directed to the subjects. When the blind option is switched off, the session enables one or both of audio or video between subjects during the inactive state, but then disables the audio and video during the active state. Thus, switching the blind option on disables the subjects' audio, video, or both during the inactive state and active state, but then the subjects may communicate (e.g., over audio, video, or both) with each other during the post-session state.
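The blind option's effect on the direct subject-to-subject channels in each state can be expressed as a small decision function. This is a simplified sketch; the state names and channel labels are assumptions:

```python
def subject_to_subject_channels(blind: bool, state: str) -> set[str]:
    """Channels open directly between subjects under the blind option.
    States are assumed to be 'inactive', 'active', or 'post-session'."""
    if state == "post-session":
        return {"audio", "video"}   # subjects may communicate after the session
    if state == "active":
        return set()                # always closed while the session is active
    # inactive state: open only when the blind option is switched off
    return set() if blind else {"audio", "video"}
```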
Once the user creates the communication session 805, a URL 810 may be automatically generated and exposed to the user, such as presented on their display or transmitted to them via e-mail or text message. The URL, such as link 815, may be automatically transmitted by the communication application 565 to each participant or exposed to the user to communicate the link 815 to each participant. The users can enter the communication session by clicking the link at the scheduled time, as representatively shown by input 820.
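One way to generate a hard-to-guess session link is to append a random URL-safe token to a base URL. The base URL and token format below are illustrative assumptions; the application does not specify how the URL is derived:

```python
import secrets

def create_session_link(session_id: str,
                        base_url: str = "https://example.invalid/session") -> str:
    """Generate a unique, hard-to-guess URL for a newly created session.
    The token format and base URL are hypothetical."""
    token = secrets.token_urlsafe(16)  # ~128 bits of randomness
    return f"{base_url}/{session_id}-{token}"
```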
The one-way audio-relay is received by the agent and relayed to the other subject, as discussed above. Depending on the implementation, the agents may utilize a second computing device to receive these relayed communications. For example, an agent may receive the audio-relay at a smartphone and then speak the received comments to their subject using their first, or primary, device. The agent may keep a single earbud in one ear for the secondary device's audio-relay and use another earbud for their primary device's primary communication channel with their dedicated subject.
While two devices are shown in
The communication application 565, for that particular session 805, may prevent audio or video transmissions from traveling to certain participants while forwarding audio or video communications to appropriate participants. The application's regulations are session-specific so that a user's permissions to receive communications in one session may change in a subsequent session, such as if the user switches from an agent to a subject.
The communication application may control routing based on the user identified at login using the user's e-mail, phone number, account name, or other credential. The user accounts may be associated with a specific IP (Internet Protocol) address used by the communication application for routing. Routing techniques implemented by, for example, TCP/IP (Transmission Control Protocol/Internet Protocol), UDP (User Datagram Protocol), or SIP (Session Initiation Protocol), among other protocols, may be used by the applications and devices. The communication application may operate at the application layer (
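A session-specific forward-or-drop decision keyed by verified identity can be sketched as follows. The account-to-address bindings and the rule representation are hypothetical; a real deployment would carry the streams over TCP/IP, UDP, or SIP as noted above:

```python
# Hypothetical account-to-address bindings used for routing.
ADDRESSES = {"alice@example.com": "203.0.113.10",
             "bob@example.com": "203.0.113.20"}

def route_stream(session_rules: dict, sender: str, receiver: str, kind: str):
    """Forward a stream only if this session's rules permit `kind`
    ('audio' or 'video') from sender to receiver; otherwise drop it.
    Rules are session-specific, so the same user may have different
    permissions in a subsequent session."""
    permitted = session_rules.get((sender, receiver), set())
    if kind in permitted:
        return ("forward", ADDRESSES.get(receiver))
    return ("drop", None)
```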
The communication session 805 may enter an active state 1810 upon all subjects and agents joining the session 1820. One or more of the remote communication application or the local applications may execute the automated rules 1830 (
The active state may transition into a post-session state 1815 upon expiry of the session's pre-set time or some other event that triggers an end to the session 1825. The communication application may execute one or more post-session actions 1835 at the commencement of the post-session state, such as enabling communications between participants 1840 (including subjects), opening feedback prompts for the participants (e.g., subjects, agents, spectators, administrators) 1845, or performing other actions/features 1850. For example, the communication application may disregard the previous routing protocols and allow completely open communications or initiate a new routing protocol. For instance, a new routing protocol can include permitting A/V transmissions between subjects while restricting agents to only inbound and outbound audio transmissions. Other routing protocol configurations are also possible.
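The inactive, active, and post-session states described above form a small state machine: the session activates only once every expected participant has joined, and ends upon timer expiry or another session-ending event. A minimal sketch, with class and method names assumed for illustration:

```python
class CommunicationSession:
    """Minimal sketch of the inactive -> active -> post-session lifecycle.
    The trigger conditions and names are illustrative assumptions."""

    def __init__(self, expected_participants):
        self.expected = set(expected_participants)
        self.joined = set()
        self.state = "inactive"

    def join(self, participant: str) -> None:
        self.joined.add(participant)
        # Transition only once every expected subject and agent is present.
        if self.state == "inactive" and self.expected <= self.joined:
            self.state = "active"

    def end(self) -> None:
        """Triggered by timer expiry or another session-ending event."""
        if self.state == "active":
            self.state = "post-session"
```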
Various control options are depicted along the bottom of the UI 1905, including to whom the administrator may be visible 1910, whom the participants can see 1915, the administrator's view 1920, the time limit 1925, whether the session is active 1930, whether the session is blind between agents and subjects 1935, and whether the session is recorded 1940. The UI's section 1945 may be where a given participant's video is shown, and boxes 1950 may be panned across the UI so the administrator can see each video-enabled participant, including agents, subjects, and spectators.
In step 2105, a user, such as a participant, may use a computing device to create a communication session. In step 2110, the device may configure the session with communication protocols and start-up parameters. In step 2115, the device may create a unique link for user-distribution. In step 2120, the device may associate the created link to the configured session. In step 2125, the device may distribute the link to all meeting subjects and agents. For example, the communication application may automatically transmit via e-mail or text message the link to participants, or the application may expose the URL to the user for distribution.
In step 2130, the device may initiate an inactive state for the session until all participants join the session. During the inactive state, all communication transmissions may be prohibited by the session, or the routing protocols may be implemented to ensure subjects cannot interact before the session. In some scenarios, however, the inactive state can be configured to enable complete and unfettered transmissions among the participants, including the subjects. Alternatively, the subjects and agents can communicate with each other so that the agents can explain the process and what to expect during the session.
In step 2135, the device may initiate an active state for the session when all participants have joined. In step 2140, the device may execute communication routing protocols, start-up parameters, and individual user interfaces (UIs). The start-up parameters may be standardized or based on the customized parameters input by the session's creator. In step 2145, the device commences the communication session. In step 2150, the device sends notifications to users regarding session status (e.g., time remaining). In step 2155, at the expiry of the session's time, the device transitions from the active state to the post-session state. The post-session state may alternatively occur after the occurrence of some other event, such as one or more agents electing to end the session. In step 2160, the device ends the communication session.
In step 2205, in
In step 2305, in
In step 2405, in
By way of example, and not limitation, computer-readable storage media may include volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules, or other data. For example, computer-readable media includes, but is not limited to, RAM, ROM, EPROM (erasable programmable read-only memory), EEPROM (electrically erasable programmable read-only memory), Flash memory or other solid-state memory technology, CD-ROM, DVDs, HD-DVD (High Definition DVD), Blu-ray, or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by the architecture 2500.
According to various embodiments, the architecture 2500 may operate in a networked environment using logical connections to remote computers through a network. The architecture 2500 may connect to the network through a network interface unit 2516 connected to the bus 2510. It may be appreciated that the network interface unit 2516 also may be utilized to connect to other types of networks and remote computer systems. The architecture 2500 also may include an input/output controller 2518 for receiving and processing input from a number of other devices, including a keyboard, mouse, touchpad, touchscreen, control devices such as buttons and switches or electronic stylus (not shown in
It may be appreciated that any software components described herein may, when loaded into the processor 2502 and executed, transform the processor 2502 and the overall architecture 2500 from a general-purpose computing system into a special-purpose computing system customized to facilitate the functionality presented herein. The processor 2502 may be constructed from any number of transistors or other discrete circuit elements, which may individually or collectively assume any number of states. More specifically, the processor 2502 may operate as a finite-state machine, in response to executable instructions contained within the software modules disclosed herein. These computer-executable instructions may transform the processor 2502 by specifying how the processor 2502 transitions between states, thereby transforming the transistors or other discrete hardware elements constituting the processor 2502.
Encoding the software modules presented herein also may transform the physical structure of the computer-readable storage media presented herein. The specific transformation of physical structure may depend on various factors in different implementations of this description. Examples of such factors may include, but are not limited to, the technology used to implement the computer-readable storage media, whether the computer-readable storage media is characterized as primary or secondary storage, and the like. For example, if the computer-readable storage media is implemented as semiconductor-based memory, the software disclosed herein may be encoded on the computer-readable storage media by transforming the physical state of the semiconductor memory. For example, the software may transform the state of transistors, capacitors, or other discrete circuit elements constituting the semiconductor memory. The software also may transform the physical state of such components in order to store data thereupon.
As another example, the computer-readable storage media disclosed herein may be implemented using magnetic or optical technology. In such implementations, the software presented herein may transform the physical state of magnetic or optical media, when the software is encoded therein. These transformations may include altering the magnetic characteristics of particular locations within given magnetic media. These transformations also may include altering the physical features or characteristics of particular locations within given optical media to change the optical characteristics of those locations. Other transformations of physical media are possible without departing from the scope and spirit of the present description, with the foregoing examples provided only to facilitate this discussion.
In light of the above, it may be appreciated that many types of physical transformations take place in architecture 2500 in order to store and execute the software components presented herein. It also may be appreciated that the architecture 2500 may include other types of computing devices, including wearable devices, handheld computers, embedded computer systems, smartphones, PDAs, and other types of computing devices known to those skilled in the art. It is also contemplated that the architecture 2500 may not include all of the components shown in
A number of program modules may be stored on the hard disk, magnetic disk, optical disk 2643, ROM 2617, or RAM 2621, including an operating system 2655, one or more application programs 2657, other program modules 2660, and program data 2663. A user may enter commands and information into the computer system 2600 through input devices such as a keyboard 2666, pointing device (e.g., mouse) 2668, or touchscreen display 2673. Other input devices may include a microphone, joystick, game pad, satellite dish, scanner, trackball, touchpad, touch-sensitive device, voice-command module or device, user motion or user gesture capture device, or the like. These and other input devices are often connected to the processor 2605 through a serial port interface 2671 that is coupled to the system bus 2614, but may be connected by other interfaces, such as a parallel port, game port, or universal serial bus (USB). A monitor 2673 or other type of display device is also connected to the system bus 2614 via an interface, such as a video adapter 2675. In addition to the monitor 2673, personal computers typically include other peripheral output devices (not shown), such as speakers and printers. The illustrative example shown in
The computer system 2600 is operable in a networked environment using logical connections to one or more remote computers, such as a remote computer 2688. The remote computer 2688 may be selected as another personal computer, a server, a router, a network PC, a peer device, or other common network node, and typically includes many or all of the elements described above relative to the computer system 2600, although only a single representative remote memory/storage device 2690 is shown in
When used in a LAN networking environment, the computer system 2600 is connected to the local area network 2693 through a network interface or adapter 2696. When used in a WAN networking environment, the computer system 2600 typically includes a broadband modem 2698, network gateway, or other means for establishing communications over the wide area network 2695, such as the Internet. The broadband modem 2698, which may be internal or external, is connected to the system bus 2614 via a serial port interface 2671. In a networked environment, program modules related to the computer system 2600, or portions thereof, may be stored in the remote memory storage device 2690. It is noted that the network connections shown in
Various embodiments are discussed herein to implement the agent-based communications. One exemplary embodiment includes a computing device, comprising: a network interface; input/output mechanisms including a camera, display, and microphone; one or more processors; and one or more hardware-based memory devices storing a communication application or plugin adapted to create virtual communication sessions between subjects, the memory devices further having instructions which, when executed by the one or more processors, cause the computing device to: create a communication session for which at least two subjects and two agents are invited; configure the communication session with parameters by which to prevent any direct interaction between the two subjects while enabling interaction between the subjects and agents; and enable communications according to the configured parameters upon each subject and agent joining the communication session.
As another example, the communication application is supported on a remote service which a user accesses using a browser instantiated on the computing device. As another example, the communication application or plugin is instantiated on the computing device and interoperates with a remotely hosted communication application that performs some functionality. In a further example, each subject and each agent are remote to each other and operate their own respective computing devices. In another example, preventing any interaction between the two subjects includes prohibiting audio or video data transmission between the two. As another example, the subjects include Subject A and Subject B, and the agents include Agent A and Agent B, and wherein the communication session's parameters enable Subject A to exchange audio and video signals with Agent B. As a further example, the communication session's parameters enable Subject A to transmit one-way audio signals to Agent A, but prevent all other audio and video exchanges between Subject A and Agent A. In another example, the communication session's parameters enable Subject B to exchange audio and video signals with Agent A, and the parameters enable Subject B to transmit one-way audio signals to Agent B, but prevent all other audio and video exchanges between Subject B and Agent B. In another example, upon creation of the communication session, the communication session enters an inactive state by which it is configured to periodically monitor for each subject and each agent to join the communication session, and upon detecting that each subject and agent has joined, the communication session enters an active state by which it executes the configured parameters. As another example, at expiry of the communication session after some pre-set event occurs, the communication session enters a post-active state by which the communication application prompts a request for information to a user of the computing device.
In another example, the pre-set event includes any one or more of expiration of a time limit or a participant leaving the communication session.
Another embodiment includes a method performed by a remote host service to utilize agents as proxies for virtual communications between subjects, comprising: setting standardized parameters for communication sessions, in which the standardized parameters include preventing any A/V (Audio/Video) routing between each subject within the communication session during an active state for communication sessions; receiving a request from a user computing device to create a communication session for at least four participants, including two subjects and two agents; responsive to the request, creating the communication session for the four participants; applying the standardized parameters to the communication session; generating one or more links to the created communication session; distributing the one or more links to at least the user computing device; monitoring for the subjects and agents to join the communication session; and entering an active state responsive to detecting each subject and agent has joined the communication session, wherein entering the active state triggers initiation of the standardized parameters.
As another example, the host service modifies the standardized parameters according to any modifications in the received request. In a further example, the communication session permits the subjects to interact during a post-session state of the communication session, which occurs after expiry of the active state. In another example, the standardized parameters are configured to allow each subject to exchange A/V communications with a single and distinct agent. As another example, the created communication session is configured to identify which participant is a subject or agent by the participant's association with a unique account set up with the host service or other unique credential for the participants.
Another exemplary embodiment includes one or more hardware-based non-transitory computer-readable memory devices stored within a computing device, the memory devices including instructions which, when executed by one or more processors, cause the computing device to: access a communication application adapted to create and host virtual video chat rooms among users; using the communication application, configure a communication session for multiple participants, in which the participants include at least two subjects and two agents, and the configuring includes selecting A/V (Audio/Video) routing protocols when the communication session begins; transmit one or more links that are uniquely associated with the configured communication session to each participant; join the communication session using a link of the one or more links; responsive to joining the communication session, identify the computing device's user as a subject or an agent within the communication session; and present a UI (user interface) on the computing device's display, wherein the presented UI depends on the user's identification as a subject or agent.
In another example, the identification of the user as a subject or agent also affects the A/V routing protocols applied to the user during the communication session. As another example, responsive to the user being identified as an agent, the user's communication session will allow one-way receipt of audio signals from one of the subjects. As a further example, responsive to the user being identified as a subject, the user's communication session will allow one-way transmission of audio signals to one of the agents.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.
This Non-Provisional Utility Patent Application claims the benefit of and priority to U.S. Provisional Patent Application Ser. No. 62/987,547, filed Mar. 10, 2020, entitled “Methods and Systems to Communicate,” the entire contents of which are hereby incorporated herein by reference.
Number | Date | Country
---|---|---
62987547 | Mar 2020 | US