1. Field of the Disclosure
The technology of the disclosure relates generally to Web Real-Time Communications (WebRTC) interactive sessions.
2. Technical Background
Web Real-Time Communications (WebRTC) represents an ongoing effort to develop industry standards for integrating real-time communications functionality into web clients, such as web browsers, to enable direct interaction with other web clients. This real-time communications functionality is accessible by web developers via standard markup tags, such as those provided by version 5 of the Hypertext Markup Language (HTML5), and client-side scripting Application Programming Interfaces (APIs) such as JavaScript APIs. More information regarding WebRTC may be found in “WebRTC: APIs and RTCWEB Protocols of the HTML5 Real-Time Web” by Alan B. Johnston and Daniel C. Burnett (2012 Digital Codex LLC), which is incorporated in its entirety herein by reference.
WebRTC provides built-in capabilities for establishing real-time video, audio, and/or data streams in both point-to-point interactive sessions and multi-party interactive sessions. The WebRTC standards are currently under joint development by the World Wide Web Consortium (W3C) and the Internet Engineering Task Force (IETF). Information on the current state of WebRTC standards can be found at, e.g., http://www.w3c.org and http://www.ietf.org.
WebRTC does not provide built-in accessibility capabilities to allow users affected by a sensory impairment, such as a hearing or vision deficiency, to optimize their WebRTC interactive session experience. While the audio and video output of a user's computing device may be manually adjusted, such adjustments typically affect all audio and video generated by the computing device, and are not limited to a WebRTC interactive session. Moreover, a user may be unable to adequately optimize a WebRTC interactive session through manual adjustment. Consequently, users may face challenges in attempting to customize their WebRTC interactive session to compensate for a sensory impairment.
Embodiments disclosed in the detailed description provide compensating for user sensory impairment in Web Real-Time Communications (WebRTC) interactive sessions. Related methods, systems, and computer-readable media are also disclosed. In this regard, in one embodiment, a method for compensating for user sensory impairment in a WebRTC interactive session is provided. The method comprises receiving, by a computing device, an indication of user sensory impairment. The method further comprises receiving a content of a WebRTC interactive flow directed to the computing device. The method also comprises modifying, by the computing device, the content of the WebRTC interactive flow based on the indication of user sensory impairment. The method additionally comprises rendering the modified content of the WebRTC interactive flow. In this manner, a WebRTC interactive flow may be enhanced to compensate for a user sensory impairment, and thus the user's comprehension of the WebRTC interactive session may be improved.
In another embodiment, a system for compensating for a user sensory impairment in a WebRTC interactive session is provided. The system comprises at least one communications interface, and a computing device associated with the at least one communications interface and comprising a sensory impairment compensation agent. The sensory impairment compensation agent is configured to receive an indication of user sensory impairment. The sensory impairment compensation agent is further configured to receive a content of a WebRTC interactive flow directed to the computing device. The sensory impairment compensation agent is also configured to modify the content of the WebRTC interactive flow based on the indication of user sensory impairment. The sensory impairment compensation agent is additionally configured to render the modified content of the WebRTC interactive flow.
In another embodiment, a non-transitory computer-readable medium is provided. The non-transitory computer-readable medium has stored thereon computer-executable instructions to cause a processor to implement a method comprising receiving, by a computing device, an indication of user sensory impairment. The method implemented by the computer-executable instructions further comprises receiving a content of a WebRTC interactive flow directed to the computing device. The method implemented by the computer-executable instructions also comprises modifying, by the computing device, the content of the WebRTC interactive flow based on the indication of user sensory impairment. The method implemented by the computer-executable instructions additionally comprises rendering the modified content of the WebRTC interactive flow.
The accompanying drawing figures incorporated in and forming a part of this specification illustrate several aspects of the disclosure, and together with the description serve to explain the principles of the disclosure.
With reference now to the drawing figures, several exemplary embodiments of the present disclosure are described. The word “exemplary” is used herein to mean “serving as an example, instance, or illustration.” Any embodiment described herein as “exemplary” is not necessarily to be construed as preferred or advantageous over other embodiments.
Embodiments disclosed in the detailed description provide compensating for user sensory impairment in Web Real-Time Communications (WebRTC) interactive sessions. Related methods, systems, and computer-readable media are also disclosed. In this regard, in one embodiment, a method for compensating for user sensory impairment in a WebRTC interactive session is provided. The method comprises receiving, by a computing device, an indication of user sensory impairment. The method further comprises receiving a content of a WebRTC interactive flow directed to the computing device. The method also comprises modifying, by the computing device, the content of the WebRTC interactive flow based on the indication of user sensory impairment. The method additionally comprises rendering the modified content of the WebRTC interactive flow. In this manner, a WebRTC interactive flow may be enhanced to compensate for a user sensory impairment, and thus the user's comprehension of the WebRTC interactive session may be improved.
In this regard, FIG. 1 illustrates an exemplary system 10 providing a sensory impairment compensation agent 12 for compensating for user sensory impairment in WebRTC interactive sessions.
Before discussing details of the sensory impairment compensation agent 12, the establishment of a WebRTC interactive session in the system 10 of FIG. 1 is first described.
In the system 10 of FIG. 1, a user 16 is associated with a web client 18, which comprises a WebRTC functionality provider 22 that implements the protocols, codecs, and APIs necessary to enable real-time interactive sessions via WebRTC.
The system 10 of FIG. 1 further includes a web application server 26, which serves a WebRTC-enabled web application (not shown) to the web client 18, and a computing device 28 with which the web client 18 may engage in a WebRTC interactive session.
The web client 18 and the computing device 28 then establish secure web connections 30 and 32, respectively, with the web application server 26, and engage in a WebRTC session establishment exchange 34. In some embodiments, the WebRTC session establishment exchange 34 includes a WebRTC offer/answer exchange accomplished through an exchange of WebRTC session description objects (not shown). Once the WebRTC session establishment exchange 34 is complete, a WebRTC interactive flow 36 may be established via a secure peer connection 38 directly between the web client 18 and the computing device 28. Accordingly, in the example of FIG. 1, the web client 18, the web application server 26, and the computing device 28 form a WebRTC “triangle” topology.
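The mechanics of the WebRTC session establishment exchange 34 are defined by the WebRTC standards rather than by this disclosure. As a non-limiting illustration, the following sketch shows a standard offer/answer exchange using the browser RTCPeerConnection API; the signaling channel (a WebSocket to the web application server 26), its URL, and the STUN server are hypothetical assumptions for illustration only.

```javascript
// Non-limiting sketch of a standard WebRTC offer/answer exchange, as might
// occur in the WebRTC session establishment exchange 34. Each endpoint
// would run its own side of this exchange.
const signaling = new WebSocket('wss://webapp.example.org/signal'); // hypothetical URL
const pc = new RTCPeerConnection({
  iceServers: [{ urls: 'stun:stun.example.org' }] // hypothetical STUN server
});

// Relay local ICE candidates to the remote peer via the web application server.
pc.onicecandidate = (event) => {
  if (event.candidate) {
    signaling.send(JSON.stringify({ candidate: event.candidate }));
  }
};

// Offerer side (e.g., web client 18): attach local media, then send an offer.
async function startCall() {
  const localStream = await navigator.mediaDevices.getUserMedia({ audio: true, video: true });
  localStream.getTracks().forEach((track) => pc.addTrack(track, localStream));
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  signaling.send(JSON.stringify({ sdp: pc.localDescription }));
}

// Answerer side (e.g., computing device 28): apply the offer, reply with an answer.
async function handleOffer(remoteSdp) {
  await pc.setRemoteDescription(remoteSdp);
  const answer = await pc.createAnswer();
  await pc.setLocalDescription(answer);
  signaling.send(JSON.stringify({ sdp: pc.localDescription }));
}
```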
It is to be understood that some embodiments may utilize topologies other than the WebRTC “triangle” topology illustrated in FIG. 1. As a non-limiting example, a WebRTC “trapezoid” topology, in which two web application servers communicate with one another on behalf of their respective web clients, may also be employed.
As noted above, the WebRTC functionality provider 22 implements protocols, codecs, and APIs necessary to enable real-time interactive sessions via WebRTC. However, the WebRTC functionality provider 22 may not include accessibility options for optimizing a WebRTC interactive session for the user 16 affected by a sensory impairment. In this regard, the sensory impairment compensation agent 12 of FIG. 1 is provided to modify a content 40 of the WebRTC interactive flow 36 to compensate for a sensory impairment affecting the user 16.
As indicated by bidirectional arrow 42, the sensory impairment compensation agent 12 receives an indication of user sensory impairment 44. The indication of user sensory impairment 44 provides data regarding a sensory impairment affecting the user 16. The indication of user sensory impairment 44, in some embodiments, may specify a type of the sensory impairment (e.g., a hearing impairment and/or a visual impairment), a degree of the sensory impairment, and/or a corrective measure to compensate for the sensory impairment.
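The disclosure does not mandate any particular format for the indication of user sensory impairment 44. A minimal sketch of one possible in-memory representation follows; all field names and values are hypothetical.

```javascript
// Hypothetical in-memory shape for an indication of user sensory
// impairment 44; field names and values are illustrative only.
const sensoryImpairmentIndication = {
  type: ['hearing'],            // impairment type(s): 'hearing' and/or 'vision'
  degree: 'moderate',           // severity, e.g. 'mild', 'moderate', 'severe'
  correctiveMeasure: {
    boostFrequencyHz: 4000,     // center of a frequency band the user hears poorly
    audioGainDb: 12             // amplitude boost to apply to that band, in dB
  }
};
```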
Based on the indication of user sensory impairment 44, the sensory impairment compensation agent 12 modifies the content 40 of the WebRTC interactive flow 36 to improve the user 16's comprehension of the WebRTC interactive flow 36. For example, in embodiments where the indication of user sensory impairment 44 indicates a hearing impairment, the sensory impairment compensation agent 12 may modify a real-time audio stream of the content 40 of the WebRTC interactive flow 36. Modifications to the real-time audio stream may include modifying an amplitude of a frequency in the real-time audio stream, and/or substituting one frequency for another in the real-time audio stream (i.e., “audio colorization”). Likewise, in embodiments where the indication of user sensory impairment 44 indicates a vision impairment, the sensory impairment compensation agent 12 may modify a real-time video stream of the content 40 of the WebRTC interactive flow 36. Modifications to the real-time video stream may include modifying an intensity of a color in the real-time video stream, and/or substituting one color for another in the real-time video stream.
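As a non-limiting illustration of such an audio modification, the sketch below boosts the amplitude of a frequency band in the remote WebRTC audio stream using the standard Web Audio API, reusing the hypothetical indication shape sketched above. Full frequency substitution (“audio colorization”) would require more elaborate signal processing, such as an AudioWorklet, and is not shown.

```javascript
// Sketch: boost the amplitude of a frequency band in the remote WebRTC
// audio stream with the Web Audio API. Returns a new MediaStream
// carrying the filtered audio for rendering.
function compensateHearing(remoteStream, indication) {
  const ctx = new AudioContext();
  const source = ctx.createMediaStreamSource(remoteStream);

  // A peaking filter raises amplitude around a chosen center frequency.
  const peak = ctx.createBiquadFilter();
  peak.type = 'peaking';
  peak.frequency.value = indication.correctiveMeasure.boostFrequencyHz;
  peak.gain.value = indication.correctiveMeasure.audioGainDb;

  const dest = ctx.createMediaStreamDestination();
  source.connect(peak).connect(dest);
  return dest.stream;
}
```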
After modifying the content 40 of the WebRTC interactive flow 36, the sensory impairment compensation agent 12 renders a modified content 46 of the WebRTC interactive flow 36 for consumption by the user 16. In some embodiments, rendering the modified content 46 may comprise generating audio and/or video output to the user 16 based on the modified content 46.
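One plausible realization of the video modification and rendering path, sketched under the assumption that per-frame processing through a canvas is acceptable, is to redraw incoming frames and remap colors before output. The color threshold and mapping below are illustrative only.

```javascript
// Sketch: substitute one color for another in the remote video (here,
// shifting strongly red pixels toward orange) by redrawing frames through
// a canvas, then expose the result as a new MediaStream.
function compensateVision(remoteVideoTrack) {
  const video = document.createElement('video');
  video.muted = true; // allow autoplay without a user gesture
  video.srcObject = new MediaStream([remoteVideoTrack]);
  video.play();

  const canvas = document.createElement('canvas');
  const ctx2d = canvas.getContext('2d');

  function drawFrame() {
    if (video.videoWidth > 0) {
      canvas.width = video.videoWidth;
      canvas.height = video.videoHeight;
      ctx2d.drawImage(video, 0, 0);
      const frame = ctx2d.getImageData(0, 0, canvas.width, canvas.height);
      const px = frame.data; // RGBA byte quadruplets
      for (let i = 0; i < px.length; i += 4) {
        if (px[i] > 150 && px[i + 1] < 80 && px[i + 2] < 80) {
          px[i + 1] = Math.min(255, px[i + 1] + 90); // add green: red -> orange
        }
      }
      ctx2d.putImageData(frame, 0, 0);
    }
    requestAnimationFrame(drawFrame);
  }
  requestAnimationFrame(drawFrame);

  return canvas.captureStream(30); // 30 frames per second
}
```

Under this sketch, rendering the modified content 46 amounts to attaching the resulting MediaStream to a media element, e.g., `videoElement.srcObject = compensateVision(track)`.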
Some embodiments may provide that the sensory impairment compensation agent 12 may receive the indication of user sensory impairment 44 by accessing a user-provided data file 48 supplied by the user 16. The user-provided data file 48 may indicate a type and/or a degree of user sensory impairment affecting the user 16, and may be generated by an assessment administered to the user 16 by a medical professional. The user-provided data file 48 may also include a corrective measure to compensate for user sensory impairment. In some embodiments, the sensory impairment compensation agent 12 itself may administer a sensory impairment assessment 50 to the user 16. The sensory impairment compensation agent 12 may then receive the indication of user sensory impairment 44 based on a result of the sensory impairment assessment 52. Some embodiments may provide that the sensory impairment compensation agent 12 receives the indication of user sensory impairment 44 by accessing a user profile 54 associated with the user 16. The user profile 54 may store previously determined information about a sensory impairment of the user 16, enabling the sensory impairment compensation agent 12 to subsequently access the information from the user profile 54 without requiring additional input from or testing of the user 16.
To generally describe exemplary operations of the sensory impairment compensation agent 12 of FIG. 1 for compensating for user sensory impairment in a WebRTC interactive session, FIG. 2 is provided.
With continuing reference to FIG. 1, operations in FIG. 2 begin with the sensory impairment compensation agent 12 receiving an indication of user sensory impairment 44. The sensory impairment compensation agent 12 also receives a content 40 of a WebRTC interactive flow 36 directed to the computing device. The sensory impairment compensation agent 12 then modifies the content 40 of the WebRTC interactive flow 36 based on the indication of user sensory impairment 44, and renders the modified content 46 of the WebRTC interactive flow 36.
Referring now to FIG. 3A, operations begin with the sensory impairment compensation agent 12 determining how the indication of user sensory impairment 44 is to be obtained (block 64). If the indication of user sensory impairment 44 is provided by a user-provided data file 48, the sensory impairment compensation agent 12 accesses the user-provided data file 48 supplied by the user 16 (block 66), and determines the indication of user sensory impairment 44 based on the user-provided data file 48 (block 68).
If the sensory impairment compensation agent 12 determines at block 64 that the indication of user sensory impairment 44 is provided by a result of a sensory impairment assessment 52, the sensory impairment compensation agent 12 administers a sensory impairment assessment 50 to the user 16 to assess the type and degree of the user sensory impairment (block 70). For instance, the sensory impairment compensation agent 12 may provide an interactive hearing and/or vision test to evaluate whether the user 16 is affected by a user sensory impairment. The sensory impairment compensation agent 12 then determines the indication of user sensory impairment 44 based on a result of the sensory impairment assessment, such as the result of the sensory impairment assessment 52 (block 72).
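A rudimentary sketch of how such an interactive hearing test might be realized with the Web Audio API follows; the test procedure, the result format, and all names are hypothetical, and a clinically meaningful assessment would be far more rigorous.

```javascript
// Rudimentary sketch of an interactive hearing assessment 50: play a short
// pure tone and ask the user 16 whether it was audible.
function playTestTone(frequencyHz, gain, durationMs) {
  const ctx = new AudioContext();
  const osc = ctx.createOscillator();
  const vol = ctx.createGain();
  osc.type = 'sine';
  osc.frequency.value = frequencyHz;
  vol.gain.value = gain; // 0.0 (silent) through 1.0 (full scale)
  osc.connect(vol).connect(ctx.destination);
  osc.start();
  osc.stop(ctx.currentTime + durationMs / 1000);
}

// One possible (hypothetical) shape for the assessment result 52: the
// quietest gain at which the user 16 reported hearing each test frequency.
const assessmentResult = {
  thresholds: [
    { frequencyHz: 1000, minAudibleGain: 0.05 },
    { frequencyHz: 4000, minAudibleGain: 0.40 } // elevated threshold at 4 kHz
  ]
};
```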
In embodiments where the indication of user sensory impairment 44 is determined based on a user-provided data file 48 or a result of a sensory impairment assessment 52, the sensory impairment compensation agent 12 may optionally store the indication of user sensory impairment 44 in a user profile, such as the user profile 54, for later access (block 74). Processing then continues at block 76 of FIG. 3B.
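As a non-limiting illustration of storing the indication of user sensory impairment 44 in a user profile such as the user profile 54, the sketch below persists the indication in browser localStorage under a hypothetical key; a server-side profile store would serve equally well.

```javascript
// Sketch: persist the indication of user sensory impairment 44 in a user
// profile 54 so that later sessions can skip re-assessment.
const PROFILE_KEY = 'sensoryImpairmentProfile'; // hypothetical storage key

function storeIndication(indication) {
  localStorage.setItem(PROFILE_KEY, JSON.stringify(indication));
}

function loadIndication() {
  const stored = localStorage.getItem(PROFILE_KEY);
  return stored ? JSON.parse(stored) : null; // null: no stored profile
}
```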
Returning to the decision point at block 64 of FIG. 3A, if the indication of user sensory impairment 44 is provided by a user profile 54, the sensory impairment compensation agent 12 accesses the user profile 54 associated with the user 16, and determines the indication of user sensory impairment 44 based on the user profile 54. Processing then continues at block 76 of FIG. 3B.
Referring now to FIG. 3B, the sensory impairment compensation agent 12 determines whether the indication of user sensory impairment 44 includes an indication of user hearing impairment (block 76). If not, processing proceeds to block 84. If the indication of user sensory impairment 44 does include an indication of user hearing impairment, the sensory impairment compensation agent 12 modifies a real-time audio stream of the content 40 of the WebRTC interactive flow 36 based on the indication of user sensory impairment 44. Processing then proceeds to block 84.
The sensory impairment compensation agent 12 next determines whether the indication of user sensory impairment 44 includes an indication of user vision impairment (block 84). If not, processing proceeds to block 88. If the indication of user sensory impairment 44 does include an indication of user vision impairment, the sensory impairment compensation agent 12 modifies a real-time video stream of the content 40 of the WebRTC interactive flow 36 based on the indication of user sensory impairment 44 (block 90). Processing then proceeds to block 88. At block 88, the sensory impairment compensation agent 12 renders a modified content of the WebRTC interactive flow, such as the modified content 46 of the WebRTC interactive flow 36 of FIG. 1.
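Tying these operations together, the following sketch mirrors the overall flow of FIG. 3B using the hypothetical helper functions from the earlier sketches: modify audio for a hearing impairment, modify video for a vision impairment, and render the result.

```javascript
// Sketch mirroring the overall flow of FIG. 3B: apply audio and/or video
// modifications according to the indication 44, then render the modified
// content 46 to the user 16.
function renderCompensatedFlow(remoteStream, indication, videoElement) {
  let audioStream = remoteStream;
  if (indication.type.includes('hearing')) {
    audioStream = compensateHearing(remoteStream, indication); // modify audio
  }

  let videoStream = remoteStream;
  if (indication.type.includes('vision')) {
    videoStream = compensateVision(remoteStream.getVideoTracks()[0]); // modify video
  }

  // Rendering: combine the (possibly modified) tracks and attach them to a
  // media element for output.
  videoElement.srcObject = new MediaStream([
    ...audioStream.getAudioTracks(),
    ...videoStream.getVideoTracks()
  ]);
}
```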
FIG. 4 provides a schematic diagram representation of a processing system 92 in the exemplary form of an exemplary computer system 94 adapted to execute instructions to perform the functions described herein, such as those of the sensory impairment compensation agent 12 of FIG. 1. The exemplary computer system 94 includes a processing device or processor 96, a main memory 98 (as non-limiting examples, read-only memory (ROM), flash memory, dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM), etc.), and a static memory 100 (as non-limiting examples, flash memory, static random access memory (SRAM), etc.), which may communicate with each other via a bus 102. Alternatively, the processing device 96 may be connected to the main memory 98 and/or the static memory 100 directly or via some other connectivity means.
The processing device 96 represents one or more processing devices such as a microprocessor, central processing unit (CPU), or the like. More particularly, the processing device 96 may be a complex instruction set computing (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a processor implementing other instruction sets, or processors implementing a combination of instruction sets. The processing device 96 is configured to execute processing logic in instructions 104 and/or cached instructions 106 for performing the operations and steps discussed herein.
The computer system 94 may further include a communications interface in the form of a network interface device 108. It also may or may not include an input 110 to receive input and selections to be communicated to the computer system 94 when executing the instructions 104, 106. It also may or may not include an output 112, including but not limited to display(s) 114. The display(s) 114 may be a video display unit (as non-limiting examples, a liquid crystal display (LCD) or a cathode ray tube (CRT)). The input 110 may include an alphanumeric input device (as a non-limiting example, a keyboard), a cursor control device (as a non-limiting example, a mouse), and/or a touch screen device (as a non-limiting example, a tablet input device or screen).
The computer system 94 may or may not include a data storage device 115 that includes drive(s) 116 to store the functions described herein in a computer-readable medium 118, on which is stored one or more sets of instructions 120 (e.g., software) embodying any one or more of the methodologies or functions described herein. The functions can include the methods and/or other functions of the processing system 92, a participant user device, and/or a licensing server, as non-limiting examples. The one or more sets of instructions 120 may also reside, completely or at least partially, within the main memory 98 and/or within the processing device 96 during execution thereof by the computer system 94. The main memory 98 and the processing device 96 also constitute machine-accessible storage media. The instructions 104, 106, and/or 120 may further be transmitted or received over a network 122 via the network interface device 108. The network 122 may be an intra-network or an inter-network.
While the computer-readable medium 118 is shown in an exemplary embodiment to be a single medium, the term “machine-accessible storage medium” should be taken to include a single medium or multiple media (as non-limiting examples, a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions 120. The term “machine-accessible storage medium” shall also be taken to include any medium that is capable of storing, encoding, or carrying a set of instructions 104, 106, and/or 120 for execution by the machine, and that cause the machine to perform any one or more of the methodologies disclosed herein. The term “machine-accessible storage medium” shall accordingly be taken to include, but not be limited to, solid-state memories, optical and magnetic media, and carrier wave signals.
The embodiments disclosed herein may be embodied in hardware and in instructions that are stored in hardware, and may reside, as non-limiting examples, in Random Access Memory (RAM), flash memory, Read Only Memory (ROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), registers, a hard disk, a removable disk, a CD-ROM, or any other form of computer readable medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an Application Specific Integrated Circuit (ASIC). The ASIC may reside in a remote station. In the alternative, the processor and the storage medium may reside as discrete components in a remote station, base station, or server.
It is also noted that the operational steps described in any of the exemplary embodiments herein are described to provide examples and discussion. The operations described may be performed in numerous different sequences other than the illustrated sequences. Furthermore, operations described in a single operational step may actually be performed in a number of different steps. Additionally, one or more operational steps discussed in the exemplary embodiments may be combined. It is to be understood that the operational steps illustrated in the flow chart diagrams may be subject to numerous different modifications as will be readily apparent to one of skill in the art. Those of skill in the art would also understand that information and signals may be represented using any of a variety of different technologies and techniques. As non-limiting examples, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof.
The previous description of the disclosure is provided to enable any person skilled in the art to make or use the disclosure. Various modifications to the disclosure will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other variations without departing from the spirit or scope of the disclosure. Thus, the disclosure is not intended to be limited to the examples and designs described herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.