The disclosed subject matter relates to the field of online meetings. More particularly, but not exclusively, the subject matter relates to magnification of a video stream during an online meeting.
The rapid rise in internet usage across the globe has reshaped the way people connect with each other. Moreover, with a good internet connection, video conferencing has made communication over the internet feel as real as communicating in person. Video conferencing has typically been used in business meetings, telemedicine, recruitment and so forth. However, it shall be noted that, of late, video conferencing has found applications beyond these conventional ones. As an example, video conferencing is now being used to conduct webinars for online teaching, live streaming of weddings, live streaming of rallies and so forth.
In such applications, typically a host or a streaming device shares one or more video streams with the participants of the online event. The participants are able to view the streamed video streams using devices such as a mobile phone, a computer and so forth. Typically, a streamed video may cover a large area, in which case the participants may not be able to see fine details covered in the video. As an example, a video stream may cover a party hall, and a participant may want to know the brand of a loudspeaker but is unable to clearly see the brand name. In such cases, a magnification feature to magnify the particular region of the video to clearly see the brand name of the loudspeaker may be desirable.
It shall be noted that conventional video streaming tools do not offer the ability to magnify a video stream as required by the user.
In view of the foregoing, it is apparent that there is a need for an improved video conferencing system enabling magnification of the video stream.
In one embodiment, a system enabling magnification of a video stream during an online event is disclosed. The system comprises a first data processing system and a second data processing system. The first data processing system comprises a first processor module and a first digital client, wherein the first processor module causes the first digital client to share at least a first video stream with the second data processing system. The second data processing system comprises a second processor module and a second digital client, wherein the second digital client comprises a second digital client display interface, and wherein the second digital client displays, in the second digital client display interface, visual content of the first video stream in a display window. The second processor module is configured to receive an instruction from a user associated with the second data processing system, wherein the instruction comprises information related to a region of the first video stream to be magnified. Further, the second processor module is configured to magnify the region of the first video stream based on the instruction provided by the user and cause the second digital client display interface to display the magnified region of the first video stream in the display window.
Embodiments are illustrated by way of example and not limitation in the figures of the accompanying drawings, in which like references indicate similar elements and in which:
The following detailed description includes references to the accompanying drawings, which form a part of the detailed description. The drawings show illustrations in accordance with example embodiments. These example embodiments, which may herein also be referred to as “examples”, are described in enough detail to enable those skilled in the art to practise the present subject matter. However, it may be apparent to one with ordinary skill in the art that the present invention may be practised without these specific details. In other instances, well-known methods, procedures and components have not been described in detail so as not to unnecessarily obscure aspects of the embodiments. The embodiments can be combined, other embodiments can be utilized, or structural, logical, and design changes can be made without departing from the scope of the claims. The following detailed description is, therefore, not to be taken in a limiting sense, and the scope is defined by the appended claims and their equivalents.
In this document, the terms “a” or “an” are used, as is common in patent documents, to include one or more than one. In this document, the term “or” is used to refer to a nonexclusive “or,” such that “A or B” includes “A but not B,” “B but not A,” and “A and B,” unless otherwise indicated.
In one embodiment, the first video stream 110 may comprise an audio component and a video component. The video component and the audio component of the first video stream 110 shared by the first data processing system 102 may be obtained from a first camera and a first microphone respectively of the first data processing system 102.
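The disclosure does not mandate any particular capture mechanism; purely as an illustration, a browser-based first digital client could obtain the two components with the standard MediaDevices API, as in the minimal sketch below (the function name is a hypothetical helper).

```typescript
// Minimal sketch (assumption: a browser-based first digital client).
// Captures the video component from the first camera and the audio
// component from the first microphone as a single MediaStream.
async function captureFirstVideoStream(): Promise<MediaStream> {
  return navigator.mediaDevices.getUserMedia({ video: true, audio: true });
}
```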
In one embodiment, the first data processing system 102 and the second data processing system 104 may include, but are not limited to, a desktop computer, a laptop, a smartphone or the like.
The first processor module 202 may be implemented in the form of one or more processors and may be implemented as appropriate in hardware, computer-executable instructions, firmware, or combinations thereof. Computer-executable instruction or firmware implementations of the first processor module 202 may include computer-executable or machine-executable instructions written in any suitable programming language to perform the various functions described.
The memory module 204 may include a permanent memory, such as a hard disk drive, and may be configured to store data and executable program instructions that are implemented by the first processor module 202. The memory module 204 may be implemented in the form of a primary and a secondary memory. The memory module 204 may store additional data and program instructions that are loadable and executable on the first processor module 202, as well as data generated during the execution of these programs. Further, the memory module 204 may be volatile memory, such as random-access memory and/or a disk drive, or non-volatile memory. The memory module 204 may comprise removable memory such as a Compact Flash card, Memory Stick, Smart Media, Multimedia Card, Secure Digital memory, or any other memory storage that exists currently or may exist in the future.
In an embodiment, the memory module 204 may further comprise a first digital client 214, an Application Programming Interface (API) 216, a codec 218, an encryptor 220 and a decryptor 222. The first digital client 214 may be a web browser or a software application enabling multiple screens to be shared simultaneously, wherein the first digital client 214 may further comprise a first digital client display interface. The first digital client display interface may enable the interaction of the user with the first data processing system 102. The codec 218 may include computer-executable or machine-executable instructions written in any suitable programming language to compress outgoing data and decompress incoming data. The encryptor 220 may encrypt the data being sent and the decryptor 222 may decrypt the incoming data.
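The embodiment does not specify a cipher or key-exchange scheme for the encryptor 220 and the decryptor 222; purely as an illustration, a browser-based client could realise them with the Web Crypto API, as in the hedged sketch below. AES-GCM, the 12-byte IV and the helper name are assumptions, and key distribution between the data processing systems is out of scope here.

```typescript
// Illustrative sketch of an encryptor/decryptor pair using the Web Crypto
// API. AES-GCM and the per-message random IV are assumptions, not features
// recited by the embodiment.
async function makeCipher() {
  const key = await crypto.subtle.generateKey(
    { name: "AES-GCM", length: 256 }, true, ["encrypt", "decrypt"]);
  return {
    // Encrypts outgoing data with a fresh IV per message.
    async encrypt(plain: BufferSource) {
      const iv = crypto.getRandomValues(new Uint8Array(12));
      const data = await crypto.subtle.encrypt({ name: "AES-GCM", iv }, key, plain);
      return { iv, data };
    },
    // Decrypts incoming data using the IV that accompanied it.
    async decrypt(iv: Uint8Array, data: ArrayBuffer) {
      return crypto.subtle.decrypt({ name: "AES-GCM", iv }, key, data);
    },
  };
}
```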
The display module 206 may display an image, a video, or data to a user. For example, the display module 206 may include a panel, and the panel may be an LCD, LED or an AM-OLED.
The input modules 208 may provide an interface for input devices such as a keypad, a touch screen, a mouse and a stylus, among other input devices. In an embodiment, the input modules 208 include a camera and a microphone.
The output modules 210 may provide an interface for output devices such as display screen, speakers, printer and haptic feedback devices, among other output devices.
The communication module 212 may be used by the first data processing system 102 to communicate with the remote server 106. The communication module 212, as an example, may be a GPRS module, or other modules that enable wireless communication.
In one embodiment, the input device 314 may be a mouse, a touch screen, a keyboard or the like.
The processing unit 402 may be implemented in the form of one or more processors and may be implemented as appropriate in hardware, computer-executable instructions, firmware, or combinations thereof. Computer-executable instruction or firmware implementations of the processing unit 402 may include computer-executable or machine-executable instructions written in any suitable programming language to perform the various functions described.
The memory unit 404 may include a permanent memory, such as a hard disk drive, and may be configured to store data and executable program instructions that are implemented by the processing unit 402.
The communication unit 406 may be used by the remote server 106 to communicate with the first data processing system 102 and the second data processing system 104. The communication unit 406, as an example, may be a GPRS module, or other modules that enable wireless communication.
The routing unit 408 may enable identification of data processing systems to which the data must be transmitted.
The encrypting/decrypting unit 410 may decrypt the incoming data from each of the data processing systems and encrypt the outgoing data from the remote server 106.
The authenticating unit 412 may authenticate each of the data processing systems before establishing a connection.
Upon establishing the connection, the first data processing system 102 may publish a first video stream 110. The first video stream 110 may comprise a video component obtained from a web camera and an audio component obtained from a microphone of the first data processing system 102.
In one embodiment, the first digital client 214 of the first data processing system 102 may create a first publishing data channel 504 for the first video stream 110, wherein the first publishing data channel 504 may carry the first video stream 110 published by the first digital client 214.
In one embodiment, the first publishing data channel 504 may comprise a video track and an audio track, wherein each of the video track and the audio track of each publishing data channel forms a UDP socket 502c with the remote server 106 to publish the first video stream 110 from the first data processing system 102.
In one embodiment, the number of publishing data channels created by the first data processing system 102 may be based on the number of video streams shared by the first data processing system 102. As an example, if the first data processing system 102 shares three video streams, the first digital client may create three publishing data channels, wherein each publishing data channel corresponds to one video stream.
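The specification leaves the transport implementation open; one plausible realisation of a publishing data channel in a browser-based digital client is a WebRTC peer connection, whose media transport runs over UDP, consistent with the UDP socket 502c described above. The sketch below is an illustration under that assumption, not the claimed implementation.

```typescript
// Sketch: one publishing data channel per shared video stream. Adding the
// video track and the audio track of a stream to the peer connection mirrors
// the video track and audio track of the publishing data channel.
function createPublishingChannels(streams: MediaStream[]): RTCPeerConnection[] {
  return streams.map((stream) => {
    const pc = new RTCPeerConnection();
    for (const track of stream.getTracks()) {
      pc.addTrack(track, stream); // one sender per audio/video track
    }
    return pc;
  });
}
```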
In one embodiment, the second digital client 316 of the second data processing system 104 may create a first receiving data channel 506 for the first video stream 110 published by the first data processing system 102, wherein the first receiving data channel 506 may receive the first video stream 110 published by the first digital client 214 of the first data processing system 102.
In one embodiment, the number of receiving data channels created by the second data processing system 104 may be based on the number of video streams shared by the first data processing system 102. As an example, if the first data processing system 102 shares three video streams, the second digital client may create three receiving data channels, wherein each receiving data channel corresponds to one video stream.
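On the receiving side, the same hypothetical WebRTC realisation would surface each published stream through the peer connection's ontrack callback; attachToDisplayWindow below is a hypothetical helper that renders a received stream in its own display window.

```typescript
// Sketch: a receiving data channel that hands each received video stream to
// its own display window. attachToDisplayWindow is a hypothetical helper.
function createReceivingChannel(
  attachToDisplayWindow: (stream: MediaStream) => void,
): RTCPeerConnection {
  const pc = new RTCPeerConnection();
  pc.ontrack = (event) => {
    // event.streams[0] groups the audio and video tracks of one stream.
    attachToDisplayWindow(event.streams[0]);
  };
  return pc;
}
```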
At step 604, the remote server 106 may receive the request from the first data processing system 102 and may authenticate the request using the authenticating unit 412.
At step 606, after successful authentication, the remote server 106 may establish a connection with the first data processing system 102 via the signalling channels (508a and 508b).
At step 608, the second data processing system 104 may request the remote server 106 to establish a connection with the first data processing system 102. As an example, the second data processing system 104 may provide an online meeting identifier for connecting with the first data processing system 102.
At step 610, the remote server 106 may authenticate the request received from the second data processing system 104 using the authenticating unit 412.
At step 612, after successful authentication, the remote server 106 may establish a connection between the first data processing system 102 and the second data processing system 104 using the signalling channels (508a and 508b).
In one embodiment, the first data processing system 102 may be configured to publish more than one video stream to the second data processing system 104.
At step 704, the second data processing system 104 may receive the first video stream 110 published by the first data processing system 102. The second data processing system 104 may display the received first video stream 110 on the second digital client display interface of the second data processing system 104.
In one embodiment, the second data processing system 104 may receive multiple video streams published by the first data processing system 102. Further, the second data processing system 104 may display the received multiple video streams in individual display windows on the second digital client display interface.
At step 706, the second data processing system 104 may receive an instruction from a user associated with the second data processing system 104. The instruction may pertain to magnifying a region of the first video stream 110 that is displayed on the second digital client display interface.
In one embodiment, the user associated with the second data processing system 104 may provide the instruction to the second data processing system 104 using an input device 314.
At step 708, the second data processing system 104 may magnify the region of the first video stream 110 as instructed by the user associated with the second data processing system 104.
At step 710, the second data processing system 104 may display the magnified region of the first video stream 110 on the second digital client display interface.
In one embodiment, the magnified region of the first video stream 110 may occupy the display window that displays the first video stream 110.
In one embodiment, the input device 314 may be a mouse that is connected to the second data processing system 104. The input device 314 may create a pointer image on the first video stream 110 that is displayed on the second digital client display interface. The position of the pointer image may be changed by moving the input device 314. As an example, by moving the mouse, the user may change the position of the pointer image displayed on the second digital client display interface.
In another embodiment, the input device 314 may be a touchscreen that is connected to the second data processing system 104. The user may select a region of the first video stream 110 to be magnified by touching that region of the first video stream 110 displayed on the second digital client display interface.
In one embodiment, when multiple video streams are displayed in multiple display windows on the second digital client display interface, the second processor module 302 may determine the video stream, and the region of that video stream, that is to be magnified. As an example, the user may move the mouse such that the pointer image is positioned within the display window that displays the video stream to be magnified.
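One way such a determination could be made, assuming each display window is an HTML element in the second digital client display interface, is a simple hit test against the pointer coordinates; the DisplayWindow shape below is a hypothetical data structure.

```typescript
// Sketch: find the display window (and hence the video stream) under the
// pointer. The DisplayWindow interface is an assumed data structure.
interface DisplayWindow {
  streamId: string;
  element: HTMLElement;
}

function windowUnderPointer(
  windows: DisplayWindow[], x: number, y: number,
): DisplayWindow | undefined {
  return windows.find((w) => {
    const r = w.element.getBoundingClientRect();
    return x >= r.left && x <= r.right && y >= r.top && y <= r.bottom;
  });
}
```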
At step 804, the second processor module 302 may create an active site on the first video stream 110 displayed on the second digital client display interface based on the first input received from the user. The active site may relate to the region of the first video stream 110 to be magnified.
In one embodiment, the active site may be formed around the region of the pointer image of the input device 314 that is displayed on the first video stream 110. The user can change the active site (the region of the first video stream 110 to be magnified) by moving the mouse.
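As a sketch, the active site could be modelled as a rectangle centred on the pointer position; the default dimensions below are arbitrary assumptions, not values given in the description.

```typescript
// Sketch: the active site as a rectangle centred on the pointer image.
// siteWidth and siteHeight are assumed defaults.
function activeSite(
  pointerX: number, pointerY: number, siteWidth = 200, siteHeight = 150,
) {
  return {
    x: pointerX - siteWidth / 2,
    y: pointerY - siteHeight / 2,
    width: siteWidth,
    height: siteHeight,
  };
}
```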
In another embodiment, the active site may be formed around the region where the user has provided a touch input on a touchscreen-based input device 314.
At step 806, the second data processing system 104 may receive a second input from the user via the input device 314. The second input may relate to the amount of magnification to be performed in the selected region of the first video stream 110.
In one embodiment, the user may provide the second input using a wheel provided on the mouse. By scrolling the wheel of the mouse, the user may determine the amount of magnification to be performed on the selected region of the first video stream 110 that is to be magnified.
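A browser-based second digital client might translate the wheel input into a magnification amount along these lines; the step size and the clamping range are assumptions.

```typescript
// Sketch: deriving the amount of magnification from the mouse wheel.
// Scrolling up (negative deltaY) increases the factor; the 0.01 step and
// the 1x-8x range are assumed values.
let magnification = 1;
function onWheel(event: WheelEvent): void {
  event.preventDefault();
  magnification = Math.min(8, Math.max(1, magnification - event.deltaY * 0.01));
}
```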
In another embodiment, the user may make a gesture on the touchscreen to magnify the region of the first video stream 110. The gesture may be placing two fingers together on the touchscreen and moving them away from each other, as if stretching the region apart.
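On a touchscreen, the magnification amount could be read from the ratio of the current finger distance to the initial one, as in this hedged sketch of a pinch-out handler.

```typescript
// Sketch: a pinch-out gesture handler. The magnification factor grows with
// the ratio of the current two-finger distance to the initial distance.
function fingerDistance(touches: TouchList): number {
  const dx = touches[0].clientX - touches[1].clientX;
  const dy = touches[0].clientY - touches[1].clientY;
  return Math.hypot(dx, dy);
}

let startDistance = 0;
function onTouchStart(e: TouchEvent): void {
  if (e.touches.length === 2) startDistance = fingerDistance(e.touches);
}
function onTouchMove(e: TouchEvent): number {
  if (e.touches.length === 2 && startDistance > 0) {
    return fingerDistance(e.touches) / startDistance; // magnification factor
  }
  return 1; // no pinch in progress
}
```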
At step 808, the second data processing system 104, upon receiving the second input from the user via the input device 314, may determine the amount of magnification to be performed based on the received second input.
At step 810, the second data processing system 104 may magnify the region of the first video stream 110 that is displayed on the second digital client display interface. The second data processing system 104 may magnify the region of the first video stream 110 based on the first input and the second input received from the user via the input device 314. The first input may relate to the region to be magnified and the second input may relate to the amount of magnification to be performed.
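Assuming the first video stream 110 is rendered to a video element and the display window is backed by a canvas, the magnification could be performed by redrawing a shrunken source rectangle (the active site from the first input, scaled by the magnification amount from the second input) to fill the window. This is one possible sketch, not the claimed method.

```typescript
// Sketch: magnify the region around (centerX, centerY) by the given factor.
// The source rectangle shrinks as the factor grows, and drawImage scales it
// back up to the full canvas, so the region appears magnified.
function drawMagnifiedRegion(
  video: HTMLVideoElement, canvas: HTMLCanvasElement,
  centerX: number, centerY: number, factor: number,
): void {
  const ctx = canvas.getContext("2d");
  if (!ctx) return;
  const srcW = video.videoWidth / factor;
  const srcH = video.videoHeight / factor;
  // Clamp the source rectangle so it stays inside the video frame.
  const srcX = Math.min(Math.max(centerX - srcW / 2, 0), video.videoWidth - srcW);
  const srcY = Math.min(Math.max(centerY - srcH / 2, 0), video.videoHeight - srcH);
  ctx.drawImage(video, srcX, srcY, srcW, srcH, 0, 0, canvas.width, canvas.height);
}
```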
In one embodiment, when multiple video streams are displayed on the second digital client display interface, the second processor module 302 may determine the region of a specific video stream to be magnified and the amount of magnification to be performed based on the first input and the second input received from the user via the input device 314.
Referring to
In one embodiment, the second processor module 302 may be configured to mute the audio of the video streams upon receiving an instruction from the user associated with the second data processing system 104.
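In a browser-based client, such muting could be achieved by disabling the audio tracks of the received streams; a minimal sketch under that assumption:

```typescript
// Sketch: mute or unmute the audio component of a received video stream by
// toggling its audio tracks; a disabled track produces silence.
function setStreamMuted(stream: MediaStream, muted: boolean): void {
  for (const track of stream.getAudioTracks()) {
    track.enabled = !muted;
  }
}
```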
In one embodiment, the server 106 may be configured to create an identity (refer
In one embodiment, the identities created by the server 106 are unique with respect to one another.
The processes described above are presented as a sequence of steps solely for the sake of illustration. Accordingly, it is contemplated that some steps may be added, some steps may be omitted, the order of the steps may be re-arranged, or some steps may be performed simultaneously.
The example embodiments described herein may be implemented in an operating environment comprising software installed on a computer, in hardware, or in a combination of software and hardware.
Although embodiments have been described with reference to specific example embodiments, it will be evident that various modifications and changes may be made to these embodiments without departing from the broader spirit and scope of the system and method described herein. Accordingly, the specification and drawings are to be regarded in an illustrative rather than a restrictive sense.
Many alterations and modifications of the present invention will no doubt become apparent to a person of ordinary skill in the art after having read the foregoing description. It is to be understood that the phraseology or terminology employed herein is for the purpose of description and not of limitation. It is to be understood that although the description above contains many specifics, these should not be construed as limiting the scope of the invention, but as merely providing illustrations of some of the presently preferred embodiments of this invention.