Every business depends on the free flow of information and ideas to improve its products and services. Effective collaboration between people both inside and outside an organization increases product quality, improves product or project development lead times, and reduces costs. However, effective collaboration is often difficult. One issue that hinders effective meetings is that the participants are often not physically co-located. Traveling to meetings can be time-consuming and expensive. For a two- or three-hour meeting, people will often travel a day to the meeting and a day back, especially if the meeting is across the country, in another country, or on another continent.
Web (intranet or Internet) meetings and other virtual meetings have become increasingly popular thanks to the emergence of high-speed Internet connections and reduced prices for high-quality web cameras. Web conferences can save large amounts of time and money because meeting participants in remote places can join a meeting almost instantaneously without spending time and money traveling to the meeting site. Web meetings can often be planned spontaneously because there is no need for airline or hotel reservations, and a meeting of an hour or two need not take days of preparation and travel.
An issue with web meetings is that, depending on the type of meeting, different equipment may be necessary at the end user's site. For example, some web or Internet meetings are provided by subscribing to a service. These meetings are typically conducted over the Internet with a service provider providing a server to conduct the meeting, herein termed a meeting service server. The connections to the Internet for these types of meetings are sometimes dial-up and often quite slow, much slower than a corporate network typically would be. Alternately, meetings can be conducted over a network that is owned by a corporation using a server, herein termed an office communication server, which may be presumed to have greater throughput speed and bandwidth than the World Wide Web or Internet. Typically, client equipment to interface with these two types of servers is specialized and only interfaces with one type of net meeting or the other.
Availability of real-time, rich video from a variety of devices is a key feature in the present panoramic video technique and in the server/service that collaborates to enable video capability in a unified live meeting client. The unified live meeting client is otherwise termed the unified client because it can operate with either an office communication server or a meeting service server without requiring specialized equipment or software.
The present panoramic video technique embodied in the unified client provides panoramic video and other data from various sources for live web-based conferencing applications. In one embodiment, the panoramic video is provided by a panoramic collaboration and communication device, termed a RoundTable Device (RTD). The RTD is a collaboration tool with a 360-degree view, a speaker and a microphone array that, together with the unified client, delivers an immersive conferencing experience that virtually extends the meeting room across multiple locations. This enables live network meeting scenarios that were not possible before. The present panoramic video technique can, using web-based equipment and techniques, provide a one-presenter live meeting video broadcast, a conference room panoramic video broadcast, a multi-speaker broadcast and a one-to-one conference room broadcast where panoramic video of multiple sites is captured and transmitted.
The present panoramic video technique is hosted on a unified client that can be part of an enterprise server-client network belonging to an entity such as a corporation. In this case the corporation or other entity owns the server and runs its own live meeting software via one or more servers that are employed for this purpose. Alternately, the client may be on the Internet or other network and subscribe to a service that provides live meeting support (herein termed a live meeting service configuration). In this second configuration, the live meeting service, and therefore the live meeting service server, is provided by an entity other than the meeting participants. The unified client also has the capability to seamlessly traverse firewalls and hence can be used from home with an on-premise office communication server as a backend. In either configuration, office communication server or meeting service, the unified client is configured so that it can participate in either type of meeting without requiring different software or hardware. To the user of the unified client, it is transparent whether the unified client is connected to a live meeting service or an enterprise server (e.g., an office communication server).
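By way of illustration, and not limitation, the following is a minimal sketch of how a unified client might hide the backend type behind a single interface so that the user experience is the same with either backend. The class and function names are hypothetical and are not drawn from the actual unified client; the sketch also assumes, for simplicity, that the backend type can be inferred from the meeting address.

```python
from abc import ABC, abstractmethod


class MeetingBackend(ABC):
    """Common operations the unified client needs from either type of backend."""

    @abstractmethod
    def join(self, meeting_id: str) -> None:
        ...

    @abstractmethod
    def supports_panorama(self) -> bool:
        ...


class OfficeCommunicationServer(MeetingBackend):
    """On-premise, high-bandwidth enterprise backend (assumption for this sketch)."""

    def join(self, meeting_id: str) -> None:
        print(f"joining {meeting_id} via on-premise office communication server")

    def supports_panorama(self) -> bool:
        return True


class MeetingServiceServer(MeetingBackend):
    """Hosted meeting service backend, assumed bandwidth constrained in this sketch."""

    def join(self, meeting_id: str) -> None:
        print(f"joining {meeting_id} via hosted meeting service")

    def supports_panorama(self) -> bool:
        return False


def connect(meeting_uri: str) -> MeetingBackend:
    """Pick a backend from the meeting address; callers never need to know which one."""
    if meeting_uri.startswith("ocs:"):  # hypothetical addressing convention
        backend: MeetingBackend = OfficeCommunicationServer()
    else:
        backend = MeetingServiceServer()
    backend.join(meeting_uri)
    return backend
```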
It is noted that while the foregoing limitations in existing techniques for overcoming web conferencing issues described in the Background section can be resolved by a particular implementation of the present panoramic video system and technique described herein, this technique is in no way limited to implementations that just solve any or all of the noted disadvantages. Rather, the present technique has a much wider application as will become evident from the descriptions to follow.
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
The specific features, aspects, and advantages of the disclosure will become better understood with regard to the following description, appended claims, and accompanying drawings where:
In the following description, reference is made to the accompanying drawings that form a part hereof, and in which is shown by way of illustration specific embodiments of the disclosure. It is understood that other embodiments may be utilized and structural changes may be made without departing from the scope of the present disclosure.
Before providing a description of embodiments of the present panoramic video technique, a brief, general description of a suitable computing environment in which portions of the technique may be implemented will be described. The technique is operational with numerous general purpose or special purpose computing system environments or configurations. Examples of well known computing systems, environments, and/or configurations that may be suitable for use with the process include, but are not limited to, personal computers, server computers, hand-held or laptop devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like.
Device 100 may also contain communications connection(s) 112 that allow the device to communicate with other devices. Communications connection(s) 112 is an example of communication media. Communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared and other wireless media. The term computer readable media as used herein includes both storage media and communication media.
Device 100 may also have input device(s) 114 such as keyboard, mouse, camera, microphone, pen, voice input device, touch input device, etc. In particular, such input devices include a video camera, a web camera and an omni-directional camera integrated with a microphone array (e.g., an RTD). Output device(s) 116 such as a display, speakers, printer, etc. may also be included.
The present technique may be described in the general context of computer-executable instructions, such as program modules, being executed by a computing device. Generally, program modules include routines, programs, objects, components, data structures, and so on that perform particular tasks or implement particular abstract data types. The process may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network, an example of which is shown in
The exemplary operating environment having now been discussed, the remaining parts of this description section will be devoted to a description of the program modules embodying the present system and technique.
Availability of real-time, rich video from a variety of devices is a feature in the present panoramic video technique, which is embodied in a unified client that can interchangeably be used with a meeting service or a standalone client-server network (e.g., with an office communication server). The unified client is employed in conjunction with a server/service that collaborates to enable video capability in the client using a collaboration tool that provides an immersive conferencing experience. In addition to supporting video from a standard webcam, the unified client also supports a RoundTable Device (RTD), a collaboration tool with a 360-degree view, a speaker and a microphone array, that essentially extends the meeting room across multiple locations by allowing multiple meeting sites to share the meeting experience. This enables scenarios that were not possible before.
In a live meeting service environment, which typically has demanding scaling requirements amidst low-bandwidth networks, only a single video feed is preferably streamed to and from a client. The RTD in this environment typically behaves as a standard webcam. The unified client running against an office communication server is expected to have the benefit of high-bandwidth enterprise networks. In such a network, streaming of the panorama video in addition to the presenter video is possible.
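By way of illustration, the following minimal sketch captures the stream-selection rule just described, under the stated assumptions (hypothetical names; a service-mode conference is taken to be bandwidth constrained, while an office communication server conference is not): only one feed is contributed in a meeting service environment, and the panorama is added only when an RTD is present and enterprise bandwidth is available.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class VideoSource:
    name: str
    provides_panorama: bool  # True for an RTD, False for a standard webcam


def streams_to_send(source: VideoSource, service_mode: bool) -> List[str]:
    """Return the video streams a client would contribute in this sketch."""
    streams = ["active_speaker"]           # the single feed always offered
    if source.provides_panorama and not service_mode:
        streams.append("panorama")         # added only with enterprise bandwidth
    return streams


print(streams_to_send(VideoSource("RTD", True), service_mode=True))   # ['active_speaker']
print(streams_to_send(VideoSource("RTD", True), service_mode=False))  # ['active_speaker', 'panorama']
```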
Various client and server components interact for rendering real-time video streams and associated audio in the unified client. Entities employed in the panoramic video technique are:
a. One or more contributing video sources. The present panoramic video technique employs one or more video sources, such as, for example, a standalone webcam or an RTD. The webcam typically provides a single video stream, while the RTD typically provides two video streams—an active speaker video stream, which automatically shows the person who is speaking, and the panorama video stream, which is the 360-degree view of the room. Whether both of the RTD streams are transmitted is typically determined by bandwidth constraints. It should be noted, however, that even though there is no panoramic video available at a first given site because it is configured with only a webcam, that site may still receive and display panoramic video if another site is sending panoramic video. This is true whether the first site has a webcam or no camera at all.
b. One or more unified clients—The present panoramic video technique includes one or more unified client(s) that render the video. They also receive audio/video (A/V) data from their own sources and send it on, and they receive A/V data over a network or the Internet. The unified client follows a layered model. Abstraction layers are built into the unified client to isolate it from the complexity of operating in a live meeting server configuration versus a live meeting service environment (a simplified sketch of these layers is given following this list). In one embodiment, there is a distributed object (DO) which abstracts the signaling transactions between the unified client and the office communication server or the meeting service server. Similarly, conference control and media transactions between the client and an Audio Video Media Control Unit (AVMCU), which mixes and transmits the video, may be abstracted. The code required for sending and receiving video streams is built on top of these infrastructure pieces. The present panoramic video technique also includes a User Interface (UI) layer at the unified client that allows set up, control and display of the system and data. The unified client can automatically detect an RTD, including the speaker, microphones and video streams of the RTD. It also provides the ability to modify the behavior of acoustic echo cancellation between the RTD and any echo cancellation in the client. Typically this is done by turning off the acoustic echo cancellation at the unified client and only employing the echo cancellation processing in the RTD. The unified client can also process integrated audio such as voice over Internet Protocol (VoIP) and Public Switched Telephone Network (PSTN) audio using an RTD.
c. A meeting server—The present panoramic video technique includes a server entity that hosts the meeting, either in the configuration of a meeting service or a stand-alone client-server network (e.g., in an office communication server configuration). The meeting server also includes a UI layer for configuring the server for receiving, sending and rendering video streams and related notifications.
d. The Audio Video Media Control Unit (AVMCU)—The present panoramic video technique includes a server entity, the AVMCU, which mixes and transmits the video to the various clients as stated previously. This AVMCU server entity can be part of the meeting server (e.g., office communication server or the meeting service server) or the AVMCU can be a stand-alone server.
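By way of illustration, the following is a minimal sketch of the layering described in item (b): a distributed object abstracts signaling with either the office communication server or the meeting service server, a separate layer abstracts conference control and media transactions with the AVMCU, and the UI layer is built on top of both. The class and method names are hypothetical and are not drawn from the actual unified client.

```python
class SignalingDO:
    """Distributed object: abstracts signaling with the office communication
    server or the meeting service server (hypothetical sketch)."""

    def __init__(self, backend_url: str) -> None:
        self.backend_url = backend_url

    def send_join(self, meeting_id: str) -> None:
        print(f"signaling join of {meeting_id} via {self.backend_url}")


class AVMCUChannel:
    """Abstracts conference control and media transactions with the AVMCU."""

    def publish(self, stream: str) -> None:
        print(f"publishing {stream} stream to the AVMCU")

    def subscribe(self, stream: str) -> None:
        print(f"subscribed to the mixed {stream} stream")


class UnifiedClientUI:
    """UI layer for set up, control and display, built on the layers below."""

    def __init__(self, signaling: SignalingDO, media: AVMCUChannel) -> None:
        self.signaling = signaling
        self.media = media

    def join_meeting(self, meeting_id: str) -> None:
        self.signaling.send_join(meeting_id)
        self.media.publish("active_speaker")
        self.media.subscribe("panorama")  # rendered only if a remote site sends it
```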
A general schematic of one version of the overall operating environment of the present panoramic video system is shown in
One embodiment of a system diagram for the panoramic video technique embodied in a unified client is shown in
The above discussed configuration can be extended to many clients as shown in
A flow diagram for receiving and streaming panoramic video via a unified client is shown in
The following paragraphs provide some exemplary scenarios wherein the present panoramic video system and technique embodied in a unified client can be employed.
In the presenter video broadcast, as shown in
In the conference room video broadcast mode, as shown in
In the Multi-Party Speaker video mode, shown in
In the One-to-One Conference Room Video Relay mode, shown in
The scenarios/modes that are outlined above are better understood as transitory states within a conference. For example, the presenter broadcast mode becomes the multi-party speaker video mode if another presenter joins the conference and has a video source connected to her machine. Similarly, the presenter broadcast mode becomes the conference room video broadcast mode if the conference is being held in a meeting service mode that does not have enough bandwidth to support a panorama video stream.
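By way of illustration, the mode transitions just described can be viewed as a small state machine. The following minimal sketch (with hypothetical state and event names) encodes only the two example transitions given above; any event not listed leaves the current mode unchanged.

```python
# Hypothetical mode and event names; unknown events leave the mode unchanged.
TRANSITIONS = {
    ("presenter_broadcast", "second_presenter_with_video_joins"): "multi_party_speaker",
    ("presenter_broadcast", "panorama_bandwidth_unavailable"): "conference_room_broadcast",
}


def next_mode(mode: str, event: str) -> str:
    """Return the conference video mode after the given event."""
    return TRANSITIONS.get((mode, event), mode)


print(next_mode("presenter_broadcast", "second_presenter_with_video_joins"))
# -> multi_party_speaker
```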
For each of the various video modes, the user interface (UI) and display may change. An exemplary display 1002 showing both a speaker pane 1004 and a panoramic pane 1006 on the display of a unified client is shown in
Conceptually, enabling video can be broken into two main modules, a user interface (UI) module and a controller module, both of which are resident in the unified client software/firmware. The UI module encompasses class(es) for the presenter pane and a panorama pane, which are capable of rendering video or business card displays. The UI is capable of maintaining the aspect ratios of the panes on resizing. It also shows the menus, buttons and any other controls. This UI module is supported by a controller module which processes meeting participant arrivals/departures, active presenter changes, media availability (e.g., whether presenter/panoramic video is available) and related notifications from the meeting server or AVMCU. The controller module also maintains the state of the conference at the unified client based on the conference state document received from the server (for instance, whether the conference is a live client-server conference or a live meeting service conference), synchronizes the presenter video stream with the panorama stream, and fetches user names, business cards and other data.
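By way of illustration, the following is a minimal sketch of the UI module/controller module split described above, using hypothetical names: the controller reacts to participant, presenter and media notifications from the meeting server or AVMCU and maintains the conference state, while the UI module simply renders whatever that state indicates.

```python
from typing import Optional


class ConferenceState:
    """State kept at the unified client, driven by server/AVMCU notifications."""

    def __init__(self) -> None:
        self.active_presenter: Optional[str] = None
        self.panorama_available: bool = False


class UIModule:
    """Renders the speaker and panorama panes from the current conference state."""

    def refresh(self, state: ConferenceState) -> None:
        print(f"speaker pane: {state.active_presenter or 'business card / user info'}")
        print(f"panorama pane: {'video' if state.panorama_available else 'placeholder text'}")


class ControllerModule:
    """Processes notifications and keeps the conference state up to date."""

    def __init__(self, state: ConferenceState, ui: UIModule) -> None:
        self.state = state
        self.ui = ui

    def on_notification(self, kind: str, payload: dict) -> None:
        if kind == "active_presenter_changed":
            self.state.active_presenter = payload.get("name")
        elif kind == "media_available":
            self.state.panorama_available = bool(payload.get("panorama", False))
        self.ui.refresh(self.state)  # the UI only redraws what the state says
```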
The UI can provide many capabilities. For example, the UI may provide the ability to tear off both the speaker pane and the panorama pane from the meeting client. Furthermore, it may provide the ability to maximize the speaker and panoramic video panes. In one embodiment, the UI provides the ability to render both the actual video streams being sent by the AVMCU as well as a preview of the video streams captured by the locally attached webcam or RTD. Other capabilities that may be implemented in the UI are described in more detail below.
1. Command bar “Video” Button: The command bar video button 1016 allows the user to access the video menu which allows a user to specify what type of video they would like to receive.
2. Voice and Video Pane: The voice & video pane 1022 includes a dockable window which can be docked anywhere on the unified client's display. The speaker window 1004 and/or the panoramic window 1006 may be displayed in the voice & video pane. The voice and video pane may also include a Video Contributor Selector button and View buttons. It may have a window resize capability that resizes a window that is chosen by a user. The voice and video pane also has a full screen video mode wherein a chosen video (e.g., speaker window or panoramic window pane) is displayed on the whole screen. It is also possible to overlay text on the voice and video pane, for example, to provide additional user information. The video pane may also provide for presenter synchronization and user information tool tips.
3. User Info/Business Card Support. In one embodiment, the UI displays user information or a business card in the voice and video pane when video of the speaker is not available.
4. “Show video” Opt-in Dialog: In one embodiment the UI of the unified client will show a video opt-in dialog, which allows a user to decide whether they would like to receive any video transmitted to them.
5. Presenter Selection: A presenter selection menu may allow a user to decide which speaker should be designated as the presenter for purposes of displaying that speaker in the speaker video frame.
6. Local Video Preview: The present panoramic video technique also includes a preview mode that allows users to view their own video before sending it to other sites.
7. Pause control [including A/V behavior]: The UI of the present panoramic video technique may also include a pause control which includes pausing audio/video behavior.
8. Start/stop Receiving Video. The UI of the present panoramic video technique may also include a start/stop receiving video control.
9. Start/Stop Sending Video. The UI of the present panoramic video technique may also include a start/stop sending video control.
10. Panorama Pane: The panorama pane typically displays a panorama view of a meeting room and may include an enabling panorama button which enables the panoramic video display in the panoramic pane. The panorama pane can be dockable to a desired location on the display and can be resized and maximized while maintaining the aspect ratio of the window. It may also include placeholder text that allows for text to be entered when video is not available.
11. Panorama Stream Bonding: Panorama stream bonding allows for the synchronization of the various video streams. For example, it allows synchronization of the panorama video stream and the speaker video stream (a simplified sketch of such bonding is given following this list).
12. Video Tab In User Preferences: A video tab in a user preferences menu item can allow a user to select preferences to auto-send video, and to access camera settings and video set up.
13. “Attendee Can/can't Contribute Video” Permission—The UI may also include a permission setting that specifies whether an attendee can or cannot contribute video.
14. Video settings: In one mode, the user can manually set video settings to service or server.
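By way of illustration, the following is a minimal sketch of the panorama stream bonding of item 11, under the simplifying assumption that each frame carries a capture timestamp: panorama frames are buffered and paired with the speaker frame closest in time, so the two panes stay synchronized. The names and the skew threshold are hypothetical.

```python
from collections import deque
from typing import Deque, Optional, Tuple

Frame = Tuple[float, bytes]  # (capture timestamp in seconds, encoded frame data)


class StreamBonder:
    """Pairs panorama frames with speaker frames by nearest capture timestamp."""

    def __init__(self, max_skew: float = 0.1) -> None:
        self.max_skew = max_skew          # hypothetical tolerance of 100 ms
        self.panorama: Deque[Frame] = deque()

    def push_panorama(self, frame: Frame) -> None:
        self.panorama.append(frame)

    def match(self, speaker_frame: Frame) -> Optional[Frame]:
        """Return the buffered panorama frame within the allowed skew of the
        speaker frame, or None so the client renders the speaker pane alone."""
        ts = speaker_frame[0]
        # discard panorama frames that are already too old to be paired
        while self.panorama and ts - self.panorama[0][0] > self.max_skew:
            self.panorama.popleft()
        if self.panorama and abs(self.panorama[0][0] - ts) <= self.max_skew:
            return self.panorama.popleft()
        return None
```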
It should also be noted that any or all of the aforementioned alternate embodiments may be used in any combination desired to form additional hybrid embodiments. Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.
This application claims priority under 35 U.S.C. Section 119(e)(1) of provisional application No. 60/805,868, filed Jun. 26, 2006 and entitled “Panoramic Video in a Live Meeting Client”.