A video conference session can be formed among participants at different locations, which can be geographically spread across a city, across different cities, states, or countries, or even across different rooms of an office space or campus. In a video conference session, video conference equipment is located at each location, where the video conference equipment includes a camera to capture a video of the participant(s) at that location, as well as a display device to display a video of participant(s) at a remote location (or remote locations).
Some implementations of the present disclosure are described with respect to the following figures.
In the present disclosure, use of the term “a,” “an,” or “the” is intended to include the plural forms as well, unless the context clearly indicates otherwise. Also, the term “includes,” “including,” “comprises,” “comprising,” “have,” or “having,” when used in this disclosure, specifies the presence of the stated elements but does not preclude the presence or addition of other elements.
During a video conference session, cameras at respective locations of the video conference session are usually focused on the human participants of the video conference session at the respective locations. In some cases, a participant at a first location may wish to present information on a physical user collaborative area, such as a whiteboard, a chalk board, a piece of paper, or any other physical area in which a user can input marks, such as by using a pen, marker, and so forth. In some cases, the physical user collaborative area can be a digital board or touch-sensitive display device in which a user can use a digital pen, a stylus, a user's finger, and so forth, to make markings on the digital board or touch-sensitive display device.
It can be difficult for remote participants (at locations that are remote from the first location) to view the content on the physical user collaborative area at the first location. In some examples, manual intervention by a local participant at the first location is performed to physically move (e.g., pan and tilt) the camera at the first location to focus on the physical user collaborative area, and to manually zoom the camera into the physical user collaborative area. However, once the camera at the first location is adjusted such that it is focused on and zoomed into the physical user collaborative area at the first location, the camera may not capture the local participant(s) at the first location, such that the remote participant(s) would no longer be able to view the local participant(s).
In accordance with some implementations of the present disclosure, automated techniques or mechanisms are provided to allow a system to automatically identify a boundary of a physical user collaborative area at a first location during a video conference session, and to zoom into the physical user collaborative area during the video conference session. The video of the physical user collaborative area can be displayed as a separate video feed (in addition to the video feed of the participant(s) at the first location). In such examples, the video of the physical user collaborative area at the first location and the video of the participant(s) at the first location can be simultaneously displayed by video conference equipment at another location. Alternatively, the video of the physical user collaborative area and the video of the participant(s) are displayed at different times, with the remote participants selecting which to display at any given time.
The video conference system 102 is coupled over a network to video conference equipment at respective locations, where the video conference equipment at each location can include a display device to display video of remote locations, and a camera to capture a video of the local location.
In the example of FIG. 1, the video conference session involves two locations: location 1 and location 2.
Note that the video conference equipment at each location can also include a computer (or computers) to control the display of videos at the respective display devices and to communicate with the video conference system 102. A computer 130 at location 1 is communicatively coupled to the display device 104 and the camera 106, and a computer 132 at location 2 is communicatively coupled to the camera 112, the display device 110, and an optical sensor 116 (discussed below).
In accordance with some implementations of the present disclosure, the video conference equipment at location 2 further includes the optical sensor 116 that is able to sense light from a marker 118 that is at a specified location with respect to a physical user collaborative area 120, in which a user can input marks 121 such as during a video conference session. The marker 118 can include a light emitter or a light reflector. A light emitter includes a light source that can generate a light. A light reflector reflects light produced from another light source.
The marker 118 is distinct from the physical user collaborative area 120. For example, the marker 118 is physically separate from the physical user collaborative area 120, although the marker 118 can be attached to the physical user collaborative area 120. More generally, the marker 118 is distinct from the physical user collaborative area 120 if the marker 118 is not part of the physical user collaborative area 120. For example, the marker 118 is not written on or printed on to the physical user collaborative area 120.
Although just one marker 118 is depicted in FIG. 1, in other examples, multiple markers can be provided at respective different positions with respect to the physical user collaborative area 120.
In some examples, the optical sensor 116 that captures light from the marker 118 can include an infrared (IR) optical sensor to capture IR light. In other examples, the optical sensor 116 can capture light in the visible spectrum. Although FIG. 1 shows the optical sensor 116 as being separate from the camera 112, in other examples, the optical sensor 116 can be part of the camera 112.
Measurement information acquired by the optical sensor 116, in response to light from the marker 118, is provided by the optical sensor 116 to a user collaborative area focus module 122 that includes machine-readable instructions executable in the video conference system 102. The information received by the user collaborative area focus module 122 from the optical sensor 116 can indicate the boundary of the physical user collaborative area 120 at location 2.
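The disclosure does not prescribe how the sensed measurement information is processed. As an illustrative sketch only, a bright marker can be located in a sensor frame by thresholding the frame and computing the centroid of each connected bright region; the frame format, threshold value, and function name below are assumptions of this sketch, not details from the disclosure.

```python
import numpy as np

def locate_markers(ir_frame: np.ndarray, threshold: int = 240) -> list[tuple[float, float]]:
    """Return the (x, y) centroid of each bright blob in an IR sensor frame.

    Assumptions of this sketch: `ir_frame` is a 2-D uint8 array, each
    marker appears as one connected region of pixels at or above
    `threshold`, and 4-connectivity is enough to group a blob.
    """
    bright = ir_frame >= threshold
    visited = np.zeros_like(bright, dtype=bool)
    height, width = bright.shape
    centroids = []
    for y in range(height):
        for x in range(width):
            if not bright[y, x] or visited[y, x]:
                continue
            # Flood-fill one blob, collecting its pixel coordinates.
            stack, xs, ys = [(y, x)], [], []
            visited[y, x] = True
            while stack:
                cy, cx = stack.pop()
                xs.append(cx)
                ys.append(cy)
                for ny, nx in ((cy - 1, cx), (cy + 1, cx), (cy, cx - 1), (cy, cx + 1)):
                    if 0 <= ny < height and 0 <= nx < width \
                            and bright[ny, nx] and not visited[ny, nx]:
                        visited[ny, nx] = True
                        stack.append((ny, nx))
            centroids.append((sum(xs) / len(xs), sum(ys) / len(ys)))
    return centroids
```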
Based on the indicated boundary of the physical user collaborative area 120, the user collaborative area focus module 122 is able to control the camera 112 to perform a video zoom into the physical user collaborative area 120. The video zoom involves the camera focusing into a region that includes the physical user collaborative area 120, such that a video of the region including the physical user collaborative area 120 is enlarged. The zoomed video of the physical user collaborative area 120 is communicated by the video conference system 102 to the display device 104 at location 1, which displays the zoomed video 124 of the physical user collaborative area 120.
In the example of FIG. 1, the display device 104 at location 1 also displays a video 126 of the participants 114 at location 2, as captured by the camera 112.
The video 126 of the participants 114 at location 2 can be displayed in a first window by the display device 104, and the video 124 of the physical user collaborative area 120 is displayed in a second window by the display device 104. The first and second windows can be simultaneously displayed, or can be displayed one at a time based on user or program selection.
The process of FIG. 2 includes receiving (at 202) information sensed by the optical sensor 116 responsive to light from the marker 118. The process further includes determining (at 204), based on the received information, the boundary of the physical user collaborative area 120. In examples where there is just one marker 118, the user collaborative area focus module 122 is able to use information regarding a shape of the physical user collaborative area 120 (e.g., a rectangular shape, a circular shape, an oval shape, a triangular shape, etc.) to determine where the physical user collaborative area 120 is based on the location of the marker 118. The information regarding the shape of the physical user collaborative area 120 can be entered by a user or an administrator, provided by a program, and so forth.
In examples where there are multiple markers 118, the user collaborative area focus module 122 is able to determine the boundary of the physical user collaborative area 120 from the locations of the multiple markers 118. For example, if the physical user collaborative area 120 is generally rectangular in shape, and there are four markers 118 at the corners of the physical user collaborative area 120, then the user collaborative area focus module 122 is able to determine the boundary of the physical user collaborative area 120 based on the determined corners. Similarly, if the physical user collaborative area 120 is generally triangular in shape, and there are three markers 118 at the corners of the triangle, then the user collaborative area focus module 122 can determine the boundary based on the determined corners.
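As a minimal sketch of the two cases, the boundary can be derived either by ordering four corner-marker positions or by extrapolating a rectangle from a single marker plus user-entered dimensions. The function names, the top-left reference convention, and the assumption that the area is roughly axis-aligned in the sensor image are all illustrative, not details from the disclosure.

```python
def boundary_from_corner_markers(corners):
    """Order four corner-marker centroids as top-left, top-right,
    bottom-right, bottom-left for a roughly axis-aligned rectangle."""
    by_y = sorted(corners, key=lambda p: p[1])    # top pair first
    top = sorted(by_y[:2], key=lambda p: p[0])    # left, then right
    bottom = sorted(by_y[2:], key=lambda p: p[0])
    return [top[0], top[1], bottom[1], bottom[0]]

def boundary_from_single_marker(marker, width, height):
    """One marker at an assumed top-left reference point, plus
    user-entered dimensions of a rectangular area."""
    x, y = marker
    return [(x, y), (x + width, y), (x + width, y + height), (x, y + height)]
```

For non-rectangular shapes, the same idea applies with a different parameterization, e.g., a center marker plus a radius for a circular area.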
In addition, the process includes controlling (at 206), based on the determined boundary, a video zoom into the physical user collaborative area 120, so that a remote participant can more readily see the content of the physical user collaborative area 120. Controlling the video zoom into the physical user collaborative area 120 involves controlling the camera 112 (or a different camera) at location 2 to focus into a region that includes the physical user collaborative area 120, such that an enlarged view of the physical user collaborative area 120 is possible at location 1. In some cases, controlling the video zoom into the physical user collaborative area 120 can also involve panning and tilting the camera 112 (or a different camera) to be directed towards the physical user collaborative area 120.
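The camera-control math is left open by the disclosure. One plausible sketch derives a normalized pan/tilt offset from the boundary's center and a zoom factor from the ratio of frame size to boundary size; the `set_ptz` callback stands in for a real camera interface (e.g., an ONVIF or vendor-SDK call) and is hypothetical.

```python
def zoom_to_boundary(boundary, frame_w, frame_h, set_ptz):
    """Aim a pan-tilt-zoom camera at the region enclosing `boundary`.

    `boundary` is a list of (x, y) pixel points from the boundary
    determination step. `set_ptz(pan, tilt, zoom)` is a hypothetical
    camera-control callback: pan/tilt are normalized offsets in
    [-1, 1] from the current frame center, zoom is a factor >= 1.
    """
    xs = [p[0] for p in boundary]
    ys = [p[1] for p in boundary]
    center_x = (min(xs) + max(xs)) / 2
    center_y = (min(ys) + max(ys)) / 2
    # Offset of the area's center from the frame center, normalized.
    pan = (center_x - frame_w / 2) / (frame_w / 2)
    tilt = (center_y - frame_h / 2) / (frame_h / 2)
    # Zoom so the area fills the frame, leaving a 10% margin per axis.
    area_w = max(max(xs) - min(xs), 1)
    area_h = max(max(ys) - min(ys), 1)
    zoom = min(frame_w / (1.1 * area_w), frame_h / (1.1 * area_h))
    set_ptz(pan, tilt, max(1.0, zoom))
```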
In the example of FIG. 3, multiple markers 308 are arranged with respect to a physical user collaborative area 306, where the markers 308 provide light signals encoded with different information so that the markers 308 can be distinguished from one another. In further examples, where the markers 308 are light reflectors, the different light reflectors can include different patterns on their reflective surfaces, such that the light signals reflected from the different light reflectors provide different patterns of reflected signals. The different patterns of reflected signals provide the different information that allows the markers 308 to be distinguished from one another.
The user collaborative area focus module 122 is able to detect the different information encoded into the light signals, as received by an optical sensor 310 and communicated to the user collaborative area focus module 122. An image of the markers 308 captured by the optical sensor 310 includes sub-images of the markers 308, and the user collaborative area focus module 122 is able to distinguish between the different markers 308 using the different encoded information. By distinguishing between the different markers, the user collaborative area focus module 122 is able to determine that a first marker corresponds to a first point on the boundary of the physical user collaborative area 306, that a second marker corresponds to a second point on the boundary, and so forth. From these points, the user collaborative area focus module 122 is able to derive the boundary of the physical user collaborative area 306.
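The encoding itself is not specified. One assumed scheme (purely illustrative) gives each marker a distinct on/off blink pattern across successive sensor frames, so that identifying a marker reduces to matching its observed bit sequence; the 3-bit codes and corner names below are assumptions of this sketch.

```python
def identify_markers(observed, patterns):
    """Map marker identities to positions by matching blink sequences.

    `observed` maps a marker position (x, y) to the tuple of on/off
    states seen across successive frames; `patterns` maps each known
    bit sequence to a marker identity.
    """
    identities = {}
    for position, bits in observed.items():
        marker_id = patterns.get(tuple(bits))
        if marker_id is not None:
            identities[marker_id] = position
    return identities

# Example: four corner markers, each blinking a distinct 3-bit code.
PATTERNS = {
    (True, True, True): "top-left",
    (True, True, False): "top-right",
    (True, False, True): "bottom-right",
    (True, False, False): "bottom-left",
}
```

With identities resolved, the corner positions can be fed directly to a boundary-ordering routine such as the one sketched earlier.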
Although FIG. 3 shows a specific number of markers 308, in other examples, a different number of markers 308 can be used to indicate the boundary of the physical user collaborative area 306.
In some examples, the determination of the boundary of the physical user collaborative area 120 is performed without any user input to trigger the determining of the boundary of the physical user collaborative area 120. In other words, the user collaborative area focus module 122 is able to determine the boundary of the physical user collaborative area 120 without a user having to manually participate in the process of determining this boundary, such as by activating a button, inputting a command, and so forth.
In other examples, a user can trigger the determination of the boundary of the physical user collaborative area 120, such as by waving a digital pen or other input device in front of the physical user collaborative area 120, by inputting a command, by activating a button, and so forth.
In addition to use of the marker 118 or markers 308 of FIG. 1 or FIG. 3, respectively, other techniques can also be used to determine the boundary of the physical user collaborative area.
It is noted that after the boundary of the physical user collaborative area (120 or 306) has been determined by the user collaborative area focus module 122, the location and the boundary of the area 120 or 306 can be saved into a profile. A “profile” can refer to any information that can be stored, such as by the user collaborative area focus module 122. After the location and boundary of the physical user collaborative area is saved into the profile, the marker 118 or the markers 308 can be removed, since the user collaborative area focus module 122 can use the saved profile to determine the location and boundary of the physical user collaborative area 120 or 306 for a subsequent video conference session that includes the location where the physical user collaborative area 120 or 306 is located.
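A minimal sketch of such a profile store follows, assuming JSON persistence keyed by a location identifier; the disclosure does not prescribe any storage format, and the file path and function names are illustrative.

```python
import json
from pathlib import Path

PROFILE_PATH = Path("collab_area_profiles.json")  # assumed location

def save_profile(location_id: str, boundary: list) -> None:
    """Persist a determined boundary so markers are not needed again.

    `boundary` is a list of (x, y) points; JSON stores tuples as lists,
    so callers should treat loaded points as generic sequences.
    """
    profiles = json.loads(PROFILE_PATH.read_text()) if PROFILE_PATH.exists() else {}
    profiles[location_id] = boundary
    PROFILE_PATH.write_text(json.dumps(profiles, indent=2))

def load_profile(location_id: str):
    """Return a previously saved boundary for a location, or None."""
    if not PROFILE_PATH.exists():
        return None
    return json.loads(PROFILE_PATH.read_text()).get(location_id)
```

On a later session, the loaded boundary can be supplied directly to the zoom-control step, so the markers need not be present.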
In other examples, a system includes a processor and a storage medium 504 storing machine-readable instructions executable on the processor to perform the tasks described above. Instructions executable on a processor can refer to instructions executable on one processor or on multiple processors. The machine-readable instructions include optical sensor information receiving instructions 506 to receive information sensed by an optical sensor responsive to light from a marker arranged to indicate a boundary of a physical user collaborative area, where the marker is distinct from the physical user collaborative area. The machine-readable instructions further include boundary determining instructions 508 to determine, based on the received information, the boundary of the physical user collaborative area. In addition, the machine-readable instructions include video zoom control instructions 510 to control, based on the determined boundary, a video zoom into the physical user collaborative area during the video conference session.
The storage medium 504 or 600 can include any or some combination of the following: a semiconductor memory device such as a dynamic or static random access memory (a DRAM or SRAM), an erasable and programmable read-only memory (EPROM), an electrically erasable and programmable read-only memory (EEPROM) and flash memory; a magnetic disk such as a fixed, floppy and removable disk; another magnetic medium including tape; an optical medium such as a compact disk (CD) or a digital video disk (DVD); or another type of storage device. Note that the instructions discussed above can be provided on one computer-readable or machine-readable storage medium, or alternatively, can be provided on multiple computer-readable or machine-readable storage media distributed in a large system having possibly plural nodes. Such computer-readable or machine-readable storage medium or media is (are) considered to be part of an article (or article of manufacture). An article or article of manufacture can refer to any manufactured single component or multiple components. The storage medium or media can be located either in the machine running the machine-readable instructions, or located at a remote site from which machine-readable instructions can be downloaded over a network for execution.
In the foregoing description, numerous details are set forth to provide an understanding of the subject disclosed herein. However, implementations may be practiced without some of these details. Other implementations may include modifications and variations from the details discussed above. It is intended that the appended claims cover such modifications and variations.