Consumers appreciate ease of use and flexibility in electronic devices. Adaptability to the needs of consumers is also desirable. Businesses may, therefore, endeavor to design electronic devices directed toward one or more of these objectives.
The following detailed description references the accompanying drawings.
People often value eye contact during conversations for a variety of reasons, such as enhancing a sense of connectedness, attention, interest and understanding. This can be challenging to achieve in the context of videoconferencing systems due to the placement of system cameras relative to system displays. For example, when a user of a videoconferencing system in one location looks at the image of another person at a different location connected to the system, that user cannot also simultaneously look directly at the camera capturing his or her image. The larger the distance between the camera and the display showing the projected person at a particular location, the greater the lack of eye contact can be between that user and the person.
This situation can be exacerbated during videoconferences involving multiple users at one location where only one camera is present. For example, all of the users at the one location may not be visible at the same time on the display at the other, remote location. If multiple users at one location are visible, their distances from the camera at that location may differ, resulting in differing degrees of lost eye contact in their images at the remote location.
Another problem that can arise with such videoconferencing systems occurs in the context of remote users working with shared content. For example, the displayed image of a remote user may obscure part or all of a local working environment on which the shared content is positioned or displayed. Additionally or alternatively, the remote user may be too far from the remote working environment for his or her image to be visible on a local display, thereby hindering the goal of collaboration through such videoconferencing.
As used herein, the terms “displaying”, “display” and “displayed” are defined to include, but are not limited to, projecting and projection. The term “image” is defined to include, but is not limited to, one or more video streams of the same or different content. This image may come from any of a variety of sources such as the internet, a computer, a handheld device (e.g., mobile phone, tablet or personal digital assistant (PDA)), etc. This image may also be in any of a variety of formats such as MPEG, PDF, WAV, JPEG, etc.
The term “display rendering device” is defined to include, but is not limited to, a projector. The term “camera” is defined to include, but is not limited to, a device that captures visible content or data associated with one or more persons or objects for subsequent display. The term “surface” is defined to include, but is not limited to, any two or three-dimensional object having an area or volume on which an image may be displayed (e.g., a screen). The term “orientation” includes, but is not limited to, X, Y and Z Cartesian coordinates on a working environment, as well as angles relative to the working environment (e.g., ∠x, ∠y, and ∠z, or roll, pitch and yaw). The term “capture device” is defined to include, but is not limited to, an imaging device, sensor or detector.
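As a minimal illustration of this definition of “orientation” (the class and field names below are assumptions introduced here for clarity, not terms from this disclosure), such a pose could be represented as:

```python
from dataclasses import dataclass

@dataclass
class SurfaceOrientation:
    """Pose of a surface relative to a working environment.

    Mirrors the definition of "orientation" above: X, Y and Z Cartesian
    coordinates on the working environment plus roll, pitch and yaw
    angles relative to it. Illustrative only.
    """
    x: float            # X coordinate on the working environment
    y: float            # Y coordinate on the working environment
    z: float            # height above the working environment
    roll: float = 0.0   # rotation about the X axis (degrees)
    pitch: float = 0.0  # rotation about the Y axis (degrees)
    yaw: float = 0.0    # rotation about the Z axis (degrees)
```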
As used herein, the terms “non-transitory storage medium” and “non-transitory computer-readable storage medium” refer to any media that can contain, store, or maintain programs, information, and data. A non-transitory storage medium or non-transitory computer-readable storage medium may include any one of many physical media such as, for example, electronic, magnetic, optical, electromagnetic, or semiconductor media. More specific examples of suitable non-transitory storage media and non-transitory computer-readable storage media include, but are not limited to, magnetic media such as floppy diskettes, hard drives, and magnetic tape, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory, a flash drive, a compact disc (CD), and a digital video disk (DVD).
As used herein, the term “processor” refers to an instruction execution system such as a computer/processor based system, an Application Specific Integrated Circuit (ASIC), or a hardware and/or software system that can fetch or obtain the logic from a non-transitory storage medium or a non-transitory computer-readable storage medium and execute the instructions contained therein.
An example of a system 10 for displaying an image 12 that is directed to addressing the issues with videoconferencing systems discussed above is shown in the accompanying drawings.
System 10 further includes a non-transitory computer-readable storage medium 38. Non-transitory computer-readable storage medium 38 includes instructions that, when executed by processor 28, cause processor 28 to determine the dimensions (e.g., length and width) of surface 14 detected by capture device 24 and to determine a first orientation in space of surface 14 relative to capture device 24. Non-transitory computer-readable storage medium 38 includes additional instructions that, when executed by processor 28, cause processor 28 to convert image 12 through the use of image conversion device 32 to display on surface 14 via display rendering device 16 based on the determined dimensions of surface 14 and the determined first orientation of surface 14 relative to capture device 24.
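A minimal sketch of these instructions follows, assuming the capture device reports the four corner points of surface 14 in its own coordinate frame; the function name and the use of NumPy are assumptions made for illustration, not part of this disclosure.

```python
import numpy as np

def surface_dimensions_and_normal(corners: np.ndarray):
    """Dimensions and orientation of a surface from its four corners.

    corners: 4x3 array (c1, c2, c3, c4, ordered around the surface) in
    the capture device's coordinate frame. Returns (length, width,
    unit normal); the normal describes the surface's orientation in
    space relative to the capture device.
    """
    c1, c2, c3, c4 = (np.asarray(c, dtype=float) for c in corners)
    length = float(np.linalg.norm(c2 - c1))
    width = float(np.linalg.norm(c3 - c2))
    normal = np.cross(c2 - c1, c4 - c1)   # perpendicular to the surface plane
    normal /= np.linalg.norm(normal)      # normalize to unit length
    return length, width, normal
```

The determined dimensions and normal could then feed the image conversion step, for example as a perspective warp such as the one sketched after the homography discussion below.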
As can also be seen in the drawings, surface 14 may be repositioned within the working environment from the first orientation to a different second orientation relative to capture device 24.
Non-transitory computer-readable storage medium 38 includes further instructions that, when executed by processor 28, cause processor 28 to detect this repositioning of surface 14 to the second orientation relative to capture device 24, as indicated by dashed arrow 42, and to determine the second orientation in space of surface 14 relative to capture device 24. Non-transitory computer-readable storage medium 38 includes yet further instructions that, when executed by processor 28, cause processor 28 to convert image 12 through the use of image conversion device 32 to display on surface 14 via display rendering device 16 based on the previously determined dimensions of surface 14 and the determined second orientation of surface 14 relative to capture device 24, as indicated by dashed arrows 44, 46 and 48.
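This repositioning behavior can be sketched as a simple change-detection step; the tolerance value and the commented control flow below are assumptions made for illustration.

```python
import numpy as np

def orientation_changed(previous: np.ndarray,
                        current: np.ndarray,
                        tolerance: float = 1e-2) -> bool:
    """Return True if the surface's pose has moved beyond a tolerance.

    previous/current: any consistent numeric description of the surface's
    orientation relative to the capture device (e.g., its corner
    coordinates or its unit normal).
    """
    delta = np.asarray(current, dtype=float) - np.asarray(previous, dtype=float)
    return bool(np.linalg.norm(delta) > tolerance)

# Sketch of the control flow: re-convert image 12 only when the surface
# has actually been repositioned (its dimensions stay the same, so only
# the orientation needs to be re-estimated):
#
# while capturing:
#     current = estimate_orientation()           # from capture device 24
#     if orientation_changed(previous, current):
#         reconvert_image(dimensions, current)   # image conversion device 32
#         previous = current
```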
Another example of system 10 for displaying image 12 on two surfaces 50 and 52 is shown in the drawings.
As can be seen in the drawings, non-transitory computer-readable storage medium 38 includes instructions that, when executed by processor 28, cause processor 28 to determine the dimensions of surfaces 50 and 52 detected by capture device 24, to determine respective first and second orientations in space of surfaces 50 and 52 relative to capture device 24, and to convert image 12 through the use of image conversion device 32 to display on surfaces 50 and 52 via display rendering device 16 based on those determined dimensions and orientations.
As can also be seen in the drawings, surfaces 50 and 52 may be repositioned within the working environment to different respective third and fourth orientations relative to capture device 24.
Non-transitory computer-readable storage medium 38 includes further instructions that, when executed by processor 28, cause processor 28 to detect this repositioning of surfaces 50 and 52 to the respective third and fourth orientations relative to capture device 24, as indicated by dashed arrows 72 and 74, and to determine the respective third and fourth orientations in space of surfaces 50 and 52 relative to capture device 24. Non-transitory computer-readable storage medium 38 includes yet further instructions that, when executed by processor 28, cause processor 28 to convert image 12 through the use of image conversion device 32 to display on surfaces 50 and 52 via display rendering device 16 based on the previously determined dimensions of surfaces 50 and 52, and the respective determined third and fourth orientations of surfaces 50 and 52 relative to capture device 24, as indicated by dashed arrows 76, 78 and 80 for surface 50 and dashed arrows 82, 84 and 86 for surface 52.
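Extending the single-surface sketch above to two (or more) surfaces is mostly a matter of tracking each surface independently; the structure below is an illustrative assumption rather than the source's own implementation.

```python
import numpy as np

def convert_for_surfaces(image: np.ndarray, surfaces: dict) -> dict:
    """Produce one converted copy of the image per tracked surface.

    surfaces maps a surface identifier (e.g., 50 or 52) to a callable
    that warps the image for that surface's current dimensions and
    orientation. Each surface gets its own conversion, so repositioning
    one surface does not disturb the image shown on the other.
    """
    return {surface_id: warp(image) for surface_id, warp in surfaces.items()}
```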
An example of calibration of the sensors of capture device 24 is shown in the drawings.
A homography matrix (H) 104 may be created, as indicated by arrow 106, to perform this calibration.
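The matrix itself is not reproduced in the text above. As a sketch of how such a homography could be estimated from four corner correspondences, a standard direct linear transform written in NumPy is shown below; it is an assumption for illustration, not the source's own formulation.

```python
import numpy as np

def homography_from_corners(src: np.ndarray, dst: np.ndarray) -> np.ndarray:
    """Estimate the 3x3 homography H with dst ~ H @ src (homogeneous).

    src, dst: 4x2 arrays of corresponding corner coordinates, for
    example the four corners of the area covered by the capture
    device's sensors and the corresponding corners of the projection
    area. Standard direct linear transform (DLT); illustrative only.
    """
    rows = []
    for (x, y), (u, v) in zip(np.asarray(src, dtype=float),
                              np.asarray(dst, dtype=float)):
        rows.append([-x, -y, -1.0, 0.0, 0.0, 0.0, u * x, u * y, u])
        rows.append([0.0, 0.0, 0.0, -x, -y, -1.0, v * x, v * y, v])
    a = np.asarray(rows)
    # The flattened homography is the right singular vector for the
    # smallest singular value of this 8x9 system.
    _, _, vt = np.linalg.svd(a)
    h = vt[-1].reshape(3, 3)
    # Normalize so the bottom-right entry is 1 (nonzero for typical
    # calibration geometries).
    return h / h[2, 2]
```

Applied to the calibration described here, src could hold the sensor-area coordinates of corners p1 96 through p4 102 and dst the corresponding projection-area coordinates, so that H maps between area 88 seen by capture device 24 and projection area 94.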
Once corners p1 96, p2 98, p3 100, and p4 102 of projection area 94 within area 88 covered by the sensors of capture device 24 have been determined and homography matrix (H) 104 has been created, corners c1 112, c2 114, c3 116, and c4 118 of surface 120 of wedge 121, on which image 12 is displayed in working environment 122, need to be located.
An example of another way in which corners c1 126, c2 128, c3 130, and c4 132 of surface 134 of a different wedge 136, on which image 12 is displayed in working environment 138, may be located through the use of infrared (IR) or Red, Green and Blue (RGB) sensing by capture device 24 is shown in the drawings.
Capture device 24 may utilize this exemplary sensing technique to locate corners c1 126, c2 128, c3 130, and c4 132 of surface 134 within working environment 138.
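One way such IR or RGB sensing could locate the four corners is to segment the surface from the working environment and fit a quadrilateral to it; the OpenCV-based sketch below is an assumption about one possible implementation, not the technique actually claimed.

```python
import cv2
import numpy as np

def locate_surface_corners(gray: np.ndarray) -> np.ndarray:
    """Locate four corners of a bright surface in an IR or grayscale frame.

    gray: single-channel 8-bit image from the capture device.
    Returns up to four corner coordinates in sensor pixels.
    """
    # Separate the (brighter) surface from the working environment.
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    largest = max(contours, key=cv2.contourArea)
    # Approximate the contour with a polygon; a wedge face should yield
    # four vertices.
    approx = cv2.approxPolyDP(largest, 0.02 * cv2.arcLength(largest, True), True)
    return approx.reshape(-1, 2).astype(float)[:4]
```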
An example of the determination of the dimensions (e.g., length and width) of a surface, such as surface 134 of wedge 136, and its orientation in space is illustrated in the drawings.
An example of the determination of the angle phi (Φ) of a surface, such as surface 134 of wedge 136, from working environment 138 is illustrated in the drawings.
Φ = tan⁻¹(z/L), where
L is the distance from corner c3 130 to corner c4 132 and is equal to the length of vector L 162, discussed above.
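As a small numeric illustration of the formula above (the definition of z was not reproduced in the text; here it is assumed to be the height of the raised edge of surface 134 above working environment 138):

```python
import math

def surface_angle_phi(z: float, length_l: float) -> float:
    """Angle Φ of the surface from the working environment, in degrees.

    length_l is the distance from corner c3 130 to corner c4 132 (the
    length of vector L 162); z is assumed here to be the height of the
    raised edge above the working environment.
    """
    return math.degrees(math.atan2(z, length_l))

# Example: a raised edge 40 mm high over a 120 mm long base gives
# Φ = tan⁻¹(40/120) ≈ 18.4 degrees.
print(round(surface_angle_phi(40.0, 120.0), 1))  # 18.4
```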
An example of transforming from a two-dimensional image provided by display rendering device 164 to a two-dimensional back-projection of a three-dimensional plane of surface 166 positioned in a working environment 168 is shown in the drawings.
For example, given the two-dimensional coordinates of image 12 provided by display rendering device 164, the corresponding back-projected coordinates on the plane of surface 166 may be determined. Next, the coordinates of the capture device (not shown) may be determined.
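Given a homography relating the two coordinate frames, the conversion of image 12 can be sketched as a perspective warp; the use of OpenCV here is an illustrative assumption rather than the source's own method.

```python
import cv2
import numpy as np

def back_project_image(image: np.ndarray, h: np.ndarray,
                       output_size: tuple[int, int]) -> np.ndarray:
    """Warp a flat 2D image onto the back-projection of a tilted plane.

    h: 3x3 homography mapping image coordinates to the display rendering
    device's coordinates for the target surface (e.g., estimated with the
    DLT sketch above). output_size: (width, height) of the rendered frame.
    """
    return cv2.warpPerspective(image, h, output_size)

def back_project_point(h: np.ndarray, x: float, y: float) -> tuple[float, float]:
    """Map a single 2D point through the homography (homogeneous divide)."""
    u, v, w = h @ np.array([x, y, 1.0])
    return (u / w, v / w)
```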
An example of a wedge or puck 194 having a screen 196 on which image 12 may be displayed is shown in the drawings.
As can further be seen in the drawings, screen 196 of wedge or puck 194 provides a surface on which image 12 may be displayed by display rendering device 16.
An example of a method 204 of displaying an image on a surface located in a working environment is shown in the drawings.
As can be seen in the drawings, method 204 includes detecting the surface in the working environment with a capture device, determining the dimensions of the surface, and determining an orientation in space of the surface relative to the capture device. As can also be seen in the drawings, method 204 includes converting the image, based on the determined dimensions and orientation, for display on the surface via a display rendering device, and repeating the conversion when the surface is repositioned to a different orientation.
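Putting the pieces above together, method 204 can be sketched end to end; the helper parameters here are hypothetical stand-ins for the capture, measurement, and conversion steps described above, not elements of the claimed method.

```python
import numpy as np

def display_image_on_surface(image: np.ndarray,
                             surface_corners: np.ndarray,
                             convert,
                             render) -> None:
    """Illustrative outline of method 204.

    surface_corners: 4x3 corner coordinates of the surface reported by
    the capture device. convert(image, dims, corners) plays the role of
    the image conversion device; render(frame) plays the role of the
    display rendering device. Both are injected so this outline stays
    independent of any particular hardware.
    """
    c1, c2, c3, _ = (np.asarray(c, dtype=float) for c in surface_corners)
    dims = (float(np.linalg.norm(c2 - c1)), float(np.linalg.norm(c3 - c2)))
    frame = convert(image, dims, surface_corners)   # fit image to the surface
    render(frame)                                   # display on the surface
```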
Although several examples have been described and illustrated in detail, it is to be clearly understood that the same are intended by way of illustration and example only. These examples are not intended to be exhaustive or to limit the invention to the precise form or to the exemplary embodiments disclosed. Modifications and variations may well be apparent to those of ordinary skill in the art. For example, although two surfaces 50 and 52 have been illustrated in the drawings, it is to be understood that a different number of surfaces may be used with system 10.
Additionally, reference to an element in the singular is not intended to mean one and only one, unless explicitly so stated, but rather means one or more. Moreover, no element or component is intended to be dedicated to the public regardless of whether the element or component is explicitly recited in the following claims.