The present application is related to U.S. patent application Ser. No. 11/485,788 filed Jul. 13, 2006, entitled “Gesture Recognition Interface System,” assigned to the same assignee as the present application and incorporated herein by reference in its entirety.
The present invention relates generally to network collaboration systems, and specifically to a networked gesture collaboration system.
As the range of activities accomplished with a computer increases, new and innovative ways to provide an interface with a computer are often developed to complement the changes in computer functionality and packaging. For example, advancements in computer networking, particularly in the speed at which information can be transferred between networked computers, allow multiple computer users at physically or geographically separate locations to collaborate regarding the same information at substantially the same time. These multiple computer users can communicate with one another via instant messaging, voice, and even video data. However, instant messaging, voice, and video data transfer in a network collaboration often requires large amounts of bandwidth. In addition, while collaboration regarding the same visual data can be accomplished via instant messaging, voice, and video data, such collaboration requires input from several collaborators and repeated reference to the visual data. Thus, instant message, voice, and video transfer can require considerable time and effort in describing the portion of the collaborated visual data to which a user is directing the other users' attention, and presentation can be limited by the amount of space available on a computer monitor when an instant message and/or video window is also present.
Therefore, it is typically very difficult or even impossible to collaborate over a network in a way that accurately simulates a face-to-face conference room setting. Furthermore, despite a network collaboration allowing users to be geographically separated from one another, the individual locations of the users may still be limited to locations that are suitable for network collaboration, such as locations from which a user can connect to the appropriate network.
One embodiment of the present invention includes a collaboration input/output (I/O) system. The collaboration I/O system may comprise a display screen and a gesture image system. The gesture image system may be configured to generate image data associated with a location and an orientation of a first sensorless input object relative to a background surface. The collaboration I/O system may also comprise a transceiver. The transceiver could be configured to transmit the image data to at least one additional collaboration I/O system at at least one remote location. The transceiver can be further configured to receive image data from each of the at least one additional collaboration I/O system, such that the display screen can be configured to display the image data associated with a location and orientation of a sensorless input object associated with each of the at least one additional collaboration I/O system superimposed over a common image of visual data.
Another embodiment of the present invention includes a method for providing collaboration of visual data between a plurality of users, each of the plurality of users being located at a remote location separate from each other. The method comprises transmitting first image data associated with a sensorless input object of a first user of the plurality of users to each of the remaining plurality of users. The first image data can comprise location and orientation data of the sensorless input object of the first user relative to a display screen of the first user. The method may also comprise receiving second image data associated with a sensorless input object of each of the remaining plurality of users. The second image data can comprise location and orientation data of the sensorless input object of each of the remaining plurality of users relative to the visual data. The method can further comprise projecting an image of the visual data and the second image data onto the display screen of the first user. The second image data can be superimposed on the image of the visual data. The image of the visual data can be common to the plurality of users.
Another embodiment of the present invention includes a system for collaborating visual data with a first user and at least one additional user located at a remote location that is separate from the first user. The system can comprise means for generating first image data. The first image data can be associated with a location and an orientation of a sensorless input object of the first user relative to a background surface. The system can also comprise means for transmitting the first image data to the at least one additional user and for receiving second image data. The second image data can be associated with a location and an orientation of a sensorless input object of each of the at least one additional user. The system can also comprise means for combining the second image data with the visual data. The visual data can be common to both the first user and each of the at least one additional user. The system can further comprise means for displaying the second image data and the image of the visual data to the first user, such that the second image data is superimposed over the image of the visual data.
The present invention relates generally to network collaboration systems, and specifically to a networked gesture collaboration system. A plurality of users, each at a location that is physically separate from the others, can collaborate over common visual data via respective collaboration input/output (I/O) systems. Each of the collaboration I/O systems can include a gesture image system that generates image data. The image data can correspond to location and orientation data of a sensorless input object with which the respective user provides gestures regarding the common visual data. As an example, the sensorless input object could merely be the respective user's hand. The location and orientation data of the sensorless input object could be relative to a background surface, which could be a display screen on which the common visual data image appears. The common visual data image could be projected onto the display screen, such that a local user can gesture directly at the visual data on the display screen to generate the image data. The image data can be transmitted to the other collaboration I/O systems, and each collaboration I/O system can likewise receive image data from the other collaboration I/O systems. The received image data is combined with the common visual data image. Therefore, the display screen displays silhouette images of all users' sensorless input object gestures superimposed over the common visual data image, thus allowing fast visual collaboration over the common visual data.
Each of the collaboration I/O systems 12 can be communicatively coupled to a satellite 14, a base station 16, and/or a wired network 18. In the example of
In the example of
It is to be understood that each of the collaboration I/O systems 12 may not be communicatively coupled to all three of the satellite 14, the base station 16, and the wired network 18. As an example, a given one of the collaboration I/O systems 12 may only have a satellite transceiver for communication with the satellite 14, or may have both a wireless transceiver and an RJ-45 jack for communication with both the base station 16 and the wired network 18. However, in the example of
The information sent from one of the collaboration I/O systems 12 to one or more of the other collaboration I/O systems 12 can include visual data, such as the visual data regarding which the respective users of each of the collaboration I/O systems 12 are collaborating. For example, a given one of the collaboration I/O systems 12 can be a master site, such that it transmits the visual data to each of the other collaboration I/O systems 12 for collaboration. Additionally, the information transmitted between the collaboration I/O systems 12 can include feedback and references to the visual data. For example, a user of one of the collaboration I/O systems 12 can point to a certain portion of the visual data with a sensorless input object, such that image data associated with the sensorless input object can be generated and transmitted from the respective one of the collaboration I/O systems 12 to the other collaboration I/O systems 12 for the purpose of provoking discussion amongst the users. In addition, the image data can be combined with voice data, such that a face-to-face conference room setting can be simulated.
In the example of
In the following discussion, it is to be assumed that the gesture collaboration display 50 is located at Collaboration I/O System 1. In the example of
The silhouette images can be semi-transparent, such that a given user can see both the silhouette image and the portion of the visual data 52 over which the silhouette image is superimposed. This is demonstrated in the example of
Because the image data transmitted between the users 54, 56, 58, and 60 is merely a set of pixels that is representative of the given sensorless input object, transmission bandwidth can be significantly reduced. As an example, the positional relationship between the visual data 52 and the given sensorless input object can be digitally compressed, for example, using a run-length encoding algorithm. Thus, only a small amount of data is transmitted, such that it can be combined with the visual data 52 at the respective gesture collaboration display. Therefore, transmission delays can be greatly reduced, allowing for a more accurate and efficient collaboration between the users 54, 56, 58, and 60.
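As a non-limiting illustration of such a compression (the Python code below is merely a sketch; the array shapes, function names, and frame dimensions are assumptions rather than features of the system described above), a binary silhouette frame can be run-length encoded so that a mostly uniform image collapses to a handful of runs per row:

```python
import numpy as np

def rle_encode_row(row):
    """Run-length encode one binary silhouette row as (value, count) pairs."""
    runs = []
    # Indices where the pixel value changes mark run boundaries.
    change_points = np.flatnonzero(np.diff(row)) + 1
    starts = np.concatenate(([0], change_points))
    ends = np.concatenate((change_points, [len(row)]))
    for start, end in zip(starts, ends):
        runs.append((int(row[start]), int(end - start)))
    return runs

def rle_encode_frame(silhouette):
    """Encode a 2-D binary silhouette (1 = input object, 0 = background)."""
    return [rle_encode_row(row) for row in silhouette]

# A mostly empty frame compresses to a handful of runs per row.
frame = np.zeros((480, 640), dtype=np.uint8)
frame[200:240, 300:320] = 1  # hypothetical fingertip silhouette
encoded = rle_encode_frame(frame)
print(sum(len(r) for r in encoded), "runs instead of", frame.size, "pixels")
```

Because the silhouette is binary and the background dominates the frame, the encoded size grows with the complexity of the silhouette outline rather than with the frame resolution.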
As described above, the image data that is transmitted between the users 54, 56, 58, and 60 can be combined with voice data. As such, the users 54, 56, 58, and 60 can collaborate from separate geographical locations more efficiently and effectively. For example, the user 54 can verbally communicate the importance of the armored unit 62 while simultaneously pointing to it. Therefore, because the user 54 gestures directly at the point of interest on the visual data 52, the users 56, 58, and 60 instantaneously and unequivocally know to what the user 54 is directing discussion. Accordingly, the user 54 need not waste time explaining to what objects on the visual data the user 54 is referring, or expend unnecessary bandwidth in transmitting video data through a web camera as a way of referring to objects of interest regarding the visual data. Therefore, the users 54, 56, 58, and 60 can collaborate in such a way as to simulate an actual face-to-face conference room setting.
Further to allowing a collaborative effort between the geographically separate users, one or more of the users 54, 56, 58, and 60 may be able to employ gestures to manipulate the visual data displayed on the gesture collaboration display 50. For example, the collaboration I/O system associated with the user 54 could be a gesture collaboration/interface system. The gesture collaboration/interface system may be able to interpret hand gestures made by the user 54 and translate the hand gestures into device inputs. The device inputs could include, for example, simulated mouse inputs. As an example, the user 54 could gesture with a pinching motion of the thumb and forefinger over the armored unit 62. The gesture collaboration/interface system could interpret the gesture and translate it to a left mouse-click, thus allowing the user 54 to “click and drag” the armored unit 62 across the gesture collaboration display 50. It is to be understood that a given gesture collaboration/interface system can be programmed to implement inputs such as zooming, panning, rotating, or any of a variety of gestures and corresponding simulated device inputs in such a manner. The other users 56, 58, and 60 may be able to view the simulated input as it occurs from their respective collaboration I/O systems, and could also have similar visual data interaction capability. The operation of a gesture collaboration/interface system will be described in greater detail below in the example of
It is to be understood that the gesture collaboration display 50 is not intended to be limited to the example of
In the example of
The collaboration I/O system 100 can include a projector 112. The projector 112 can provide an output interface to provide visual data, such as, for example, computer monitor data, with which the user can interact and provide collaborative gestures. The retroreflective screen 108 can thus be the display screen on which the visual data is projected. Therefore, the sensorless input object 110 can be used to provide gestures regarding the visual data directly on the visual data itself as it is being displayed on the retroreflective screen 108. Because the IR light source 104 does not emit visible light, the IR illumination does not interfere with the visual data projected from the projector 112.
The gesture image system 102 includes a controller 114. The controller 114 receives the image of the sensorless input object 110 captured by the camera 106 and converts the image into the image data. The image data can be a two-dimensional digital representation of the positional relationship between the sensorless input object 110 and the retroreflective screen 108. The positional relationship can include, for example, information regarding a location and orientation of the sensorless input object 110 relative to the retroreflective screen 108. As an example, the controller 114 can perform a run-length encoding digital compression algorithm on the image data, such that the amount of data of the image data is reduced.
The controller 114 is coupled to a transceiver 116, which can receive the image data from the controller 114 and transmit the image data to other collaboration I/O systems via an antenna 118. In addition to transmitting the image data to other collaboration I/O systems, the transceiver 116 can also receive image data from other collaboration I/O systems. The image data received can include digital data representative of a positional relationship of one or more other users' sensorless input objects relative to the same visual data. The collaboration I/O system 100 also includes an image combiner 117. The image combiner 117 is configured to combine the image data from the other collaboration I/O systems with the visual data. Thus, the projector 112 can project both the visual data and the image data of the other collaboration I/O systems, such that the image data is superimposed over the visual data. In addition, in a given networked collaboration system, one of the collaboration I/O systems may be a master site, such that the master site also transmits the common visual data to all of the other collaboration I/O systems. Alternatively, each of the collaboration I/O systems could separately launch or display the visual data.
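As a non-limiting sketch of how an image combiner such as the image combiner 117 might superimpose received silhouette data over the common visual data (the array layout, per-user colors, and transparency value below are illustrative assumptions, not a description of the actual implementation), a decoded silhouette mask can be alpha-blended onto the visual data frame:

```python
import numpy as np

def superimpose_silhouette(visual_frame, silhouette_mask, user_color, alpha=0.5):
    """Blend a semi-transparent, color-coded silhouette over the visual data.

    visual_frame    -- H x W x 3 uint8 image of the common visual data
    silhouette_mask -- H x W boolean mask decoded from received image data
    user_color      -- (R, G, B) tint identifying the remote user
    alpha           -- transparency so the underlying visual data stays visible
    """
    output = visual_frame.astype(np.float32)
    tint = np.array(user_color, dtype=np.float32)
    # Blend only where the remote user's sensorless input object appears.
    output[silhouette_mask] = (1.0 - alpha) * output[silhouette_mask] + alpha * tint
    return output.astype(np.uint8)

# Example: overlay two remote users' silhouettes in different colors.
visual = np.full((480, 640, 3), 255, dtype=np.uint8)
mask_user2 = np.zeros((480, 640), dtype=bool)
mask_user2[100:150, 200:210] = True
mask_user3 = np.zeros((480, 640), dtype=bool)
mask_user3[300:340, 400:415] = True
combined = superimpose_silhouette(visual, mask_user2, (255, 0, 0))
combined = superimpose_silhouette(combined, mask_user3, (0, 0, 255))
```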
To ensure that the image data accurately represents the positional relationship of the sensorless input object 110 relative to the retroreflective screen 108, and thus relative to the visual data itself, the gesture image system 102 can be calibrated. In the example of
It is to be understood that the collaboration I/O system 100 is not intended to be limited to the example of
The automated calibration pattern 120 includes a plurality of black dots 122 and a non-continuous border 124. The automated calibration pattern 120 can be, for example, printed on a retroreflective surface that can be placed underneath and illuminated by the IR light source 104. As another example, the retroreflective screen 108 in the example of
An example of an automated calibration procedure employing the automated calibration pattern 120 follows. The non-continuous border 124 includes a gap 126, a gap 128, and a gap 130. A user places the automated calibration pattern 120 in the viewing area of the camera 106 such that the automated calibration pattern 120 is oriented in a specific top-wise and left-wise arrangement. For example, the longer side of the non-continuous border 124 with the gap 126 can be designated a top side, as indicated in the example of
Upon setting the projection boundary of the projector 112 with the non-continuous border 124, the controller 114 can then begin a calibration operation. Upon placing the automated calibration pattern 120 in view of the camera 106 and the IR light source 104, the automated calibration unit 119 could be programmed to simply begin a calibration operation after a given amount of time has passed without the detection of any motion. Alternatively, the automated calibration unit 119 could receive an input from a user to begin a calibration operation. The automated calibration unit 119 calibrates by detecting the positions of the black dots 122 via the camera 106 relative to the boundaries of the retroreflective screen 108 or the projection boundaries. For example, the black dots 122 can be sized to be approximately the size of a fingertip in diameter (e.g., ½″), and can thus be tuned by the automated calibration unit 119 to be detected. Upon completing the calibration, the controller 114 could be configured to emit an audible signal to signify that the calibration operation has been completed, such that the user can return the retroreflective screen 108 to the field of view of the camera 106, such as by turning it over, as described above.
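One possible way to complete such a calibration, sketched below under assumptions that are not stated above (the dot coordinates and helper names are hypothetical), is to pair each detected dot centroid in camera coordinates with its known location on the automated calibration pattern 120 and fit a projective mapping by least squares:

```python
import numpy as np

def fit_homography(camera_pts, pattern_pts):
    """Solve for a 3x3 projective mapping from camera pixels to pattern coordinates.

    camera_pts  -- N x 2 detected dot centroids in the camera image
    pattern_pts -- N x 2 known dot locations on the calibration pattern
    """
    rows = []
    for (x, y), (u, v) in zip(camera_pts, pattern_pts):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    A = np.asarray(rows, dtype=np.float64)
    # The homography is the direction of the smallest right singular vector of A.
    _, _, vt = np.linalg.svd(A)
    return vt[-1].reshape(3, 3)

def camera_to_pattern(H, x, y):
    """Map a camera pixel to pattern coordinates with the fitted homography."""
    u, v, w = H @ np.array([x, y, 1.0])
    return u / w, v / w

# Hypothetical correspondences for four detected dots.
camera_pts = [(102.3, 88.1), (540.7, 90.4), (545.2, 410.9), (98.6, 405.3)]
pattern_pts = [(0.0, 0.0), (10.0, 0.0), (10.0, 7.5), (0.0, 7.5)]
H = fit_homography(camera_pts, pattern_pts)
```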
It is to be understood that neither the automated calibration pattern 120 nor the manner in which a given gesture collaboration/interface system is calibrated is intended to be limited by the example of
The collaboration I/O system 150 can include a projector 162. The projector 162 can provide an output interface to provide visual data, such as, for example, computer monitor data, with which the user can interact and provide collaborative gestures. The retroreflective screen 158 can thus be the display screen on which the visual data is projected. Therefore, the sensorless input object 160 can be used to provide gestures regarding the visual data directly on the visual data itself as it is being displayed on the retroreflective screen 158. Because the IR light source 154 does not emit visible light, the IR illumination does not interfere with the visual data projected from the projector 162.
The gesture image system 152 includes a controller 164. The controller 164 receives the image of the sensorless input object 160 captured by the camera 156 and converts the image into the image data. The image data can be a two-dimensional digital representation of the positional relationship between the sensorless input object 160 and the retroreflective screen 158. The positional relationship can include, for example, information regarding a location and orientation of the sensorless input object 160 relative to the retroreflective screen 158. As an example, the controller 164 can perform a run-length encoding digital compression algorithm on the image data, such that the amount of data of the image data is reduced.
The controller 164 is coupled to a transceiver 166, which can receive the image data from the controller 164 and transmit the image data to other collaboration I/O systems via an antenna 168. In addition to transmitting the image data to other collaboration I/O systems, the transceiver 166 can also receive image data from other collaboration I/O systems, such as digital data representative of a positional relationship of one or more other users' sensorless input objects relative to the same visual data. The collaboration I/O system 150 also includes an image combiner 167. The image combiner 167 is configured to combine the image data from the other collaboration I/O systems with the visual data. Thus, the projector 162 can project both the visual data and the image data of the other collaboration I/O systems, such that the image data is superimposed over the visual data. In addition, in a given networked collaboration system, one of the collaboration I/O systems may be a master site, such that the master site also transmits the common visual data to all of the other collaboration I/O systems.
The collaboration I/O system 150 also includes a beamsplitter 170. The beamsplitter 170, in the example of
It is to be understood that the collaboration I/O system 150 is not intended to be limited to the example of
In addition to providing gestures for the purpose of collaboration with other users at geographically separate locations, gestures can also be used to provide device inputs in a collaborative environment.
The first IR light source 206 and the second IR light source 208 each illuminate a retroreflective screen 210, such that IR light from the first IR light source 206 is reflected substantially directly back to the first camera 202 and IR light from the second IR light source 208 is reflected substantially directly back to the second camera 204. Accordingly, an object that is placed above the retroreflective screen 210 may reflect a significantly lesser amount of IR light back to each of the first camera 202 and the second camera 204, respectively. Therefore, such an object can appear to each of the first camera 202 and the second camera 204 as a silhouette image, such that it can appear as a substantially darker object in the foreground of a highly illuminated background surface. It is to be understood that the retroreflective screen 210 may not be completely retroreflective, but may include a Lambertian factor to facilitate viewing by users at various angles relative to the retroreflective screen 210.
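As a non-limiting sketch of the silhouette extraction that this arrangement makes possible (the threshold value and frame format below are assumptions), pixels darker than a brightness threshold in an IR camera frame can be treated as the foreground silhouette against the brightly reflecting retroreflective screen 210:

```python
import numpy as np

def extract_silhouette(ir_frame, threshold=128):
    """Return a boolean mask of foreground (dark) pixels in an IR camera frame.

    The retroreflective background appears bright to the camera, so pixels
    darker than the threshold are taken to belong to the sensorless input
    object held between the IR light source and the screen.
    """
    return ir_frame < threshold

# Hypothetical 8-bit grayscale frame: bright background, darker hand region.
ir_frame = np.full((480, 640), 230, dtype=np.uint8)
ir_frame[180:300, 280:340] = 40
silhouette = extract_silhouette(ir_frame)
```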
A sensorless input object 212 can provide gesture inputs over the retroreflective screen 210. In the example of
In the example of
The first camera 202 and the second camera 204 can each provide their respective separate silhouette images of the sensorless input object 212 to a controller 214. The controller 214 could reside, for example, within a computer (not shown) for which the gesture collaboration/interface system 200 is designed to provide a gesture collaboration/interface. It is to be understood, however, that the controller 214 is not limited to being hosted in a standalone computer, but could instead be included in an embedded processor. The controller 214 can process the respective silhouette images associated with the sensorless input object 212 to generate three-dimensional location data associated with the sensorless input object 212.
For example, each of the first camera 202 and the second camera 204 could be mounted at a pre-determined angle relative to the retroreflective screen 210. For a given matched pair of images of the sensorless input object 212, if the pre-determined angle of each of the cameras 202 and 204 is equal, then each point of the sensorless input object 212 in two-dimensional space in a given image from the camera 202 is equidistant from a corresponding point of the sensorless input object 212 in the respective matched image from the camera 204. As such, the controller 214 could determine the three-dimensional physical location of the sensorless input object 212 based on a relative parallax separation of the matched pair of images of the sensorless input object 212 at a given time.
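As a rough, non-limiting illustration under simplified assumptions that are not specified above (a parallel, rectified camera pair with a known baseline and focal length, rather than the angled mounting just described), the parallax separation of a matched point can be converted into a distance from the camera pair:

```python
def distance_from_disparity(x_cam1, x_cam2, baseline_cm, focal_px):
    """Estimate the distance of a matched silhouette point from the camera pair.

    x_cam1, x_cam2 -- horizontal pixel coordinate of the same point in the
                      first and second camera images (a matched pair)
    baseline_cm    -- physical separation between the two cameras
    focal_px       -- camera focal length expressed in pixels

    With symmetric mounting, the distance is inversely proportional to the
    parallax separation of the matched points.
    """
    disparity = abs(x_cam1 - x_cam2)
    if disparity == 0:
        raise ValueError("zero disparity: point is effectively at infinity")
    return baseline_cm * focal_px / disparity

# Hypothetical matched fingertip columns in a 640-pixel-wide image pair.
distance_cm = distance_from_disparity(x_cam1=322.0, x_cam2=298.0,
                                      baseline_cm=30.0, focal_px=600.0)
```

A height above the retroreflective screen 210 could then be estimated by subtracting this distance from the known camera-to-screen distance.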
In addition, using a computer algorithm, the controller 214 could also determine the three-dimensional physical location of at least one end-point, such as a fingertip, associated with the sensorless input object 212, as will be described in greater detail in the example of
The gesture collaboration/interface system 200 can also include a projector 218. The projector 218 can provide an output interface, such as, for example, visual data, with which the user can interact and provide inputs. The visual data can be visual data regarding which other users in geographically separate locations can collaborate from respective collaboration I/O systems, or from other gesture collaboration/interface systems. In the example of
As an example, the controller 214 could interpret two-dimensional motion of an end-point of the sensorless input object 212 across the retroreflective screen 210 as a mouse cursor, which can be projected as part of the visual data by the projector 218. Furthermore, as another example, by determining the three-dimensional physical location of the end-point of the sensorless input object 212, the controller 214 could interpret a touch of the retroreflective screen 210 by the end-point of the sensorless input object 212 as a left mouse-click. Accordingly, a user of the gesture collaboration/interface system 200 could navigate through a number of computer menus associated with a computer merely by moving his or her fingertip through the air above the retroreflective screen 210 and by touching icons projected onto the retroreflective screen 210, and can move displayed objects on the retroreflective screen 210 in a similar manner.
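As a non-limiting sketch of such an interpretation (the touch threshold, event names, and class structure below are purely illustrative), fingertip positions can be reported as cursor motion, with a drop of the fingertip height below a small contact threshold reported as a left mouse-button press and the subsequent lift reported as a release:

```python
class GestureToMouse:
    """Translate fingertip locations into simulated mouse events (illustrative)."""

    def __init__(self, touch_threshold_cm=1.0):
        self.touch_threshold_cm = touch_threshold_cm
        self.touching = False

    def update(self, x, y, height_cm):
        """Process one frame's fingertip position; return a list of events.

        x, y      -- fingertip location on the screen plane (cursor position)
        height_cm -- estimated fingertip height above the retroreflective screen
        """
        events = [("move", x, y)]
        touching_now = height_cm < self.touch_threshold_cm
        if touching_now and not self.touching:
            events.append(("left_down", x, y))   # fingertip touched the screen
        elif not touching_now and self.touching:
            events.append(("left_up", x, y))     # fingertip lifted off
        self.touching = touching_now
        return events

mouse = GestureToMouse()
for frame in [(120, 200, 5.0), (122, 201, 0.4), (140, 230, 0.3), (141, 231, 6.0)]:
    print(mouse.update(*frame))
```

In this sketch, the second and third frames together would amount to a simulated “click and drag” across the screen, in the spirit of the armored unit example described above.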
The controller 214 is coupled to a transceiver 220, which can receive the image data from the controller 214 and transmit the image data to other collaboration I/O systems and/or other gesture collaboration/interface systems via an antenna 222. In addition to transmitting the image data to other collaboration I/O systems, the transceiver 220 can also receive image data from other collaboration I/O systems and/or other gesture collaboration/interface systems. The image data received can include digital data representative of a positional relationship of one or more other users' sensorless input objects relative to the same visual data. The image data from the other collaboration I/O systems can be combined with the visual data. Thus, the projector 218 can project both the visual data and the image data of the other collaboration I/O systems, such that the image data is superimposed over the visual data.
In addition, in a given networked collaboration system, one of the collaboration I/O systems and/or other gesture collaboration/interface systems may be a master site, such that the master site also transmits the common visual data to all of the other collaboration I/O systems and/or other gesture collaboration/interface systems. Alternatively, each of the collaboration I/O systems and/or other gesture collaboration/interface systems could separately launch or display the visual data. Furthermore, a gesture collaboration system that includes more than one gesture collaboration/interface system can be configured such that any of the gesture collaboration/interface systems can provide device inputs regarding the visual data at a given time. Alternatively, the gesture collaboration system can be configured to allow device inputs regarding the visual data from only one gesture collaboration/interface system at a time.
As will be apparent in the following discussion, the gesture collaboration/interface system 200 in the example of
The portable collaboration I/O system 250 can be collapsible, such that it can fit in a carrying case or briefcase. In addition, the portable collaboration I/O system 250 can be configured as a self-contained, standalone unit. For example, the swivel arm 256 can also include a projector, such that collaborative visual data can be projected onto the retroreflective screen 252. The portable collaboration I/O system 250 can include an integral transceiver, such as a Wi-Fi connection, or it can include a receptacle for a plug-in device with communication capability, such as a cellular phone. As an alternative, the portable collaboration I/O system 250 can connect to a personal computer, such as through a USB or other connection, such that the user merely provides the collaboration gestures over the retroreflective screen while the visual data, including the user's own image data, appears on the computer monitor.
The portable collaboration I/O system 300 can be collapsible, such that it can fit in a carrying case. As such, the portable collaboration I/O system 300 can be configured as a self-contained, standalone unit. For example, the portable collaboration I/O system 300 can be transported to remote locations to allow collaboration in an environment that is not well suited for the use of a portable computer, such as in a jungle, at sea, or on a mountain.
It is to be understood that the portable collaboration I/O systems described in the examples of
In view of the foregoing structural and functional features described above, a methodology in accordance with various aspects of the present invention will be better appreciated with reference to
At 356, image data associated with the other users of the plurality of users is received. The image data associated with the other users of the plurality of users can be data associated with a location and orientation of sensorless input objects relative to the visual data. The image data from each respective other user can be color coded to correspond to that specific other user. At 358, the visual data and the image data of the other users are combined. The combination could be such that the image data of the other users is superimposed over the visual data. At 360, the visual data and the image data of the other users are displayed. The display could be a projection onto the display screen from a projector.
What have been described above are examples of the present invention. It is, of course, not possible to describe every conceivable combination of components or methodologies for purposes of describing the present invention, but one of ordinary skill in the art will recognize that many further combinations and permutations of the present invention are possible. Accordingly, the present invention is intended to embrace all such alterations, modifications and variations that fall within the spirit and scope of the appended claims.