1. Technical Field
This invention is directed toward an integrated omni-directional camera and microphone array. More specifically, this invention is directed toward an integrated omni-directional camera and microphone array that can be used for teleconferencing and meeting recording.
2. Background Art
Video conferencing systems have had limited commercial success. This is due to many factors. In particular, there are typically numerous technical deficiencies in these systems. Poor camera viewpoints and insufficient image resolution make it difficult for meeting participants to see the person speaking. This is compounded by inaccurate speaker detection (especially for systems with pan-tilt-zoom cameras) that causes the camera not to be directed at the person speaking. Additionally, poor video compression techniques often result in poor video image quality and “choppy” image display.
Systems used for teleconferencing tend to capture a few major sources of data that are valuable for videoconferencing and meeting viewing: video data, audio data, and electronic documents or presentations shown on a computer monitor. Given that numerous software solutions already exist to share documents and presentations, improved capture of audio and video data is of special interest.
Three different methods exist to capture video data: pan/tilt/zoom (PTZ) cameras, mirror-based omni-directional cameras, and camera arrays. While PTZ cameras are currently the most popular choice, they have two major limitations. First, they can capture only a limited field of view: if they zoom in too closely, the context of the meeting room is lost; if they zoom out too far, people's expressions become too small to see. Second, because the controlling motor takes time to move the camera, the camera's response to the meeting (e.g., switching between speakers) is slow. Indeed, a PTZ camera cannot move too often or too fast, or the people watching the meeting become distracted.
Given these drawbacks and recent technological advances in mirror/prism-based omni-directional vision sensors, researchers have started to rethink the way video is captured and analyzed. For example, BeHere Corporation provides 360° Internet video technology for entertainment, news and sports webcasts. With its interface, remote users can control personalized 360° camera angles, independent of other viewers, to gain a “be here” experience. While this approach overcomes the two difficulties of limited field of view and slow camera response faced by PTZ cameras, these types of devices tend to be too expensive to build given today's technology and market demand. In addition, these mirror/prism-based omni-directional cameras suffer from low resolution (even with 1 megapixel sensors) and defocusing problems, which result in inferior video quality.
In another approach, multiple inexpensive cameras or video sensors are assembled to form an omni-directional camera array. For example, one known system employs four National Television System Committee (NTSC) cameras to construct a panoramic view of a meeting room. However, there are disadvantages with this design. First, NTSC cameras provide a relatively low quality video signal. In addition, the four cameras require four video capture boards to digitize the signal before it can be analyzed, transmitted or recorded. The requirement for four video capturing boards increases the cost and complexity of such a system, and makes it more difficult to manufacture and maintain.
Besides the problems noted with video capture, capturing high-quality audio in a meeting room is also challenging. The audio capturing system needs to remove a variety of noises and reverberation, and it must adjust the gain for different input signal levels. In general, there are three approaches to addressing these requirements. The simplest approach is to use close-up microphones (e.g., via a headset), but this is cumbersome and intrusive for the user/speaker. A second approach is to place a microphone on the meeting room table; keeping the microphone close to the table surface reduces multi-path interference from table reflections, and this is currently the most common approach to recording meeting audio. Such systems use several (usually three) hypercardioid microphones to provide omni-directional characteristics. A third approach, used in one desktop teleconferencing system, mounts a unidirectional microphone on top of a PTZ camera that points at the speaker. The camera/microphone group is controlled by a computer that uses a separate group of microphones to perform sound source localization. This approach, however, requires two separate sets of microphones.
The present invention is directed towards a system and process that overcomes the aforementioned limitations in videoconferencing and meeting recording systems. Specifically, the present system and method employs an integrated omni-directional camera and microphone array to accomplish this task.
In the most general sense, the invention consists of a cylindrical rod that is thin enough to be acoustically invisible for the frequency range of human speech (about 50-4000 Hz) and that connects a camera array to a microphone array. As a result, sound diffraction and shadowing by the rod are essentially eliminated.
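To make the “acoustically invisible” condition concrete, the following minimal sketch computes the wavelengths bounding the stated speech band; the rod diameter mentioned in the comment is a hypothetical value for illustration, as the text does not specify one.

```python
# Acoustic wavelengths bounding the 50-4000 Hz speech band.
SPEED_OF_SOUND = 343.0  # m/s at roughly 20 degrees C

def wavelength_m(freq_hz: float) -> float:
    """Wavelength in meters of a sound wave at the given frequency."""
    return SPEED_OF_SOUND / freq_hz

for f in (50.0, 4000.0):
    print(f"{f:6.0f} Hz -> {wavelength_m(f):.4f} m")
# 50 Hz -> ~6.86 m; 4000 Hz -> ~0.0858 m (8.6 cm).
# A rod on the order of 1 cm in diameter (a hypothetical value) is far
# smaller than the shortest wavelength of interest, so it scatters
# negligible acoustic energy toward the microphones.
```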
The integrated camera and microphone array employs a 360-degree camera designed to solve each of the aforementioned problems with video conferencing. The 360-degree camera can be positioned in the center of a conference table, which gives a superior camera viewpoint of the participants compared to a typical video conferencing system (in which the camera is at one end of the room). The camera is elevated from the table to provide a near frontal viewpoint of the meeting participants. Additionally, the integrated camera and microphone array provides sufficient resolution for a remote viewer to see facial expressions from meeting participants (e.g., in one working embodiment it has a resolution of 3000×480). The camera can be of any omni-directional type, either employing a camera array or a single video sensor with a hyperbolic mirror.
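As a rough check on the sufficiency of that resolution, the following arithmetic estimates how much of a participant's face one panorama pixel covers; the 1.5 m camera-to-participant distance is a hypothetical figure, not one from the text.

```python
import math

PANORAMA_WIDTH_PX = 3000   # width of the cited 3000x480 panorama
DISTANCE_M = 1.5           # hypothetical camera-to-face distance

deg_per_px = 360.0 / PANORAMA_WIDTH_PX
mm_per_px = DISTANCE_M * math.radians(deg_per_px) * 1000.0

print(f"{deg_per_px:.2f} degrees per pixel")  # 0.12
print(f"{mm_per_px:.1f} mm per pixel")        # ~3.1 mm
# At ~3 mm per pixel, a face roughly 20 cm tall spans more than 60
# pixels, enough for a remote viewer to read expressions.
```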
The microphone array is in a planar configuration. The microphones are preferably mounted in a microphone array base, so as to be located as close to the desktop as possible to eliminate sound reflections from the table. As mentioned previously, the camera is connected to the microphone array base with a thin cylindrical rod, which is acoustically invisible to the microphone array for the frequency range of the human voice (i.e., about 50-4000 Hz). This provides a direct path from the person talking, to all of the microphones in the array, making it superior for sound source localization (determining the location of the speaker) and beam-forming (improving the sound quality of the speaker by filtering out sound not coming from the direction of the speaker). The integrated microphone array is used to perform real-time sound source localization, and the camera array is used with computer vision based human detection and tracking to accurately detect where speakers are located in the image. The audio and video based speaker detection can be used for automatic camera management, as well as greatly improved video compression (e.g., by using more bits on facial regions than the background).
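The text does not name a particular localization algorithm, so the sketch below uses GCC-PHAT, a standard technique for estimating the time difference of arrival between one pair of microphones, followed by a far-field bearing estimate. The sample rate is an assumed value, and a full system would combine estimates from several pairs of the array.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s
SAMPLE_RATE = 44100     # Hz; assumed, the text does not fix a rate

def gcc_phat_delay(sig_a: np.ndarray, sig_b: np.ndarray) -> float:
    """Estimate the relative time delay (seconds) between two
    microphone signals with a PHAT-weighted cross-correlation."""
    n = 2 * max(len(sig_a), len(sig_b))
    cross = np.fft.rfft(sig_a, n) * np.conj(np.fft.rfft(sig_b, n))
    cross /= np.abs(cross) + 1e-12                    # phase transform
    cc = np.fft.irfft(cross, n)
    cc = np.concatenate((cc[-n // 2:], cc[:n // 2]))  # center zero lag
    return (int(np.argmax(np.abs(cc))) - n // 2) / SAMPLE_RATE

def pair_bearing_deg(delay_s: float, spacing_m: float) -> float:
    """Far-field source bearing (degrees) relative to the axis
    through a microphone pair separated by spacing_m."""
    cos_theta = np.clip(delay_s * SPEED_OF_SOUND / spacing_m, -1.0, 1.0)
    return float(np.degrees(np.arccos(cos_theta)))
```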
The output of the integrated camera and microphone array is preferably connected to a personal computer (PC), where such applications as image stitching and compression, sound source localization, beam-forming, and camera management may take place.
One working embodiment of the integrated camera and microphone array uses an IEEE 1394 bus to transfer video to the PC, and analog cables to transfer audio. Five IEEE 1394 cameras, which provide superior video quality and require only a single 1394 card, are employed in this embodiment. An alternate embodiment uses a single Printed Circuit Board (PCB) for all cameras and microphones, so that all audio and video is transmitted over a single 1394 cable. The 1394 cable also provides power, so only a single cable is needed between the camera and the PC.
The microphones used can be either omni-directional or unidirectional, though omni-directional microphones are preferred, as they give a uniform response for all sound angles of interest. The minimum number of microphones needed is three, though a preferred embodiment of the invention uses eight for increased sound source localization accuracy, better beam-forming and robustness of the whole audio system. The microphones are preferably equilaterally disposed around the circumference of a round, planar microphone base, although other configurations are also possible. The more microphones that are used, the better the omni-directional audio coverage and signal to noise ratio; however, additional microphones add cost and make processing of the audio signals more complex. To reduce table noise, the microphones may be mounted in rubber casings, and sound insulation may be placed below them.
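For illustration, a minimal sketch of the preferred geometry: eight microphones equally spaced around a circle. The base radius is a hypothetical value, since the text does not give dimensions.

```python
import math

def mic_positions(num_mics: int = 8, radius_m: float = 0.09):
    """(x, y) positions of num_mics microphones equally spaced on a
    circle of radius radius_m (a hypothetical base radius)."""
    step = 2.0 * math.pi / num_mics
    return [(radius_m * math.cos(k * step), radius_m * math.sin(k * step))
            for k in range(num_mics)]

for k, (x, y) in enumerate(mic_positions()):
    print(f"mic {k}: ({x:+.3f} m, {y:+.3f} m)")  # 45-degree spacing
```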
The camera may employ a lens shield, which is up in normal operating mode and down in privacy mode. Alternately, the shutter for the camera sensors can be turned off, or the camera can be electronically isolated, to disable the camera while in privacy mode. The microphones are also preferably turned off when privacy mode is invoked. During recording, a light on the camera is on to let users know the camera is active. When the camera is in privacy mode, the light is turned off.
Various alternate embodiments of the integrated omni-directional camera and microphone design are possible. This is in part due to the modular nature of the system. For instance, in one embodiment an omni-directional camera is used that employs multiple video sensors to achieve 360 degree camera coverage. Alternately, in another embodiment of the invention, an omni-directional camera is used that employs one video sensor and a hyperbolic lens that captures light from 360 degrees to achieve panoramic coverage. Furthermore, either of these camera setups may be used by itself, elevated on the acoustically transparent cylindrical rod, to provide a frontal view of the meeting participants, or it can be integrated with the aforementioned microphone array. Alternately, other camera designs could also be used in conjunction with the cylindrical rod. The rod connecting the camera and microphone array also need not be cylindrical, as long as it is thin enough not to diffract sound in the 50-4000 Hz range.
Likewise, as discussed previously, in one embodiment the microphone array consists of microphones disposed at equal intervals around the circumference of a circle and as near to a table surface as possible, to achieve a clear path to any speaker in the room with minimum reflection of sound off the table. However, other microphone configurations are possible that can be integrated with an omni-directional camera setup using the acoustically transparent rod. Additionally, the omni-directional microphone array just discussed can be used without any camera to achieve optimum 360 degree sound coverage. This coverage is especially useful in sound source localization and beam-forming, as multi-path problems are minimized or eliminated.
One embodiment employing the camera and microphone array of the invention uses a computer to optimize the image data and audio signals. The digital image output of the camera and the audio output of the microphone array (via an analog to digital converter) is routed into a computer. The computer performs various functions to enhance and utilize the image and audio input. For instance, a panoramic image filter stitches together images that are taken by various sensors in the omni-directional camera. Additionally, the image data can be compressed to make it more compatible for broadcast over a network (such as the Internet) or saved to a computer readable medium, preferably via a splitter that splits the video and audio output to be transmitted and/or recorded. Optionally, the image data can also be input into a person detector/tracker to improve camera management. For instance, the portions of the image/video containing the speaker can be identified, and associated with the audio signal, such that the camera view shown in the videoconference can be directed towards the speaker when they speak. Additionally, speaker location can be used to improve video compression by allowing greater resolution for facial regions than background.
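The stitching step can be illustrated with a deliberately naive sketch: pre-rectified, equal-height color frames from adjacent sensors are concatenated left to right, with a linear blend over any overlapping columns. A production stitcher would also correct lens distortion and exposure differences, which this sketch omits.

```python
import numpy as np

def stitch_panorama(frames: list, overlap_px: int = 0) -> np.ndarray:
    """Concatenate pre-rectified, equal-height HxWx3 frames from
    adjacent sensors left to right, linearly blending overlapping
    columns. Distortion and exposure correction are omitted."""
    pano = frames[0].astype(np.float32)
    for frame in frames[1:]:
        nxt = frame.astype(np.float32)
        if overlap_px > 0:
            # Ramp from the left frame to the right frame over the seam.
            alpha = np.linspace(0.0, 1.0, overlap_px)[None, :, None]
            seam = pano[:, -overlap_px:] * (1.0 - alpha) + nxt[:, :overlap_px] * alpha
            pano = np.concatenate(
                [pano[:, :-overlap_px], seam, nxt[:, overlap_px:]], axis=1)
        else:
            pano = np.concatenate([pano, nxt], axis=1)
    return np.clip(pano, 0, 255).astype(np.uint8)
```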
The audio input can also be used for various purposes. For instance, the audio can be used for sound source localization, so that the audio can be optimized for the speaker's direction at any given time. Additionally, a beam-forming module can be used in the computer to improve the beam shape of the audio, thereby further improving the filtering of audio from a given direction. A noise reduction and automatic gain control module can also be used to improve the signal to noise ratio by reducing the noise and adjusting the gain to better capture the audio signals from a speaker, as opposed to the background noise of the room. Each of these image and audio processing modules can be used alone, in combination, or not at all.
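As an illustration of the gain-control idea, here is a minimal per-block automatic gain control; the target level and gain cap are illustrative values, not parameters from the text, and a real module would smooth the gain across blocks and add noise reduction (e.g., spectral subtraction).

```python
import numpy as np

def auto_gain(block: np.ndarray, target_rms: float = 0.1,
              max_gain: float = 20.0) -> np.ndarray:
    """Scale one audio block toward a target RMS level, capping the
    gain so near-silence is not boosted into audible noise. The
    target and cap are illustrative values."""
    rms = float(np.sqrt(np.mean(block ** 2)))
    gain = min(target_rms / (rms + 1e-9), max_gain)
    return np.clip(block * gain, -1.0, 1.0)
```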
The video and audio signals, either enhanced or not, can be broadcast to another video conferencing site or the Internet. They also can be saved to a computer readable medium for later viewing.
The primary application for the above-described integrated camera and microphone array is videoconferencing and meeting recording. By integrating the microphone array with the omni-directional camera, the calibration needed between the video and audio is greatly simplified (a precisely manufactured camera and microphone array needs no calibration), and gathering audio and video information from a conference room with a single device is achieved.
The specific features, aspects, and advantages of the present invention will become better understood with regard to the following description, appended claims, and accompanying drawings.
1.0 Exemplary Operating Environment
In the following description of the preferred embodiments of the present invention, reference is made to the accompanying drawings which form a part hereof, and in which is shown by way of illustration specific embodiments in which the invention may be practiced. It is understood that other embodiments may be utilized and structural changes may be made without departing from the scope of the present invention.
The invention is operational with numerous other general purpose or special purpose computing system environments or configurations. Examples of well known computing systems, environments, and/or configurations that may be suitable for use with the invention include, but are not limited to, personal computers, server computers, hand-held or laptop devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like.
The invention may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The invention may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
With reference to FIG. 1, an exemplary system for implementing the invention includes a general purpose computing device in the form of a computer 110. Components of the computer 110 may include, but are not limited to, a processing unit 120, a system memory 130, and a system bus 121 that couples various system components, including the system memory, to the processing unit 120.
Computer 110 typically includes a variety of computer readable media. Computer readable media can be any available media that can be accessed by computer 110 and includes both volatile and nonvolatile media, removable and non-removable media. By way of example, and not limitation, computer readable media may comprise computer storage media and communication media. Computer storage media includes both volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by computer 110. Communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared and other wireless media. Combinations of any of the above should also be included within the scope of computer readable media.
The system memory 130 includes computer storage media in the form of volatile and/or nonvolatile memory such as read only memory (ROM) 131 and random access memory (RAM) 132. A basic input/output system 133 (BIOS), containing the basic routines that help to transfer information between elements within computer 110, such as during start-up, is typically stored in ROM 131. RAM 132 typically contains data and/or program modules that are immediately accessible to and/or presently being operated on by processing unit 120. By way of example, and not limitation, FIG. 1 illustrates an operating system, application programs, other program modules, and program data.
The computer 110 may also include other removable/non-removable, volatile/nonvolatile computer storage media. By way of example only, FIG. 1 illustrates a hard disk drive that reads from or writes to non-removable, nonvolatile magnetic media, a magnetic disk drive that reads from or writes to a removable, nonvolatile magnetic disk, and an optical disk drive that reads from or writes to a removable, nonvolatile optical disk such as a CD-ROM or other optical media.
The drives and their associated computer storage media discussed above and illustrated in FIG. 1 provide storage of computer readable instructions, data structures, program modules and other data for the computer 110.
The computer 110 may operate in a networked environment using logical connections to one or more remote computers, such as a remote computer 180. The remote computer 180 may be a personal computer, a server, a router, a network PC, a peer device or other common network node, and typically includes many or all of the elements described above relative to the computer 110, although only a memory storage device 181 has been illustrated in FIG. 1. The logical connections depicted in FIG. 1 include a local area network (LAN) 171 and a wide area network (WAN) 173, but may also include other networks. Such networking environments are commonplace in offices, enterprise-wide computer networks, intranets and the Internet.
When used in a LAN networking environment, the computer 110 is connected to the LAN 171 through a network interface or adapter 170. When used in a WAN networking environment, the computer 110 typically includes a modem 172 or other means for establishing communications over the WAN 173, such as the Internet. The modem 172, which may be internal or external, may be connected to the system bus 121 via the user input interface 160, or other appropriate mechanism. In a networked environment, program modules depicted relative to the computer 110, or portions thereof, may be stored in the remote memory storage device. By way of example, and not limitation, FIG. 1 illustrates remote application programs as residing on the memory storage device 181. It will be appreciated that the network connections shown are exemplary and other means of establishing a communications link between the computers may be used.
The exemplary operating environment having now been discussed, the remaining parts of this description section will be devoted to a description of the program modules embodying the invention.
3.0 Integrated Omni-Directional Camera and Microphone Array
In this section, the integrated omni-directional camera and microphone array, connected via an acoustically transparent rod, is discussed.
3.1 Overview
This invention addresses how to optimally integrate an omni-directional camera with a microphone array. The goals of the design, elaborated in the paragraphs that follow, were to provide a clear acoustic path from any speaker to every microphone, a near-frontal view of all meeting participants, and a compact device requiring only a single initial calibration.
The integrated camera and microphone array employs a cylindrical pole that connects the microphone base to the camera array. This pole is acoustically invisible for the frequency range of human speech (50-4000 Hz).
As shown in FIG. 2, the omni-directional camera is mounted atop this pole, which in turn attaches to the base housing the planar microphone array.
The design provides a clear path to all microphones from any given speaker or sound source and places the microphone array close to the table top to avoid multi-path problems caused from sound reflections from the table. Additionally, the design elevates the camera from the desktop, thus providing a frontal or near frontal view of all meeting participants.
The integrated camera and microphone array ensures a good beam shape that can be used for improving the sound quality of the speaker by filtering sound from only one direction. Furthermore, the integrated nature of the camera and microphone is advantageous because it eliminates the need for repeated calibrations. Since the camera and microphone are integrated as a single device, only one initial calibration is necessary. Also, since the integrated camera and microphone can be of a compact, fixed design, it is much less intrusive than two separate camera and microphone components that would require separate cables and additional space on the conference table.
3.3 System Components
One embodiment of the integrated omni-directional camera and microphone array is shown in FIG. 3. In this embodiment, a camera head 302 is mounted atop a cylindrical rod that connects it to a microphone base 306, which houses the microphones and associated electronics. Each of these components is described in the sections that follow.
3.3.1 Omni-Directional Camera
A variety of omni-directional camera technologies exist. These include one camera type wherein multiple video sensors are tightly packed together in a back-to-back fashion. Another omni-directional camera type employs a single video sensor with a hyperbolic lens that captures light rays from 360 degrees. The integrated camera and microphone array design of the invention can use any such omni-directional camera. It is preferable that the camera head 302 be small enough so as not to be intrusive when set on a conference room table or other surface.
If a multi-sensor camera configuration is used, a plurality of camera or video sensors can be employed; a preferred number is eight. These sensors should preferably be disposed in a back-to-back fashion such that the centers of projection of the sensors are an equal angular distance apart. For example, if eight sensors are used, then each sensor would be 45 degrees from the sensors adjacent to it. However, it is possible to employ different lenses and different camera placement if it is necessary to capture images at different distances, as would be the case with a rectangular or oval conference table. Lenses with longer, narrower fields of view can be used for the longer distances, and lenses with wider, shorter fields of view can be used to capture images at shorter distances. In this case the camera sensors might not be equilaterally disposed around the camera head; camera sensors with a wider field of view can be placed further away from camera sensors with a narrower field of view. Alternately, cameras with a variable field of view (that rotate and zoom in and out to adjust to a given situation) can also be employed.
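A quick way to sanity-check such a multi-sensor layout is to compare each sensor's horizontal field of view against the angular spacing, as in the sketch below; the 50-degree lens value is a hypothetical example.

```python
def covers_full_circle(num_sensors: int, fov_deg: float) -> bool:
    """True when identical sensors at equal angular spacing cover a
    full 360 degrees; any excess FOV becomes stitching overlap."""
    return fov_deg >= 360.0 / num_sensors

# Eight sensors sit 45 degrees apart, so each needs at least a
# 45-degree horizontal field of view; hypothetical 50-degree lenses
# would leave 5 degrees of overlap at every seam for blending.
print(covers_full_circle(8, 50.0))  # True
print(covers_full_circle(8, 40.0))  # False: gaps between sensors
```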
One working embodiment of the invention, shown in FIG. 3, employs five IEEE 1394 video sensors packed back-to-back in the camera head 302, with the center of projection of each sensor spaced 72 degrees from those of its neighbors. The stitched output of this embodiment is a panorama with a resolution of 3000×480 pixels.
3.3.2 Cylinder
Referring to the embodiment shown in FIG. 3, a cylindrical rod connects the camera head 302 to the microphone base 306. The rod is thin enough to be acoustically invisible for the frequency range of human speech (about 50-4000 Hz), so that it neither diffracts nor shadows sound arriving at the microphone array.
Referring again to the working embodiment shown in FIG. 3, the rod also carries the camera cabling down to the connection outlet provided in the microphone base 306.
3.3.3 Microphone Base
In general, the microphone base holds the microphones, microphone preamplifier, and A/D converter. It connects to the cylinder, and provides a connection outlet for the camera cables. The microphone base is low profile, to minimize the distance between the desktop and the microphones. The base allows a direct path from each microphone to the participant(s).
In the working embodiment of the integrated camera and microphone array shown in FIG. 3, the microphone base 306 is a low-profile housing that holds the microphones around its circumference, along with the microphone preamplifier 310 and A/D converter, and provides the connection outlet for the camera cables.
3.3.4 Microphones
The microphones used can be either omni-directional or unidirectional, though omni-directional microphones are preferred, as they give a uniform response for all sound angles of interest. The minimum number of microphones needed is three, though a preferred embodiment of the invention uses eight for increased sound source localization accuracy, better beam-forming and robustness of the whole audio system.
To reduce table noise, the microphones may be mounted in rubber casings, and sound insulation may be placed below them.
Referring again to the working embodiment shown in FIG. 3, eight omni-directional microphones are equilaterally disposed around the circumference of the round, planar microphone base 306, as close to the desktop as possible to minimize sound reflections from the table.
3.3.5 Microphone Preamplifier, A/D Converter
The microphone preamplifier 310 and analog to digital (A/D) converter (not shown) are preferably integrated into the microphone base 306, as shown in FIG. 3.
In this embodiment, the sampling of the microphone signals is synchronized across channels to within 1 microsecond, to facilitate sound source localization and beam-forming.
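The reason for this tight tolerance can be seen with a little arithmetic: sound travels only a fraction of a millimeter in a microsecond, while one sample period at an assumed 44.1 kHz rate corresponds to several millimeters of acoustic path.

```python
SPEED_OF_SOUND = 343.0  # m/s
SAMPLE_RATE = 44100     # Hz; assumed for illustration

# Path-length error introduced by a 1-microsecond channel skew:
print(SPEED_OF_SOUND * 1e-6 * 1000, "mm")  # ~0.34 mm

# One sample period, for comparison:
period_us = 1e6 / SAMPLE_RATE
print(period_us, "us =", SPEED_OF_SOUND * period_us * 1e-3, "mm of path")
# ~22.7 us, i.e. ~7.8 mm; sub-microsecond synchronization thus keeps
# time-difference-of-arrival errors far below one sample.
```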
3.4 Privacy Mode
The camera may employ a lens shield, which is open in normal operating mode and closed in privacy mode. Alternately, the shutter for the camera sensors can be turned off, or the camera can be electronically isolated, to disable the camera while in privacy mode. The microphones are also preferably turned off when privacy mode is invoked. During recording, a light on the top of the camera is on to let users know the camera is active. When privacy mode is on, the light is turned off.
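Purely as a sketch of the coupling described here (not an implementation from the text), the following state model ties the indicator light and both capture paths to a single privacy flag, so the light cannot indicate "off" while any sensor is live.

```python
from dataclasses import dataclass

@dataclass
class CaptureState:
    """Single privacy flag driving cameras, microphones, and the
    indicator light together (hypothetical control logic)."""
    privacy_mode: bool = False

    @property
    def cameras_on(self) -> bool:
        return not self.privacy_mode

    @property
    def microphones_on(self) -> bool:
        return not self.privacy_mode

    @property
    def light_on(self) -> bool:
        # The light mirrors the capture state, so users can trust it:
        # lit while recording, dark in privacy mode.
        return not self.privacy_mode
```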
4.0 Alternate Embodiments Due to Modular Nature
Various alternate embodiments of the integrated omni-directional camera and microphone design are possible. This is in part due to the modular nature of the system.
For instance, various camera embodiments can be employed. In one embodiment, an omni-directional camera is used that employs multiple video sensors to achieve 360 degree camera coverage. Alternately, in another embodiment of the invention, an omni-directional camera is used that employs one video sensor and a hyperbolic lens that captures light from 360 degrees to achieve panoramic coverage. Furthermore, either of these cameras may be used by itself, elevated on the acoustically transparent cylindrical rod, to provide a frontal view of the meeting participants, or either can be integrated with a microphone array. Alternately, other omni-directional camera designs can also be used in conjunction with the cylindrical rod and/or microphone array.
Likewise, various microphone configurations can be employed. In one embodiment the microphone array consists of microphones disposed at equal intervals around the circumference of a circle and as near to a table surface as possible, to achieve a clear path to any speaker in the room. However, other microphone configurations are possible that can be integrated with a camera using the acoustically transparent rod. Alternately, the omni-directional microphone array just discussed can be used without any camera to achieve optimum 360 degree sound coverage. This coverage is especially useful in sound source localization and beam-forming, as multi-path problems are minimized or eliminated.
In one embodiment of the integrated camera and microphone array, image stitching and compression are performed on a PC. An alternate embodiment performs the image stitching and compression in the camera with a Field Programmable Gate Array (FPGA) or other gate array. This design uses a USB interface to connect the camera and the PC, and allows the PC more CPU cycles for other tasks, such as recording and broadcasting the meeting.
5.0 Exemplary Working Embodiment
One working embodiment employing the camera 502 and microphone array 504 of the invention is shown in FIG. 5. The digital image output of the camera 502 and the audio output of the microphone array 504 (via an analog to digital converter) are routed into a computer 508. There, a panoramic image stitcher combines the images taken by the individual sensors into a single panorama, and the image data can be compressed for broadcast over a network (such as the Internet) or saved to a computer readable medium. Optionally, the image data can also be input to a person detector/tracker to improve camera management, for example by directing the displayed camera view toward the person speaking and by allocating more compression bits to facial regions than to the background.
The audio input can also be used for various purposes. For instance, the audio can be input into a sound source localization module 526, so that the audio from the speaker is isolated. Additionally, a beam-forming module 528 can be used in the computer 508 to improve the beam shape of the audio. A noise reduction and automatic gain control module 530 can also be used to improve the signal to noise ratio by reducing the noise and adjusting the gain to better capture the audio signals from a speaker, as opposed to the background noise of the room.
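The beam-forming step can be illustrated with a basic delay-and-sum sketch for a planar array: each channel is delayed so that wavefronts from the chosen look direction align before averaging. The sample rate is an assumed value, the fractional delays are applied as a linear phase (which wraps circularly at the block edges), and the text does not specify that this particular beamformer is the one used.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s
SAMPLE_RATE = 44100     # Hz; assumed

def delay_and_sum(channels: np.ndarray, mic_xy: np.ndarray,
                  azimuth_rad: float) -> np.ndarray:
    """Steer a planar array toward a far-field source at the given
    azimuth. channels: (num_mics, num_samples); mic_xy: (num_mics, 2)
    microphone coordinates in meters."""
    look = np.array([np.cos(azimuth_rad), np.sin(azimuth_rad)])
    # Microphones farther along the look direction hear the wavefront
    # earlier; delay each channel by its lead so all channels align.
    delays_s = mic_xy @ look / SPEED_OF_SOUND
    delays_s -= delays_s.min()
    num_mics, num_samples = channels.shape
    freqs = np.fft.rfftfreq(num_samples, d=1.0 / SAMPLE_RATE)
    out = np.zeros(num_samples)
    for ch, d in zip(channels, delays_s):
        # Fractional delay as a linear phase (circular at block edges).
        out += np.fft.irfft(np.fft.rfft(ch) * np.exp(-2j * np.pi * freqs * d),
                            num_samples)
    return out / num_mics
```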
As mentioned previously, the video and audio signals can be broadcast to another video conferencing site or the Internet. They also can be saved to a computer readable medium for later viewing.
The foregoing description of the invention has been presented for the purposes of illustration and description. It is not intended to be exhaustive or to limit the invention to the precise form disclosed. For instance, embodiments of the integrated camera and microphone array as discussed above could be applied to a surveillance system. Many modifications and variations are possible in light of the above teaching. It is intended that the scope of the invention be limited not by this detailed description, but rather by the claims appended hereto.