The present invention relates generally to camcorder systems, and in particular, to a three-dimensional camcorder system.
Camcorders (video camera recorders) receive video information and convert it into an electronic video signal, which is recorded on a storage medium in analog or digital format. While most conventional camcorders record and display two-dimensional video images, some camcorders are capable of recording the particular information required for stereoscopic viewing. When human eyes see a scene, the right and left eyes receive two different perspectives due to their separation. The brain fuses these two images and assesses the visual depth. A three-dimensional camcorder needs to provide two perspective images for each video frame. In order to provide different viewing angles simultaneously, such camcorders usually use two cameras separated by a certain distance, as human eyes are.
Conventional three-dimensional display devices use two perspective images of a scene and provide methods to let each eye see only its intended perspective. While conventional three-dimensional display devices require special glasses for a viewer to see a three-dimensional image from two perspective images, autostereoscopic display devices do not require special glasses. Autostereoscopic devices display two different perspective images concurrently and use barriers to let each eye see the intended image, as disclosed by U.S. Pat. No. 6,046,849 to Moseley. Usually, these devices have several viewing areas, and the scene appears to leap out of the screen, which provides a virtual-reality environment. These autostereoscopic display devices can be built into three-dimensional camcorders as electronic viewfinders, like the normal LCD screens of two-dimensional camcorders, in order to find and focus views in three-dimensional space and to replay recorded three-dimensional images or movies.
The present invention provides a three-dimensional camcorder comprising at least one variable focal length MicroMirror Array Lens (MMAL) having a plurality of micromirrors for three-dimensional imaging, recording, and displaying.
An objective of the invention is to provide a three-dimensional camcorder that provides depthwise images and the depth information for each depthwise image, an all-in-focus image and the depth information for each pixel of the all-in-focus image, or a pair of stereoscopic images using a single camera system. The three-dimensional camcorder comprises at least one variable focal length MicroMirror Array Lens (MMAL), an imaging unit, and an image processing unit for three-dimensional imaging and recording, and a three-dimensional viewfinder for displaying three-dimensional images.
The variable focal length MMAL comprises a plurality of micromirrors, wherein each of the micromirrors in the MMAL is controlled to change the focal length of the MMAL. Each micromirror functions as a small mirror. The micromirrors in the MMAL are arranged on a substantially flat plane, in a shape that depends on the geometry of the imaging system. The MMAL works as a reflective focusing lens by making all light scattered from one point of an object have the same periodical phase and converge at one point on the image plane. Each micromirror in the MMAL is controlled to have the desired translation and rotation to satisfy the convergence and phase matching conditions for forming an image of the object, wherein each micromirror of the MMAL is actuated by electrostatic and/or electromagnetic force. The focal length and the optical axis of the MMAL are changed by controlling the rotation and translation of each micromirror in the MMAL. Also, the aberrations of the three-dimensional camcorder are corrected by controlling the rotation and translation of each micromirror in the MMAL.
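As an illustrative sketch only, and not the patented control scheme, the rotation and translation commands for each micromirror can be derived from an ideal parabolic reflector profile. The function name `micromirror_commands`, the flat-substrate model, and the half-wavelength piston wrap used to satisfy the periodical phase condition are all assumptions made for illustration:

```python
import numpy as np

def micromirror_commands(positions, focal_length, wavelength=550e-9):
    """Per-mirror commands approximating an on-axis parabolic profile.

    positions: (N, 2) array of micromirror centers (meters) on the flat plane.
    Returns rotations about the two in-plane axes and a piston translation
    wrapped to half a wavelength (a lambda/2 surface step adds a full
    wavelength of optical path, preserving the phase matching condition).
    """
    x, y = positions[:, 0], positions[:, 1]
    # Ideal parabolic mirror surface: z = (x^2 + y^2) / (4 f).
    sag = (x**2 + y**2) / (4.0 * focal_length)
    # Local surface slopes give the required rotation of each mirror.
    tilt_x = np.arctan(y / (2.0 * focal_length))  # rotation about x-axis
    tilt_y = np.arctan(x / (2.0 * focal_length))  # rotation about y-axis
    # Translation reproduces the sag modulo half a wavelength.
    piston = np.mod(sag, wavelength / 2.0)
    return tilt_x, tilt_y, piston
```

Changing `focal_length` changes every tilt and piston at once, which is how a purely reflective, mechanically static array can act as a lens of variable focal length.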
The following US patents and applications describe the variable focal length MMAL: U.S. Pat. No. 6,934,072 to Kim, U.S. Pat. No. 6,934,073 to Kim, U.S. patent application Ser. No. 10/855,554 filed May 27, 2004, U.S. patent application Ser. No. 10/855,715 filed May 27, 2004, U.S. patent application Ser. No. 10/857,714 filed May 28, 2004, U.S. patent application Ser. No. 10/857,280 filed May 28, 2004, U.S. patent application Ser. No. 10/893,039 filed Jul. 16, 2004, and U.S. patent application Ser. No. 10/983,353 filed Nov. 8, 2004, all of which are hereby incorporated by reference.
The variable focal length MMAL changes its surface profile to change its focal length by controlling the rotation and translation of each micromirror. The focal length of the variable focal length MMAL is changed in a plurality of steps in order to scan the whole object (or scene).
The imaging unit captures images formed on the image plane by the variable focal length MMAL. As the focal length of the variable focal length MMAL is changed, the in-focus regions of the object are also changed accordingly.
The image processing unit produces three-dimensional image data using the images captured by the imaging unit and the focal length information of the MMAL. The image processing unit extracts the substantially in-focus pixels of each captured image received from the imaging unit and generates a corresponding depthwise image using the extracted in-focus pixels of each captured image. Depth information, or the distance between the imaging system and the in-focus region of the object, is determined from known imaging system parameters including the focal length and the distance between the MMAL and the image plane. A set of depthwise images taken at different focal lengths with a fast imaging rate represents the object or scene at a given moment. The image processing unit can combine these depthwise images to make an all-in-focus image. There are several methods for the image processing unit to obtain depthwise images or an all-in-focus image (e.g. an edge detection filter). Recent advances in both image sensors and image processing units make them fast enough for this purpose. The pair of stereoscopic images is generated by the image processing unit using the all-in-focus image and the depth information for each pixel of the all-in-focus image, where the three-dimensional information of the object is projected onto two virtual image planes whose centers are separated by a certain distance, which simulates a two-camera system having different viewing angles. The three-dimensional camcorder may further comprise a storage medium storing the three-dimensional image data.
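The depthwise extraction and all-in-focus fusion described above can be sketched with a simple per-pixel focus measure. A discrete Laplacian is used here merely as one example of the edge-detection filters the text mentions; the function names and the synthetic focal stack are hypothetical:

```python
import numpy as np

def focus_measure(img):
    """Per-pixel sharpness: magnitude of a discrete Laplacian.
    One of many possible focus measures (e.g. edge detection filters)."""
    lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
           np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4.0 * img)
    return np.abs(lap)

def all_in_focus(stack, depths):
    """Fuse a focal stack into an all-in-focus image plus a depth map.

    stack:  list of 2-D arrays, one per focal-length step of the MMAL.
    depths: object-side distance associated with each step, known from
            the focal length and the MMAL-to-image-plane distance.
    """
    measures = np.stack([focus_measure(img) for img in stack])
    best = np.argmax(measures, axis=0)      # sharpest layer at each pixel
    imgs = np.stack(stack)
    rows, cols = np.indices(best.shape)
    fused = imgs[best, rows, cols]          # all-in-focus image
    depth_map = np.asarray(depths)[best]    # depth information per pixel
    return fused, depth_map
```

Each layer of `best` also directly yields a depthwise image: the pixels of `stack[k]` where `best == k`.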
Depending on the display method of the three-dimensional display system, the image processing unit generates a set of depthwise images representing the object at a given moment and the depth information for each depthwise image using the images captured by the imaging unit, an all-in-focus image and the depth information for each pixel of the all-in-focus image using the images captured by the imaging unit, or a pair of stereoscopic images using an all-in-focus image and the depth information for each pixel of the all-in-focus image. These image data are transferred to the three-dimensional viewfinder to find and focus the object, or recorded on the storage medium. Image sensing and image processing are faster than the persistent rate of the human eye, so that real-time three-dimensional images are provided.
Another objective of the invention is to provide a three-dimensional camcorder, further comprising additional MMAL or MMALs for providing zoom function, wherein the MMALs magnify the image and keep the image in-focus by changing the focal length of each MMAL without macroscopic mechanical movements of lenses.
Still another objective of the invention is to provide a three-dimensional camcorder having auto focusing function by changing the focal length of MMAL, wherein the focal length of MMAL for auto focusing is determined by the depth information. The auto focusing systems using the MMAL are described in U.S. patent application Ser. No. 10/896,146, the contents of which are hereby incorporated by reference.
Still another objective of the invention is to provide a three-dimensional camcorder having vibration correction capability, wherein the three-dimensional camcorder further comprises a vibration determination device, communicatively coupled to the MMAL, configured to measure vibration of the three-dimensional camcorder and to generate a vibration correction signal. The MMAL is adjusted to change its optical axis by controlling the rotation and translation of each micromirror in the MMAL, based at least in part on the vibration correction signal to correct for the vibration of the three-dimensional camcorder. The vibration correction system using the MMAL is described in U.S. patent application Ser. No. 10/979,612, the contents of which are hereby incorporated by reference.
Still another objective of the invention is to provide a three-dimensional camcorder imaging an object which does not lie on the nominal optical axis by using at least one MMAL without macroscopic mechanical movement of the imaging system.
Still another objective of the invention is to provide a three-dimensional camcorder compensating for aberrations using at least one MMAL. Since the MMAL is an adaptive optical component, the MMAL compensates for phase errors of light introduced by the medium between an object and its image and/or corrects the defects of the three-dimensional camcorder that may cause the image to deviate from the rules of paraxial imagery by controlling individual micromirrors in the MMAL.
The variable focal length MMAL is a reflective lens. The three-dimensional camcorder may further comprise a beam splitter positioned in the path of light between the imaging unit and the MMAL to obtain a normal-incidence optical geometry onto the MMAL. Alternatively, in order to deflect the light onto a sensor, the MMAL is tilted in the imaging system of the camcorder so that the normal direction of the MMAL differs from the optical axis of the imaging system. When the MMAL is tilted about an axis (the tilting axis) which is perpendicular to the optical axis of the imaging system, the surface profile of the MMAL is symmetric about an axis which is perpendicular to both the optical axis and the tilting axis. The tilted MMAL can cause non-axisymmetric aberrations. To have the desired focal length and compensate for non-axisymmetric aberrations, each micromirror has one translational motion along the axis normal to the plane of the MMAL and two rotational motions about two axes in the plane of the MMAL.
In order to obtain a color image, the MMAL is controlled to compensate for chromatic aberration by satisfying the phase matching condition for each wavelength of Red, Green, and Blue (RGB), or Yellow, Cyan, and Magenta (YCM), respectively. The three-dimensional camcorder may further comprise a plurality of bandpass filters for color imaging. Also, the three-dimensional camcorder may further comprise a photoelectric sensor. The photoelectric sensor comprises Red, Green, and Blue sensors or Yellow, Cyan, and Magenta sensors, wherein color images are obtained by processing the electrical signals from each sensor. The processing of the electrical signal from each sensor is synchronized and/or matched with the control of the MMAL to satisfy the phase matching condition for each wavelength of Red, Green, and Blue or Yellow, Cyan, and Magenta, respectively.
Furthermore, the MMAL can be controlled to satisfy the phase matching condition at an optimal wavelength to minimize chromatic aberration, wherein optimal-wavelength phase matching is used to obtain a color image. Alternatively, the MMAL is controlled to satisfy the phase matching condition for the least common multiple of the wavelengths of Red, Green, and Blue or Yellow, Cyan, and Magenta light to obtain a color image.
The three-dimensional camcorder may further comprise an optical filter or filters for image quality enhancement.
The three-dimensional camcorder may further comprise an auxiliary lens or group of lenses for image quality enhancement.
The three-dimensional camcorder may further comprise extra MMAL or MMALs to compensate for aberrations of the three-dimensional camcorder including chromatic aberration.
Still another objective of the invention provides a three-dimensional viewfinder for the three-dimensional camcorder, which allows users to find and focus an object in the three-dimensional space and replay recorded three-dimensional video images using one of various display methods including stereoscopic display and volumetric display.
To display three-dimensional video images in the stereoscopic display, the three-dimensional viewfinder comprises an image data input unit, a two-dimensional display, and stereoscopic glasses. The image data input unit receives a pair of stereoscopic images from the image processing unit or from the storage medium. The two-dimensional display, communicatively coupled to the image data input unit, displays these stereoscopic images in turn within the persistent rate of the human eye. These images can be viewed with the stereoscopic glasses.
Alternatively, the three-dimensional viewfinder can comprise an image data input unit and an autostereoscopic two-dimensional display. The image data input unit receives a pair of stereoscopic images from the image processing unit or from the storage medium. The autostereoscopic two-dimensional display, communicatively coupled to the image data input unit, displays the stereoscopic images in turn within the persistent rate of the human eye. When an autostereoscopic viewfinder is used, the image can be viewed without stereoscopic glasses.
Since the pair of stereoscopic images is generated using the all-in-focus image and the depth information for each pixel of the all-in-focus image, the resulting three-dimensional image is all-in-focused (or sharp), unlike the three-dimensional images provided by conventional stereoscopic imaging systems, where the images are in focus only for the portion of the object within the depth of field of the imaging system.
To display three-dimensional video images in a volumetric display, the three-dimensional viewfinder comprises an image data input unit, a two-dimensional display, and a variable focal length MMAL. The image data input unit receives the depthwise image data from the image processing unit or from the storage medium. The two-dimensional display, communicatively coupled to the image data input unit, displays depthwise images sequentially within the persistent rate of the human eye. The variable focal length MMAL, optically coupled to the two-dimensional display, receives light from the two-dimensional display and forms a corresponding image at the required location in space, using the depth information of the depthwise image, by changing the focal length of the MMAL. The image formed by the variable focal length MMAL is parallel to the three-dimensional viewfinder screen and located at the corresponding depth along the surface normal direction of the three-dimensional viewfinder screen. The location of the image formed in space is adjusted by changing the focal length of the variable focal length MMAL, which is synchronized with the two-dimensional display so that the variable focal length MMAL has a focal length corresponding to the depth information of the depthwise image displayed on the two-dimensional display. As a set of depthwise images representing an object at a given moment is sequentially displayed on the two-dimensional display, a three-dimensional image of the object is formed in space accordingly and perceived as three-dimensional by a viewer. The number of depthwise images representing the object at a given moment (the number of depths) depends on the depth resolution requirement, the refresh rate of the two-dimensional display, and the focusing speed of the variable focal length MMAL, and may be increased for better image quality. A set of depthwise images representing an object at a given moment is displayed at least at the persistent rate of the human eye.
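The synchronization between the two-dimensional display and the MMAL focal length might be sketched as below. The callables `set_focal_length` and `show_2d` stand in for hardware interfaces that the text does not specify:

```python
import time

def display_volumetric_frame(depthwise_images, set_focal_length, show_2d,
                             persistence_hz=30.0):
    """Sketch of the display/MMAL synchronization for one video frame.

    depthwise_images: list of (image, depth) pairs for one moment in time.
    set_focal_length: hypothetical callable driving the MMAL so that its
                      image of the display forms at the given depth.
    show_2d:          hypothetical callable putting an image on the display.
    """
    budget = 1.0 / persistence_hz            # one full depth sweep per frame
    slot = budget / len(depthwise_images)    # time available per depth slice
    for image, depth in depthwise_images:
        set_focal_length(depth)              # MMAL focal length first ...
        show_2d(image)                       # ... then the matching slice
        time.sleep(slot)
```

The per-slice time budget makes the trade-off explicit: more depths per frame demand a faster MMAL and display.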
The focusing speed of the variable focal length MMAL is at least equal to the product of the persistent rate of the human eye and the number of depths, so that the three-dimensional images formed in space look realistic to the viewer.
The three-dimensional image information recorded on the storage medium can be displayed in the three-dimensional viewfinder as well as other three-dimensional display devices such as TV, computer monitor, etc.
Still another objective of the invention provides a camcorder, comprising a two-dimensional viewfinder displaying the all-in-focus image.
Still another objective of the invention provides the three-dimensional viewfinder, wherein the focal length of the variable focal length MMAL is fixed to be used as a two-dimensional viewfinder.
Although the present invention is briefly summarized herein, the full understanding of the invention can be obtained by the following drawings, detailed description, and appended claims.
The present invention will now be described in detail with reference to a few embodiments thereof as illustrated in the accompanying drawings. In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention. It will be apparent, however, to one skilled in the art, that the present invention may be practiced without some or all of these specific details. In other instances, well known process steps and/or structures have not been described in detail in order to not unnecessarily obscure the present invention.
The imaging unit 32 comprising at least one two-dimensional image sensor captures images formed on the image plane by the variable focal length MMAL 31. As the focal length of the variable focal length MMAL 31 is changed, the in-focus regions of the object are also changed accordingly.
The image processing unit 33 extracts the substantially in-focus pixels of each captured image received from the imaging unit and generates a corresponding depthwise image using the extracted in-focus pixels of each captured image. Depth information, or the distance between the imaging system and the in-focus region of the object, is determined from known imaging system parameters including the focal length of the MMAL and the distance between the MMAL and the image plane. The image processing unit 33 can combine a set of depthwise images representing the object at a given moment to make an all-in-focus image. Depending on the display method of the three-dimensional display system, the three-dimensional imaging system can provide depthwise images and the depth information for each depthwise image, one all-in-focus image and the depth information for each pixel of the all-in-focus image, or a pair of stereoscopic images generated by the image processing unit 33 using the all-in-focus image and the depth information for each pixel of the all-in-focus image. These image data are transferred to an image data input unit 36 in the three-dimensional viewfinder to find and focus the object 35, or recorded on the storage medium 37. All the processes obtaining the three-dimensional image information representing the object at a given moment are achieved within a unit time which is less than or equal to the persistent rate of the human eye.
The three-dimensional imaging system for the three-dimensional camcorder can image an object which does not lie on the nominal optical axis by using the MMAL 31 without macroscopic mechanical movement of the three-dimensional imaging system, as shown in the
The three-dimensional imaging system for the three-dimensional camcorder further comprises a second variable focal length MMAL 38 to provide a zoom function. As the first variable focal length MMAL 31 changes the image size, the image is defocused because the image position also changes. Therefore, the focal lengths of the two variable focal length MMALs 31 and 38 are changed in unison to magnify the image and keep it in focus. Variable focal length lenses provide a zoom function without macroscopic mechanical movements of lenses.
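The "changed in unison" condition can be illustrated with a two-element thin-lens model: for a chosen first focal length, the second focal length is solved so that the image stays on the sensor. This is a simplified paraxial sketch under assumed geometry, not the actual MMAL zoom design:

```python
def image_distance(f, so):
    """Thin-lens equation 1/so + 1/si = 1/f, solved for si."""
    return 1.0 / (1.0 / f - 1.0 / so)

def matching_f2(f1, so1, d, si2):
    """Given the first element's focal length f1, an object at distance so1,
    element separation d, and a fixed sensor distance si2 behind the second
    element, return the second focal length that keeps the image on the
    sensor, plus the overall magnification (illustrative model only)."""
    si1 = image_distance(f1, so1)       # intermediate image from element 1
    so2 = d - si1                       # object distance seen by element 2
    f2 = 1.0 / (1.0 / so2 + 1.0 / si2)  # thin-lens condition at element 2
    mag = (si1 / so1) * (si2 / so2)     # product of the two magnifications
    return f2, mag
```

Sweeping `f1` changes the magnification while `matching_f2` keeps the image in focus, mirroring how the two MMALs change focal length together.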
The three-dimensional imaging system for the three-dimensional camcorder provides auto focusing function capturing clear images using variable focal length MMALs 31 and/or 38, as shown in
The three-dimensional imaging system for the three-dimensional camcorder provides vibration correction function using variable focal length MMALs 31 and/or 38, as shown in
The three-dimensional imaging system for the three-dimensional camcorder provides compensation for aberrations of the system using variable focal length MMALs 31 and/or 38.
The three-dimensional imaging system for the three-dimensional camcorder may further comprise a first auxiliary lens group 39A to bring the object into focus.
The three-dimensional imaging system for the three-dimensional camcorder may further comprise a second auxiliary lens group 39B to focus the image onto an image sensor.
The three-dimensional imaging system for the three-dimensional camcorder may further comprise a third auxiliary lens group 39C to produce an inverted image.
The three-dimensional imaging system for the three-dimensional camcorder may further comprise an optical filter or filters for image quality enhancement.
The three-dimensional imaging system for the three-dimensional camcorder may further comprise additional auxiliary lens or group of lenses for image quality enhancement.
As shown in
a illustrates a three-dimensional imaging system 61 with auto focusing function using a variable focal length MMAL. First, the light scattered from an object 62 is refracted by a lens 63 and is reflected by a MMAL 64 to an image sensor 65. The light reflected from the MMAL is received by the image sensor 65 and converted into an electrical signal 66A carrying the object's image data. The electrical signal is then sent to a signal processor 67, where the image data is analyzed and compared to the camera focus criteria. Based on the compared image data, as discussed in further detail below, the signal processor 67 generates a control signal 66B. The control signal is sent to the MMAL to adjust the focal length of the MMAL until the image quality of the image data meets the focus criteria.
As shown in
Similarly, if the focal length of the MMAL causes the reflected light to be in-focused at a point 68C behind the image sensor 65, the image sensor will likewise generate an electrical signal 66A carrying “blurred” image data. Accordingly, the signal processor will process the “blurred” signal and send a control signal 66B to the MMAL, causing the arrangement of the micromirrors 69 to adjust to shorten the focal length of the MMAL.
In that regard, the focal length of the MMAL is adjusted in an iterative process until the reflected light is in-focused at a point 68B on the image sensor, which provides a “sharp” image satisfying the camera focus criteria. The iterative process is preferably completed within the persistent rate of the human eye. Thus, the signal processor must have a speed equal to or greater than the product of the number of iterative adjustments, the number of depths, and the persistent rate of the human eye.
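The iterative adjustment can be sketched as a simple search over candidate focal lengths, keeping the one whose captured image best satisfies a focus criterion. The interfaces `capture` and `sharpness` are hypothetical placeholders for the sensor readout and the signal processor's focus measure:

```python
def autofocus(capture, sharpness, focal_lengths):
    """Iterative focus search over candidate MMAL focal lengths.

    capture(f):    returns the image captured at focal length f
                   (placeholder for the sensor interface).
    sharpness(im): scalar focus measure, e.g. summed edge energy;
                   "blurred" images score lower than "sharp" ones.
    """
    best_f, best_score = None, float("-inf")
    for f in focal_lengths:
        score = sharpness(capture(f))
        # Focal lengths that focus in front of or behind the sensor
        # yield blurred images and therefore lower scores.
        if score > best_score:
            best_f, best_score = f, score
    return best_f
```

A real implementation would refine the step size between iterations rather than scan a fixed list, but the stopping logic is the same: maximize the focus criterion.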
where (uc, vc) represents the center of the image plane in the pixel coordinate system (u, v).
Each pixel on the all-in-focus image has a corresponding focal length f=f(u,v). The position of a point P on the object 81 with the camera coordinates (X, Y, Z) is determined by the thin lens formula and geometry as follows:
The above relationship can be applied to all pixels on the image plane 82.
P′=R(P−t)
where R is a 3×3 rotation matrix representing the orientation of the virtual camera coordinate system with respect to the camera coordinate system and t is a translation vector representing the displacement of the origin of the virtual camera coordinate system from the origin of the camera coordinate system. For example, R is an identity matrix and t=(−d, 0, 0). The corresponding pixel location p′ is obtained
These processes are repeated for all pixels in the all-in-focus image to generate one of the stereoscopic images. The same steps are applied for the virtual camera system 92, where R is the identity matrix and t=(d, 0, 0), to generate the other stereoscopic image. The problem of reconstructing a pair of stereoscopic images has been greatly simplified here for illustrative purposes. The pair of stereoscopic images is displayed in the three-dimensional viewfinder in turn within the persistent rate of the human eye. These images can be viewed with stereoscopic glasses. When an autostereoscopic viewfinder is used, the images can be viewed without stereoscopic glasses. Since the pair of stereoscopic images is generated using the all-in-focus image and the depth information for each pixel of the all-in-focus image, the resulting three-dimensional image is all-in-focused (or sharp).
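Under an assumed pinhole camera model with focal length `f_pix` expressed in pixels, the transformation P′ = R(P − t) followed by projection into the virtual image plane might look like the sketch below; the model and parameter names are illustrative assumptions:

```python
import numpy as np

def virtual_pixel(P, R, t, f_pix, uc, vc):
    """Project a 3-D point P (camera coordinates) into a virtual camera
    displaced by t with orientation R, using an assumed pinhole model.

    (uc, vc) is the center of the image plane in pixel coordinates,
    f_pix the focal length in pixels.
    """
    X, Y, Z = R @ (np.asarray(P, float) - np.asarray(t, float))  # P' = R(P - t)
    u = uc + f_pix * X / Z   # perspective projection onto the
    v = vc + f_pix * Y / Z   # virtual image plane
    return u, v

# Left/right virtual cameras as in the text: R = I with t = (-d, 0, 0)
# for one stereoscopic image and t = (d, 0, 0) for the other.
```

Each all-in-focus pixel, back-projected to P using its depth, is mapped through `virtual_pixel` twice to build the stereoscopic pair.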
For example, assume that the persistent rate of the human eye is 30 Hz and the number of depths is 10. In order to have realistic three-dimensional video images in space, the focusing speed of the variable focal length MMAL 113 and the refresh rate of the two-dimensional display 112 must each be at least 300 Hz. The variable focal length MMAL 113 of the present invention is capable of changing its focal length fast enough to generate realistic three-dimensional video images.
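The required rate is simply the product stated earlier; a one-line check of the worked example:

```python
def required_rate(persistence_hz, num_depths):
    """Minimum MMAL focusing speed and display refresh rate, in Hz:
    the persistent rate of the eye times the number of depths."""
    return persistence_hz * num_depths

# Text example: 30 Hz persistence and 10 depths require 300 Hz.
```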
This application is a continuation-in-part of, and claims priority to U.S. patent application Ser. No. 10/822,414 filed Apr. 12, 2004, U.S. patent application Ser. No. 10/855,554 filed May 27, 2004, U.S. patent application Ser. No. 10/855,715 filed May 27, 2004, U.S. patent application Ser. No. 10/855,287 filed May 27, 2004, U.S. patent application Ser. No. 10/857,796 filed May 28, 2004, U.S. patent application Ser. No. 10/857,714 filed May 28, 2004, U.S. patent application Ser. No. 10/857,280 filed May 28, 2004, U.S. patent application Ser. No. 10/872,241 filed Jun. 18, 2004, U.S. patent application Ser. No. 10/893,039 filed Jul. 16, 2004, U.S. patent application Ser. No. 10/896,146 filed Jul. 21, 2004, U.S. patent application Ser. No. 10/979,612 filed Nov. 2, 2004, U.S. patent application Ser. No. 10/983,353 filed Nov. 8, 2004, U.S. patent application Ser. No. 11/072,597 filed Mar. 4, 2005, U.S. patent application Ser. No. 11/072,296 filed Mar. 04, 2005, U.S. patent application Ser. No. 11/076,616 filed Mar. 10, 2005, U.S. patent application Ser. No. 11/191,886 filed Jul. 28, 2005, U.S. patent application Ser. No. 11/208,115 filed Aug. 19, 2005, U.S. patent application Ser. No. 11/218,814 filed Sep. 02, 2005, U.S. patent application Ser. No. 11/294,944 filed Dec. 06, 2005, U.S. patent application Ser. No. 11/369,797 filed Mar. 06, 2006, and U.S. patent application Ser. No. 11/382,707 filed on May 11, 2006, all of which are hereby incorporated by reference.
Number | Name | Date | Kind |
---|---|---|---|
2002376 | Mannheimer | May 1935 | A |
4407567 | Michelet | Oct 1983 | A |
4834512 | Austin | May 1989 | A |
5004319 | Smither | Apr 1991 | A |
5212555 | Stoltz | May 1993 | A |
5369433 | Baldwin | Nov 1994 | A |
5402407 | Eguchi | Mar 1995 | A |
5467121 | Allcock | Nov 1995 | A |
5612736 | Vogeley | Mar 1997 | A |
5696619 | Knipe | Dec 1997 | A |
5881034 | Mano | Mar 1999 | A |
5897195 | Choate | Apr 1999 | A |
5986811 | Wohlstadter | Nov 1999 | A |
6025951 | Swart | Feb 2000 | A |
6028689 | Michaliek | Feb 2000 | A |
6064423 | Geng | May 2000 | A |
6084843 | Abe | Jul 2000 | A |
6104425 | Kanno | Aug 2000 | A |
6111900 | Suzudo | Aug 2000 | A |
6123985 | Robinson | Sep 2000 | A |
6282213 | Gutin | Aug 2001 | B1 |
6315423 | Yu | Nov 2001 | B1 |
6329737 | Jerman et al. | Dec 2001 | B1 |
6498673 | Frigo | Dec 2002 | B1 |
6507366 | Lee | Jan 2003 | B1 |
6549730 | Hamada | Apr 2003 | B1 |
6625342 | Staple | Sep 2003 | B2 |
6649852 | Chason | Nov 2003 | B2 |
6650461 | Atobe | Nov 2003 | B2 |
6658208 | Watanabe | Dec 2003 | B2 |
6711319 | Hoen | Mar 2004 | B2 |
6741384 | Martin | May 2004 | B1 |
6784771 | Fan | Aug 2004 | B1 |
6833938 | Nishioka | Dec 2004 | B2 |
6885819 | Shinohara | Apr 2005 | B2 |
6900901 | Harada | May 2005 | B2 |
6900922 | Aubuchon | May 2005 | B2 |
6906848 | Aubuchon | Jun 2005 | B2 |
6906849 | Mi | Jun 2005 | B1 |
6914712 | Kurosawa | Jul 2005 | B2 |
6919982 | Nimura | Jul 2005 | B2 |
6934072 | Kim | Aug 2005 | B1 |
6934073 | Kim | Aug 2005 | B1 |
6943950 | Lee | Sep 2005 | B2 |
6958777 | Pine | Oct 2005 | B1 |
6970284 | Kim | Nov 2005 | B1 |
6995909 | Hayashi | Feb 2006 | B1 |
6999226 | Kim | Feb 2006 | B2 |
7023466 | Favalora | Apr 2006 | B2 |
7031046 | Kim | Apr 2006 | B2 |
7039267 | Ducellier et al. | May 2006 | B2 |
7046447 | Raber | May 2006 | B2 |
7068416 | Gim | Jun 2006 | B2 |
7077523 | Seo | Jul 2006 | B2 |
7161729 | Kim | Jan 2007 | B2 |
20020018407 | Komoto | Feb 2002 | A1 |
20020102102 | Watanabe | Aug 2002 | A1 |
20020135673 | Favalora et al. | Sep 2002 | A1 |
20030058520 | Yu | Mar 2003 | A1 |
20030071125 | Yoo | Apr 2003 | A1 |
20030174234 | Kondo | Sep 2003 | A1 |
20030184843 | Moon et al. | Oct 2003 | A1 |
20040009683 | Hiraoka | Jan 2004 | A1 |
20040012460 | Cho | Jan 2004 | A1 |
20040021802 | Yoshino | Feb 2004 | A1 |
20040052180 | Hong | Mar 2004 | A1 |
20040246362 | Konno | Dec 2004 | A1 |
20040252958 | Abu-Ageel | Dec 2004 | A1 |
20050024736 | Bakin | Feb 2005 | A1 |
20050057812 | Raber | Mar 2005 | A1 |
20050136663 | Terence Gan | Jun 2005 | A1 |
20050174625 | Huiber | Aug 2005 | A1 |
20050180019 | Cho | Aug 2005 | A1 |
20050212856 | Temple | Sep 2005 | A1 |
20050224695 | Mushika | Oct 2005 | A1 |
20050225884 | Gim | Oct 2005 | A1 |
20050231792 | Alain | Oct 2005 | A1 |
20050264870 | Kim | Dec 2005 | A1 |
20060012766 | Klosner | Jan 2006 | A1 |
20060012852 | Cho | Jan 2006 | A1 |
20060028709 | Cho | Feb 2006 | A1 |
20060187524 | Sandstrom | Aug 2006 | A1 |
20060209439 | Cho | Sep 2006 | A1 |
Number | Date | Country |
---|---|---|
08-043881 | Feb 1996 | JP |
11-069209 | Mar 1999 | JP |
2002-288873 | Oct 2002 | JP |
Number | Date | Country | |
---|---|---|---|
20060221179 A1 | Oct 2006 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 10822414 | Apr 2004 | US |
Child | 11419480 | US | |
Parent | 10855554 | May 2004 | US |
Child | 10822414 | US | |
Parent | 10855715 | May 2004 | US |
Child | 10855554 | US | |
Parent | 10855287 | May 2004 | US |
Child | 10855715 | US | |
Parent | 10857796 | May 2004 | US |
Child | 10855287 | US | |
Parent | 10857714 | May 2004 | US |
Child | 10857796 | US | |
Parent | 10857280 | May 2004 | US |
Child | 10857714 | US | |
Parent | 10872241 | Jun 2004 | US |
Child | 10857280 | US | |
Parent | 10893039 | Jul 2004 | US |
Child | 10872241 | US | |
Parent | 10896146 | Jul 2004 | US |
Child | 10893039 | US | |
Parent | 10979612 | Nov 2004 | US |
Child | 10896146 | US | |
Parent | 10983353 | Nov 2004 | US |
Child | 10979612 | US | |
Parent | 11072597 | Mar 2005 | US |
Child | 10983353 | US | |
Parent | 11072296 | Mar 2005 | US |
Child | 11072597 | US | |
Parent | 11076616 | Mar 2005 | US |
Child | 11072296 | US | |
Parent | 11191886 | Jul 2005 | US |
Child | 11076616 | US | |
Parent | 11208115 | Aug 2005 | US |
Child | 11191886 | US | |
Parent | 11218814 | Sep 2005 | US |
Child | 11208115 | US | |
Parent | 11294944 | Dec 2005 | US |
Child | 11218814 | US | |
Parent | 11369797 | Mar 2006 | US |
Child | 11294944 | US | |
Parent | 11382707 | May 2006 | US |
Child | 11369797 | US |