The present invention relates to a behavior control apparatus and method for a mobile unit, and in particular to a behavior control apparatus and method for recognizing a target object in acquired images and controlling behavior of the mobile unit with high accuracy based on the recognized target object.
To control a mobile unit with high accuracy based on input images, it is necessary for a control system to recognize an object in the image as a target for behavior of the mobile unit. One approach is that the control system learns training data pre-selected by an operator prior to recognition. Specifically, the control system searches the input images to extract some shapes or colors therefrom designated as features of the target. Then, the control system outputs commands to make the mobile unit move toward the extracted target.
However, it is necessary for the operator to teach features such as the shape or color of the target in detail to the control system, and the preparation for that is therefore a burden in terms of time and labor. In addition, since control would be interrupted whenever the target moves out of the input image, it is difficult to apply this approach in practice.
An alternative approach is to prepare a template for the target and, while the mobile unit is being controlled, continually apply the template to input images to search for and extract the shape and location of the target in detail. In this case, however, the computational cost becomes huge because the computer must keep calculating the shape and location of the target. Furthermore, the search for the target may fall into a local solution.
Therefore, to control behavior of the mobile unit efficiently and flexibly, it is preferable to make the mobile unit move autonomously rather than to rely on a supervised learning method in which a target is specified beforehand. To achieve this, a method for recognizing the target autonomously and learning the location of the target is needed. Japanese Patent Application Unexamined Publication (Kokai) No. H8-126981 discloses an image position recognition method for a robot system. According to the method, the target object is searched out autonomously even when the target object is missing from the input image due to error. However, the method requires that the work plane for recognizing images be painted with various colors prior to working, which is a substantially time-consuming task.
In Japanese Patent Application Unexamined Publication (Kokai) No. H7-13461, a method for guiding autonomous mobile robots that manage indoor air-conditioning units is disclosed. According to the method, a target object for guidance is detected through image processing and the robot is led toward the target. However, the method requires the blowing outlets of air-conditioning units as target objects, which lacks generality.
Therefore, it is an objective of the present invention to provide a behavior control apparatus and method for autonomously controlling a mobile unit based on visual information in practical applications, without the need for a great deal of preparation or computational cost and without limiting the type of target object.
According to one aspect of the invention, a behavior control apparatus for controlling behavior of a mobile unit is provided. The apparatus comprises sensory input capturing method for capturing sensory inputs and motion estimating method for estimating motion of the mobile unit. The apparatus further comprises target segregation method for segregating, from the sensory inputs, the portion which includes a target object serving as the target for behavior of the mobile unit, and target object matching method for extracting the target object from the segregated portion. The apparatus still further comprises target location acquiring method for acquiring the location of the target object and behavior decision method for deciding a behavior command for controlling the mobile unit based on the location of the target object.
The behavior control apparatus roughly segregates the portion that includes a target object of behavior from sensory inputs, such as images, based on the estimation of motion. The apparatus then specifies a target object from the portion, acquires the location of the target object and outputs a behavior command which moves the mobile unit toward that location. Thus, detailed features of the target object need not be predetermined. In addition, because features irrelevant to the present behavior are eliminated, the computational load is reduced. Therefore, highly efficient and accurate control of the mobile unit may be implemented.
As used herein, “mobile unit” refers to a unit which has a driving mechanism and moves in accordance with behavior commands.
The sensory inputs may be images of the external environment of the mobile unit.
The motion estimating method comprises behavior command output method for outputting the behavior command and behavior evaluation method for evaluating the result of the behavior of the mobile unit. The motion estimating method further comprises learning method for learning the motion of the mobile unit using the relationship between the sensory inputs and the behavior result and storing method for storing the learning result.
The behavior control apparatus pre-learns the relationship between sensory inputs and behavior commands. The apparatus then updates the learning result when a new feature is acquired in the behavior control stage. The learning result is represented as a probabilistic density distribution. Thus, the motion of the mobile unit in the behavior control stage may be estimated with high accuracy.
The motion of the mobile unit may be captured using a gyroscope instead of estimating it.
The target segregation method segregates the portion by comparing the sensory inputs and the estimated motion using, for example, optical flow. Thus the behavior control apparatus may roughly segregate the portion that includes a target object.
The target location acquiring method defines the center of the target object as the location of the target object and the behavior decision method outputs the behavior command to move the mobile unit toward the location of the target object. Thus the mobile unit may be controlled stably.
The behavior decision method calculates the distance between the mobile unit and the location of the target object, and decides the behavior command so as to decrease the calculated distance. This calculation is very simple and helps to reduce the amount of computation.
If the calculated distance is greater than a predetermined value, the target segregation method repeats segregating the portion which includes a target object.
The target object matching method extracts the target object by pattern matching between the sensory inputs and predetermined templates. Thus the target object may be extracted more accurately.
Other embodiments and features will be apparent by reference to the following description in connection with the accompanying drawings.
Preferred embodiments of the present invention will be described as follows with reference to the drawings.
A behavior control apparatus according to the invention recognizes a target object, which is a reference for controlling a mobile unit, from input images and then controls behavior of the mobile unit based on the recognized target object. The apparatus is used as installed on the mobile unit, which has driving mechanism and is movable by itself.
Configuration
The CCD camera 104 takes images of the frontal vision of the RC helicopter. The area covered by the camera is shown in
The RC helicopter 100 is tuned so that only the control of yaw orientation (as indicated by an arrow in
The behavior control apparatus 105 outputs behavior commands to move the self-referential point (for example, the center; this is hereinafter referred to as COM 111, an acronym for “center of motion”) of the image captured by the CCD camera 104 (the visual space 109) toward the target location 110 in order to control the RC helicopter 100 stably. The behavior commands are sent to the servomotor 106. In response to the behavior commands, the servomotor 106 drives the rod 108, activating the link mechanism to alter the angle of the tail rotor 103 so as to rotate the RC helicopter 100 in the yaw orientation.
In the embodiment described above, for the purpose of simple explanation, the controllable orientation is limited to one-dimensional operation in which the COM moves from side to side. However, the present invention may also be applied to position control in two or three dimensions.
Although the RC helicopter 100 is described as an example of the mobile unit having the behavior control apparatus of the present invention, the apparatus may be installed on any mobile unit having a driving mechanism and being able to move by itself. In addition, the mobile unit is not limited to flying objects like a helicopter, but includes, for example, vehicles traveling on the ground. The mobile unit further includes units of which only a part can move. For example, the behavior control apparatus of the present invention may be installed on industrial robots whose base is fixed to the floor, to recognize an operation target of the robot.
The behavior control apparatus 105 first learns the relationship between features of inputs (e.g., images taken by the CCD camera 104) and behavior of the mobile unit. These operations are collectively referred to as the “learning stage”. After completing the learning stage, the apparatus may estimate the motion of the mobile unit based on the captured images using the learned knowledge. The apparatus further searches for and extracts the target location in the image autonomously using the estimated motion. Finally, the apparatus controls the motion of the mobile unit with reference to the target location. These operations are collectively referred to as the “behavior control stage”.
It should be noted that the behavior control apparatus 105 shown in
Learning
In the learning stage, while moving the mobile unit, the behavior control apparatus 105 learns the relationship between features of input images taken by an image pickup device and the behavior result in response to behavior commands from the behavior command output block 204. The apparatus then stores the learning result in the storage block 210. This learning enables the apparatus to estimate the motion of the mobile unit accurately based on input images in the behavior control stage described later.
The image capturing block 202 receives images at predetermined intervals from an image pickup device such as the CCD camera 104 installed at the front of the RC helicopter 100. The block 202 then extracts features as sensory inputs Ii(t) (i=1,2, . . . ) from the images. This feature extraction may be implemented by any prior-art approach such as optical flow. The extracted features are sent to the behavior evaluation block 206.
The behavior command output block 204 outputs behavior commands Qi(t), which direct the behavior of the mobile unit. While learning is immature in the initial stage, behavior commands are read from a command sequence which is selected randomly beforehand. While the mobile unit moves randomly, the behavior control apparatus 105 may learn the knowledge necessary for estimating the motion of the mobile unit. As for the RC helicopter 100 shown in
ƒ: Ii(t) → Qi(t)  (1)
where the subscript i (i=1,2, . . . ) denotes the i-th data. For example, the mapping ƒ may be given as a non-linear approximation using a well-known Fourier series or the like.
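By way of illustration only, a mapping ƒ from sensory inputs to behavior commands could be approximated as a truncated Fourier series fitted by linear least squares. The following Python sketch is a hypothetical one-dimensional example; the function names, the choice of period, and the number of harmonics are assumptions, not part of the disclosure.

```python
import numpy as np

def fit_fourier_map(I, Q, n_harmonics=3, period=1.0):
    """Fit a 1-D mapping f: I(t) -> Q(t) as a truncated Fourier series
    by linear least squares.  Returns the coefficient vector."""
    w = 2.0 * np.pi / period
    # Design matrix columns: [1, cos(k w I), sin(k w I)] for k = 1..n_harmonics
    cols = [np.ones_like(I)]
    for k in range(1, n_harmonics + 1):
        cols.append(np.cos(k * w * I))
        cols.append(np.sin(k * w * I))
    A = np.stack(cols, axis=1)
    coef, *_ = np.linalg.lstsq(A, Q, rcond=None)
    return coef

def eval_fourier_map(coef, I, period=1.0):
    """Evaluate the fitted Fourier-series mapping at inputs I."""
    w = 2.0 * np.pi / period
    n_harmonics = (len(coef) - 1) // 2
    out = np.full_like(I, coef[0], dtype=float)
    for k in range(1, n_harmonics + 1):
        out += coef[2 * k - 1] * np.cos(k * w * I)
        out += coef[2 * k] * np.sin(k * w * I)
    return out
```

Because the fit is linear in the coefficients, any command signal composed of the chosen harmonics is reproduced exactly; richer non-linear approximators could be substituted in the same role.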
In an alternative embodiment, the behavior command output block 204 receives a signal from an external device and outputs behavior commands in accordance with the signal.
The behavior evaluation block 206 generates rewards depending on both the sensory inputs Ii(t) from the image capturing block 202 and the behavior result in response to the behavior command Qi(t), based on a predetermined evaluation function under a reinforcement learning scheme. An example of the evaluation function is a function that yields reward “1” when the mobile unit controlled by the behavior command is stable and otherwise yields reward “2”. After the rewards are yielded, the behavior evaluation block 206 generates a plurality of columns 1,2,3, . . . , m, as many as there are types of rewards, and distributes the behavior commands into the columns according to the type of their rewards. Hereinafter the behavior commands Qi(t) distributed into column l are denoted “Qil(t)”. The sensory inputs Ii(t) and behavior commands Qi(t) are supplied to the learning block 208 and used for learning the relationship between them.
The purpose of the evaluation function is to minimize the variance of the behavior commands. In other words, reinforcement learning satisfying σ(Q1)<σ(Q2) is executed with the evaluation function. The variance of the behavior commands needs to be reduced for smooth control. Learning with the evaluation function allows the behavior control apparatus 105 to eliminate unnecessary sensory inputs and to learn important sensory inputs selectively.
In each column, both sensory inputs and the behavior commands are stored according to the type of rewards given to the behavior commands.
Each column 1,2,3, . . . ,m corresponds to a cluster model of the behavior commands. Each column is used to calculate a generative model g(Ωl), where l denotes the number of the attention class applied. A generative model is a storage model generated through learning, and may be represented by a probabilistic density function in statistical learning. Non-linear estimation such as a neural network may be used to model g(Ωl), which gives the estimation of the probabilistic density distribution P(Q|Ωl). In the present embodiment, it is assumed that P(Q|Ωl) takes the form of a Gaussian mixture model, which may approximate any probabilistic density function.
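As an illustrative sketch of this idea, each column of behavior commands may be modeled as one Gaussian component and the components combined into a mixture density. The scalar-command setting and all names below are hypothetical simplifications, not the disclosed implementation.

```python
import numpy as np

def fit_columns(commands_by_column):
    """Fit one Gaussian per column of behavior commands (scalar commands
    for simplicity).  Returns a list of (weight, mean, variance) tuples."""
    total = sum(len(c) for c in commands_by_column)
    params = []
    for col in commands_by_column:
        col = np.asarray(col, dtype=float)
        # Mixture weight proportional to column size; small floor on variance
        params.append((len(col) / total, col.mean(), col.var() + 1e-9))
    return params

def mixture_density(q, params):
    """Gaussian mixture estimate of P(Q) built from the column models."""
    dens = 0.0
    for w, mu, var in params:
        dens += w * np.exp(-0.5 * (q - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
    return dens
```

The density peaks near the commands actually stored in each column, so a tightly clustered column (small variance) contributes a sharp mode, consistent with the minimum-variance goal above.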
Using only one column would not accelerate convergence of the learning, because it would take much time until the normal distribution curve of the behavior commands stored in that column sharpened and the variance became small. In order to control the mobile unit stably, learning must proceed in such a way that the variance of the normal distribution of the motor output becomes smaller. One feature of the invention is that the normal distribution curve is sharpened rapidly because a plurality of columns are generated. A method utilizing such minimum variance theory is described in Japanese Patent Application Unexamined Publication (Kokai) No. 2001-028758.
Then the learning process described later is executed in the learning block 208. After the learning process is completed, a behavior command that minimizes the variance of the normal distribution curve of behavior commands for a new sensory input may be selected from the column by means of a statistical learning scheme, and rapid stabilization of the mobile unit may be attained.
Now the learning process at the learning block 208 will be described in detail.
The learning block 208 calculates the class of attention Ωl corresponding one-to-one to each column l containing the behavior commands, using an identity mapping translation. This translation is represented by the following mapping h.
h: Qi(t) → Ωl(t)  (2)
The purpose of the classes of attention Ωl is efficient learning by focusing on particular sensory inputs among the massive sensory inputs when new sensory inputs are given. Generally, the amount of sensory input far exceeds the processing capacity of the computer. Thus, appropriate filtering of sensory inputs with the classes of attention Ωl improves the efficiency of the learning. Therefore, the learning block 208 may eliminate all sensory inputs except a selected small subset of them.
As the learning proceeds, the learning block 208 may determine directly the class of attention corresponding to a sensory input using statistical probability, without calculating the mappings f and/or h one by one. More specifically, each class of attention Ωl is a parameter for modeling the behavior commands Qil(t) stored in each column using the probabilistic density function of the normal distribution. To obtain the probabilistic density function, a mean μ and covariance Σ need to be calculated for the behavior commands Qil(t) stored in each column. This calculation is performed by the unsupervised Expectation Maximization (EM) algorithm using the clustered component algorithm (CCA), which will be described later. It should be noted that the classes of attention Ωl are modeled on the assumption that a true probabilistic distribution p(Ii(t)|Ωl) exists for each class of attention Ωl.
Using the obtained parameters, the probabilistic density function of each class of attention Ωl may be obtained. The obtained density functions are used as the prior probability p̄(Ωl(t)) (= p̄(Ql(t)|Ωl(t))) of each class of attention before sensory inputs are given. In other words, each class of attention Ωl is assigned as an element of the probabilistic density function p(Qil(t)|Ωl,θ).
After the classes of attention Ωl are calculated, the learning block 208 learns the relation between the sensory inputs and the classes of attention by means of a supervised learning scheme using a neural network. More specifically, this learning is executed by obtaining the conditional probabilistic density function pλ(Ii(t)|Ωl) of the class of attention Ωl and the sensory input Ii(t) using a hierarchical neural network with the class of attention Ωl as the supervising signal. It should be noted that the class of attention may be calculated by the composite mapping h∘ƒ. The obtained conditional probabilistic density function pλ(Ii(t)|Ωl) corresponds to the probabilistic relation between the sensory input and the class of attention.
New sensory inputs obtained by the CCD camera 104 are provided to the behavior control apparatus 105 after the learning is over. The learning block 208 selects the class of attention corresponding to the provided sensory input using a statistical learning scheme such as Bayes' learning. This operation corresponds to calculating the conditional probabilistic density function p(Ωl|Ii(t)) of the class of attention Ωl relative to the sensory input Ii(t). As noted above, since the probabilistic density function of the sensory inputs and the class of attention has already been estimated by the hierarchical neural network, newly given sensory inputs may be directly assigned to a particular class of attention. In other words, after the supervised learning with the neural network is over, calculation of the mappings ƒ and/or h becomes unnecessary for selecting the class of attention Ωl relative to the sensory input Ii(t).
In this embodiment, Bayes' learning is used as the statistical learning scheme. Assume that sensory inputs Ii(t) are given and that both the prior probability p̄(Ωl(t)) and the probabilistic density function p(Ii(t)|Ωl) have been calculated beforehand. The maximum posterior probability for each class of attention is calculated by the following Bayes' rule.
p(Ωl(t)) may be called the “belief” in Ωl and is the probability that a sensory input Ii(t) belongs to a class of attention Ωl(t). Calculating this probability using Bayes' rule implies that one class of attention Ωl can be identified selectively by increasing its belief (weight) through Bayesian learning.
The class with the highest probability (belief) is selected as the class of attention Ωl corresponding to the provided sensory input Ii(t). Thus, the behavior control apparatus 105 may obtain the class of attention Ωl, which is a hidden parameter, from the directly observable sensory input Ii(t) using Bayes' rule, and assign the sensory input Ii(t) to the corresponding class of attention Ωl.
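A minimal illustration of this Bayes-rule selection, assuming the per-class likelihoods p(Ii(t)|Ωl) and priors p̄(Ωl) are already available as plain numbers (the function name and vector representation are hypothetical):

```python
import numpy as np

def select_attention_class(likelihoods, priors):
    """Bayes' rule: posterior belief p(Omega_l | I) is proportional to
    p(I | Omega_l) * p(Omega_l).  Returns (argmax index, posterior vector)."""
    likelihoods = np.asarray(likelihoods, dtype=float)
    priors = np.asarray(priors, dtype=float)
    joint = likelihoods * priors
    posterior = joint / joint.sum()  # normalize over all classes
    return int(np.argmax(posterior)), posterior
```

With uniform priors the likelihood decides; as learning increases the belief (prior weight) of one class, that class can win even against a somewhat larger likelihood, which is the selective-identification effect described above.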
The learning block 208 further searches for the behavior command according to the sensory input stored in the column corresponding to the selected class of attention, and then sends the retrieved behavior command to the target segregation block 212.
As noted above, using the blocks 204–210, the behavior control apparatus may estimate the motion of the mobile unit accurately based on input images. Therefore, these blocks are collectively referred to as “motion estimating method” in the appended claims.
Behavior Control
In the behavior control stage, the behavior control apparatus 105 estimates the motion based on the input image and roughly segregates the location of the target object (target location). The apparatus then performs pattern matching with templates, which are stored in memory as target objects, and calculates the target location more accurately. The apparatus then outputs the behavior command based on the distance between the target location and the center of motion (COM). By repeating this process, the target location is refined and the mobile unit reaches a stably controlled state. In other words, the apparatus segregates the target based on motion estimation and understands what is to be the target object.
Now the functionality of each block is described.
The target segregation block 212 roughly segregates and extracts a portion including the target object, which is to be the behavior reference of the mobile unit, from the visual space. For example, the segregation is done by comparing the optical flow of the image and the estimated motion.
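As a hypothetical sketch of such segregation, the per-pixel optical flow can be compared with the flow predicted from the estimated ego-motion, and pixels whose residual is large can be kept as the candidate target portion. The array layout and threshold below are illustrative assumptions, not the disclosed procedure.

```python
import numpy as np

def segregate_target(flow, ego_flow, threshold=1.0):
    """Mask pixels whose measured optical flow deviates from the flow
    predicted by the estimated ego-motion.

    flow, ego_flow: arrays of shape (H, W, 2) holding per-pixel (dx, dy).
    Returns a boolean (H, W) mask, True where a moving target may be."""
    residual = np.linalg.norm(flow - ego_flow, axis=2)
    return residual > threshold
```

Pixels that move consistently with the mobile unit's own motion are suppressed, so only independently moving regions survive as the roughly segregated portion.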
The target object matching block 214 uses templates to extract the target object more accurately. The target object matching block 214 compares the template and the segregated portion and determines whether the portion is the object to be targeted or not. The templates are prepared beforehand. If there are a plurality of target objects, or if there are a plurality of objects which match the templates, the object having the largest matching index is selected.
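For illustration, template matching of this kind may be sketched with normalized cross-correlation; the exhaustive scan below is a simplified stand-in and not necessarily the matching index used by the apparatus.

```python
import numpy as np

def match_template(image, template):
    """Slide the template over the image and return the (row, col) and
    score of the best normalized-cross-correlation match."""
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()
    tn = np.sqrt((t ** 2).sum()) + 1e-12
    best, best_pos = -np.inf, (0, 0)
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            w = image[y:y + th, x:x + tw]
            wz = w - w.mean()
            denom = np.sqrt((wz ** 2).sum()) * tn + 1e-12
            score = (wz * t).sum() / denom  # in [-1, 1]
            if score > best:
                best, best_pos = score, (y, x)
    return best_pos, best
```

The score is invariant to brightness offset and scaling, and when several regions match, taking the maximum score realizes the "largest matching index" selection described above.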
A target location acquiring block 216 defines the center point of the target object as the target location.
When the target location is defined, the behavior decision block 218 supplies a request signal to the behavior command output block 204. When the request signal is received, the behavior command output block 204 outputs the behavior command to move the mobile unit such that its center of motion (COM) overlaps the location of the target object.
Segregating target from non-target is indispensable for determining behavior commands autonomously. This is because a target object segregated by target segregation may be used to select the optimal behavior for moving toward the location of the target object. In other words, the actually most suitable behavior is selected by predicting the center of motion (COM) based on selective attention. This allows the behavior control apparatus to search for the location of the target object accurately in the captured image.
Assuming that ΩTL represents the target location and σ represents the area of the captured image where segregation may be executed, the location of the target object is modeled by the probability density function P(ΩTL|σ). Since the location ΩTL is basically an uncertain value, it is assumed that the location has behavior control noise (that is, variance of the probabilistic density distribution). By repeating the feedback process, the noise (variance) of the target location is reduced and the location is refined. In the present invention, reduction of the noise (variance) depends on the accuracy of the motion estimation of the mobile unit.
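The variance-reduction effect of the repeated feedback can be illustrated under a simple Gaussian-fusion assumption; this particular update rule is an illustration of why repetition shrinks the variance, not a rule taken from the disclosure.

```python
def fuse_estimates(mu1, var1, mu2, var2):
    """Fuse two independent Gaussian estimates of the target location.
    The fused variance is the harmonic combination of the inputs and is
    therefore always smaller than either input variance."""
    var = 1.0 / (1.0 / var1 + 1.0 / var2)
    mu = var * (mu1 / var1 + mu2 / var2)
    return mu, var
```

Each pass of segregation and matching contributes one more estimate, so the fused variance of the target-location distribution decreases monotonically, which is the refinement described above.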
CCA Reinforced EM Algorithm
Now the CCA reinforced EM algorithm is described in detail.
The EM algorithm is an iterative algorithm for estimating the maximum likelihood parameter when the observed data can be viewed as incomplete data. When the observed data follows a normal distribution, the parameter θ is represented by θ = (μ, Σ).
In one embodiment of the invention, the model of feature vector is built by means of bayes' parameter estimation. This is employed to estimate the number of clusters which represents data structure best.
An algorithm to estimate the parameters of a Gaussian mixture model will now be described. This algorithm is essentially similar to conventional clustering, but differs in that it can estimate parameters closely even when clusters overlap. Samples of training data are used to determine the number of subclasses and the parameters of each subclass.
Let Y be an M dimensional random vector to be modeled using a Gaussian mixture distribution. Assume that this model has K subclasses. The following parameters are required to completely specify the k-th subclass.
π, μ, R denote the following parameter sets, respectively.
The complete set of parameters for the class is then given by K and θ=(π, μ, R). Note that the parameters are constrained in a variety of ways. In particular, K must be an integer greater than 0, πk≧0 with Σπk=1, and det(R)≧ε, where ε might be chosen depending on the application. The set of admissible θ for a K-th order model is denoted by ρ(K).
Let Y1, Y2, . . . , YN be N multispectral pixels sampled from the class of interest. Moreover, assume that the subclass of each pixel Yn is given by the random variable Ωn. Of course, Ωn is normally not known, but it can be useful for analyzing the problem.
Letting each subclass be a multivariate Gaussian distribution, the probability density function for the pixel Yn for Ωn=k is given by
Since the subclass Ωn of each sample is not known, to compute the density function of Yn for a given parameter θ, the following definition of conditional probability is applied.
The logarithm of the probability of the entire sequence
is as follows.
The objective is then to estimate the parameters K and θ∈ρ(K).
The minimum description length (MDL) estimator works by attempting to find the model order which minimizes the number of bits that would be required to code both the data samples yn and the parameter vector θ. The MDL criterion is expressed as follows.
MDL(K,θ) = −log py(y|K,θ) + (1/2)L log(NM)  (9)
Therefore, the objective is to minimize the MDL criterion.
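As an illustrative sketch for scalar data (M = 1), the MDL criterion can be computed as a negative log-likelihood plus a parameter-count penalty. The parameter count L = 3K − 1 used here (K − 1 free weights plus K means and K variances) is an assumption for the scalar case, not taken from the disclosure.

```python
import numpy as np

def gmm_log_likelihood(y, pi, mu, var):
    """Log-likelihood of 1-D data y under a Gaussian mixture (pi, mu, var)."""
    y = np.asarray(y, dtype=float)[:, None]      # shape (N, 1)
    comp = pi * np.exp(-0.5 * (y - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
    return float(np.log(comp.sum(axis=1)).sum())

def mdl(y, pi, mu, var):
    """MDL criterion for scalar data: negative log-likelihood plus a
    0.5 * L * log(N) penalty, with L free parameters for K components."""
    K = len(pi)
    L = 3 * K - 1
    N = len(y)
    return -gmm_log_likelihood(y, pi, mu, var) + 0.5 * L * np.log(N)
```

Two models that fit the data equally well then differ only in the penalty term, so the one with fewer components wins, which is exactly the pressure that drives K downward in the algorithm.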
In order to derive the EM algorithm update equations, it is required to compute the following equation (Expectation step)
where Y and X are the sets of random variables
respectively, and y and x are realizations of these random objects.
Thus the following equation holds.
MDL(K,θ)−MDL(K,θ(i))<Q(θ(i);θ(i))−Q(θ;θ(i)) (13)
This results in a useful optimization method, since any value of θ that increases the value of Q(θ;θ(i)) is guaranteed to reduce the MDL criterion. The objective of the EM algorithm is thereby to iteratively optimize with respect to θ until a local minimum of the MDL function is reached.
The Q function is optimized in the following way.
Q(E,π;E(i),π(i))=E[log py,x(Y,X|E,π)|y,E(i), π(i)]−KM log(NM) (14)
In this case,
where
The EM update equations are then as follows.
(E(i+1),π(i+1))=argminE,πQ(E,π;E(i),π(i)) (17)
The solution is given as follows.
ek(i+1) = principal eigenvector of R̄k,  πk(i+1) = N̄k/N  (18)
Initially, the number K of subclasses is set sufficiently large, and is then decremented sequentially. For each value of K, the EM algorithm is applied until it converges to a local minimum of the MDL function. Eventually, the value of K and the corresponding parameters that resulted in the smallest value of the MDL criterion may be selected.
One method to effectively reduce K is to constrain the parameters of two classes to be equal, such that el=em for classes l and m. Letting E* and E*l,m be the unconstrained and constrained solutions to Eq. (17), a distance function may be defined as follows.
d(l,m) = Q(E*,π*;E(i),π(i)) − Q(E*l,m,π*;E(i),π(i)) = σmax(Rl) + σmax(Rm) − σmax(Rl+Rm) ≧ 0  (19)
where σmax(R) denotes the principal eigenvalue of R. At each step, the two components that minimize the class distance are computed.
(l*,m*)=argminl,md(l,m) (20)
Then the two classes are merged and the number of subclasses K is decreased.
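A moment-preserving merge of two Gaussian subclasses, sketched below, is one standard way to realize this merging step; the exact update used by the CCA reinforced EM algorithm may differ.

```python
import numpy as np

def merge_components(p1, mu1, R1, p2, mu2, R2):
    """Merge two Gaussian subclasses (weight, mean, covariance) into one,
    preserving the total weight, mean and covariance of the pair."""
    p = p1 + p2
    mu = (p1 * mu1 + p2 * mu2) / p
    d1, d2 = mu1 - mu, mu2 - mu
    # Merged covariance = within-component spread + between-mean spread
    R = (p1 * (R1 + np.outer(d1, d1)) + p2 * (R2 + np.outer(d2, d2))) / p
    return p, mu, R
```

The merged covariance absorbs the separation between the two means, so merging two well-separated classes visibly inflates the variance along the axis joining them, while merging two nearly identical classes is almost free, which is what the distance d(l,m) measures.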
Process of Behavior Control Apparatus
It should be noted that the learning stage and the behavior control stage are not necessarily divided clearly; both of them may be executed simultaneously, as in the example described below.
In other words, the behavior evaluation block 206 determines whether a feature of a newly provided image should be reflected in the knowledge acquired by previous learning in the behavior control stage. Furthermore, the behavior evaluation block 206 receives the motion estimated from the image. When a change of the external environment that was not covered in previous learning is captured by the image capturing block 202, the feature is sent to the behavior evaluation block 206, which outputs an attentional demand indicating that an attention class should be generated. In response to this, the learning block 208 generates an attention class. Thus the learning result is always updated, and the precision of the motion estimation is therefore also improved.
Now the control process of the behavior control apparatus of the invention installed on the RC helicopter will be described in a practical application.
At step 602, the probabilistic density distributions P(Ωl) for all attention classes Ωl of motion are assumed to be uniform. At step 604, the mobile unit moves randomly to collect data for learning. In this example, the data set collected for stabilizing the RC helicopter 100 was used to generate 500 training data points and 200 test points.
At step 606, the CCA reinforced EM algorithm is executed to calculate the parameters θ = (μ, Σ) which define the probabilistic density distribution of Ωl. In the present example, 20 subclasses were used at first, but the number of subclasses converged under the CCA reinforced EM algorithm and was finally reduced to 3, as shown in
At step 608, P(Q|Ωl) is calculated with θ, where Q represents a behavior command. At step 610, the probabilistic relation between the feature vector I and the attention class Ωl is calculated with the neural network. At step 612, the motion of the mobile unit is estimated by Bayes' rule. Steps 602 to 612 correspond to the learning stage.
At step 614, the Gaussian mixture model is calculated with the use of each probabilistic density function. The part of the image which is not included in the Gaussian mixture model is separated as non-target.
At step 616, the target object is recognized by template matching and the probabilistic density distribution ΩTL of the target location is calculated. At step 618, the center of this distribution is defined as the target location.
At step 620, the difference D between the center of motion (COM) and the target location (TL) is calculated. At step 622, the map outputs a behavior command expanding the width of motion when the helicopter is far from the target location, and otherwise outputs a command reducing the width of motion.
At step 624, it is determined whether D is smaller than the allowable error ε. If D is larger than ε, the accuracy of the target location is not sufficient and the process returns to step 606 to re-calculate θ. That is, how many Gaussian mixture functions are needed to estimate the state of motion comes down to a normalization problem. By increasing the applied number of Gaussian mixture functions every time the process returns to step 606, the unit may estimate θ accurately and thus predict the target location accurately.
When D is smaller than ε at step 624, the helicopter is stable with sufficient accuracy relative to the target location, and the process is terminated. By setting ε small, the unit may control both the location of the helicopter and the duration during which the helicopter remains at that location. Steps 614 to 624 correspond to the behavior control stage.
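The feedback loop of steps 620 to 624 may be sketched, purely for illustration, as a one-dimensional proportional controller; the gain, tolerance and loop structure below are assumptions rather than the disclosed control law.

```python
def control_step(com, target, gain=0.5):
    """One proportional control step: the command magnitude grows with the
    distance D between the center of motion (COM) and the target location."""
    d = target - com
    return gain * d  # signed yaw command

def run_until_stable(com, target, eps=0.01, gain=0.5, max_steps=100):
    """Repeat control steps until |D| < eps, mimicking the loop of
    steps 620 to 624."""
    for _ in range(max_steps):
        if abs(target - com) < eps:
            return com
        com += control_step(com, target, gain)
    return com
```

With a gain below 1 the error shrinks geometrically each iteration, so a smaller ε simply costs a few more iterations, matching the trade-off between accuracy and settling time noted above.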
Results
Some preferred embodiments have been described, but this invention is not limited to such embodiments. For example, the behavior control apparatus need not be installed on the mobile unit. In this case, only the CCD camera is installed on the mobile unit and the behavior control apparatus is installed elsewhere. Information is then transmitted through wireless communication between the camera and the apparatus.
According to one aspect of the invention, the behavior control apparatus roughly segregates a target area that includes a target object of behavior from sensory inputs, such as images, based on the estimation of motion. The apparatus then specifies a target object from the target area, acquires the location of the target object and outputs a behavior command which moves the mobile unit toward that location. Thus, detailed features of the target object need not be predetermined. In addition, because features irrelevant to the present behavior are eliminated, the computational load is reduced. Therefore, highly efficient and accurate control of the mobile unit may be implemented.
According to another aspect of the invention, the behavior control apparatus pre-learns the relationship between sensory inputs and behavior commands. The apparatus then updates the learning result when a new feature is acquired in the behavior control stage. The learning result is represented as a probabilistic density distribution. Thus, the motion of the mobile unit in the behavior control stage may be estimated with high accuracy.
Number | Date | Country | Kind
---|---|---|---
2001-214907 | Jul 2001 | JP | national

Filing Document | Filing Date | Country | Kind | 371c Date
---|---|---|---|---
PCT/JP02/07224 | 7/16/2002 | WO | 00 | 1/16/2004

Publishing Document | Publishing Date | Country | Kind
---|---|---|---
WO03/009074 | 1/30/2003 | WO | A

Number | Name | Date | Kind
---|---|---|---
4092716 | Berg et al. | May 1978 | A
4873644 | Yasuo et al. | Oct 1989 | A

Number | Date | Country
---|---|---
19645556 | Oct 1997 | DE
0390051 | Oct 1990 | EP
05-150607 | Jan 1995 | JP
06-266507 | May 1996 | JP
2000-185720 | Jan 2001 | JP

Number | Date | Country
---|---|---
20040162647 A1 | Aug 2004 | US