Claims
- 1. A method for identifying or quantifying one or more characteristics of interest of unknown objects, comprising the steps of:
A training of a single neural network model with a first and a second training set of known objects having known values for the one or more characteristics of interest; B validating the optimal neural network model; and C analyzing unknown objects having unknown values of the one or more characteristics of interest, comprising the steps of:
I imaging the unknown objects having unknown values of the one or more characteristics of interest against a background to obtain an original digital image, wherein the original digital image comprises pixels representing the unknown objects, the background and any debris; II processing the original digital image to identify, separate, and retain the pixels representing the unknown objects from the pixels representing the background and the pixels representing any debris, and to eliminate the background and any debris; III analyzing the pixels representing each of the unknown objects to generate data representative of one or more image parameters for each of the unknown objects; IV providing the data to a chosen flash code deployed from the candidate neural network model; V analyzing the data through the flash code; and VI receiving the output data from the flash code in a predetermined format, wherein the output data represents the unknown values of the one or more characteristics of interest of the unknown objects.
- 2. A method for training of a single neural network model with a first and second training set of known objects having known values for the one or more characteristics of interest, comprising the steps of:
A
I selecting known objects having known values for the one or more characteristics of interest; II arranging the known objects into a spectrum according to increasing degree of expression of the one or more characteristics of interest; III segregating the known objects into a first and a second training set corresponding to a predetermined state of the one or more characteristics of interest; IV imaging each of the first and second training sets against a background to obtain an original digital image for each of the training sets, wherein each of the original digital images comprises pixels representing the known objects, background and any debris; V processing the original digital image to identify, separate, and retain the pixels representing the known objects from the pixels representing the background and the pixels representing any debris, and to eliminate the background and any debris; VI analyzing the pixels representing each of the known objects to generate data representative of one or more image parameters for each of the known objects; VII providing the data to the neural network software to generate multiple candidate neural network models, wherein the multiple candidate neural network models each can have a flash code for deployment; and VIII choosing an optimal neural network model from the multiple candidate neural network models and retaining the corresponding flash code of the optimal neural network model for identifying or quantifying the one or more characteristics of interest of unknown objects having unknown values of the one or more characteristics of interest; and B validating the optimal neural network model comprising the steps of:
I selecting more than one sample of the known objects having known values for the one or more characteristics of interest; II imaging each sample against a background to obtain an original digital image for each sample, wherein the original digital image comprises pixels representing the known objects, background and any debris; III processing the original digital image to identify, separate, and retain the pixels representing the known objects from the pixels representing the background and the pixels representing any debris, and to eliminate the background and any debris; IV analyzing the pixels representing each of the known objects to generate data representative of one or more image parameters for each of the known objects; V providing the data to a chosen flash code deployed from the candidate neural network model; VI analyzing the data through the flash code; VII evaluating the output data from the flash code for accuracy and repeatability; VIII choosing and deploying the flash code of the optimal neural network model for identifying or quantifying the one or more characteristics of interest of unknown objects having unknown values of the one or more characteristics of interest.
- 3. A method of analyzing unknown objects having unknown values of the one or more characteristics of interest comprising the steps of:
I imaging the unknown objects having unknown values of the one or more characteristics of interest against a background to obtain an original digital image, wherein the original digital image comprises pixels representing the unknown objects, the background and any debris; II processing the original digital image to identify, separate, and retain the pixels representing the unknown objects from the pixels representing the background and the pixels representing any debris, and to eliminate the background and any debris; III analyzing the pixels representing each of the unknown objects to generate data representative of one or more image parameters for each of the unknown objects; IV providing the data to the chosen flash code deployed from the candidate neural network model; V analyzing the data through the flash code; and VI receiving the output data from the flash code in a predetermined format, wherein the output data represents the unknown values of the one or more characteristics of interest of the unknown objects.
- 4. A method of processing a digital image to identify, separate, and retain pixels representing objects from pixels representing the background and pixels representing any debris, and to eliminate the background and any debris.
- 5. The method according to claim 4, further comprising the step of detecting an edge of each of the objects and distinguishing each of the objects from the background.
- 6. The method according to claim 5, wherein detecting the edge of each of the objects comprises applying an edge detection algorithm.
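Claim 6 leaves the edge detection algorithm unspecified. As an illustrative sketch only (not the claimed implementation), one common choice is a 3×3 Sobel gradient-magnitude filter; the image values below are invented:

```python
def sobel_magnitude(img):
    """Approximate gradient magnitude with 3x3 Sobel kernels (interior pixels only)."""
    kx = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal-gradient kernel
    ky = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical-gradient kernel
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(kx[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(ky[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out

# Hypothetical 5x5 grayscale image with a vertical step edge between columns 2 and 3.
step_edge = [[0, 0, 0, 255, 255] for _ in range(5)]
grad = sobel_magnitude(step_edge)  # large values only along the step
```

The gradient magnitude is zero in flat regions and peaks at the object boundary, which is the property the edge-detection step relies on to distinguish objects from background.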
- 7. The method according to claim 4, further comprising the step of eliminating, from the original digital image, an outer layer of pixels on the outer circumference of each of the objects and any debris.
- 8. A method of processing a digital image comprising pixels representing objects to remove some debris and to separate each of the objects comprising the steps of:
i removing some debris from the original digital image of the objects by applying a first digital sieve, wherein the first digital sieve selects the pixels representing each of the objects meeting a predetermined threshold for a first set of one or more image parameters of the objects; and ii in the image from (i), separating each of the objects that are adjacent by applying an object-splitting algorithm at least once.
- 9. The method according to claim 8(i), wherein the first digital sieve selects the pixels representing each of the objects meeting a predetermined threshold for a first set of one or more image parameters, wherein the one or more image parameters are size or shape or both.
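Claims 8–9 describe a first digital sieve that retains only objects meeting a predetermined size or shape threshold. One plausible reading, sketched as connected-component labeling with an area cutoff (4-connectivity flood fill; the mask and threshold below are invented for illustration):

```python
def sieve(mask, min_area):
    """First digital sieve: keep only connected ON regions whose pixel area
    meets a predetermined size threshold; smaller regions are treated as debris."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                stack, region = [(y, x)], []
                seen[y][x] = True
                while stack:  # flood fill one connected region
                    cy, cx = stack.pop()
                    region.append((cy, cx))
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                if len(region) >= min_area:  # pass the sieve: keep the object
                    for ry, rx in region:
                        out[ry][rx] = 1
    return out

mask = [
    [1, 1, 0, 0, 0],
    [1, 1, 0, 0, 1],  # the lone ON pixel at the right is debris
    [0, 0, 0, 0, 0],
]
cleaned = sieve(mask, min_area=2)  # 4-pixel object kept, 1-pixel speck removed
```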
- 10. The method according to claim 9, wherein distinguishing the objects from the background comprises applying a predetermined threshold to the original digital image to create a binary mask having ON pixels in areas representing each of the objects and OFF pixels in areas representing the background.
- 11. The method according to claim 10, wherein the ON pixels display intensities represented by values for RGB.
- 12. The method according to claim 11, wherein the OFF pixels display intensities represented by RGB values of zero.
- 13. The method according to claim 12, further comprising the step of applying a Boolean logic command, AND, to combine the original digital image with the binary mask to create a new digital image, wherein the new digital image comprises pixels representing each of the objects and wherein the pixels representing each of the objects have a detected edge.
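Claims 10–13 describe thresholding the image into an ON/OFF binary mask and AND-combining the mask with the original image so that only object pixels survive. A minimal plain-Python sketch of those two steps on a hypothetical single-band image (a real implementation would operate on each RGB band; the threshold value is an assumption):

```python
# Hypothetical 4x4 grayscale image; bright pixels belong to the object.
image = [
    [ 10,  12, 200, 210],
    [ 11, 190, 205,  12],
    [  9, 195, 198,  10],
    [ 10,  11,  12,   9],
]
THRESHOLD = 128  # assumed predetermined threshold

# Binary mask: ON (1) in areas representing the object, OFF (0) for background.
mask = [[1 if px >= THRESHOLD else 0 for px in row] for row in image]

# Boolean AND of the original image with the mask: background pixels go to zero,
# object pixels keep their original intensities, yielding an edge-delimited object.
combined = [[px if m else 0 for px, m in zip(irow, mrow)]
            for irow, mrow in zip(image, mask)]
```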
- 14. A method of processing a digital image comprising pixels representing objects to remove remaining debris or object anomalies comprising the step of separating and removing pixels representing remaining debris or object anomalies from the pixels representing each of the objects by applying a second digital sieve, wherein the second digital sieve selects the pixels representing each of the objects meeting predetermined thresholds for a second set of one or more image parameters.
- 15. The method according to claim 14, wherein the second digital sieve selects the pixels representing each of the objects meeting predetermined thresholds for a second set of one or more image parameters, wherein the one or more image parameters are roundness, shape, perimeter convex, aspect ratio, area, area polygon, dendrites, perimeter ratio and maximum radius.
- 16. The method according to claim 15, further comprising the step of applying a predetermined threshold to the new digital image, to create a binary mask having ON pixels in areas representing each of the objects and OFF pixels in areas representing the background.
- 17. The method according to claim 16, wherein the ON pixels display intensities represented by values for RGB.
- 18. The method according to claim 17, wherein the OFF pixels display intensities represented by RGB values of zero.
- 19. The method according to claim 18, further comprising the step of applying a Boolean logic command, AND, to combine the original digital image with the binary mask to create a new digital image, wherein the new digital image comprises pixels representing each of the objects and wherein the pixels representing each of the objects have a detected edge.
- 20. A method of analyzing pixels representing objects to generate data representative of one or more image parameters for each of the objects, wherein the one or more image parameters are dimension, shape, texture, and color.
- 21. The method according to claim 20, wherein the one or more image parameters of dimension and shape are area, aspect, area/box, major axis, minor axis, maximum diameter, minimum diameter, mean diameter, maximum radius, minimum radius, radius ratio, integrated optical density, length, width, perimeter, perimeter convex, perimeter ellipse, perimeter ratio, area polygon, fractal dimension, minimum feret, maximum feret, mean feret, and roundness.
- 22. The method according to claim 20, wherein the one or more image parameters of texture are margination, heterogeneity, and clumpiness.
- 23. The method according to claim 20, wherein the one or more image parameters of color are density for red, density for green, density for blue, minimum density, maximum density, standard deviation of density, and mean density.
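Claims 20–23 enumerate the image parameters without defining them. As an illustrative sketch under assumed definitions, three of the listed dimension/shape parameters (area, aspect, roundness) can be computed from an object's ON-pixel coordinates; the roundness formula 4πA/P² (1.0 for a perfect circle) is a standard definition and an assumption here:

```python
import math

def shape_parameters(pixels):
    """Compute area, bounding-box aspect ratio, and approximate roundness
    for one object given its ON-pixel (y, x) coordinates."""
    ys = [p[0] for p in pixels]
    xs = [p[1] for p in pixels]
    area = len(pixels)  # pixel count as area
    height = max(ys) - min(ys) + 1
    width = max(xs) - min(xs) + 1
    aspect = max(height, width) / min(height, width)
    pset = set(pixels)
    # Perimeter estimate: count exposed 4-neighbour edges of the object's pixels.
    perimeter = sum(1 for (y, x) in pixels
                    for n in ((y-1, x), (y+1, x), (y, x-1), (y, x+1))
                    if n not in pset)
    roundness = 4 * math.pi * area / (perimeter ** 2) if perimeter else 0.0
    return {"area": area, "aspect": aspect, "roundness": roundness}

# A 2x2 square object: area 4, exposed edge count 8.
params = shape_parameters([(0, 0), (0, 1), (1, 0), (1, 1)])
```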
- 24. A method for obtaining one or more image parameters of color for objects comprising the step of generating an outline of the pixels representing each of the objects.
- 25. The method according to claim 24, further comprising the step of obtaining color spectral information of the pixels representing each of the objects by recording a data set representative of the number of pixels contained in each of the multiplicity of intensity levels contained in each of the RGB color bands that are contained in each of the objects.
- 26. The method according to claim 25, further comprising the step of executing a command to calculate the number of pixels in a determined set of ranges in each of the RGB color bands to obtain a value.
- 27. The method according to claim 26, further comprising the step of normalizing the value by dividing each band range pixel count by the total pixel count of each image of each of the objects.
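Claims 26–27 count the pixels falling in each predetermined intensity range of each RGB color band and normalize each count by the object's total pixel count. A minimal sketch for one band; the range boundaries and pixel values are invented for illustration:

```python
def band_range_fractions(band_values, ranges):
    """Count pixels in each predetermined intensity range of one color band,
    then normalize by the object's total pixel count (claims 26-27)."""
    total = len(band_values)
    counts = [sum(1 for v in band_values if lo <= v <= hi) for lo, hi in ranges]
    return [c / total for c in counts]

# Red-band intensities of a hypothetical 8-pixel object, quartered into 4 ranges.
red = [10, 50, 100, 130, 180, 200, 220, 250]
ranges = [(0, 63), (64, 127), (128, 191), (192, 255)]
fractions = band_range_fractions(red, ranges)
```

Normalizing by the object's own pixel count makes the color-spectral features independent of object size, so objects of different sizes remain comparable to the model.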
- 28. A method for identifying or quantifying one or more characteristics of interest of unknown objects comprising the steps of:
A training of a single neural network model with a first and a second training set of known objects having known values for the one or more characteristics of interest, wherein training of the single neural network model comprises the steps of:
I selecting known objects having known values for the one or more characteristics of interest; II arranging the known objects into a spectrum according to increasing degree of expression of the one or more characteristics of interest; III segregating the known objects into a first and a second training set corresponding to a predetermined state of the one or more characteristics of interest; IV imaging each of the first and second training sets against a background to obtain an original digital image for each of the training sets, wherein each of the original digital images comprises pixels representing the known objects, background and any debris; V processing the original digital image to identify, separate, and retain the pixels representing the known objects from the pixels representing the background and the pixels representing any debris, and to eliminate the background and any debris; VI analyzing the pixels representing each of the known objects to generate data representative of one or more image parameters for each of the known objects; VII providing the data to the neural network software to generate multiple candidate neural network models, wherein the multiple candidate neural network models each can have a flash code for deployment; and VIII choosing an optimal neural network model from the multiple candidate neural network models and retaining the corresponding flash code of the optimal neural network model for identifying or quantifying the one or more characteristics of interest of unknown objects having unknown values of the one or more characteristics of interest; and B validating the optimal neural network model comprising the steps of:
I selecting more than one sample of the known objects having known values for the one or more characteristics of interest; II imaging each sample against a background to obtain an original digital image for each sample, wherein the original digital image comprises pixels representing the known objects, background and any debris; III processing the original digital image to identify, separate, and retain the pixels representing the known objects from the pixels representing the background and the pixels representing any debris, and to eliminate the background and any debris; IV analyzing the pixels representing each of the known objects to generate data representative of one or more image parameters for each of the known objects; V providing the data to the chosen flash code deployed from the candidate neural network model; VI analyzing the data through the flash code; VII evaluating the output data from the flash code for accuracy and repeatability; VIII choosing and deploying the flash code of the optimal neural network model for identifying or quantifying the one or more characteristics of interest of unknown objects having unknown values of the one or more characteristics of interest; and C analyzing unknown objects having unknown values of the one or more characteristics of interest, comprising the steps of:
I imaging the unknown objects having unknown values of the one or more characteristics of interest against a background to obtain an original digital image, wherein the original digital image comprises pixels representing the unknown objects, the background and any debris; II processing the original digital image to identify, separate, and retain the pixels representing the unknown objects from the pixels representing the background and the pixels representing any debris, and to eliminate the background and any debris; III analyzing the pixels representing each of the unknown objects to generate data representative of one or more image parameters for each of the unknown objects; IV providing the data to the flash code deployed from the candidate neural network model; V analyzing the data through the flash code; and VI receiving the output data from the flash code in a predetermined format, wherein the output data represents the unknown values of the one or more characteristics of interest of the unknown objects.
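Claim 28's train / validate / analyze flow can be sketched end to end. The neural network software and flash code are unspecified in the claims, so, purely as an illustrative stand-in, a nearest-centroid classifier plays the role of the trained model here; the two-feature vectors (e.g. roundness and a normalized red-band fraction) and the class labels are invented:

```python
def train(first_set, second_set):
    """'Training' stand-in: reduce each labelled set of image-parameter
    vectors to its centroid (NOT the claimed neural network training)."""
    def centroid(rows):
        n = len(rows)
        return [sum(col) / n for col in zip(*rows)]
    return {"sound": centroid(first_set), "diseased": centroid(second_set)}

def classify(model, features):
    """'Flash code' stand-in: assign the label of the nearest centroid."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda label: dist2(model[label], features))

# First and second training sets of known objects (invented feature vectors).
sound    = [[0.90, 0.10], [0.85, 0.12], [0.88, 0.09]]
diseased = [[0.55, 0.40], [0.60, 0.45], [0.50, 0.42]]
model = train(sound, diseased)

# Validation on a held-out known object, then analysis of one "unknown" object.
validation = classify(model, [0.87, 0.11])
verdict = classify(model, [0.52, 0.44])
```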
- 29. The method according to claim 28, wherein step A(V) further comprises the step of processing the digital image to identify, separate, and retain pixels representing the known objects from pixels representing the background and pixels representing any debris, and to eliminate the background and any debris.
- 30. The method according to claim 29, further comprising the step of detecting an edge of each of the objects and distinguishing each of the known objects from the background.
- 31. The method according to claim 30, wherein detecting the edge of each of the objects comprises applying an edge detection algorithm.
- 32. The method according to claim 29, further comprising the step of eliminating, from the original digital image, an outer layer of pixels on the outer circumference of each of the objects and any debris.
- 33. The method according to claim 32, further comprising the step of processing the digital image comprising pixels representing known objects to remove some debris and to separate each of the known objects comprising the steps of:
i removing some debris from the original digital image of the known objects by applying a first digital sieve, wherein the first digital sieve selects the pixels representing each of the known objects meeting a predetermined threshold for a first set of one or more image parameters of the known objects; and ii in the image from (i), separating each of the known objects that are adjacent by applying an object-splitting algorithm at least once.
- 34. The method according to claim 33(i), wherein the first digital sieve selects the pixels representing each of the known objects meeting a predetermined threshold for a first set of one or more image parameters, wherein the one or more image parameters are size or shape or both.
- 35. The method according to claim 34, wherein distinguishing the known objects from the background comprises applying a predetermined threshold to the original digital image to create a binary mask having ON pixels in areas representing each of the known objects and OFF pixels in areas representing the background.
- 36. The method according to claim 35, wherein the ON pixels display intensities represented by values for RGB.
- 37. The method according to claim 36, wherein the OFF pixels display intensities represented by RGB values of zero.
- 38. The method according to claim 37, further comprising the step of applying a Boolean logic command, AND, to combine the original digital image with the binary mask to create a new digital image, wherein the new digital image comprises pixels representing each of the known objects and wherein the pixels representing each of the known objects have a detected edge.
- 39. The method according to claim 38, further comprising the step of processing the digital image comprising pixels representing known objects to remove remaining debris or object anomalies comprising the step of separating and removing pixels representing remaining debris or object anomalies from the pixels representing each of the known objects by applying a second digital sieve, wherein the second digital sieve selects the pixels representing each of the known objects meeting predetermined thresholds for a second set of one or more image parameters.
- 40. The method according to claim 39, wherein the second digital sieve selects the pixels representing each of the known objects meeting predetermined thresholds for a second set of one or more image parameters, wherein the one or more image parameters are roundness, shape, perimeter convex, aspect ratio, area, area polygon, dendrites, perimeter ratio and maximum radius.
- 41. The method according to claim 40, further comprising the step of applying a predetermined threshold to the new digital image, to create a binary mask having ON pixels in areas representing each of the known objects and OFF pixels in areas representing the background.
- 42. The method according to claim 41, wherein the ON pixels display intensities represented by values for RGB.
- 43. The method according to claim 42, wherein the OFF pixels display intensities represented by RGB values of zero.
- 44. The method according to claim 43, further comprising the step of applying a Boolean logic command, AND, to combine the original digital image with the binary mask to create a new digital image, wherein the new digital image comprises pixels representing each of the known objects and wherein the pixels representing each of the known objects have a detected edge.
- 45. The method according to claim 44, further comprising the step of analyzing pixels representing the known objects to generate data representative of one or more image parameters for each of the known objects, wherein the one or more image parameters are dimension, shape, texture, and color.
- 46. The method according to claim 45, wherein the one or more image parameters of dimension and shape are area, aspect, area/box, major axis, minor axis, maximum diameter, minimum diameter, mean diameter, maximum radius, minimum radius, radius ratio, integrated optical density, length, width, perimeter, perimeter convex, perimeter ellipse, perimeter ratio, area polygon, fractal dimension, minimum feret, maximum feret, mean feret, and roundness.
- 47. The method according to claim 45, wherein the one or more image parameters of texture are margination, heterogeneity, and clumpiness.
- 48. The method according to claim 45, wherein the one or more image parameters of color are density for red, density for green, density for blue, minimum density, maximum density, standard deviation of density, and mean density.
- 49. The method according to claim 48, further comprising the step of obtaining one or more image parameters of color for the known objects comprising the step of generating an outline of the pixels representing each of the known objects.
- 50. The method according to claim 49, further comprising the step of obtaining color spectral information of the pixels representing each of the objects by recording a data set representative of the number of pixels contained in each of the multiplicity of intensity levels contained in each of the RGB color bands that are contained in each of the objects.
- 51. The method according to claim 50, further comprising the step of executing a command to calculate the number of pixels in a determined set of ranges in each of the RGB color bands to obtain a value.
- 52. The method according to claim 51, further comprising the step of normalizing the value by dividing each band range pixel count by the total pixel count of each image of each of the objects.
- 53. The method according to claim 52, which further comprises validating the optimal neural network model comprising the steps of:
I selecting more than one sample of the known objects having a known value for the one or more characteristics of interest; II imaging each sample to obtain an original digital image for each sample, wherein the original digital image comprises pixels representing the known objects, background and any debris; III processing the original digital image to identify, separate, and retain the pixels representing the known objects from the pixels representing the background and the pixels representing any debris, and to eliminate the background and any debris; IV analyzing the pixels representing each of the known objects to generate data representative of one or more image parameters for each of the known objects; V providing the data to the flash code deployed from the candidate neural network model; VI analyzing the data through the flash code; VII evaluating the output data from the flash code for accuracy and repeatability; VIII choosing and deploying the flash code of the optimal neural network model for identifying or quantifying the one or more characteristics of interest of unknown objects having unknown values of the one or more characteristics of interest.
- 54. The method according to claim 53, which further comprises analyzing unknown objects having unknown values of the one or more characteristics of interest comprising:
I imaging the unknown objects having unknown values of the one or more characteristics of interest to obtain an original digital image, wherein the original digital image comprises pixels representing the unknown objects, the background and any debris; II processing the original digital image to identify, separate, and retain the pixels representing the unknown objects from the pixels representing the background and the pixels representing any debris, and to eliminate the background and any debris; III analyzing the pixels representing each of the unknown objects to generate data representative of one or more image parameters for each of the unknown objects; IV providing the data to the flash code deployed from the candidate neural network model; V analyzing the data through the flash code; and VI receiving the output data from the chosen flash code in a predetermined format, wherein the output data represents the unknown values of the one or more characteristics of interest of the unknown objects.
- 55. The method according to claim 54, wherein step (II) further comprises the step of processing the digital image to identify, separate, and retain pixels representing the unknown objects from pixels representing the background and pixels representing any debris, and to eliminate the background and any debris.
- 56. The method according to claim 55, further comprising the step of detecting an edge of each of the unknown objects and distinguishing each of the unknown objects from the background.
- 57. The method according to claim 56, wherein detecting the edge of each of the unknown objects comprises applying an edge detection algorithm.
- 58. The method according to claim 55, further comprising the step of eliminating, from the original digital image, an outer layer of pixels on the outer circumference of each of the unknown objects and any debris.
- 59. The method according to claim 58, further comprising the step of processing the digital image comprising pixels representing unknown objects to remove some debris and to separate each of the unknown objects comprising the steps of:
i removing some debris from the original digital image of the unknown objects by applying a first digital sieve, wherein the first digital sieve selects the pixels representing each of the unknown objects meeting a predetermined threshold for a first set of one or more image parameters of the unknown objects; and ii from the image in (i), separating each of the unknown objects that are adjacent by applying an object-splitting algorithm at least once.
- 60. The method according to claim 59(i), wherein the first digital sieve selects the pixels representing each of the unknown objects meeting a predetermined threshold for a first set of one or more image parameters, wherein the one or more image parameters are size or shape or both.
- 61. The method according to claim 60, wherein distinguishing the unknown objects from the background comprises applying a predetermined threshold to the original digital image to create a binary mask having ON pixels in areas representing each of the unknown objects and OFF pixels in areas representing the background.
- 62. The method according to claim 61, wherein the ON pixels display intensities represented by values for RGB.
- 63. The method according to claim 62, wherein the OFF pixels display intensities represented by RGB values of zero.
- 64. The method according to claim 63, further comprising the step of applying a Boolean logic command, AND, to combine the original digital image with the binary mask to create a new digital image, wherein the new digital image comprises pixels representing each of the unknown objects and wherein the pixels representing each of the unknown objects have a detected edge.
- 65. The method according to claim 64, further comprising the step of processing the digital image comprising pixels representing unknown objects to remove remaining debris or object anomalies comprising the step of separating and removing pixels representing remaining debris or object anomalies from the pixels representing each of the unknown objects by applying a second digital sieve, wherein the second digital sieve selects the pixels representing each of the unknown objects meeting predetermined thresholds for a second set of one or more image parameters.
- 66. The method according to claim 65, wherein the second digital sieve selects the pixels representing each of the unknown objects meeting predetermined thresholds for a second set of one or more image parameters, wherein the one or more image parameters are roundness, shape, perimeter convex, aspect ratio, area, area polygon, dendrites, perimeter ratio and maximum radius.
- 67. The method according to claim 66, further comprising the step of applying a predetermined threshold to the new digital image, to create a binary mask having ON pixels in areas representing each of the unknown objects and OFF pixels in areas representing the background.
- 68. The method according to claim 67, wherein the ON pixels display intensities represented by values for RGB.
- 69. The method according to claim 68, wherein the OFF pixels display intensities represented by RGB values of zero.
- 70. The method according to claim 69, further comprising the step of applying a Boolean logic command, AND, to combine the original digital image with the binary mask to create a new digital image, wherein the new digital image comprises pixels representing each of the unknown objects and wherein the pixels representing each of the unknown objects have a detected edge.
- 71. The method according to claim 70, further comprising the step of analyzing pixels representing the unknown objects to generate data representative of one or more image parameters for each of the unknown objects, wherein the one or more image parameters are dimension, shape, texture, and color.
- 72. The method according to claim 71, wherein the one or more image parameters of dimension and shape are area, aspect, area/box, major axis, minor axis, maximum diameter, minimum diameter, mean diameter, maximum radius, minimum radius, radius ratio, integrated optical density, length, width, perimeter, perimeter convex, perimeter ellipse, perimeter ratio, area polygon, fractal dimension, minimum feret, maximum feret, mean feret, and roundness.
- 73. The method according to claim 71, wherein the one or more image parameters of texture are margination, heterogeneity, and clumpiness.
- 74. The method according to claim 71, wherein the one or more image parameters of color are density for red, density for green, density for blue, minimum density, maximum density, standard deviation of density, and mean density.
- 75. The method according to claim 74, further comprising the step of obtaining one or more image parameters of color for the unknown objects comprising the step of generating an outline of the pixels representing each of the unknown objects.
- 76. The method according to claim 75, further comprising the step of obtaining color spectral information of the pixels representing each of the unknown objects by recording a data set representative of the number of pixels contained in each of the multiplicity of intensity levels contained in each of the RGB color bands that are contained in each of the unknown objects.
- 77. The method according to claim 76, further comprising the step of executing a command to calculate the number of pixels in a determined set of ranges in each of the RGB color bands to obtain a value.
- 78. The method according to claim 77, further comprising the step of normalizing the value by dividing each band range pixel count by the total pixel count of each image of each of the objects.
- 79. The method according to claim 78, further comprising the step of providing the data to the flash code deployed from the candidate neural network model.
- 80. The method according to claim 79, further comprising the step of analyzing the data through the flash code.
- 81. The method according to claim 80, further comprising the step of receiving the output data from the flash code in a predetermined format, wherein the output data represents the unknown values of the one or more characteristics of interest of the unknown objects.
- 82. The method according to claim 1, wherein the object is a plant or plant part, food article, biological matter, or industrial article.
- 83. The method according to claim 82, wherein the one or more characteristics of interest is selected from the group consisting of a class, a variety, a disease, an environmental condition, and a handling condition.
- 84. The method according to claim 82, wherein the plant or plant part is selected from the group consisting of a leaf, stem, root, plant organ and seed.
- 85. The method according to claim 84, wherein the plant or plant part is selected from the group consisting of wheat, rice, corn, soybeans, canola, barley, sorghum, millet, rye, oats, flax, buckwheat, alfalfa, mustard, clover, sunflower, field beans, field peas, forages, coffee, lentils, peanuts, beets, lettuce, and hemp seeds.
- 86. The method according to claim 82, wherein the food article is selected from the group consisting of produce, apples, potatoes, sugar cane, tea, hemp seeds, cocoa beans, nuts, and sugar beets.
- 87. The method according to claim 82, wherein the biological matter is selected from the group consisting of insects, microorganisms and cells.
- 88. The method according to claim 82, wherein the industrial article is selected from the group consisting of pharmaceuticals, pills, spray droplets, test plates, Petri dishes, bio-tech arrays, water, paper products, plastic pellets, paint, dry powders, wet products, textiles, raw food samples, processed food samples, package goods, parts, and general granular samples.
- 89. The method according to claim 85, wherein the object is a seed.
- 90. The method according to claim 89, wherein the object is a wheat seed.
- 91. The method according to claim 90, wherein the disease is selected from the group consisting of fusarium head blight, pink smudge, and blackpoint.
- 92. The method according to claim 91, wherein the disease is fusarium head blight.
- 93. The method according to claim 92, wherein the environmental condition is selected from the group consisting of frost damage, green seeds, and sprouting seeds.
- 94. The method according to claim 93, wherein the handling condition is selected from the group consisting of cracked, broken, bin burnt, and degermed seeds.
- 95. The method according to claim 28, wherein the object is a plant or plant part, food article, biological matter, or industrial article.
- 96. The method according to claim 81, wherein the object is a plant or plant part, food article, biological matter, or industrial article.
CROSS REFERENCE TO RELATED APPLICATIONS
[0001] This application claims priority from U.S. Provisional Patent Application No. 60/338,018, filed Nov. 2, 2001, and U.S. Provisional Patent Application No. 60/323,044, filed Sep. 17, 2001. To the extent that they are consistent herewith, the aforementioned applications are incorporated herein by reference.
Provisional Applications (2)
| Number | Date | Country |
|---|---|---|
| 60/338,018 | Nov 2001 | US |
| 60/323,044 | Sep 2001 | US |