At least one embodiment of the invention generally relates to methods, systems and optical sensor assemblies for optically inspecting objects and, in particular, to such methods, systems and assemblies which can inspect objects located in environments which have airborne particulate matter or vapor capable of coating optically transparent windows of sensors of the assemblies.
In some automated vision applications, small airborne particulate matter (such as atomized paint droplets) may cover the sensor's optical glass face and obstruct optical sensor measurements, including 3-D sensor measurements. The problem can be summarized as follows:
Because the particulates may be airborne continuously during the manufacturing processes, use of mechanical gating methods, such as shutters, is not practical.
An object of at least one embodiment of the invention is to provide a method, system and optical sensor assembly for optically inspecting an object located in an environment having airborne particulate matter or vapor capable of coating an optically transparent window of an optical sensor of the assembly.
In carrying out the above object and other objects of at least one embodiment of the invention, a method of optically inspecting an object located in an environment having airborne particulate matter or vapor capable of coating an optically transparent window of an optical sensor is provided. The method includes creating a positive dynamic boundary layer of air in front of and immediately adjacent an outer surface of the window. The layer of air has a pressure sufficient to protect the window from undesirable accumulation of the particulate matter or droplets of the vapor on the outer surface, thereby allowing the sensor to have an unobstructed view of the object.
The step of creating may include the steps of pressurizing air in an enclosed space adjacent the sensor and directing air flow from the space over the outer surface of the window to create the boundary layer.
The step of creating may include the step of blowing air over the outer surface of the window from a plurality of spaced locations about a periphery of the window to create the boundary layer.
The air may be dry to hinder condensation of the vapor on the window.
The method may further include shielding the window along its sides.
The window may be double-paned.
The window may be optically transparent to projected and received visible and near-visible radiation.
The material of the window may be transparent to light having a wavelength in a range of 400 nanometers to 850 nanometers.
The particulate matter may be paint droplets.
The vapor may be water vapor.
Further in carrying out the above object and other objects of at least one embodiment of the present invention, a system for optically inspecting an object located in an environment having airborne particulate matter or vapor capable of coating an optically transparent window of a sensor is provided. The system includes an automatic machine, an air supply and an optical sensor assembly mounted on the machine to move therewith. The assembly has a sensor with an optically transparent window and a hollow protective enclosure secured around the window. The enclosure is fluidly coupled to the air supply. The enclosure is open at one end to allow the sensor to have an unobstructed view of the object. The enclosure includes a plurality of spaced gas vent ports to direct air from within the enclosure over an outer surface of the window to create a protective dynamic boundary layer of air in front of and immediately adjacent to the outer surface of the window. The layer of air has a pressure sufficient to protect the window from undesirable accumulation of the particulate matter or droplets of the vapor on the outer surface of the window, thereby allowing the sensor to have an unobstructed view of the object.
The enclosure may include a plenum for receiving pressurized air and a plurality of gas vent ports to direct air flow from the plenum over the outer surface of the window.
The size and number of gas vent ports may be empirically determined.
The air may be dry to hinder condensation of the vapor on the window.
The enclosure may have a frustum shape to shield the window along its sides.
The window may be double-paned.
The window may be optically transparent to projected and received visible and near-visible radiation.
The material of the window may be transparent to light having a wavelength in a range of 400 nanometers to 850 nanometers.
The particulate matter may be paint droplets.
The vapor may be water vapor.
Still further in carrying out the above object and other objects of at least one embodiment of the invention, an optical sensor assembly for optically inspecting an object located in an environment having airborne particulate matter or vapor capable of coating an optically transparent window of a sensor of the assembly is provided. The assembly includes the optical sensor having the optically transparent window for optically inspecting objects located in the environment and a hollow protective enclosure secured about the window and adapted to be fluidly coupled to an air supply. The enclosure is open at one end to allow the sensor to have an unobstructed view of the object. The enclosure has a plurality of spaced gas ports to direct pressurized air from within the enclosure over an outer surface of the window to create a protective dynamic boundary layer of air in front of and immediately adjacent to the outer surface of the window. The layer of air has a pressure sufficient to protect the window from undesirable accumulation of the particulate matter or droplets of the vapor on the window while allowing the sensor to have an unobstructed view of the object.
The enclosure may include a plenum for receiving pressurized air and a plurality of gas vent ports to direct air flow from the plenum over the outer surface of the window.
The size and number of gas vent ports may be determined empirically.
The air may be dry to hinder condensation of the vapor on the window.
The enclosure may have a frustum shape to shield the window along its sides.
The window may be double-paned.
The window may be optically transparent to projected and received visible and near-visible radiation.
The material of the window may be transparent to light having a wavelength in a range of 400 nanometers to 850 nanometers.
The particulate matter may be paint droplets.
The vapor may be water vapor.
The sensor may be a 3-D sensor.
Other technical advantages will be readily apparent to one skilled in the art from the following figures, descriptions and claims. Moreover, while specific advantages have been enumerated, various embodiments may include all, some, or none of the enumerated advantages.
As required, detailed embodiments of the present invention are disclosed herein; however, it is to be understood that the disclosed embodiments are merely exemplary of the invention that may be embodied in various and alternative forms. The figures are not necessarily to scale; some features may be exaggerated or minimized to show details of particular components. Therefore, specific structural and functional details disclosed herein are not to be interpreted as limiting, but merely as a representative basis for teaching one skilled in the art to variously employ the present invention.
As shown in
Within the boundary layer 4, adjacent layers of fluid will be traveling at different velocities. The different velocities are the result of shearing stresses that are produced in the fluid. The shearing stresses are produced by the fluid's viscosity. Outside the boundary layer 4—in the freestream 1—all fluid will be traveling at the same speed and the effect of the fluid's viscosity will be negligible.
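For reference only (these are standard fluid-mechanics relations, not part of the disclosed embodiments), the shear stress between adjacent layers of a Newtonian fluid, and the classical Blasius estimate of laminar boundary-layer thickness over a flat plate, may be written as:

    % Shear stress between adjacent fluid layers, where \mu is the
    % dynamic viscosity and u(y) the streamwise velocity profile:
    \tau = \mu \,\frac{\partial u}{\partial y}

    % Blasius estimate of laminar boundary-layer thickness at distance x
    % along a flat plate, with Reynolds number Re_x = \rho U x / \mu:
    \delta(x) \approx \frac{5.0\,x}{\sqrt{Re_x}}

It is the velocity gradient \partial u/\partial y across the layer that produces the shearing stresses described above; in the freestream this gradient, and hence the effect of viscosity, is negligible.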
Referring now to
The system of at least one embodiment of the present invention includes one or more 3-D or depth sensors such as 2.5-D volumetric or 2-D/3-D hybrid sensors, one of which is generally indicated at 10 in
The sensor technology described herein is sometimes called “3-D” because it measures the X, Y and Z coordinates of objects within a scene. This can be misleading terminology. Within a given volume, these sensors only obtain the X, Y and Z coordinates of the surfaces of objects; the sensors are not able to penetrate objects in order to obtain true 3-D cross-sections, such as might be obtained by a CAT scan of the human body. For this reason, the sensors are often referred to as 2.5-D sensors which create 2.5-dimensional surface maps, to distinguish them from true 3-D sensors which create 3-D tomographic representations of not just the surface, but also the interior of an object.
In spite of this distinction between 2.5-D and 3-D sensors, people in the vision industry will often speak of 2.5-D sensors as 3-D sensors. The fact that “3-D Vision” sensors create 2.5-D surface maps instead of 3-D tomographs is implicit.
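A minimal sketch may make the distinction concrete (hypothetical Python using NumPy; the array sizes and depth values are assumed for illustration, not taken from the disclosure). A 2.5-D surface map stores a single Z value per (X, Y) pixel, whereas a true 3-D tomograph stores a value for every (X, Y, Z) voxel, interiors included:

    import numpy as np

    # A 2.5-D surface map: one depth (Z) value per (x, y) pixel.
    # Hypothetical 480 x 640 depth image, depths in meters (assumed values).
    surface_map = np.full((480, 640), 1.25)   # every pixel sees a surface 1.25 m away

    # A true 3-D tomographic volume: a value for EVERY (x, y, z) voxel,
    # including the interiors of objects (e.g., density, as in a CAT scan).
    volume = np.zeros((480, 640, 256))        # 256 depth slices per pixel

    print(surface_map[240, 320])       # the single visible-surface depth at one pixel
    print(volume[240, 320, :].shape)   # a full column through the volume at that pixel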
Referring to
The IR pattern emitter may comprise an infrared laser diode emitting at 830 nm and a series of diffractive optics elements. These components work together to create a laser “dot” pattern. The beam from the laser diode is shaped to give it an even circular profile and is then passed through two diffractive optics elements: the first element creates an initial dot pattern, and the second element multiplies this pattern into a grid that covers the scene. When the infrared pattern is projected onto a surface, the infrared light scattered from that surface is imaged by an IR sensor configured to be sensitive in the neighborhood of 830 nm.
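The two-stage pattern generation can be loosely illustrated as follows (a hypothetical numerical sketch; the tile size, dot density and replication factor are assumed, not taken from the disclosure):

    import numpy as np

    # Stage 1: the first diffractive element creates a sparse pseudo-random
    # dot tile. Stage 2: the second element multiplies (replicates) that tile
    # into a larger grid covering the scene. All sizes here are assumed.
    rng = np.random.default_rng(seed=0)
    tile = (rng.random((64, 64)) < 0.05).astype(np.uint8)  # ~5% of cells are dots
    grid = np.tile(tile, (3, 3))                           # replicate into a 3 x 3 grid

    print(int(tile.sum()), "dots per tile;", int(grid.sum()), "dots over the full grid")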
In addition to the IR sensor, there may be an RGB sensor or camera configured to be sensitive in the visible range, with a visible-light band-pass filter operative to reject light in the neighborhood of 830 nm. During operation, the IR sensor is used to calculate the depth of an object and the RGB sensor is used to sense the object's color and brightness. This provides the ability to interpret an image in what is traditionally referred to as two and a half dimensions. It is not true 3-D because the sensor is only able to detect surfaces that are physically visible to it (i.e., it is unable to see through objects or to see surfaces on the far side of an object).
Alternatively, the 3-D or depth sensor 10 may comprise light-field, laser scan, time-of-flight or passive binocular sensors, as well as active monocular and active binocular sensors.
Preferably, the 3-D or depth sensor 10 of at least one embodiment of the invention measures distance via massively parallel triangulation using a projected pattern (a “multi-point disparity” method). The specific type of active depth sensor preferred here is called a multipoint disparity depth sensor.
“Multipoint” refers to the laser projector, which projects thousands of individual beams (also known as “pencils”) onto a scene. Each beam intersects the scene at a point.
“Disparity” refers to the method used to calculate the distance from the sensor to objects in the scene. Specifically, “disparity” refers to the way a laser beam's intersection with a scene shifts when the laser beam projector's distance from the scene changes.
“Depth” refers to the fact that these sensors are able to calculate the X, Y and Z coordinates of the intersection of each laser beam from the laser beam projector with a scene.
“Passive Depth Sensors” determine the distance to objects in a scene without affecting the scene in any way; they are pure receivers.
“Active Depth Sensors” determine the distance to objects in a scene by projecting energy onto the scene and then analyzing the interactions of the projected energy with the scene. Some active sensors project a structured light pattern onto the scene and analyze how the pattern is deformed by the scene; others emit light pulses and analyze how long the pulses take to return, and so on. Active depth sensors are both emitters and receivers.
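The disparity principle described above can be sketched numerically as follows (a hedged example assuming a rectified projector/camera pair; the focal length and baseline values are illustrative, not taken from the disclosure):

    # Minimal sketch of depth-from-disparity triangulation.
    FOCAL_LENGTH_PX = 580.0   # camera focal length, in pixels (assumed)
    BASELINE_M = 0.075        # projector-to-camera baseline, in meters (assumed)

    def depth_from_disparity(disparity_px: float) -> float:
        """Return the distance Z (meters) to the surface a laser dot landed on.

        The standard triangulation relation Z = f * b / d: the farther the
        surface, the smaller the observed shift (disparity) of the dot.
        """
        if disparity_px <= 0:
            raise ValueError("disparity must be positive")
        return FOCAL_LENGTH_PX * BASELINE_M / disparity_px

    # A dot whose image shifts by 29 pixels lies on a surface ~1.5 m away.
    print(round(depth_from_disparity(29.0), 3))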
For clarity, the sensor 10, which is preferably based on active monocular, multipoint disparity technology, is referred to as a “multipoint disparity” sensor herein. This terminology, though serviceable, is not standard. A preferred monocular (i.e., single infrared camera) multipoint disparity sensor is disclosed in U.S. Pat. No. 4,493,496. A binocular multipoint disparity sensor, which uses two infrared cameras to determine depth information from a scene, is also preferred.
Multiple volumetric sensors are placed in key locations around and above the vehicle. Each of these sensors typically captures hundreds of thousands of individual points in space. Each of these points has both a Cartesian position in space and an associated RGB color value. Before measurement, each of these sensors is registered into a common coordinate system. This gives the present system the ability to correlate a location in a sensor's image with a real-world position. When an image is captured from each sensor, the pixel information, along with the depth information, is converted by a computer 12 into a collection of points in space, called a “point cloud”.
A point cloud is a collection of data representing a scene as viewed through a “vision” sensor. In three dimensions, each datum in this collection might, for example, consist of the datum's X, Y and Z coordinates along with the Red, Green and Blue values for the color viewed by the sensor 10 at those coordinates. In this case, each datum in the collection would be described by six numbers. To take another example: in two dimensions, each datum in the collection might consist of the datum's X and Y coordinates along with the monotone intensity measured by the sensor 10 at those coordinates. In this case, each datum in the collection would be described by three numbers.
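Producing one such six-number datum can be sketched as follows (assuming a hypothetical pinhole camera model; the intrinsics fx, fy, cx, cy are illustrative, not values from the disclosure):

    # Convert one depth pixel plus its RGB value into a six-number
    # point-cloud datum (X, Y, Z, R, G, B) by pinhole back-projection.
    fx, fy = 580.0, 580.0      # focal lengths in pixels (assumed)
    cx, cy = 320.0, 240.0      # principal point in pixels (assumed)

    def pixel_to_point(u, v, depth_m, rgb):
        """Back-project pixel (u, v) at depth Z into camera coordinates."""
        x = (u - cx) * depth_m / fx
        y = (v - cy) * depth_m / fy
        return (x, y, depth_m) + rgb

    # Pixel (400, 300) at 1.5 m, colored pure red -> one datum of six numbers.
    print(pixel_to_point(400, 300, 1.5, (255, 0, 0)))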
The computer 12 of
At least one embodiment of the present invention uses hybrid 2-D/3-D sensors 10 to measure color, brightness and depth at each of hundreds of thousands of pixels per sensor 10. The collective 3-D “point cloud” data may be presented on a screen 16 of a display 14 as a 3-D graphic.
The field of view of each 2-D/3-D sensor 10 can be as wide as several meters across, making it possible for the user to see a hinged part such as a door or the hood 6 relative to the vehicle body 8 in 3-D. The graphic on the screen 16 may look like the 3-D part the user sees in the real world.
In summary, at least one embodiment of the invention provides aerodynamic boundary layer control via a positive air displacement, plenum-type frustum skirt 11. At least one embodiment of the invention provides:
At least one embodiment of the invention meets one or more of the following design specifications:
While exemplary embodiments are described above, it is not intended that these embodiments describe all possible forms of the invention. Rather, the words used in the specification are words of description rather than limitation, and it is understood that various changes may be made without departing from the spirit and scope of the invention. Additionally, the features of various implementing embodiments may be combined to form further embodiments of the invention.