The present invention relates in general to a sensor and a system for imaging characteristics of an object and relates in particular to a sensor and system for imaging multiple characteristics of an object with different degrees of resolution.
Conventional imaging sensors of the charge coupled device (CCD) and complementary metal oxide semiconductor (CMOS) type have an N×M matrix (array) of photodiodes, which absorb electromagnetic radiation and convert it into electrical signals.
There is often a requirement for imaging multiple characteristics of the same object, such as various three-dimensional (3D) and two-dimensional (2D) characteristics. In the 3D image geometrical characteristics such as width, height, volume etc. of the object are imaged. In the 2D image characteristics such as cracks, structural orientation, position and identity are imaged, for example, through marks, bar code or matrix code. Intensity information in the 2D image is usually imaged in grey scale, but imaging the 2D image in colour, that is to say registering R (red), G (green) and B (blue) components, for example, by means of filters or light wavelengths is also common.
A matrix array picture processor (MAPP sensor) can be used to image different characteristics of the same object with the same sensor, so-called multisensing, by using a part of the sensor for laser profiling (3D measurement) and individual sensor rows for reading out intensity information (2D measurement). An advantage of using one sensor to image multiple characteristics is that the cost and complexity of the system are less than when using one sensor for the 3D measurement and another sensor for the 2D measurement, for example.
In multisensing, the same resolution is nowadays used for lateral measurement both in the 2D measurement and in the 3D measurement. It is common, however, to require a higher resolution in the 2D image than in the 3D image. The reason for this is the desire to be able to measure finer detail in the image than is required in measurement of the shape. An example of this is timber inspection, where it is often more important to measure cracks and surface structure with a high resolution than the geometric shape.
Examples of imaging sensors which have different degrees of resolution are shown in Alireza Moini, "Vision Chips", Kluwer Academic Publishers, pages 143-146, 2000, in which image sensors are constructed as an electronic eye, that is to say they have a high resolution in the centre and a low resolution at the periphery. The pixel geometry in these "eyes" is linear-polar or log-polar. If the "eye" sees something interesting at its periphery with a low resolution, the system can control the sensor so that it directs its high-resolution centre part to the area in order to read the details. This type of sensor is very well suited for robot applications. An example of such an electronic eye is also shown in U.S. Pat. No. 5,166,511.
Another example of an imaging sensor which has different degrees of resolution is shown in U.S. Pat. No. 6,320,618, in which an array matrix-type sensor has been provided with at least one area having a higher resolution than the rest of the sensor. The sensor is placed in a camera, which is mounted on a vehicle as a part of an automatic navigation system, which controls functions of the vehicle, for example braking if some obstacle appears in front of the car or steering along the white line at the edge of the road. The sensor is arranged to pick up remote information with a high resolution and information in proximity to the vehicle with a low resolution.
An object of the present invention is to provide a sensor and a system which image the characteristics of an object with different degrees of resolution. This has been achieved by a sensor and a system having the characteristics specified in the characterising parts of claims 1 and 9 respectively.
One of the advantages of using a sensor and a system which read in images of multiple characteristics with different degrees of resolution is that a simpler, cheaper and more compact solution is obtained than with previously known solutions. A system according to the invention furthermore requires fewer system components such as cameras, lenses etc.
According to one embodiment of the present invention the sensor comprises two integral areas with pixels which are arranged substantially parallel, side by side in a transverse direction.
According to another embodiment of the invention the two areas with pixels are designed as two separate units, which are arranged substantially parallel, side by side in a transverse direction.
According to a further embodiment of the invention the two pixel areas/units share read-out logic, which means that they have the same output register.
According to an alternative embodiment of the invention the two pixel areas/units are each read out on a separate output register, which means that it is possible to read out the information contained in the two areas/units simultaneously. One advantage of this is greater freedom with regard to exposure times; another is greater freedom with regard to degrees of resolution in both the transverse direction and the lateral direction.
The invention will now be explained in more detail on the basis of examples of embodiments and with reference to the drawings attached in which:
The camera 1 comprises, among other things, a sensor 10, which is shown in FIGS. 2 to 5 and is described in more detail below, light-gathering optics and control logic (not shown). The rays reflected from the object 2 are picked up by the sensor 10 and are converted there into electrical charges, which are in turn converted into analog or digital electrical signals. In the preferred embodiment these signals are then transferred via an output register (shown in
The object 2 has, as stated above, been placed on the base 3, which in a preferred embodiment moves relative to the measuring system, as indicated by an arrow in the figure. Instead of the base 3 moving relative to the measuring system, the relationship may naturally be reversed, that is to say the object 2 is fixed and the measuring system moves over the object 2 during measurement. The base 3 may be a conveyor belt, for example; alternatively there is no base and the object itself moves, if the said object is, for example, paper in a continuous web in a paper-making machine.
In an alternative embodiment (not shown), one or more of the light sources is located below the base 3 and shines through the object 2, which means that the sensor 10 picks up transmitted rays which have passed through the object 2, and not reflected rays.
In
The sensor 10 (shown in FIGS. 2 to 5) is an array sensor and has a first area 11 with N×M pixels (where N is rows and M is columns) combined with a second, high-resolution area 12 with X×Y pixels, where Y=M×b (b is an integer >1). In the preferred embodiment the first area 11 is used for 3D measurement by triangulation, that is to say it images geometric characteristics of the object 2 such as width, height, volume etc. In 3D measurement, the intensity image is reduced from k rows, k>1, to the position values that correspond to where the light strikes the sensor in each column. The result is a profile with three-dimensional information for each sample of k rows. In the preferred embodiment the second area 12 is used for 2D measurement (intensity information), that is to say it images characteristics of the object 2 such as cracks, structural orientation, position etc. If X>2, there is a possibility of applying colour filters (for example, RGB) to the individual pixels and in this way obtaining a colour read-out of 2D data.
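The reduction from k intensity rows to one position value per column can be sketched as follows. This is an illustrative model only; the function name and the simple brightest-pixel rule are assumptions, not the patent's exact implementation, which may use more refined peak estimation.

```python
def extract_profile(intensity_rows):
    """Reduce k intensity rows to one 3D profile sample.

    intensity_rows: a list of k rows, each a list of M pixel values,
    covering the band of the sensor struck by the laser line.
    For each of the M columns, return the row index at which the
    light is brightest -- under laser triangulation this position
    corresponds to the height of the object at that column.
    (Sketch; the real sensor may interpolate to sub-pixel accuracy.)
    """
    k = len(intensity_rows)
    m = len(intensity_rows[0])
    profile = []
    for col in range(m):
        column = [intensity_rows[row][col] for row in range(k)]
        profile.append(max(range(k), key=column.__getitem__))
    return profile
```

For example, with k=3 rows and M=2 columns, `extract_profile([[0, 1], [9, 0], [0, 8]])` reduces the band to the single profile `[1, 2]`, one height value per column.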
In the preferred embodiment an MAPP sensor is used, but the person skilled in the art will appreciate that the invention may be applied to other types of sensors, such as CCD sensors or CMOS sensors, for example.
In the embodiment according to
The person skilled in the art will appreciate that the invention is not limited to the embodiments shown in
According to
An alternative to designing the pixels offset in relation to one another is to cover the pixels with masks, which are arranged in such a way that illumination of parts of pixels is blocked and an offset sampling pattern is thereby obtained.
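Either way of offsetting, by pixel placement or by masking, yields two rows whose sampling positions are staggered by half a pixel pitch. A minimal sketch of how such staggered samples could be combined into one line with doubled lateral sampling (this combination step is an assumption for illustration, not described in the text):

```python
def interleave_offset_rows(row_a, row_b):
    """Combine two pixel rows offset by half a pixel pitch.

    row_a samples lateral positions 0, 1, 2, ...; row_b samples
    0.5, 1.5, 2.5, ...  Interleaving them yields one line sampled
    at half-pixel steps, i.e. twice the lateral resolution.
    (Illustrative sketch of exploiting the offset sampling pattern.)
    """
    merged = []
    for a, b in zip(row_a, row_b):
        merged.append(a)
        merged.append(b)
    return merged
```

For example, `interleave_offset_rows([1, 3], [2, 4])` gives `[1, 2, 3, 4]`.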
A sensor row is read out to an output register, which is M pixels long, shown in
The processor 17 can be programmed to perform many functions, among other things extracting the three-dimensional profile from the intensity image, that is to say calculating, for each column, the position of the lightest point; these values can then be seen as an intensity profile in which the intensity corresponds to distance. Other functions performed by the processor 17 are edge detection, noise reduction etc.
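Of the further functions mentioned, edge detection on a read-out row can be sketched minimally as follows. The threshold-on-gradient method shown is an assumption for illustration; the patent does not specify which edge-detection algorithm the processor runs.

```python
def detect_edges(row, threshold):
    """Mark columns where the intensity jumps by more than
    `threshold` between neighbouring pixels -- a minimal 1D edge
    detector of the kind the processor 17 could apply to a row.
    (Sketch; real implementations would typically also smooth the
    row first to suppress noise.)
    """
    return [col for col in range(1, len(row))
            if abs(row[col] - row[col - 1]) > threshold]
```

For example, `detect_edges([10, 10, 200, 200, 10], 50)` reports edges at columns 2 and 4, the rising and falling flanks of the bright region.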
In an alternative embodiment of the invention, shown in
Both of the embodiments according to
As stated above, it is possible to use colour filters or coloured light sources (not shown) on the second area 12 of the sensor 10, which means that both grey scale and colour images can be read out with the higher resolution. Which colour filters or coloured light sources are used and how these are placed will be known to the person skilled in the art and will not be described in detail here, but the possible use of Bayer patterns or a filter for each row may be mentioned by way of example. RGB components are commonly chosen, but other colours such as CMY (cyan magenta yellow) may also be used. In the alternative embodiment according to
Crosstalk means that light from one measurement interferes with another sensor area, that is to say light from the 3D measurement interferes with the 2D measurement and/or vice versa. In order to reduce the crosstalk between the different sensor areas, the light used for the different measurements can be separated into different wavelengths and the sensor areas protected by optical filters matched to those wavelengths, which block the light or allow it to pass through to the respective sensor area.
In yet another embodiment of the sensor 10 according to the invention, time delay integration (TDI) is used on the high-resolution second area 12. TDI means that the charge is moved from one row to another as the object 2 is moved with the base 3, thereby achieving an X-times greater light sensitivity with X TDI stages. By using TDI in the embodiment according to
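The TDI charge summation described above can be modelled as follows. This is a simplified numerical model of the principle, not sensor hardware behaviour; the function name and frame representation are assumptions.

```python
def tdi_readout(frames, stages):
    """Model time delay integration over `stages` rows.

    frames: successive exposures (each a list of M pixel values) of
    the same object line as it advances one row pitch per exposure.
    Each stage adds its charge to the running sum before read-out,
    so the output is `stages` times brighter than a single exposure.
    (Illustrative model of TDI charge summation only.)
    """
    m = len(frames[0])
    accumulated = [0] * m
    for frame in frames[:stages]:
        for col in range(m):
            accumulated[col] += frame[col]
    return accumulated
```

For example, three identical exposures `[1, 2]` summed over three TDI stages give `[3, 6]`, i.e. three times the single-exposure signal.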
Number | Date | Country | Kind
---|---|---|---
0201044-5 | Apr 2002 | SE | national

Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/SE03/00461 | 3/19/2003 | WO |