This application is based on and claims priority from Korean Patent Application No. 10-2013-0119732, filed on Oct. 8, 2013 in the Korean Intellectual Property Office, the disclosure of which is incorporated herein in its entirety by reference.
1. Field of the Invention
The present invention relates to an image processing method and system of an around view monitoring (AVM) system, and more particularly, to an image processing method and system of an AVM system that recognizes a position and a form of an object around a vehicle more accurately and provides the recognized position and form to a driver.
2. Description of the Prior Art
Generally, a visual field of a driver in a vehicle is mainly directed toward the front. The visual fields to the left, the right, and the rear of the driver are substantially covered by the vehicle body and are therefore very limited. Accordingly, a visual field assisting unit (e.g., a side mirror, or the like) that includes a mirror to widen the limited visual field of the driver has been generally used. Recently, technologies including an imaging device that photographs an image of the exterior of the vehicle and provides the photographed image to the driver have been developed.
In particular, an around view monitoring (AVM) system has been developed in which a plurality of imaging devices are installed around the vehicle to show omni-directional (e.g., 360 degrees) images around the vehicle. The AVM system combines images around the vehicle photographed by the plurality of imaging devices to provide a top view image of the vehicle, to thus display an obstacle around the vehicle and remove a blind spot.
However, in the top view image provided by the AVM system, the shape of an object around the vehicle, particularly a three-dimensional object, may be shown distorted based on the photographing directions of the imaging devices. An object that is close to the imaging device and near its photographing direction is photographed to be similar to its actual shape. However, as the relative distance from the imaging device and the angle from the photographing direction increase, the shape of a three-dimensional object may become more distorted. Therefore, an accurate position and shape of an obstacle around the vehicle may not be provided to the driver.
Accordingly, the present invention provides an image processing method and system of an around view monitoring (AVM) system that assists in more accurately recognizing a three-dimensional object around a vehicle when a shape of the three-dimensional object is shown distorted in a top view image provided to a driver via the AVM system.
In one aspect of the present invention, an image processing method of an AVM system may include: photographing, by an imaging device, an environment around a vehicle to generate a top view image; creating, by a controller, a difference count map by comparing two top view images generated at different times; extracting, by the controller, partial regions in the created difference count map; and generating, by the controller, an object recognizing image by continuously connecting the extracted regions of the difference count map to each other. The image processing method may further include: recognizing, by the controller, an object around the vehicle using the object recognizing image; and including the recognized object in the top view image and displaying, by the controller, the top view image that includes the recognized object.
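As a non-limiting sketch of how these steps might fit together (assuming grayscale NumPy arrays as top view images, per-frame vehicle movement already expressed in pixels along the vertical image axis, and hypothetical function names — this is an illustrative assumption, not the claimed implementation):

```python
import numpy as np

def make_object_recognizing_image(top_views, moves_px):
    """Illustrative sketch: difference count maps from consecutive top view
    images, strip extraction sized by the per-frame movement, and connection
    of the strips over time into one object recognizing image."""
    strips = []
    for past, current, shift in zip(top_views, top_views[1:], moves_px):
        # Correct the past frame for the vehicle's movement, then compare
        aligned = np.roll(past, -shift, axis=0)  # sign depends on orientation
        dmap = np.abs(current.astype(np.int16) - aligned.astype(np.int16))
        # Extract a strip whose height matches the movement in pixels
        strips.append(dmap[:shift, :])
    # Connect the extracted strips in the movement direction
    return np.vstack(strips)
```

The object around the vehicle would then be recognized from the resulting image and overlaid on the displayed top view, as described below.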
The creating of the difference count map may include: correcting, by a controller, a relative position change of an object around the vehicle included in the two top view images based on a movement of the vehicle; and comparing, by the controller, the two top view images in which the position change is corrected to calculate difference values for each pixel. The extracted region may include pixels having a number that corresponds to a movement distance of the vehicle in a movement direction of the vehicle in the difference count map. The extracted region may include a preset number of pixels in a movement direction of the vehicle in the difference count map.
In the generation of the object recognizing image, the extracted regions of the difference count map may be connected to be in proportion to a movement distance of the vehicle, and a final value may be determined based on weighting factors imparted to each pixel with respect to an overlapped pixel region. As the angle from a photographing direction of an imaging device, based on a position of the imaging device in the difference count map, increases, the weighting factors imparted to each pixel may decrease.
The above and other objects, features and advantages of the present invention will be more apparent from the following detailed description taken in conjunction with the accompanying drawings, in which:
It is understood that the term “vehicle” or “vehicular” or other similar term as used herein is inclusive of motor vehicles in general such as passenger automobiles including sports utility vehicles (SUV), buses, trucks, various commercial vehicles, watercraft including a variety of boats and ships, aircraft, and the like, and includes hybrid vehicles, electric vehicles, combustion vehicles, plug-in hybrid electric vehicles, hydrogen-powered vehicles and other alternative fuel vehicles (e.g., fuels derived from resources other than petroleum).
Although an exemplary embodiment is described as using a plurality of units to perform the exemplary process, it is understood that the exemplary processes may also be performed by one module or a plurality of modules. Additionally, it is understood that the term controller/controlling unit refers to a hardware device that includes a memory and a processor. The memory is configured to store the modules, and the processor is specifically configured to execute said modules to perform one or more processes which are described further below.
Furthermore, control logic of the present invention may be embodied as non-transitory computer readable media on a computer readable medium containing executable program instructions executed by a processor, controller/controlling unit or the like. Examples of the computer readable mediums include, but are not limited to, ROM, RAM, compact disc (CD)-ROMs, magnetic tapes, floppy disks, flash drives, smart cards and optical data storage devices. The computer readable recording medium can also be distributed in network coupled computer systems so that the computer readable media is stored and executed in a distributed fashion, e.g., by a telematics server or a Controller Area Network (CAN).
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items.
Unless specifically stated or obvious from context, as used herein, the term “about” is understood as within a range of normal tolerance in the art, for example within 2 standard deviations of the mean. “About” can be understood as within 10%, 9%, 8%, 7%, 6%, 5%, 4%, 3%, 2%, 1%, 0.5%, 0.1%, 0.05%, or 0.01% of the stated value. Unless otherwise clear from the context, all numerical values provided herein are modified by the term “about.”
Hereinafter, the present invention will be described in detail with reference to the accompanying drawings.
The photographing unit 110 may be configured to photograph an environment around a vehicle. The photographing unit 110 may include a plurality of imaging devices (e.g., cameras, video cameras, and the like) to omni-directionally (e.g., 360 degrees) photograph the environment around the vehicle. For example, the photographing unit 110 may include four imaging devices installed at the front, the rear, the left, and the right of the vehicle. In addition, the photographing unit 110 may include wide angle imaging devices configured to photograph the environment around the vehicle using a smaller number of imaging devices. The image around the vehicle photographed by the photographing unit 110 may be converted into a top view image as viewed from the top of the vehicle through image processing. The photographing unit 110 may be configured to continuously photograph the environment around the vehicle to continuously provide information regarding the environment around the vehicle to a driver.
The communicating unit 120 may be configured to receive, from electronic control units (ECUs) that adjust the respective portions of the vehicle, various sensor values used to process the top view image. For example, the communicating unit 120 may be configured to receive a steering angle sensor value and a wheel speed sensor value to sense a movement distance and a movement direction of the vehicle. The communicating unit 120 may use controller area network (CAN) communication to receive the sensor values of the ECUs. The CAN communication, which is a standard communication protocol designed for microcontrollers or apparatuses to communicate without a host computer in the vehicle, is a communication scheme in which a plurality of ECUs are connected in parallel to exchange information between the respective ECUs.
The displaying unit 130 may be configured to display the top view image generated by the controller 140. The displaying unit 130 may be configured to display the top view image in which a virtual image is included according to an object recognizing result. The displaying unit 130 may include various display devices such as a cathode ray tube (CRT), a liquid crystal display (LCD), an organic light emitting diode (OLED), a plasma display panel (PDP), and the like. Additionally, the controller 140 may be configured to operate the AVM system. More specifically, the controller 140 may be configured to combine images around the vehicle photographed by the photographing unit 110 to generate the top view image.
Furthermore, the controller 140 may be configured to compare two top view images generated at different times to create a difference count map. The difference count map may be an image that indicates a difference value between corresponding pixels among pixels included in the two top view images generated at different time periods and may have different values for each pixel based on a degree thereof.
As described above, an object, particularly, a three-dimensional object, around the vehicle included in the top view image may be shown as a distorted shape. The difference count map may include information regarding the distorted three-dimensional object by comparing two continuous top view images and calculating difference values. In addition, the controller 140 may be configured to extract partial regions in the created difference count maps and continuously connect the extracted regions as time elapses to generate an object recognizing image. Further, the controller 140 may be configured to recognize the object around the vehicle using the generated object recognizing image. More specifically, the controller may be configured to recognize a shape of the object around the vehicle and a distance from the vehicle to the object around the vehicle using the object recognizing image. In addition, the controller 140 may be configured to compare the recognized shape of the object with pre-stored patterns and output a virtual image that corresponds to the recognized shape in the top view image when a pattern that corresponds to the recognized shape of the object is present.
Moreover, although not illustrated in
First, the top view images may be generated (S210). More specifically, an environment around a vehicle may be omni-directionally (e.g., 360 degrees) photographed, and the photographed images may be combined to generate the top view images. This will be described in more detail with reference to
Furthermore, referring to
When the top view images are generated, two top view images generated at different time periods may be compared to create the difference count map (S220). As described above, the difference count map may be an image that indicates a difference value between corresponding pixels among pixels included in the two top view images generated at different time periods. The creation of the difference count map may include correcting, by the controller, a relative position change of the environment around the vehicle included in the two top view images based on movement of the vehicle and comparing, by the controller, the two top view images in which the position change is corrected to calculate difference values for each pixel. These processes will be described in detail with reference to
The imaging device mounted within the vehicle may be configured to continuously photograph the environment around the vehicle at preset time intervals as the vehicle moves and generally photograph about 10 to 30 frames per second. In addition, the top view images may be continuously generated as time elapses using the images continuously photographed by the plurality of imaging devices. In particular, a change may be generated in positions of the objects around the vehicle included in the image between the respective top view images as the vehicle moves. When the difference count map is created, a relative position change of the object around the vehicle included in the other top view image may be corrected based on any one of the two temporally continuous top view images to remove (e.g., minimize) an error based on the movement of the vehicle. In
In particular, a correction degree of the top view image may be determined based on a movement distance and a movement direction of the vehicle. For example, when it is assumed that a distance of about 2 cm is represented by one pixel in the top view image, when the vehicle moves by about 10 cm in a forward direction during a time in which the two top view images are photographed, the past entire top view image may be moved by five pixels in an opposite direction to a movement direction of the vehicle based on the current top view image. Alternatively, the current entire top view image may be moved by five pixels in the movement direction of the vehicle based on the past top view image. In particular, the movement distance and movement direction of the vehicle may be calculated by receiving, from electronic control units (ECUs) that adjust the respective portions of the vehicle, the sensor values (e.g., a steering angle sensor value and a wheel speed sensor value) required for the calculation.
Further, the two top view images in which the position change based on the movement of the vehicle is corrected may be compared to create the difference count map.
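A minimal sketch of this correction-and-comparison step, assuming grayscale NumPy arrays as top view images, the vertical image axis as the movement direction, and the 2 cm-per-pixel scale from the example above; `np.roll` stands in for the image shift, the sign of the shift depends on the image orientation, and the function name is hypothetical:

```python
import numpy as np

CM_PER_PIXEL = 2  # assumed top view scale: one pixel covers about 2 cm

def correct_and_diff(past, current, move_cm):
    """Shift the past top view opposite to the vehicle's motion so the two
    frames are aligned, then compute per-pixel absolute differences."""
    shift = move_cm // CM_PER_PIXEL            # e.g., 10 cm forward -> 5 px
    aligned = np.roll(past, -shift, axis=0)    # move opposite to travel
    # Widen the dtype before subtracting to avoid unsigned underflow
    return np.abs(current.astype(np.int16) - aligned.astype(np.int16))
```

Pixels belonging to static ground texture cancel out after alignment, while distorted three-dimensional objects leave nonzero difference values.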
In particular, the number and the pattern of adjacent pixels may be selected by various methods. The difference count map illustrated in
In particular, since the object, particularly, the three-dimensional object, around the vehicle included in the top view image is shown as the distorted shape, the difference count map may include information regarding the distorted and shown three-dimensional object by comparing two continuous top view images and calculating the difference values. Moreover, when a new top view image is generated, the new top view image may be compared with the previous top view image to create a difference count map. This will be described with reference to
Moreover, the number of pixels in the movement direction of the vehicle in the region extracted in the difference count map may be determined based on a movement speed of the vehicle. The regions each extracted in continuously created difference count maps may be connected as time elapses as described below. In particular, when a region that includes pixels having a number less than a movement distance of the vehicle is extracted, a discontinuous region may appear. Therefore, a sufficient region may be extracted in consideration of the movement distance of the vehicle. As an example, the extracted region may include a preset number of pixels in the movement direction of the vehicle in the difference count map.
The AVM system may be mainly used when the vehicle is parked or the vehicle passes through a narrow road in which an obstacle is present. The preset number of pixels may be determined based on a maximum movement speed of the vehicle. More specifically, the preset number of pixels may be set to be equal to or greater than the number of pixels in which the vehicle maximally moves in the image based on the maximum movement speed of the vehicle. The number of pixels required according to the maximum movement speed of the vehicle may be represented by the following Equation 1.
X=V/(F×D)   Equation 1

wherein X is the preset number of pixels, V is the maximum movement speed of the vehicle, F is an image photographing speed, and D is an actual distance per pixel. More specifically, X is the number of pixels to be extracted in the movement direction of the vehicle in one difference count map and has a unit of px/f. The maximum speed V of the vehicle may be a maximum movement speed of the vehicle and has a unit of cm/s. The image photographing speed F may be the number of image frames photographed by the imaging device per second and has a unit of f/s. The actual distance D per pixel, which may be an actual distance that corresponds to one pixel of the difference count map, has a unit of cm/px. The image photographing speed F and the actual distance D per pixel may be changed based on performance or a setting state of the imaging device.
For example, when the maximum movement speed of the vehicle is about 36 km/h (i.e., about 1000 cm/s), the image photographing speed is about 20 f/s, and the actual distance per pixel is about 2 cm/px, substituting these values into the above Equation 1 yields a preset number X of pixels of about 25 px/f. In other words, a region of about 25 pixels or more in the movement direction of the vehicle in the difference count map may be extracted. As another example, the extracted region may include pixels having a number that corresponds to a movement distance of the vehicle in the movement direction of the vehicle in the difference count map. For example, when it is assumed that a distance of about 2 cm is shown as one pixel in the top view image and the vehicle moves by about 20 cm in a forward direction during a time in which two top view images are photographed, a region that includes about 10 pixels in the movement direction of the vehicle may be extracted. Alternatively, when the vehicle moves by about 30 cm in the forward direction, a region that includes about 15 pixels in the movement direction of the vehicle may be extracted.
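The worked example above can be checked with a small helper evaluating Equation 1 (the function name is hypothetical):

```python
def preset_pixel_count(v_cm_per_s, frames_per_s, cm_per_px):
    """Equation 1: X = V / (F * D), the number of pixels to extract in the
    movement direction per difference count map (unit: px/f)."""
    return v_cm_per_s / (frames_per_s * cm_per_px)

# Worked example from the text: 36 km/h = 1000 cm/s, 20 f/s, 2 cm/px
x = preset_pixel_count(1000, 20, 2)  # -> 25.0 px/f
```

Extracting at least this many pixels guarantees that consecutive strips remain contiguous even at the vehicle's maximum speed.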
In particular, as described above with reference to
Furthermore, the extracted regions of the difference count maps may be continuously connected as time elapses to generate the object recognizing image (S240). Since the manner of connecting the extracted regions in the movement direction of the vehicle may change based on the scheme used to extract the partial regions in the difference count maps, the respective examples will be described. As an example, the extracted region may include a preset number of pixels in the movement direction of the vehicle in the difference count map. In particular, since regions of a preset size may be extracted whenever a difference count map is created, without regard to the movement distance of the vehicle, an error may occur between the connected extracted regions and an actual movement distance of the vehicle when the extracted regions are simply connected. Therefore, when the extracted regions include a preset number of pixels, the regions may be connected to correspond to the movement distance of the vehicle in the movement direction of the vehicle. This will be described in detail with reference to
A final pixel value may be determined by various methods such as a method of giving a priority to a new extracted region, a method of selecting an intermediate value of pixel values of each extracted region, a method of selecting any one pixel value based on weighting factors imparted to each pixel, a method of determining a pixel value by setting a contribution based on weighting factors imparted to each pixel, and the like, with respect to the overlapped region. When the contribution based on the weighting factors imparted to each pixel are set, a final pixel value may be determined by the following Equation 2. The following Equation 2 is an equation for determining a final pixel value when n extracted regions are overlapped with respect to one pixel to be shown in the object recognizing image.
pf=(w1×p1+w2×p2+ . . . +wn×pn)/(w1+w2+ . . . +wn)   Equation 2

wherein pf is a final pixel value, p1 is a pixel value of a first extracted region, p2 is a pixel value of a second extracted region, pn is a pixel value of an n-th extracted region, w1 is a weighting factor imparted to a pixel of the first extracted region, w2 is a weighting factor imparted to a pixel of the second extracted region, and wn is a weighting factor imparted to a pixel of the n-th extracted region.
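As an illustrative sketch of the contribution scheme of Equation 2 (assuming scalar grayscale pixel values; the normalization by the weight sum and the function name are assumptions for this sketch):

```python
def blend_overlapped_pixel(values, weights):
    """Combine n overlapped extracted regions' values for one pixel of the
    object recognizing image as a normalized weighted sum (Equation 2)."""
    assert len(values) == len(weights) and weights
    # Each region's pixel contributes in proportion to its weighting factor
    total = sum(w * p for w, p in zip(weights, values))
    return total / sum(weights)
```

For instance, with two overlapped regions whose pixel values are 100 and 40, and weighting factors 0.8 and 0.2 (the pixel nearer the photographing axis weighted higher, per the weighting scheme described above), the final pixel value would be 88.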
Moreover, weighting factors imparted to each pixel will be described with reference to
As another example, a case in which the extracted regions include pixels having a number that corresponds to the movement distance of the vehicle in the movement direction of the vehicle in the difference count map will be described. When the extracted regions correspond to the movement distance of the vehicle, the extracted regions may be connected in the movement direction of the vehicle without overlapped regions whenever the difference count maps are created, to generate the object recognizing image. In particular, the object recognizing image may be generated in this manner since the movement distance of the vehicle has already been considered when the partial regions are extracted from the difference count maps.
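In this second scheme, a sketch of the connection step reduces to plain concatenation, since each extracted strip already spans the per-frame movement distance (again assuming NumPy arrays with the vertical axis as the movement direction; the function name is hypothetical):

```python
import numpy as np

def connect_without_overlap(extracted_strips):
    """Connect strips whose heights already match the per-frame movement:
    no overlap handling is needed, so the strips are simply stacked."""
    return np.vstack(extracted_strips)
```

For example, strips of 10 and 15 pixels (about 20 cm and 30 cm of movement at 2 cm/px) yield a 25-pixel-tall object recognizing image.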
According to the examples as described above, as the vehicle moves, a new difference count map may be created, and whenever the difference count map is created, the extracted region may be updated, and thus information regarding changes in the object around the vehicle based on the movement of the vehicle may be reflected. Additionally, although not illustrated in
Moreover, the image processing method of an AVM system according to various exemplary embodiments of the present invention may be implemented by programs that may be executed in a terminal apparatus. In addition, these programs may be stored and used in various types of recording media. More specifically, codes for performing the above-mentioned methods may be stored in various types of non-volatile recording media such as a flash memory, a read only memory (ROM), an erasable programmable ROM (EPROM), an electronically erasable and programmable ROM (EEPROM), a hard disk, a removable disk, a memory card, a universal serial bus (USB) memory, a compact disk (CD) ROM, and the like. According to various exemplary embodiments of the present invention as described above, the AVM system may recognize more accurate positions and shapes of objects positioned around the vehicle and provide more accurate information regarding the objects around the vehicle to a driver.
Although the exemplary embodiments of the present invention have been disclosed for illustrative purposes, those skilled in the art will appreciate that various modifications, additions and substitutions are possible, without departing from the scope and spirit of the invention as disclosed in the accompanying claims. Accordingly, such modifications, additions and substitutions should also be understood to fall within the scope of the present invention.