The present invention relates to image processors and, more particularly, to an image processor for processing images captured by a plurality of image pickup devices mounted on a vehicle.
One example of the above image processor is a multi-function vehicle-mounted camera system. The multi-function vehicle-mounted camera system broadly includes first through eighth image pickup devices, an image processor, and first through third display devices.
The first through eighth image pickup devices are respectively mounted around a vehicle. More specifically, the first image pickup device shoots images in an area ahead of the vehicle.
The second image pickup device shoots images in an area diagonally ahead of the vehicle to its left. The third image pickup device shoots images in an area diagonally ahead of the vehicle to its right. The fourth image pickup device shoots images in an area substantially identical to an area reflected in a door mirror on the left side of the vehicle. The fifth image pickup device shoots images in an area substantially identical to an area reflected in a door mirror on the right side of the vehicle. The sixth image pickup device shoots images in an area diagonally behind the vehicle to its left. The seventh image pickup device shoots images in an area diagonally behind the vehicle to its right. The eighth image pickup device shoots images in an area behind the vehicle.
The image processor combines images shot by predetermined image pickup devices of the above first through eighth image pickup devices (hereinafter referred to as shot images) to generate an image to be displayed on any one of the first through third display devices (hereinafter referred to as a display image). As the display image, five types of images are generated: an upper viewing point image, a panorama image, an all-around image, a combined image, and a viewing angle limited image.
The upper viewing point image is an image representing an area surrounding the vehicle when viewed from above. Also, the panorama image is a super-wide-angle image combining a plurality of shot images. The all-around image is an image generated by successively combining the shot images from all image pickup devices to allow the state of the surroundings of the vehicle to be successively displayed. The combined image is an image formed by combining a plurality of shot images representing states of discontiguous areas. Note that boundaries between the plurality of shot images are represented so as to be clearly recognizable by the driver. The viewing angle limited image is an image generated from the shot images of the fourth and fifth image pickup devices and having a viewing angle to a degree similar to that of each door mirror.
The first through third display devices each display the images of the above five types at appropriate timing in accordance with the driving state of the vehicle.
With the above-described processing, the multi-function vehicle-mounted camera system can assist safe vehicle driving. Note that the above-described multi-function vehicle-mounted camera system is disclosed in European Patent Publication No. EP 1077161 A2, which has been published by the European Patent Office.
Next, a problem with the above-described multi-function vehicle-mounted camera system is described. All of the above five types of images represent the state of the surroundings of the vehicle. Therefore, the multi-function vehicle-mounted camera system cannot provide the state of the inside of the vehicle. As a result, there is a problem in that the driver cannot easily recognize, for example, whether a passenger, particularly a passenger in the rear seat, is seated at a proper position in the seat or whether the passenger has fastened a seatbelt.
Therefore, an object of the present invention is to provide an image processor capable of also providing the state of the inside of the vehicle.
In order to achieve the above object, one aspect of the present invention is directed to an image processor including: a first buffer storing a first image representing a state of surroundings of a vehicle and a second buffer storing a second image representing a state of an inside of the vehicle; and a processor for generating a driving assist image representing both the state of the surroundings of the vehicle and the state of the inside of the vehicle based on the first image stored in the first buffer and the second image stored in the second buffer.
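By way of illustration only, the arrangement recited above might be sketched as follows. The class and attribute names are hypothetical, and the fixed-ratio blend merely stands in for the mapping-table-based composition described in the embodiments below.

```python
import numpy as np

class ImageProcessorSketch:
    """Hypothetical sketch of the claimed arrangement: two buffers and a processor."""

    def __init__(self, height: int, width: int):
        # First buffer: a first image representing the state of the surroundings of the vehicle.
        self.first_buffer = np.zeros((height, width, 3), dtype=np.uint8)
        # Second buffer: a second image representing the state of the inside of the vehicle.
        self.second_buffer = np.zeros((height, width, 3), dtype=np.uint8)

    def generate_driving_assist_image(self, alpha: float = 0.5) -> np.ndarray:
        # Combine both stored images into one driving assist image; a simple
        # fixed-ratio blend stands in for the detailed processing described below.
        blended = alpha * self.second_buffer + (1.0 - alpha) * self.first_buffer
        return blended.astype(np.uint8)
```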
Next, with reference to
Also,
Furthermore,
Here, (a) of
Next, a preferred value of the angle α1 is described. For driving assistance for the vehicle V, the image pickup device 1 is required to shoot an area out of the driver's line of vision. If the angle α1 is close to 0 degrees, the image pickup device 1 cannot shoot an area immediately below the rear end of the vehicle, which is the above-described area out of the driver's line of vision. Conversely, if the angle α1 is close to 90 degrees, the image pickup device 1 cannot shoot the area behind the vehicle V to its right, which is the above-described area out of the driver's line of vision. In view of the above points and the shooting areas of the surrounding image pickup devices 2 and 4, the angle α1 is set to an appropriate value. For example, when the viewing angle θV is of the order of 140 degrees, the angle α1 is preferably set to be of the order of 20 degrees.
Next, a preferred value of the angle β1 is described. As described above, the image pickup device 1 is required to shoot the area out of the driver's line of vision. If the angle β1 is close to 0 degrees, the image pickup device 1 cannot shoot areas other than an area away from the vehicle V. That is, the image pickup device 1 cannot shoot the area immediately below the rear end of the vehicle V. Also, since the driver generally drives so as to avoid an obstacle obstructing the direction of travel of the vehicle V, the obstacle is located some distance away from the vehicle V. Therefore, if the angle β1 is close to 90 degrees, the image pickup device 1 cannot shoot areas other than an area extremely close to the vehicle V. That is, in this case, it is difficult for the image pickup device 1 to shoot the obstacle. In view of the above points and the shooting areas of the surrounding image pickup devices 2 and 4, the angle β1 is set to an appropriate value. When the viewing angle θV is of the order of 140 degrees as described above, the angle β1 is preferably set to be of the order of 30 to 70 degrees.
Also, as illustrated in
Note that, as evident from
Also, as illustrated in
The image processor AIP includes, as illustrated in
The working area 9 is structured typically by a random access memory, and is used by the processor 8 at the time of generating the driving assist image IDA. The working area 9 includes, as illustrated in
Furthermore, the driving assist image IDA presents, as illustrated in (b) of
Still further, making the position and direction of the virtual camera CV simply identical to those of the image pickup device 5 merely causes the driving assist image IDA to be identical to the shot image IC5. That is, the state of the surroundings of the vehicle V in the driving assist image IDA is obstructed by a component of the vehicle V typified by a door and is hidden therebehind, thereby making it impossible to fully achieve the object set in the present invention. Therefore, with a blending process described further below, most of the vehicle V is translucently rendered in the driving assist image IDA. With this, as illustrated in (b) of
Also, in (b) of
The buffer 96 illustrated in
Also, in
Next, the mapping table 102 is described in detail. As will be described further below, the processor 8 selects some pixels PC1 through PC5 from the shot images IC1 through IC5, and then generates the driving assist image IDA by using the selected pixels PC1 through PC5. At this time of selection and generation, the mapping table 102 is referred to. For example, in accordance with the mapping table 102, as illustrated in
Note herein that the driving assist image IDA represents the state of the inside of the vehicle V and the outside of the vehicle V when viewed from the virtual camera CV (refer to
To allow the value of each pixel PDA to be determined, the mapping table 102 describes which value of the pixel PDA is determined by which value(s) of the pixels PC1 through PC5. Here,
The record type TUR indicates a type of the corresponding unit record UR typically by one of the numbers “1” and “2”. In the present embodiment, for convenience of description, “1” indicates that the above blending is not required, while “2” indicates that blending is required. Therefore, in a unit record UR assigned to a pixel PDA that belongs to the above non-blending area RNB, “1” is described in the column of the record type TUR. Also, in a unit record UR assigned to a pixel PDA that belongs to the blending area RMX, “2” is described in the column of the record type TUR.
The coordinate values (UDA, VDA) indicate to which pixel PDA the corresponding unit record UR is assigned.
The identification number ID and the coordinate values (UC, VC) are as described above. Note herein that the value of the pixel PDA is determined by using one or two values of the pixels PC1 through PC5, each uniquely specified by a combination of the identification number ID and the coordinate values (UC, VC) of the same unit record UR (refer to
Also, the blending ratio RBR is a parameter for determining the value of the pixel PDA described in the corresponding unit record UR. In the present embodiment, as a preferred example, the blending ratio RBR is described only in a unit record UR whose record type TUR is “2” and, more specifically, is assigned to either one of the sets of the identification number ID and the coordinate values (UC, VC). Here, when the assigned blending ratio RBR is α (0<α<1), the blending ratio RBR of the other of the sets of the identification number ID and the coordinate values (UC, VC) is (1−α).
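Purely as an illustrative sketch (the class and field names are assumptions, not the actual table format), a unit record UR carrying the fields described above, and the way one or two source pixel values yield the value of a pixel PDA (for a blended pixel, α·P_a + (1−α)·P_b), might be represented as follows.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class SourceRef:
    camera_id: int                       # identification number ID (which of the buffers 91 through 95)
    u_c: int                             # coordinate value UC in that device's shot image
    v_c: int                             # coordinate value VC in that device's shot image
    blend_ratio: Optional[float] = None  # blending ratio RBR; used only when blending is required

@dataclass
class UnitRecord:
    record_type: int                     # TUR: 1 = no blending, 2 = blending required
    u_da: int                            # coordinate value UDA of the target pixel PDA
    v_da: int                            # coordinate value VDA of the target pixel PDA
    sources: List[SourceRef] = field(default_factory=list)  # one source for type 1, two for type 2

def pixel_value(record: UnitRecord, shot_images: dict) -> float:
    """Determine the value of the pixel PDA referenced by one unit record UR."""
    if record.record_type == 1:
        s = record.sources[0]
        return float(shot_images[s.camera_id][s.v_c, s.u_c])
    # Record type 2: the table stores alpha for one source; the other source uses (1 - alpha).
    a, b = record.sources
    alpha = a.blend_ratio if a.blend_ratio is not None else 1.0 - b.blend_ratio
    return (alpha * float(shot_images[a.camera_id][a.v_c, a.u_c])
            + (1.0 - alpha) * float(shot_images[b.camera_id][b.v_c, b.u_c]))
```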
The display device 6 displays the driving assist image IDA generated by the image processor AIP.
Next, with reference to a flowchart of
Here, in the present embodiment, in response to the image pickup instruction CIC, the image pickup devices 1 through 5 generate the shot images IC1 through IC5 and store them in the buffers 91 through 95. This is not meant to be restrictive. The image pickup devices 1 through 5 may spontaneously or actively generate the shot images IC1 through IC5 and store them in the buffers 91 through 95.
Next, the processor 8 performs image processing in accordance with the mapping table 102 in the program memory 10. That is, the processor 8 uses the shot images IC1 through IC5 stored in the buffers 91 through 95 to generate the driving assist image IDA on the buffer 96 (step S3).
Here,
When the record type TUR indicates “1”, blending is not necessary, as described above, and the unit record UR selected this time has described therein one set of the identification number ID and the coordinate values (UC, VC). Upon determination that the record type TUR indicates “1”, the processor 8 reads the identification number ID and the coordinate values (UC, VC) from the unit record UR this time (step S23). Next, the processor 8 accesses one of the buffers 91 through 95 that is specified by the identification number ID read this time, and further extracts a value of a pixel P (any one of the pixels PC1 through PC5) specified by the coordinate values (UC, VC) read this time from the buffer accessed this time (any one of the buffers 91 through 95) (step S24). Next, the processor 8 reads the coordinate values (UDA, VDA) from the unit record UR this time (step S25). The processor 8 then takes the value extracted this time from the pixels PC1 through PC5 as the value of the pixel PDA specified by the coordinate values (UDA, VDA) described in the unit record UR selected this time. That is, the processor 8 stores the value of the pixel (any one of the pixels PC1 through PC5) extracted in step S24, as it is, in an area for storing the value of the pixel PDA specified by the coordinate values (UDA, VDA) in the buffer 96 (step S26).
On the other hand, upon determination in step S22 that the record type TUR this time indicates “2”, the processor 8 extracts the identification number ID, the coordinate values (UC, VC), and the blending ratio RBR of the same set from the unit record UR this time (step S27). Next, the processor 8 accesses one of the buffers 91 through 95 that is specified by the identification number ID read this time, and further extracts a value of a pixel P (any one of the pixels PC1 through PC5) specified by the coordinate values (UC, VC) read this time from the buffer accessed this time (any one of the buffers 91 through 95) (step S28). Thereafter, the processor 8 multiplies the value of the one of the pixels PC1 through PC5 extracted this time by the blending ratio RBR read this time, and then retains a multiplication value MP×R in the working area 9 (step S29). Next, the processor 8 determines whether or not an unselected set (the identification number ID and the coordinate values (UC, VC)) remains in the unit record UR selected this time (step S210). If an unselected set remains, the processor 8 reads the set and the blending ratio RBR (step S211) to perform step S28. On the other hand, if no unselected set remains, the processor 8 performs step S212.
At the time when the processor 8 determines in step S210 that no unselected set remains, the working area 9 has stored therein a plurality of multiplication values MP×R. The processor 8 calculates a total VSUM of the plurality of multiplication values MP×R (step S212), and then reads the coordinate values (UDA, VDA) from the unit record UR this time (step S213). The processor 8 then takes the total VSUM calculated in step S212 as the value of the pixel PDA specified by the coordinate values (UDA, VDA) read in step S213. That is, the processor 8 stores the total VSUM calculated this time in an area for storing the value of the pixel PDA specified by the coordinate values (UDA, VDA) in the buffer 96 (step S214).
When the above step S26 or S214 ends, the processor 8 determines whether or not an unselected unit record UR remains (step S215) and, if an unselected one remains, performs step S21 to determine the value of each pixel PDA forming the driving assist image IDA. That is, the processor 8 performs the processes up to step S215 until all unit records UR have been selected. As a result, the driving assist image IDA of one frame is completed in the buffer 96, and then the processor 8 exits from step S2.
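The per-frame processing of step S2 can be summarized by the following sketch, which reuses the hypothetical UnitRecord/SourceRef names introduced earlier; it is an illustration of the flow of steps S21 through S215, not the actual implementation, and it assumes that both sources of a blending record carry their ratio (α and 1−α) once the table has been loaded.

```python
import numpy as np  # reuses the UnitRecord / SourceRef sketch shown earlier

def build_driving_assist_image(mapping_table, buffers, height, width):
    """Sketch of step S2: fill the buffer 96 with one frame of the driving assist image IDA.

    mapping_table : list of UnitRecord (the mapping table 102)
    buffers       : dict mapping identification number ID -> shot image array
                    (the buffers 91 through 95 holding IC1 through IC5)
    """
    buffer_96 = np.zeros((height, width), dtype=np.float32)
    for record in mapping_table:                               # steps S21 and S215: visit every unit record once
        if record.record_type == 1:                            # step S22: no blending required
            s = record.sources[0]                              # step S23: read ID and (UC, VC)
            value = buffers[s.camera_id][s.v_c, s.u_c]         # step S24: extract the source pixel value
            buffer_96[record.v_da, record.u_da] = value        # steps S25 and S26: store it as the pixel PDA
        else:                                                  # record type 2: blending required
            total = 0.0                                        # will hold the total VSUM
            for s in record.sources:                           # steps S27, S28, S210 and S211
                total += s.blend_ratio * buffers[s.camera_id][s.v_c, s.u_c]  # step S29: multiplication value MPxR
            buffer_96[record.v_da, record.u_da] = total        # steps S212 through S214
    return buffer_96                                           # one frame of the driving assist image IDA
```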
Next, the processor 8 transfers the driving assist image IDA generated on the buffer 96 to the display device 6 (step S4). The display device 6 displays the received driving assist image IDA. In the image processor AIP, the series of the above steps S1 through S4 is repeatedly performed. Also, by viewing the above driving assist image IDA, the driver can visually recognize both the state of the inside of the vehicle V and the state of the outside of the vehicle V. More specifically, the driver can grasp the state of the area out of the driver's line of vision and, simultaneously, can check to see whether a passenger is safely seated in the seat. With this, it is possible to provide the image processor AIP capable of generating the driving assist image IDA that can assist safe driving more than ever.
Here, in the present embodiment, as a preferred example, the driving assist image IDA represents the states of the outside and the inside of the vehicle V viewed from the virtual camera CV. With this, for example, even when there is an obstacle outside the vehicle V, the driver can intuitively recognize the position of the obstacle with respect to the vehicle V. Alternatively, other than (b) of
Also, as a preferred example in the present embodiment, the image pickup devices 1 through 5 are mounted as illustrated in
Next,
Each of the seating sensors 11, mounted to a seat of the vehicle V, detects, in response to an instruction from the processor 8, whether a passenger is seated in the seat at which the seating sensor is mounted (hereinafter referred to as a target seat), and transmits a report signal DST for reporting the detection result to the processor 8.
Each of the fastening sensors 12, mounted to a seatbelt for the above target seat, detects, in response to an instruction from the processor 8, whether the seatbelt at which the fastening sensor is mounted has been fastened by the passenger, and transmits a report signal DSB for reporting the detection result to the processor 8.
Next, with reference to a flowchart of
In
Next, the processor 8 uses the received report signals DST and DSB to determine the presence or absence of a seat in which a passenger is seated but the seatbelt has not been fastened by the passenger (hereinafter referred to as a warning target seat) (step S7). More specifically, from the detection result indicated by each report signal DST, the processor 8 specifies a seat in which a passenger is currently seated (hereinafter referred to as a used seat). Furthermore, from the detection result indicated by each report signal DSB, the processor 8 determines a seat in which the seatbelt is currently not fastened (hereinafter referred to as an unfastened seat). The processor 8 determines whether or not there is a warning target seat, which is a used seat and also is an unfastened seat. Upon determination that there is no such seat, the processor 8 then performs step S4 without performing step S8.
On the other hand, upon determination in step S7 that one or more warning target seats exist, the processor 8 overlays a mark image DMK representing a shape like a human face at a predetermined position in the driving assist image IDA (step S8). As a result, the driving assist image IDA as illustrated in (b) of
As described above, in the present exemplary modification, the mark image DMK is overlaid on the driving assist image IDA. Therefore, the driver can easily visually recognize a passenger who has not fastened the seatbelt. A minimal sketch of the determination of step S7 is shown below.
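Expressed as a minimal sketch (the function name and the dictionary form of the report signals are assumptions), the determination of step S7 amounts to intersecting the set of used seats with the set of unfastened seats:

```python
def find_warning_target_seats(seating_reports, fastening_reports):
    """Step S7 sketch: a warning target seat is a used seat that is also an unfastened seat.

    seating_reports   : dict seat_id -> bool  (report signals DST: True if a passenger is seated)
    fastening_reports : dict seat_id -> bool  (report signals DSB: True if the seatbelt is fastened)
    """
    used_seats = {seat for seat, occupied in seating_reports.items() if occupied}
    unfastened_seats = {seat for seat, fastened in fastening_reports.items() if not fastened}
    # Empty result -> skip step S8; otherwise overlay the mark image DMK in step S8.
    return used_seats & unfastened_seats

# Hypothetical usage:
# find_warning_target_seats({"rear_left": True, "rear_right": False},
#                           {"rear_left": False, "rear_right": True})
# -> {"rear_left"}
```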
Next,
The external storage device 13 is a non-volatile storage device for storing the driving assist image IDA transferred from the buffer 96 of the image processor AIP as a vehicle state image IVS representing both the state of the inside and that of the surroundings of the vehicle V. Furthermore, the external storage device 13 stores, in addition to the vehicle state image IVS, a current time TC measured by the timer 14 as date/time information Ddd.

The timer 14 transmits, in response to an instruction from the processor 8, the current time TC measured by itself to the image processor AIP. In the present embodiment, the above current time TC is assumed to include year, month, and day and, as described above, is recorded in the external storage device 13 together with the vehicle state image IVS.

The transmitting device 15 is formed typically by a cellular phone, operates in response to an instruction from the processor 8, and at least transmits the driving assist image IDA generated on the buffer 96 as the vehicle state image IVS to the outside of the vehicle V. Although details are described further below, typical destination facilities of the vehicle state image IVS include a police station and/or an emergency medical center.

The locator 16 is formed typically by a GPS (Global Positioning System) receiver to derive a current position DCP of the vehicle V. Note that, in the present embodiment, the description continues assuming that the locator 16 is formed by a GPS receiver for convenience of description. However, as is well known, the current position DCP obtained by the GPS receiver includes an error. Therefore, the locator 16 may include an autonomous navigation sensor. The above current position DCP is preferably transmitted from the transmitting device 15 to the outside of the vehicle V together with the vehicle state image IVS.

The shock sensor 17 is typically an acceleration sensor used in an SRS (Supplemental Restraint System) airbag system, which supplements the seatbelts, and detects a degree of shock. When the detected degree of shock is larger than a predetermined reference value, the shock sensor 17 regards the vehicle V as having been involved in a traffic accident, and then transmits a report signal DTA indicating as such to the processor 8.
Next, with reference to
Subsequently to step S3, the processor 8 further transfers the driving assist image IDA generated in the buffer 96 to the display device 6 and the external storage device 13 (step S9). Similarly to the above, the display device 6 displays the received driving assist image IDA. Also, the external storage device 13 stores the received driving assist image IDA as the vehicle state image IVS. With the above vehicle state image IVS being recorded, both the state of the surroundings and the state of the inside of the vehicle V during driving are stored in the external storage device 13. Therefore, in the case where the vehicle V has been involved in a traffic accident, the vehicle state image IVS in the external storage device 13 can be utilized for tracking down the cause of the traffic accident, as with a flight recorder of an aircraft. Furthermore, the image recorder AREC does not store the shot images IC1 through IC5 generated by the plurality of image pickup devices 1 through 5 as they are in the external storage device 13, but stores the vehicle state image IVS obtained by combining these images into one image. Therefore, it is possible to incorporate the small-capacity external storage device 13 in the image recorder AREC. With this, the small inner space of the vehicle V can be effectively utilized.
Note that, in step S9, preferably, the processor 8 first receives the current time TC from the timer 14. The processor 8 then transfers, in addition to the driving assist image IDA on the buffer 96, the received current time TC to the external storage device 13. The external storage device 13 stores both the received driving assist image IDA and the current time TC. This can be useful for specifying the time TC of occurrence of a traffic accident in which the vehicle V was involved.
Furthermore, as described above, the shock sensor 17 transmits the report signal DTA indicating that the vehicle V has been involved in a traffic accident to the processor 8 if the detected degree of shock is larger than the predetermined reference value. In response to reception of the report signal DTA, the processor 8 performs interruption handling as illustrated in
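The content of the interruption handling itself is shown only in a figure not reproduced here. The following is therefore a hedged sketch that merely combines the roles described above for the external storage device 13, the timer 14, the transmitting device 15, and the locator 16; the method names, the threshold value, and the overall flow are assumptions, not the actual handling.

```python
SHOCK_REFERENCE_VALUE = 8.0  # hypothetical threshold; the actual reference value is not specified in the text

def on_shock_detected(degree_of_shock, buffer_96, locator, transmitter, storage, timer):
    """Sketch of the interruption handling triggered by the report signal DTA.

    Assumed flow: the latest vehicle state image IVS is preserved with a time
    stamp and sent, together with the current position DCP, to facilities such
    as a police station and/or an emergency medical center.
    """
    if degree_of_shock <= SHOCK_REFERENCE_VALUE:
        return  # no report signal DTA is generated
    vehicle_state_image = buffer_96.copy()         # the driving assist image IDA serves as IVS
    current_time = timer.current_time()            # current time TC from the timer 14
    current_position = locator.current_position()  # current position DCP from the locator 16
    storage.store(vehicle_state_image, current_time)        # external storage device 13
    transmitter.send(vehicle_state_image, current_position) # transmitting device 15
```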
The image processor according to the present invention can be incorporated in a driving assist device.
Priority application: 2001-313032, filed Oct 2001, JP (national).
Filing document: PCT/JP02/10427, filed 10/8/2002 (WO).