1. Field of the Invention
The invention relates to an image display technology for an in-vehicle display apparatus.
2. Description of the Background Art
Conventionally, known is an image display system that displays an image that shows the periphery of a vehicle on an in-vehicle display based on the shot images acquired by a camera that is installed in the vehicle such as a car. A user (typically a driver) can view the periphery of the vehicle in almost real time by using this image display system.
In an example, when the vehicle runs backward, an image display system that displays a backward image showing the area behind the vehicle helps the user grasp an object that exists in the area behind the vehicle. Thus, the user can drive backward safely while preventing the vehicle from colliding with the object.
These days, also known is another image display system that shows the downward view that is viewed from a virtual viewpoint looking down at a vehicle and that is generated based on shot images acquired by a plurality of cameras that capture images of the periphery of the vehicle in different directions. A user can easily understand the location of an object that exists near the vehicle by checking on such a downward view.
A user uses such an image display system when rather great attention is needed, such as when the user parks a vehicle at a parking area, or when the vehicle passes close by another vehicle on a narrow street. In such a case where rather great attention is needed, the user pays most of his or her attention to the traveling direction of the vehicle. Thus, even when there is an object, such as a vehicle or a pedestrian, which is coming closer from the direction opposite to the traveling direction of the vehicle, the user might not notice the object coming closer. Therefore, a new technology allowing a user to notice the object that exists in the direction opposite to the traveling direction of a vehicle has been required.
Also known is another image display system that is capable of switching images for display between the area in front of a vehicle and the area behind the vehicle. Such an image display system generally displays a forward image that shows the area in front of the vehicle when the traveling direction of the vehicle is the forward direction (that is, when the shift lever is set at a position except “P” and “R”), and displays a backward image that shows the area behind the vehicle when the traveling direction of the vehicle is the backward direction (that is, when the shift lever is set at “R”). However, in some cases such as when the user parks the vehicle, even when the traveling direction of the vehicle is the backward direction, the user needs to check the area in front of the vehicle. Therefore, new image display systems have been under development. One of the new image display systems displays a forward image in response to a user's operation, even when the traveling direction of the vehicle is the backward direction. However, displaying the forward image when the traveling direction is the backward direction may mislead the user into believing that the traveling direction of the vehicle is the forward direction. Therefore, the new image display system has been in need of further improvement.
According to one aspect of the invention, an image generation apparatus is used in a vehicle, and generates an image that shows a periphery of the vehicle. The image generation apparatus includes an input that acquires shot images from a plurality of cameras, each of which captures the periphery of the vehicle in a different direction, a controller that determines a traveling direction of the vehicle, a generator that generates a composite image including a first area and a second area in a seamless manner based on the shot images, the first area showing the periphery of the vehicle viewed from a virtual viewpoint that is set looking down at the vehicle, the second area showing an area that includes an outside of the first area and is in a direction opposite to the determined traveling direction, and an output that transmits a display image including the composite image to a display apparatus to display the display image on the display apparatus.
Since the display image includes the composite image showing the second area that is in the direction opposite to the determined traveling direction of the vehicle, a user can check an object that exists in the direction opposite to the determined traveling direction of the vehicle by looking at the display image. Moreover, since the composite image shows, in a seamless manner, the first area showing the periphery of the vehicle and the second area that is in the direction opposite to the traveling direction, the user can immediately understand the location of the object that exists in the direction opposite to the determined traveling direction of the vehicle.
According to another aspect of the invention, an image generation apparatus is used in a vehicle and generates an image that shows a periphery of the vehicle. The image generation apparatus includes an input that acquires shot images from a plurality of cameras, each of which captures one of an area in front of the vehicle and an area behind the vehicle, a controller that determines a traveling direction of the vehicle, a generator that generates a display image based on the shot images, and an output that transmits the display image to a display apparatus to display the display image on the display apparatus. The generator generates as the display image a first display image that shows the area in front of the vehicle when the determined traveling direction of the vehicle is a forward direction, and selectively one of a second display image and a third display image in conformity with a user's operation when the determined traveling direction of the vehicle is a backward direction. The second display image shows the area behind the vehicle, and the third display image shows the area in front of the vehicle in a different form than the first display image.
Each of the first display image for display when the determined traveling direction of the vehicle is the forward direction and the third display image for display when the determined traveling direction of the vehicle is the backward direction shows the area in front of the vehicle in a different form. This prevents the user from mistaking the traveling direction of the vehicle.
According to another aspect of the invention, an image generation apparatus is used in a vehicle and generates an image that shows a periphery of the vehicle. The image generation apparatus includes an input that acquires a shot image from a side camera that captures a side area of the vehicle, a controller that determines a traveling direction of the vehicle, a detector that detects a state of a side mirror of the vehicle, a generator that generates a display image that shows the side area covering the determined traveling direction of the vehicle based on the shot image acquired from the side camera when the side mirror is retracted, and an output that transmits the display image to a display apparatus to display the display image on the display apparatus.
When the side mirror is retracted, the display image that shows the side area and covers the determined traveling direction of the vehicle is displayed. Thus, when the side surface of the vehicle is close to an object, a user can check the side state of the vehicle.
Therefore, the first object of the invention is for a user to be able to notice an object that exists in a direction opposite to a traveling direction of a vehicle.
The second object of the invention is to prevent the user from mistaking the traveling direction of the vehicle.
The third object of the invention is for the user to be able to check a side state of the vehicle when a side surface of the vehicle is close to an object.
These and other objects, features, aspects and advantages of the invention will become more apparent from the following detailed description of the invention when taken in conjunction with the accompanying drawings.
Hereinafter, an embodiment of the invention is described with reference to attached drawings.
As shown in
The major function of the navigation apparatus 20 is to provide route assistance for the user to get to a destination. The navigation apparatus 20 includes a display 21 such as a liquid crystal display having a touch panel, an operation part 22 such as a hardware switch which the user presses, and a controller 23 that controls the whole apparatus. The navigation apparatus 20 is installed in, for example, an instrument panel of the vehicle so that the user can look at the screen of the display 21 with ease.
A user's operation is received through the operation part 22 or on the touch panel of the display 21. The controller 23 is a computer that includes a CPU, RAM, ROM, etc. Various kinds of functions including a route-guidance navigation function are performed by CPU processing of the controller 23 based on predetermined programs. The navigation apparatus 20 that is electrically connected to the image generation apparatus 100 is capable of transmitting and receiving various kinds of signals to and from the image generation apparatus 100, and is capable of receiving the image generated by the image generation apparatus 100.
The display 21 normally displays an image for route-guidance based on stand-alone functions of the navigation apparatus 20. When an operation mode of the image display system 120 is changed, the display 21 displays the image that shows the periphery of the vehicle and is generated by the image generation apparatus 100. That is, the navigation apparatus 20 also functions as a display apparatus that displays the image generated by the image generation apparatus 100.
The image generation apparatus 100 includes a shooting part 5 that acquires an image and a main body 10 that processes the image. The main body 10 generates a display image for display on the display 21 based on the image acquired by the shooting part 5.
The shooting part 5 acquires a shot image by capturing the periphery of the vehicle. The shooting part 5 has four on-vehicle cameras: a front camera 51, a rear camera 52, a left-side camera 53 and a right-side camera 54. Each of the on-vehicle cameras 51, 52, 53 and 54 having a lens and an image sensor acquires an image electronically. The shot images acquired by the on-vehicle cameras 51, 52, 53 and 54 are transmitted to the main body 10.
These on-vehicle cameras 51, 52, 53 and 54 that are respectively disposed at different locations on the vehicle capture the periphery of the vehicle in the different directions.
As shown in
Each of the on-vehicle cameras 51, 52, 53 and 54 adopts a lens, such as a fish-eye lens, having a field angle θ of 180 degrees or more. Therefore, combination use of the four on-vehicle cameras 51, 52, 53 and 54 provides the shot images of the whole periphery of the vehicle 9.
Here is another description based on
Further, the main body 10 of the image generation apparatus 100 mainly includes an image acquisition part 41 that acquires a shot image from the shooting part 5, an image generator 3 that generates a display image, a navigation communicator 42 that communicates with the navigation apparatus 20, a signal receiver 43 that receives a signal from another apparatus, and a controller 1 that controls the whole of the image display system 120.
The image acquisition part 41 acquires the shot images respectively from the four on-vehicle cameras 51, 52, 53 and 54 of the shooting part 5. Therefore, the image acquisition part 41 acquires the four shot images, each of which shows the front area, the rear area, the left-side area or the right-side area of the vehicle 9.
The image generator 3 is a hardware circuit that is capable of various kinds of image processing. The image generator 3 generates the display image for display on the display 21 by processing the shot image acquired by the image acquisition part 41. The image generator 3 includes a composite image generator 31 and a display image generator 32 as the major functions of the image generator 3.
The composite image generator 31 generates a composite image that shows the periphery of the vehicle 9 viewed from a virtual viewpoint, based on the shot images acquired by the four on-vehicle cameras 51, 52, 53 and 54. The method for generating the composite image by the composite image generator 31 is described later.
The display image generator 32 generates the display image for display on the display 21 by use of the shot image acquired by the shooting part 5 and of the composite image generated by the composite image generator 31. The display image generator 32 adjusts the shot image and composite image to make them in prescribed sizes by image processing, such as scaling or clipping. Then, the display image generator 32 generates the display image by setting the shot image and the composite image at prescribed locations.
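Although the specification itself contains no program code, the scaling and placement step performed by the display image generator 32 can be sketched as follows. This is an illustrative Python sketch only; the display resolution, the side-by-side layout, and all function names are assumptions, not taken from the embodiment.

```python
# Hypothetical sketch of the display image generator 32: scale a shot image
# and a composite image to prescribed sizes, then place them at prescribed
# locations within a fixed-size display image. All values are examples.

def scale_to_fit(size, box):
    """Scale (w, h) uniformly so it fits inside box (bw, bh)."""
    w, h = size
    bw, bh = box
    s = min(bw / w, bh / h)
    return (round(w * s), round(h * s))

def layout_display_image(shot_size, composite_size, display_size=(640, 480)):
    """Return placement rectangles (x, y, w, h) for the two sub-images,
    shot image on the left half and composite image on the right half."""
    dw, dh = display_size
    half = (dw // 2, dh)                         # each sub-image gets half
    sw, sh = scale_to_fit(shot_size, half)       # prescribed size by scaling
    cw, ch = scale_to_fit(composite_size, half)
    return {
        "shot":      (0,       (dh - sh) // 2, sw, sh),
        "composite": (dw // 2, (dh - ch) // 2, cw, ch),
    }
```

A wide shot image is shrunk to fit its half of the screen while keeping its aspect ratio, which mirrors the "scaling or clipping" adjustment described above.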
The navigation communicator 42 transmits and receives a signal to and from the navigation apparatus 20. The navigation communicator 42 transmits the display image generated by the display image generator 32 to the navigation apparatus 20. This allows the display 21 of the navigation apparatus 20 to display the display image that shows the periphery of the vehicle 9. When the user performs operation for the operation part 22 or on the display 21 of the navigation apparatus 20, the navigation communicator 42 receives the signal that represents the operation contents, and transmits the signal to the controller 1.
The signal receiver 43 receives a signal from another apparatus that is installed separately from the image display system 120 on the vehicle 9. The signal receiver 43 receives, from a shift position sensor 71, a signal relevant to a shift position that indicates the shift lever position in a gearbox on the vehicle 9, and transmits the signal to the controller 1.
The controller 1 is a computer that collectively controls the whole of the image display system 120. The controller 1 that includes a CPU, RAM, ROM, etc. implements various kinds of control functions by CPU processing based on prescribed programs. When the user handles the navigation apparatus 20 or the switch 44, the signal representing the operation contents is transmitted to the controller 1. To respond to the signal, the controller 1 controls the image display system 120 based on the instruction conforming to the operation by the user.
An image controller 11 and a shift determining part 12 shown in
The image controller 11 controls the image processing implemented by the image generator 3. In an example, the image controller 11 provides to the composite image generator 31 various kinds of the parameters required for composite image generation. The image controller 11 also instructs the display image generator 32 on an appropriate display image in accordance with the operation mode of the image display system 120 and the traveling direction of the vehicle.
The shift determining part 12 determines which is the current shift lever position, “P” (Parking), “D” (Driving), “N” (Neutral) or “R” (Reverse), based on the signal transmitted from the shift position sensor 71 of the vehicle 9. That is, the shift determining part 12 substantially determines whether the traveling direction of the vehicle 9 is the forward direction or the backward direction.
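The determination made by the shift determining part 12 can be sketched as a simple mapping from shift position to traveling direction. The position codes and return values below are illustrative assumptions; per the description above, any position except “P” and “R” is treated as the forward direction.

```python
# Illustrative sketch of the shift determining part 12: map the shift
# position reported by the shift position sensor 71 to a traveling direction.

def determine_traveling_direction(shift_position):
    """Return 'backward' for "R", None for "P" (parked), and 'forward'
    for any other position ("D", "N", etc.)."""
    if shift_position == "R":
        return "backward"   # reverse: traveling direction is backward
    if shift_position == "P":
        return None         # parked: no traveling direction
    return "forward"        # all positions except "P" and "R"
```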
Further, the main body 10 includes a nonvolatile memory 40 that is capable of maintaining memory contents even in a power-off state. The nonvolatile memory 40 is, for example, a hard disk and a flash memory. The nonvolatile memory 40 stores viewpoint data 4a and projection surface data 4b. The viewpoint data 4a and the projection surface data 4b are used for composite image generation.
Here is a description about the method where the composite image generator 31 of the image generator 3 generates the composite image that shows the periphery of the vehicle 9 viewed from a virtual viewpoint.
When each of the front camera 51, the rear camera 52, the left-side camera 53 and the right-side camera 54 captures an image, four shot images P1, P2, P3 and P4 are acquired. The shot image P1 shows the area in front of the vehicle 9. The shot image P2 shows the area behind. The shot image P3 shows the left side. The shot image P4 shows the right side. The four shot images P1, P2, P3 and P4 overall include the data covering the whole area around the vehicle 9.
The data (each pixel value) of the four shot images P1, P2, P3 and P4 acquired as above are projected onto a projection surface TS that is virtually created. The center of the projection surface TS is designated as the location of the vehicle 9. Each part of the projection surface TS other than the designated area is associated in advance with the data of one of the shot images P1, P2, P3 and P4, and the corresponding data are projected onto the projection surface TS.
In the area in front of the vehicle 9 on the projection surface TS, the data of the shot image P1 acquired by the front camera 51 is projected. In the area behind the vehicle 9 on the projection surface TS, the data of the shot image P2 acquired by the rear camera 52 is projected. In the left-side area of the vehicle 9 on the projection surface TS, the data of the shot image P3 acquired by the left-side camera 53 is projected. In the right-side area of the vehicle 9 on the projection surface TS, the data of the shot image P4 acquired by the right-side camera 54 is projected.
Further, the image of the vehicle 9 is superposed at the position designated as the location of the vehicle 9 on the projection surface TS. The image of the vehicle 9 is prepared as a bitmap image or the like, and stored in the nonvolatile memory 40 in advance. As above, the data for the whole area of the projection surface TS are determined.
Upon the determination of the data of the whole of the projection surface TS, a virtual viewpoint VP is set. Concretely, the virtual viewpoint VP is set so that the vehicle 9 and the periphery of the vehicle 9 are looked down from the virtual viewpoint VP. Within the projection surface TS, the area included in the prescribed angular range viewed from the set virtual viewpoint VP is clipped. The image clipped as above is determined as a composite image CP that shows the periphery of the vehicle 9 viewed from the virtual viewpoint VP.
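The assignment of projection-surface areas to source cameras described above can be sketched by selecting a camera from the direction of each surface point relative to the vehicle center. The geometry below is a hypothetical simplification (clean 90-degree sectors, no blending at seams), not the specification's actual mapping.

```python
import math

# Hypothetical sketch: pick the source camera for a point on the projection
# surface TS by its direction from the vehicle center (+y ahead of the
# vehicle 9, +x to its right). Sector boundaries are assumed values.

def camera_for_point(x, y):
    """Return which on-vehicle camera's shot image supplies point (x, y)."""
    angle = math.degrees(math.atan2(y, x))  # 0 deg points to the right side
    if 45 <= angle <= 135:
        return "front_51"   # area in front: shot image P1
    if -135 <= angle <= -45:
        return "rear_52"    # area behind: shot image P2
    if angle > 135 or angle < -135:
        return "left_53"    # left-side area: shot image P3
    return "right_54"       # right-side area: shot image P4
```

In a full implementation, each projected pixel would then be sampled from the chosen camera's shot image through that camera's lens model; only the area-to-camera association is sketched here.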
The projection surface TS in
Next, operation modes on the image display system 120 are described.
The image display system 120 has three operation modes: a navigation mode M1, a front mode M2, and a back mode M3. The navigation mode M1 activates the functions of the navigation apparatus 20.
In the navigation mode M1, a variety of information is displayed on the display 21 based on the stand-alone functions of the navigation apparatus 20 with no use of the functions of the image generation apparatus 100.
Here is another description based on
The controller 1 is capable of switching among these operation modes based on user's operation or the shift lever position. In an example, pressing the switch 44 in the navigation mode M1 under the situation where the shift lever is set at a position except “P” and “R” changes the operation mode to the front mode M2 (arrow T1). Moreover, pressing the switch 44 in the front mode M2 changes the operation mode back to the navigation mode M1 (arrow T2).
Moving the shift lever to the position of “R” in the navigation mode M1 or the front mode M2 changes the operation mode to the back mode M3 (arrows T3 and T5). Moving the shift lever to a position except “R” in the back mode M3 changes the operation mode back to the navigation mode M1 (arrow T4).
When the shift lever is set at “R,” the traveling direction of the vehicle 9 is the backward direction. Thus, the operation mode is changed, according to the user's needs, to the back mode M3 that mainly displays the area behind the vehicle 9. On the other hand, when the shift lever is set at a position except “P” and “R,” the traveling direction of the vehicle 9 is the forward direction. Thus, the operation mode is changed to the front mode M2 that mainly provides the display image of the area in front of the vehicle 9.
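The mode transitions T1 through T5 described above can be sketched as a small state machine. The event encoding is an assumption for illustration, and the additional condition on T1 (that the shift lever be at a position except “P” and “R”) is omitted for brevity.

```python
# Illustrative sketch of the operation-mode transitions among the navigation
# mode M1, the front mode M2 and the back mode M3. Not part of the
# embodiment's description; event tuples are assumed encodings.

def next_mode(mode, event):
    """mode is 'M1', 'M2' or 'M3'; event is ('switch',) for pressing the
    switch 44, or ('shift', position) for moving the shift lever."""
    if event[0] == "shift":
        if event[1] == "R":
            return "M3"        # T3, T5: shift to "R" enters back mode M3
        if mode == "M3":
            return "M1"        # T4: leaving "R" returns to navigation mode
        return mode
    if event[0] == "switch":
        if mode == "M1":
            return "M2"        # T1: switch 44 enters front mode M2
        if mode == "M2":
            return "M1"        # T2: switch 44 returns to navigation mode
    return mode
```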
The image generator 3 changes the form of the display image to be generated in accordance with the operation mode. Thus, the front mode M2 and the back mode M3 provide display images in different forms for display on the display 21 respectively.
The forward image SP1 that shows the area in front of the vehicle 9 is processed based on the shot image acquired by the front camera 51. In the front mode M2, the traveling direction of the vehicle 9 is the forward direction. At the time, the user can notice the object forward that exists in the traveling direction of the vehicle 9 by looking at the forward image SP1. The forward image SP1 includes a mask area Ma that partially hides the image in front of the vehicle 9.
The composite image CP1 shows a peripheral area A1 as the periphery of the vehicle 9 and a backward area A2 as the area behind the vehicle 9. The peripheral area A1 is in a rectangle shape that has prescribed distances (e.g. 2 meters) respectively from the front surface, the rear surface, the left-side surface and the right-side surface of the vehicle 9 that is centered in the rectangle shape. The composite image CP1 includes the peripheral area A1 that centers the image of the vehicle 9 and is viewed from a virtual viewpoint that looks down at the vehicle 9. The backward area A2 shows the area behind the vehicle 9, which also includes the outside of the peripheral area A1.
The composite image CP1 shows the peripheral area A1 and the backward area A2 in a seamless form. That is, the composite image CP1 has no boundary line for separating the peripheral area A1 and the backward area A2, and shows the seamless image of the object spreading in the peripheral area A1 and the backward area A2 when the object exists in the location corresponding to the both areas of the peripheral area A1 and the backward area A2.
The user can notice the object backward that exists in the direction opposite to the traveling direction of the vehicle 9 by looking at the composite image CP1. Since the composite image CP1 shows the peripheral area A1 and the backward area A2 seamlessly, the user can immediately understand the location of the object that exists in the area behind the vehicle 9.
The peripheral area A1 and the backward area A2 may be separated as different images and displayed side by side. However, on the separate images, it is difficult for the user to understand the positional relation between the peripheral area A1 and the backward area A2. This makes it difficult for the user to immediately understand the position of the object that exists in the backward area A2, if any. Also, while an object moves from the backward area A2 to the peripheral area A1, the image of the object moves from the image of the backward area A2 to the image of the peripheral area A1. This may cause the user to lose sight of the image of the object.
On the other hand, since the composite image CP1 of the embodiment shows, in a seamless manner, the backward area A2 and the peripheral area A1 that includes the position of the vehicle 9, the user can immediately understand the position, relative to the vehicle 9, of the object that exists in the area behind the vehicle 9. Further, even while the image of the object moves from the backward area A2 to the peripheral area A1, the image of the object moves within the composite image CP1. Thus, the user does not lose sight of the image of the object.
The user can check the state of the area in front of the vehicle 9 and the state of the area behind the vehicle 9 as well, by looking at the display image DP1 shown in
That is, on the projection surface TS, the part PP1, which centers the vehicle 9 and has a margin of a prescribed distance H from each side of the vehicle 9, is formed as a flat surface. The data of the peripheral area A1 is projected onto the flat part PP1. On the other hand, on the projection surface TS, the part PP2, which includes the outside of the part PP1 and is in the backward direction of the vehicle 9, is formed as a curved surface. The data of the backward area A2 is projected onto the curved part PP2. The data of the shape of the projection surface TS is included in the projection surface data 4b stored in the nonvolatile memory 40.
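The flat-near, curved-far shape of the projection surface TS can be sketched as a height profile over ground distance from the vehicle. The quadratic rise and the constants below are assumptions for illustration; the actual shape is given by the projection surface data 4b and is not specified here.

```python
# Hypothetical sketch of the projection surface TS: flat within the
# prescribed distance H of the vehicle 9 (part PP1), curving upward beyond
# it (parts PP2/PP3). The quadratic curve and constants are assumed values.

def surface_height(d, H=2.0, k=0.1):
    """Height of the projection surface at ground distance d from the
    vehicle body; 0 on the flat part, rising quadratically beyond H."""
    if d <= H:
        return 0.0               # flat part PP1: low-distortion near area
    return k * (d - H) ** 2      # curved part: limits far-object distortion
```

Lifting the surface far from the vehicle keeps distant objects from being projected ever larger onto the ground plane, which is the distortion-reduction effect described below.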
The virtual viewpoint VP is set posterior to a center line CL that is set at the center between the front end line and the rear end line of the vehicle 9. The data of the point of the virtual viewpoint VP is included in the viewpoint data 4a stored in the nonvolatile memory 40. The composite image CP1 shows the area which is viewed from the virtual viewpoint VP in a prescribed angle α on the projection surface TS.
As above, the virtual viewpoint VP is set posterior to the center line CL that is set at the center between the front end line and the rear end line of the vehicle 9. Thus, on the composite image CP1 shown in
If a composite image were generated based on a projection surface TS including only the flat surface, the farther from the location of the vehicle 9 an object existed, the bigger the image of the object projected onto the projection surface TS would become. Therefore, while the image of an object existing near the vehicle 9 has less distortion, the image of an object existing away from the vehicle 9 has larger distortion. Moreover, the composite image would include only the rather narrow area in the periphery of the vehicle 9.
On the other hand, in the embodiment, the data of the backward area A2 away from the vehicle 9 is projected onto the curved part PP2 of the projection surface TS. This reduces the distortion of the image of the object that exists away from the vehicle 9, and generates the composite image CP1 that includes the rather wide area in the periphery of the vehicle 9. Further, the data of the peripheral area A1 near the vehicle 9 is projected onto the flat part PP1 of the projection surface TS. This provides effective use of the image that has less distortion, and allows a user to pinpoint the precise location of the object existing near the vehicle 9.
The backward image SP2 that shows the area behind the vehicle 9 is processed based on the shot image acquired by the rear camera 52. In the back mode M3, the traveling direction of the vehicle 9 is the backward direction. At the time, the user can notice the object backward that exists in the traveling direction of the vehicle 9 by looking at the backward image SP2.
The composite image CP2 shows the peripheral area A1 as the peripheral area of the vehicle 9 and a forward area A3 as the area in front of the vehicle 9. The composite image CP2 also includes the peripheral area A1 that centers the image of the vehicle 9 and is viewed from a virtual viewpoint that looks down at the vehicle 9. The forward area A3 shows the area in front of the vehicle 9, which includes the outside of the peripheral area A1.
The composite image CP2 shows the peripheral area A1 and the forward area A3 in a seamless manner. That is, the composite image CP2 has no boundary line for separating the peripheral area A1 and the forward area A3, and shows the seamless image of the object spreading in the peripheral area A1 and the forward area A3 when the object exists at the location corresponding to the both areas of the peripheral area A1 and the forward area A3.
The user can notice the object forward that exists in the direction opposite to the traveling direction of the vehicle 9 by looking at the composite image CP2. Since the composite image CP2 shows the peripheral area A1 and the forward area A3 in a seamless manner, the user can immediately understand the position, relative to the vehicle 9, of the object that exists in the area in front of the vehicle 9. While an object moves from the forward area A3 to the peripheral area A1, the image of the object moves within the composite image CP2. Thus, the user does not lose sight of the image of the object.
The user can check the state of the area behind the vehicle 9 and the state of the area in front of the vehicle 9 as well, by looking at the display image DP2 shown in
That is, on the projection surface TS, the part PP1, which centers the vehicle 9 and has a margin of the prescribed distance H from each side of the vehicle 9, is formed as a flat surface. The data of the peripheral area A1 is projected onto the flat part PP1. On the other hand, on the projection surface TS, the part PP3, which includes the outside of the part PP1 and is in the forward direction of the vehicle 9, is formed as a curved surface. The data of the forward area A3 is projected onto the curved part PP3. The data of the shape of the projection surface TS is included in the projection surface data 4b stored in the nonvolatile memory 40.
The virtual viewpoint VP is set anterior to the center line CL that is set at the center between the front end line and the rear end line of the vehicle 9. The data of the point of the virtual viewpoint VP is included in the viewpoint data 4a stored in the nonvolatile memory 40. The composite image CP2 shows the area which is viewed from the virtual viewpoint VP in the prescribed angle α on the projection surface TS.
As above, the virtual viewpoint VP is set anterior to the center line CL that is set at the center between the front end line and the rear end line of the vehicle 9. Thus, on the composite image CP2 shown in
The data of the forward area A3 away from the vehicle 9 is projected onto the curved part PP3 of the projection surface TS. This reduces the distortion of the image of the object that exists away from the vehicle 9, and generates the composite image CP2 that includes the rather wide peripheral area of the vehicle 9. Further, the data of the peripheral area A1 near the vehicle 9 is projected onto the flat part PP1 of the projection surface TS, as well. This provides effective use of the image that has less distortion, and allows the user to pinpoint the precise location of the object existing near the vehicle 9.
Next, a processing flow on the image display system 120 is described.
First, the shift determining part 12 determines the current shift lever position based on the signal transmitted from the shift position sensor 71. That is, the shift determining part 12 substantially determines the current traveling direction of the vehicle 9 (step S11). Hereafter, the processing methods vary in accordance with the traveling direction of the vehicle 9.
When the traveling direction of the vehicle 9 is the forward direction (that is, in the front mode M2) (Yes at the step S12), first, the image acquisition part 41 acquires four shot images, one from each of the four on-vehicle cameras 51, 52, 53 and 54 (step S13).
Next, the composite image generator 31 generates, using the shot images, the composite image CP1 that shows the peripheral area A1 and the backward area A2 in a seamless manner by the method described based on
The display image DP1 generated as described above is transmitted from the navigation communicator 42 to the navigation apparatus 20. Thus, the display image DP1 is displayed on the display 21 of the navigation apparatus 20 (step S19).
When the traveling direction of the vehicle 9 is the backward direction (that is, in the back mode M3) (No at the step S12), first, the image acquisition part 41 acquires four shot images, one from each of the four on-vehicle cameras 51, 52, 53 and 54 (step S16).
Next, the composite image generator 31 generates, using the shot images, the composite image CP2 that shows the peripheral area A1 and the forward area A3 in a seamless manner by the method described based on
The display image DP2 generated as described above is transmitted from the navigation communicator 42 to the navigation apparatus 20. Thus, the display image DP2 is displayed on the display 21 of the navigation apparatus 20 (step S19).
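The branch in steps S11 through S19 can be summarized as two small decision functions. This is a sketch only: the function names and the return strings are hypothetical stand-ins for the parts and images described in the text.

```python
def traveling_direction(shift_position):
    """Step S11: the shift lever at "R" means the backward direction;
    any other position means the forward direction."""
    return "backward" if shift_position == "R" else "forward"


def composite_image_for(direction):
    """The composite image shows the peripheral area A1 plus the area
    opposite to the traveling direction: the backward area A2 in the
    front mode M2 (CP1), or the forward area A3 in the back mode M3 (CP2)."""
    if direction == "forward":
        return ("CP1", "backward area A2")
    return ("CP2", "forward area A3")
```

The key point the sketch captures is that the area composited alongside the peripheral area A1 is always the one opposite to the direction in which the vehicle travels.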
As described above, on the image display system 120 of the first embodiment, the shift determining part 12 determines the traveling direction of the vehicle 9. Then, the composite image generator 31 generates the composite image that shows, in a seamless manner, the peripheral area A1 and the area (the backward area A2 or the forward area A3) that lies outside the peripheral area A1 in the direction opposite to the traveling direction. The composite image shows the peripheral area A1 of the vehicle 9 viewed from a virtual viewpoint that is set looking down at the vehicle 9. The display 21 displays the display image including the composite image. Since the display image includes the composite image that shows the area in the direction opposite to the traveling direction of the vehicle 9, the user can notice an object that exists in that direction by looking at the display image. Besides, since the peripheral area A1 and the area in the direction opposite to the traveling direction of the vehicle 9 are displayed in a seamless manner, the user can immediately pinpoint the location of the object that exists in the direction opposite to the traveling direction of the vehicle 9.
Next, the second embodiment is described. The operation and the processing on the image display system of the second embodiment are almost the same as those of the first embodiment. Thus, mainly the points different from the first embodiment are described. In the first embodiment, when the traveling direction of the vehicle 9 is the backward direction (that is, in the back mode M3), the display image including the backward image is displayed, but the display image including the forward image is not.
On the other hand, in the second embodiment, when the traveling direction of a vehicle 9 is the backward direction, a user can switch, by an operation, between a display image that includes the backward image and another display image that includes the forward image. Concretely, when the traveling direction of the vehicle 9 is the backward direction, an image generator 3 selectively generates one of the display image including the backward image and the display image including the forward image in accordance with the user's operation. Then, the generated display image is displayed on a display 21. In the second embodiment, this allows the user to check the area in front of the vehicle 9 in the display image if needed, even when the traveling direction of the vehicle 9 is the backward direction.
In the second embodiment, the back mode M3 includes two sub-modes. One of the sub-modes is a backward display mode M31 that mainly provides the display image of the area behind the vehicle 9, and the other is a forward display mode M32 that mainly provides the display image of the area in front of the vehicle 9. Immediately after the operation mode is switched to the back mode M3, the sub-mode is the backward display mode M31. Then, every time the user presses a switch 44, the sub-mode is switched between the backward display mode M31 and the forward display mode M32 (arrow T6). That is, every time the user presses the switch 44, the image generator 3 generates display images in different forms, and then the generated display image is displayed on the display 21.
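The sub-mode toggling described above can be sketched as a tiny state holder. The class and method names below are illustrative, not taken from the description.

```python
class BackModeState:
    """Back mode M3 of the second embodiment with its two sub-modes."""

    def __init__(self):
        # Immediately after the operation mode is switched to the back
        # mode M3, the sub-mode is the backward display mode M31.
        self.sub_mode = "M31"

    def press_switch_44(self):
        # Every press of the switch 44 toggles between the backward display
        # mode M31 and the forward display mode M32 (arrow T6).
        self.sub_mode = "M32" if self.sub_mode == "M31" else "M31"
```

Starting in M31 each press of the switch alternates the sub-mode, so the display image alternates between the backward-showing and forward-showing forms.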
In the backward display mode M31, the display image including the backward image that shows the area behind the vehicle 9 is generated and displayed on the display 21. On the other hand, in the forward display mode M32, the display image including the forward image that shows the area in front of the vehicle 9 is generated and displayed on the display 21. The form of the display image in the backward display mode M31 is identical to that of the display image DP2 (refer to
As seen in the comparison between
If the display image in the forward display mode M32 were generated in the same form as the display image in the front mode M2, the user, when looking at the display image in the forward display mode M32, might take the current operation mode on the image display system 120 to be the front mode M2. In other words, the user might mistake the forward direction for the current traveling direction of the vehicle 9. On the other hand, in this embodiment, the display image DP1 in the front mode M2 and the display image DP3 in the forward display mode M32 are in different forms, which prevents the user from mistaking the traveling direction of the vehicle 9.
In the front mode M2, the traveling direction of the vehicle 9 is the forward direction. Thus, it is preferable for the user to see the area ahead with his or her own eyes if the area can be seen directly. Therefore, the forward image SP1 in the front mode M2 includes the mask area Ma that partially hides the area the user can check directly within the area in front of the vehicle 9, and mainly shows the area that the user can hardly check directly (the blind area for the user).
On the other hand, the forward display mode M32 provides a display image so that the user can check the area in front of the vehicle 9 when the traveling direction of the vehicle 9 is the backward direction. Thus, it is desirable to display a wider area in front of the vehicle 9 for the driver to check. Therefore, the display image DP3 in the forward display mode M32 shows the wide area in front of the vehicle 9 without any mask area.
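The difference between the two forward-showing forms can be sketched as follows. The dictionary keys and the raised exception are illustrative choices, not from the description.

```python
def forward_image_form(operation_mode):
    """Which forward-showing image each mode produces (a sketch)."""
    if operation_mode == "M2":
        # Front mode M2: the forward image SP1 includes the mask area Ma
        # that partially hides the area the user can check directly.
        return {"image": "SP1", "mask_area": "Ma"}
    if operation_mode == "M32":
        # Forward display mode M32: the display image DP3 shows the wide
        # area in front of the vehicle without any mask area.
        return {"image": "DP3", "mask_area": None}
    raise ValueError("this mode does not provide a forward image")
```

Because the two forms differ (one masked, one not), a glance at the display is enough to tell the front mode M2 apart from the forward display mode M32.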
As shown in the example of
As shown in the example of
As shown in
As described above, on the image display system 120 of the second embodiment, the display image DP3, which shows the area in front of the vehicle 9 when the traveling direction of the vehicle 9 is the backward direction, is displayed in a form different from that of the display image DP1, which also shows the area in front of the vehicle 9 when the traveling direction of the vehicle 9 is the forward direction. This prevents the user from mistaking the traveling direction of the vehicle 9.
Next, the third embodiment is described. The operation and the processing on the image display system of the third embodiment are almost the same as those of the first embodiment. Thus, mainly the points different from the first embodiment are described.
The conventional image display systems do not consider the case where a side-mirror is retracted. The side-mirror of a vehicle is commonly retracted at parking, and also when the side surface of the vehicle is close to an object, such as when the vehicle passes close by another vehicle or runs on a rather narrow street. Thus, when the side-mirror is retracted while the vehicle is ready for running, it is assumed that a vehicle 9 will get close to an object. Therefore, when a side-mirror 93 is retracted, an image display system 120 of the third embodiment displays a display image that is useful when the vehicle 9 runs with its side surface close to an object.
In the third embodiment, when the side-mirror 93 is retracted under the situation where the operation mode is the navigation mode M1, the operation mode is switched to the close-passing mode M4 (arrow T7). When the side-mirror 93 is opened under the situation where the operation mode is the close-passing mode M4, the operation mode is switched back to the navigation mode M1 (arrow T8). Moreover, when the side-mirror 93 is retracted under the situation where the operation mode is the front mode M2 or the back mode M3, the operation mode may be switched to the close-passing mode M4.
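The mode transitions driven by the state of the side-mirror 93 can be sketched as a single transition function. The mode identifiers M1 through M4 follow the description; the function itself is an illustrative sketch.

```python
def next_operation_mode(current_mode, mirror_retracted):
    """One transition step of the third embodiment, driven by whether
    the side-mirror 93 is retracted."""
    if mirror_retracted and current_mode == "M1":
        return "M4"   # arrow T7: navigation mode -> close-passing mode
    if not mirror_retracted and current_mode == "M4":
        return "M1"   # arrow T8: close-passing mode -> navigation mode
    if mirror_retracted and current_mode in ("M2", "M3"):
        return "M4"   # optionally, front/back mode may also switch to M4
    return current_mode
```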
The close-passing mode M4 provides a display image that is useful when the vehicle 9 runs with its side surface close to an object. When the side surface of the vehicle 9 is close to an object, the user needs to check the clearance between the side surface of the vehicle 9 and the object. However, it is difficult for the user to directly check the side areas of the vehicle 9, especially the side area on the opposite side to the driver seat. Therefore, the close-passing mode M4 provides the display image that shows the side areas of the vehicle 9.
In the close-passing mode M4, a display image generator 32 of an image generator 3 generates a display image that shows the side areas, each of which also covers the area in the traveling direction of the vehicle 9, based on the shot images acquired by a left-side camera 53 and a right-side camera 54. Then, the generated display image is displayed on a display 21. Therefore, in the close-passing mode M4, the display images are displayed in different forms in accordance with the traveling direction of the vehicle 9.
The left-side image FP1 shows the left-side area of the vehicle 9 which covers the area in front of the vehicle 9 in the traveling direction. Concretely, the left-side image FP1 shows the area anterior to the side-mirror 93 that is mounted on the left side of the vehicle 9. The left-side image FP1 also includes an image of a body near a left-front tire of the vehicle 9.
On the other hand, the right-side image FP2 shows the right-side area of the vehicle 9 which covers the area in front of the vehicle 9 in the traveling direction. Concretely, the right-side image FP2 shows the area anterior to the side-mirror 93 that is mounted on the right side of the vehicle 9. The right-side image FP2 also includes an image of a body near a right-front tire of the vehicle 9.
In the display image DP4 shown in
The left-side image RP1 shows the left-side area of the vehicle 9 which covers the area behind the vehicle 9 in the traveling direction. Concretely, the left-side image RP1 shows the area posterior to the side-mirror 93 that is mounted on the left side of the vehicle 9. The left-side image RP1 also includes an image of a body near a left-rear tire of the vehicle 9.
On the other hand, the right-side image RP2 shows the right-side area of the vehicle 9 which covers the area behind the vehicle 9 in the traveling direction. Concretely, the right-side image RP2 shows the area posterior to the side-mirror 93 that is mounted on the right side of the vehicle 9. The right-side image RP2 also includes an image of a body near a right-rear tire of the vehicle 9.
In the display image DP5 shown in
First, the mirror detector 13 detects the state (retracted or opened) of the side-mirror 93 based on the signal transmitted from the mirror driver 72. When the side-mirror 93 is opened (No at a step S21), the processing in the close-passing mode M4 ends.
When the side-mirror 93 is retracted (Yes at the step S21), next, the shift determining part 12 determines the current shift lever position based on the signal transmitted from a shift position sensor 71. That is, the shift determining part 12 substantially determines the current traveling direction of the vehicle 9 (step S22).
When the traveling direction of the vehicle 9 is the forward direction (that is, when the shift lever is set at a position except “R”) (Yes at a step S23), first, an image acquisition part 41 acquires two shot images, one from each of the left-side camera 53 and the right-side camera 54 (step S24).
Next, the display image generator 32 generates the display image DP4 that shows the side areas, each of which covers the area in front of the vehicle 9 by use of the shot images through the method described based on
When the traveling direction of the vehicle 9 is the backward direction (that is, when the shift lever is set at “R”) (No at the step S23), first, the image acquisition part 41 acquires two shot images, one from each of the left-side camera 53 and the right-side camera 54 (step S26).
Next, the display image generator 32 generates the display image DP5 that shows the side areas, each of which covers the area behind the vehicle 9 by use of the shot images through the method described based on
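The close-passing flow of steps S21 through S27 reduces to the following decision sketch, which returns only the name of the display image to be generated. The function name and return values are illustrative.

```python
def close_passing_display_image(mirror_retracted, shift_position):
    """Decide which display image the close-passing mode M4 produces."""
    if not mirror_retracted:
        return None    # step S21: the side-mirror is opened, processing ends
    if shift_position == "R":
        return "DP5"   # backward direction: side areas covering the area behind
    return "DP4"       # forward direction: side areas covering the area in front
```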
As described above, when the side-mirror 93 is retracted, the image display system 120 of the third embodiment generates the display image that shows the side areas, each of which covers the area in the traveling direction of the vehicle 9, and displays the generated display image on the display 21. This allows the user to easily check the side areas of the vehicle 9, which are normally difficult for the user to check, in the case where the side surface of the vehicle 9 is close to an object.
So far, the embodiments of the invention have been described. However, the invention is not limited to the embodiments described above, and includes various modifications. Hereafter, these modifications are described. Any of the embodiments described above and below can be arbitrarily combined with the others.
In the first embodiment, the peripheral area A1 shown in the composite image CP1 in the front mode M2 and the peripheral area A1 shown in the composite image CP2 in the back mode M3 are identical. However, they do not necessarily have to be identical.
In the third embodiment, the display image shows both the left-side area and the right-side area of the vehicle 9. However, the display image may show only one of the left-side area and the right-side area of the vehicle 9. In this case, it is preferable for the display image to show the side area on the opposite side to the driver seat, because it is difficult for the user to check that side area directly.
In the above descriptions of the embodiments, the main body 10 of the image generation apparatus 100 and the navigation apparatus 20 are set up as individual apparatuses. However, the main body 10 and the navigation apparatus 20 may be integrated into one apparatus.
In the above descriptions of the embodiments, the navigation apparatus 20 displays the image generated by the image generation apparatus 100. However, the generated image may be displayed on a general display apparatus that has no special function such as the navigation function.
In the above descriptions of the embodiments, a part of the functions implemented by the controller 1 of the image generation apparatus 100 may be implemented by the controller 23 of the navigation apparatus 20.
Moreover, the signal transmitted from the shift position sensor 71 or the mirror driver 72 may be received by the navigation apparatus 20. In this case, the signal can be transmitted to the controller 1 of the image generation apparatus 100 via the navigation communicator 42.
In the above descriptions of the embodiments, various functions are implemented by software, specifically by CPU processing based on programs. However, some of these functions may be implemented by electrical hardware circuits. Contrarily, some of the functions implemented by hardware circuits in the above descriptions may be implemented by software.
While the invention has been shown and described in detail, the foregoing description is in all aspects illustrative and not restrictive. It is therefore understood that numerous other modifications and variations can be devised without departing from the scope of the invention.
Number | Date | Country | Kind |
---|---|---|---|
2011-127928 | Jun 2011 | JP | national |