BLIND SPOT VISUALIZATION SYSTEM AND METHOD

Abstract
A blind spot visualization system and method for eliminating blind spots for an operator of a vehicle. The blind spots are caused by obstructions in the vehicle. A first image based on a first frequency range and a second image based on a second frequency range are generated. The images of the first frequency range and the images of the second frequency range are combined to create a composite image. Displays on the obstructions, facing the operator, receive the composite image. The displays display the images to the operator so that the blind spots caused by the obstructions in the vehicle are eliminated.
Description
BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGS

In the drawings, which form a part of this specification,



FIG. 1A is a right-side perspective view of one embodiment;



FIG. 1B is a left-side perspective view of the embodiment shown in FIG. 1A;



FIG. 2 is a close-up of the left-side perspective view of the embodiment shown in FIG. 1B;



FIG. 3A is an external, partial front view of the embodiment shown in FIG. 1B and FIG. 2;



FIG. 3B is an internal, partial view of the embodiment in FIG. 1B or FIG. 2;



FIG. 4 is an exploded view of the embodiment in a vehicle pillar;



FIG. 5 is an inside view of the front vehicle pillars showing an embodiment of the system when it is off or in automatic mode;



FIG. 6A is an inside view of the embodiment of the system with a driver turning the steering wheel to the left in automatic mode with the left screen turning on;



FIG. 6B is an inside front passenger view of the embodiment of the system with a driver turning the steering wheel to the left in automatic mode with the left screen turning on;



FIG. 7A is an inside view of the embodiment of the system with a driver turning the steering wheel to the right in automatic mode with the right screen turning on;



FIG. 7B is an inside front driver view of the embodiment of the system with a driver turning the steering wheel to the right in automatic mode with the right screen turning on;



FIG. 8 is an inside view of the embodiment of the system with the two front pillar screens activated in manual mode;



FIG. 9 is a schematic of an embodiment's components;



FIG. 10 is a flowchart of how the controller handles manual push-button input;



FIG. 11 is a flowchart of how the controller handles automatic initiation; and



FIG. 12 is a flowchart of how the processors combine images from two different sensors.







DETAILED DESCRIPTION OF SEVERAL EMBODIMENTS

The blind spot visualization system and method eliminates blind spots created by the design of a vehicle while an operator is operating the vehicle. A typical vehicle has pillars that support the roof and provide additional rollover safety to an operator. These support pillars create blind spots in the operator's field of vision, which puts the operator, and others, at risk. This embodiment places sensors on one side of the pillar and a display on the other side, so that the pillar appears not to block the operator's view.


An embodiment of the blind spot visualization system and method may use sensors, displays, and processors to eliminate blind spots created by support pillars in a vehicle. One or more sensors may be placed on the support pillars opposite the operator. At least one of the sensors generates images based on the visible frequency range. Additional sensors may be added to generate images based on the infrared frequency range, or on any other frequency range outside the visible frequency range. One or more displays may be placed on the support pillars facing the operator, and they may correspond to the locations of the sensors so as to display the images generated by the sensors to the operator. Images from only one sensor may be displayed, or images from more than one sensor may be combined to provide the operator with a composite, enhanced view of what is behind the support pillar. The operator may manually activate the displays, or the displays may activate automatically when the vehicle turns to the left or right, or when any other type of sensor detects something on the other side of the support pillar. One possible arrangement of these components is sketched below.
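By way of illustration only, the following is a minimal Python sketch of one way the sensors, displays, and processors described above could be organized; every class, method, and parameter name here is hypothetical and not prescribed by this disclosure.

```python
# Hypothetical sketch of one per-pillar unit: sensors on the outside
# of the pillar, a display facing the operator on the inside.
from dataclasses import dataclass
from typing import Any, Callable, List

@dataclass
class PillarUnit:
    sensors: List[Callable[[], Any]]   # each returns one image frame
    display: Callable[[Any], None]     # renders a frame to the operator

    def refresh(self, fuse: Callable[[List[Any]], Any]) -> None:
        frames = [read() for read in self.sensors]
        # A single sensor's image is shown directly; images from
        # multiple sensors are combined into one composite frame.
        image = frames[0] if len(frames) == 1 else fuse(frames)
        self.display(image)
```

Consistent with the description above, a single-sensor pillar passes its image straight through, while a multi-sensor pillar routes its frames through a fusion function before display.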



FIG. 1A is a right-side perspective view of one embodiment. The embodiment 101 may be associated with a common automobile form 102 with common components, such as a body 103, pillars 104, 105, and 106, and a roof 107. Pillars 104, 105, and 106 may support the roof 107 of the automobile. The same may be true on the opposite side of a common automobile form 102. FIG. 1B shows a left-side perspective view of the embodiment shown in FIG. 1A. In this view, pillars 111, 112, and 113 support the roof 107.


For each pillar 104, 105, 106, 111, 112, and 113, shown in FIGS. 1A, 1B, 2, 3A, 3B, and 4, for which the blind spot is to be eliminated, one or more sensors may be associated with that pillar. FIG. 1A shows sensor 108 on the front right pillar 104 and a different sensor 109 on the front right exterior pillar corner trim 110. FIG. 1A shows two different sensors 108 and 109, but there may be multiple sensors on pillar 104, and on pillars 105 and 106. FIG. 1B shows sensor 114 on the front left pillar 111 and another sensor 115 on the front left exterior pillar corner trim 116. FIG. 1B shows two different sensors 114 and 115, but there may be multiple sensors on pillar 111, and on pillars 112 and 113. FIG. 3A shows an enlarged view of the left front pillar 111 with sensor 114 and the front left exterior pillar corner trim 116 with sensor 115 from FIG. 2. Furthermore, FIG. 4 is an exploded view of the front left pillar 111, showing the front left exterior pillar trim 122 with holes 124 and 125 that may be added to hold sensors 114 and 115, as illustrated in FIGS. 1B, 2, 3A, and 4.


As illustrated in FIG. 1A, sensor 108 senses in the visible frequency range and sensor 109 senses in the infrared frequency range on the right side of the embodiment 101. FIGS. 1B, 2, and 3A illustrate sensor 114, which senses in the visible frequency range, and sensor 115, which senses in the infrared frequency range, on the left side of the embodiment 101. Sensors 114 and 115 may be mounted onto, or built into, the left exterior pillar trim 122 through holes 124 and 125, as illustrated in FIG. 4. The same may be true for the right exterior pillar trim 110 for sensors 108 and 109.


As illustrated in FIGS. 2, 5, 6A, 7A, 7B, and 8, the front right-side interior pillar trim 117 may have a right-side display 118, which may be integrated, mounted, placed onto, or built into, the right-side interior pillar trim 117. Similarly, as illustrated in FIGS. 3B, 4, 5, 6A, 6B, 7A, and 8, the front left-side interior pillar trim 121 may have a left-side display 120, which may be integrated, mounted, placed, or built into the front left-side interior pillar trim 121.



FIG. 5 illustrates an inside view of the displays 120 and 118 when they are not displaying. When displays 120 and 118 are not displaying, the interior pillar trims 121 and 117 look just like those of any other pillar in a vehicle. FIG. 6A demonstrates the automatic mode of the embodiment. When the operator of the vehicle is turning toward the left, display 120 begins displaying, while display 118 does not display because it is not in the direction in which the vehicle is turning. FIG. 6B shows the inside front passenger view of the system activating in automatic mode for a driver when the vehicle is turning toward the left, which eliminates the obstruction by pillar trim 121 and allows the operator of the vehicle to view what is behind the pillar trim 121. FIG. 7A illustrates automatic displaying when the operator of the vehicle is turning toward the right. Display 118 begins displaying, while display 120 does not display because it is not in the direction in which the vehicle is turning. FIGS. 7A and 7B illustrate the inside view of the system activating in automatic mode for an operator when the vehicle is turning toward the right, which eliminates the obstruction by pillar trim 117 and allows the operator of the vehicle to view what is behind the pillar trim 117.


Alternatively, if the operator of the vehicle desires, the operator can manually initiate displaying by displays 120 and 118. FIGS. 5 and 8 show the left and right control clusters 123. Manual mode is controlled by pressing either, or both, of the left and right buttons in the control clusters 123. When both control buttons in the control clusters 123 are pressed, displays 120 and 118 start displaying, and what is behind both pillar trims 121 and 117 can be seen, as illustrated in FIG. 8. When neither control button in the control clusters 123 is pressed, the displays 120 and 118 do not display. Control clusters 123 may also be employed to actuate the automatic mode.


As illustrated in FIG. 9, sensor 114, which generates images based on the visible frequency range, and sensor 115, which generates images based on the infrared frequency range, are connected to a left-side processor 212. Sensor 108, which generates images based on the visible frequency range, and sensor 109, which generates images based on the infrared frequency range, are connected to a right-side processor 213. Each processor 212 and 213 may include algorithms to process images based on different frequency ranges. Additionally, each processor 212 and 213 includes a fusion algorithm 208 or 209 for composite imaging. The fusion algorithms 208 and 209 are illustrated in FIG. 12.


Each processor 212 and 213 receives an image corresponding to the infrared frequency range from its respective infrared sensor 115 or 109, as illustrated in FIG. 9. Each processor 212 and 213 also receives an image corresponding to the visible frequency range from its respective visible sensor 114 or 108. As illustrated in FIG. 12, the processors 212 and 213 decompose the infrared frequency range images 1201 received from the infrared sensors 115 and 109. The processors 212 and 213 also decompose the visible frequency range images 1202 received from the visible sensors 114 and 108. The decomposition of the infrared frequency range images 1201 and of the visible frequency range images 1202 may include disassembling each image into individual pixels or groups thereof.

Processors 212 and 213 then apply the fusion algorithm 1203 to the decomposed visible frequency range images 1202 and the decomposed infrared frequency range images 1201. The fusion algorithm application 1203 may include interspersing pixels of the decomposed visible frequency range images 1202 with pixels of the decomposed infrared frequency range images 1201. For example, this may involve repeatedly alternating one or more pixels, or a 3×3 or 5×5 block of pixels, of the decomposed visible frequency range images 1202 with one or more pixels, or a 3×3 or 5×5 block of pixels, of the decomposed infrared frequency range images 1201. Alternatively, blocks of pixels of any size may be used. During the fusion algorithm application 1203, a decomposed image of the visible frequency range 1202 and a decomposed image of the infrared frequency range 1201 become a single decomposed image 1204. Once the fusion algorithm application 1203 is completed, a fused image 1205 is output to the respective left- or right-side display 120 or 118, as illustrated in FIG. 9. The left- and right-side processors 212 and 213 are coupled to the individual displays 120 and 118 and output images to them through instructions given by the controller 202.

The blind spot visualization system and method may also employ only a single sensor. When a single sensor is employed rather than two or more sensors, the fusion algorithms 208 and 209 that the processors 212 and 213 execute are not necessary; the image generated is sent directly to the displays 120 and 118 by the left- or right-side processors 212 and 213. The blind spot visualization system and method may also employ more than two sensors. A third sensor may be responsive to a different frequency range than the described sensors 114 and 115, 108 and 109. The fusion algorithm application 1203 that the processors 212 and 213 execute, as illustrated in FIG. 12, would be the same when more than two sensors responsive to different frequency ranges are employed.
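By way of illustration only, the following is a minimal Python sketch of the block-interspersing fusion described above (steps 1201 through 1205 of FIG. 12), assuming the two decomposed images are NumPy arrays of equal shape; the function name, the checkerboard layout, and the default block size are illustrative assumptions rather than requirements of this disclosure.

```python
import numpy as np

def fuse_images(visible: np.ndarray, infrared: np.ndarray,
                block: int = 3) -> np.ndarray:
    """Intersperse block-by-block pixel tiles of two equally sized
    decomposed images (e.g. 3x3 or 5x5 blocks) into one composite."""
    if visible.shape != infrared.shape:
        raise ValueError("decomposed images must have the same shape")
    h, w = visible.shape[:2]
    # Tile index of each pixel along each axis.
    rows = (np.arange(h) // block)[:, None]   # shape (h, 1)
    cols = (np.arange(w) // block)[None, :]   # shape (1, w)
    # Tiles whose row+column tile index is even come from the visible
    # image; tiles with an odd index come from the infrared image.
    mask = (rows + cols) % 2 == 0
    if visible.ndim == 3:                     # broadcast over channels
        mask = mask[..., None]
    return np.where(mask, visible, infrared)
```

For example, `fuse_images(vis_frame, ir_frame, block=5)` would repeatedly alternate 5×5 blocks of the two images; as noted above, blocks of any size may be used.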



FIG. 10 illustrates the operation of the manual mode. When a button is pressed on the control clusters 123, display 120 or display 118 may individually begin displaying, or both may begin displaying, so that the operator may, at any time, view what is opposite an obstruction. The controller 202 may receive a signal through the control clusters 123, as illustrated in FIGS. 5, 8, and 9. When that signal is received by the controller 202, the controller 202 runs through the process illustrated in FIG. 10. When a button is pushed on the control cluster 123, the control cluster 123 outputs a signal to the controller 202, and the controller 202 then determines whether the displays 118 and 120 are on 1001. If the displays 118 and 120 are on, they are turned off 1002. If the displays 118 and 120 are off, the controller 202 retrieves the command for the selection 1003 of either the left-side display 120, the right-side display 118, or both displays 118 and 120. The controller 202, through the left- and right-side processors 212 and 213, then directs the corresponding displays 118 and 120 and the corresponding sensors 108, 109, 114, and 115 to turn on 1004. The process then ends 1005.
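By way of illustration only, the following minimal Python sketch traces the manual-mode flow of FIG. 10; the controller interface and the selection keywords are hypothetical.

```python
def on_button_press(controller, selection: str) -> None:
    """Handle a push-button signal from the control cluster 123
    (steps 1001 through 1005 of FIG. 10)."""
    if controller.displays_on():           # 1001: displays already on?
        controller.turn_displays_off()     # 1002: toggle them off
        return                             # 1005: end
    # 1003: retrieve the command for the selection
    sides = {"left": ("left",), "right": ("right",),
             "both": ("left", "right")}[selection]
    for side in sides:                     # 1004: turn on the
        controller.turn_sensors_on(side)   # corresponding sensors
        controller.turn_display_on(side)   # and display
    # 1005: end
```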


Alternatively, when the controller 202 receives automatic mode instructions from the control cluster 123, the controller 202 monitors a direction signal from the steering angle sensor 203, illustrated in FIGS. 9 and 11. The steering angle sensor 203 may be located anywhere in the steering chain of a vehicle, from the steering wheel down to the wheels themselves. The steering angle sensor 203 may also be located on, or may utilize, a vehicle's tire pressure sensor. As illustrated in FIG. 11, the steering angle sensor 203 produces a sensor signal 1101, which is received by a vehicle's controller area network bus 1102, which in turn relays that sensor signal 1101 to the controller 202. The controller 202 receives the steering angle sensor signal 1103 and runs through an algorithm to determine which of the displays 118 and 120 to turn on. First, when the controller 202 receives the steering angle sensor signal 1103, the controller checks whether displays 118 and 120 are on 1104. If they are on, the algorithm ends 1105. If they are off, the controller checks whether the steering angle sensor signal 1103 is less than the left turn threshold value 1106. If the steering angle sensor signal 1103 is less than the left turn threshold value 1106, the left-side sensors 114 and 115 and the left-side display 120 are turned on 1108 and the algorithm ends 1110. If the steering angle sensor signal 1103 is not less than the left turn threshold value 1106, the controller checks whether it is more than the right turn threshold value 1107. If the steering angle sensor signal 1103 is more than the right turn threshold value 1107, the right-side sensors 108 and 109 and the right-side display 118 are turned on 1109 and the algorithm ends 1111. If the steering angle sensor signal 1103 is not more than the right turn threshold value 1107, the algorithm ends 1111.
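By way of illustration only, the following minimal Python sketch traces the automatic-mode decision of FIG. 11; the threshold values, the sign convention (negative angles taken as left turns), and the controller interface are all assumptions rather than requirements of this disclosure.

```python
LEFT_TURN_THRESHOLD = -15.0    # degrees; hypothetical value
RIGHT_TURN_THRESHOLD = 15.0    # degrees; hypothetical value

def on_steering_angle(controller, angle: float) -> None:
    """Handle a steering angle sensor signal 1103 relayed to the
    controller 202 over the controller area network bus 1102."""
    if controller.displays_on():              # 1104: displays on?
        return                                # 1105: end
    if angle < LEFT_TURN_THRESHOLD:           # 1106: left turn
        controller.turn_sensors_on("left")
        controller.turn_display_on("left")    # 1108
        return                                # 1110: end
    if angle > RIGHT_TURN_THRESHOLD:          # 1107: right turn
        controller.turn_sensors_on("right")
        controller.turn_display_on("right")   # 1109
    # 1111: end, without activating either display
```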


As illustrated in FIG. 9, the controller 202 may also receive a signal from the vehicle's blind spot sensors 206 and 207. The blind spot sensors 206 and 207 may be blind spot sensors previously installed by the original manufacturer and may direct the controller 202 to turn on sensors 108, 109, 114, and 115, and displays 120 and 118.


It is noted, for those skilled in the art, that a variety of modifications and variants can be made without departing from the principles of the present disclosure, and such modifications and variants shall be deemed to fall within the protected scope of the present disclosure.

Claims
  • 1. A method of eliminating a blind spot for an operator of a vehicle caused by an obstruction comprising: (a) generating a first image and a second image of an area on a side of the obstruction opposite the operator, the first and the second images representing different frequency ranges of radiation from the area and each comprised of a plurality of pixels; (b) combining the first and the second images by interspersing pixels of each of the first and second images to create a composite image; and (c) displaying on a display visible to the operator the composite image so the operator sees the area on the other side of the obstruction on the display.
  • 2. The method according to claim 1, wherein the first image corresponds to frequencies in a visible frequency range and the second image corresponds to frequencies in an infrared frequency range.
  • 3. The method according to claim 1, wherein the display is attached to the obstruction.
  • 4. The method according to claim 1, wherein the combining includes repeatedly alternating a pixel block from the first image with a pixel block from the second image to create the composite image.
  • 5. The method according to claim 1, further comprising: (a) generating a third image and a fourth image of another area on another side of another obstruction opposite the operator, the third and the fourth images representing different frequency ranges of radiation from the other area and each comprised of another plurality of pixels; (b) combining the third and the fourth images by interspersing pixels of each of the third and fourth images to create another composite image; and (c) displaying on another display visible to the operator the other composite image so the operator sees the other area on the other side of the obstruction opposite the operator on the other display.
  • 6. The method according to claim 5, wherein the third image corresponds to frequencies in the visible frequency range and the fourth image corresponds to frequencies in the infrared frequency range.
  • 7. The method according to claim 5, wherein the other display is attached to the other obstruction.
  • 8. The method according to claim 5, wherein the combining includes repeatedly alternating a pixel block from the third image with a pixel block from the fourth image to create the other composite image.
  • 9. A method of eliminating a blind spot for an operator of a vehicle caused by an obstruction comprising: (a) detecting a steering angle of the vehicle; (b) generating at least one image of an area on a side of the obstruction opposite the operator; and (c) displaying the at least one image on a display adjacent to the obstruction only when the detecting indicates that the vehicle is turning in a direction that corresponds to the side of the vehicle that the obstruction is on.
  • 10. The method according to claim 9, wherein the detecting includes detecting with a steering wheel angle sensor.
  • 11. The method according to claim 9, wherein the detecting includes detecting with a wheel angle sensor.
  • 12. The method according to claim 9, wherein the displaying includes placing the display over the obstruction.
  • 13. The method according to claim 9, wherein the displaying includes attaching the display to the obstruction.
  • 14. The method according to claim 9, wherein the displaying includes activating the display when the detecting indicates that the vehicle is turning in the direction that corresponds to the side of the vehicle that the obstruction is on.
  • 15. The method according to claim 9, wherein the generating includes generating a first image and a second image of the area on the side of the obstruction opposite the operator, the first and the second images representing different frequency ranges of radiation received from the area and each comprised of a plurality of pixels.
  • 16. The method according to claim 15, wherein the generating further comprises combining the first and the second images by interspersing pixels of each of the first and the second images.
  • 17. The method according to claim 9, further comprising: (a) generating at least another image of another area on another side of another obstruction opposite the operator; and (b) displaying that at least one other image on another display adjacent to the other obstruction only when the detecting indicates that the vehicle is turning in the direction that corresponds to the other side of the vehicle that the other obstruction is on.
  • 18. The method according to claim 17, wherein the displaying includes placing the other display over the other obstruction.
  • 19. The method according to claim 18, wherein the displaying includes attaching the other display to the other obstruction.
  • 20. The method according to claim 17, wherein the displaying includes activating the other display when the detecting indicates that the vehicle is turning in the direction that corresponds to the other side of the vehicle that the other obstruction is on.
  • 21. The method according to claim 17, wherein the generating includes generating a third image and a fourth image of the other area on the other side of the other obstruction opposite the operator, the third and the fourth images representing different frequency ranges of radiation received from the other area and each comprised of another plurality of pixels.
  • 22. The method according to claim 21, wherein the generating further comprises combining the third and the fourth images by interspersing pixels of each of the third and fourth images to generate that at least one other image.
  • 23. A system for eliminating a blind spot for an operator of a vehicle caused by an obstruction comprising: (a) one or more processors; (b) a first sensor and a second sensor coupled to the one or more processors, the first sensor and the second sensor being on a side of the obstruction opposite the operator and being configured to generate a first image and a second image of an area on a side of the obstruction opposite the operator, the first and the second images representing different frequency ranges of radiation received from the area and each comprised of a plurality of pixels; (c) a display coupled to the one or more processors and visible to the operator; and, wherein, the one or more processors: (i) combine the first and the second images by interspersing pixels of each of the first and second images to create a composite image; and (ii) transmit the composite image to the display so the operator sees the area on the other side of the obstruction on the display.
  • 24. The system according to claim 23, wherein the first sensor generates the first image in a visible frequency range and the second sensor generates the second image in an infrared frequency range.
  • 25. The system according to claim 23, wherein the display is attached to the obstruction.
  • 26. The system according to claim 23, wherein the one or more processors alternately intersperse a pixel block of each of the first and second images to create the composite image.
  • 27. The system according to claim 23, further comprising: (a) a third sensor and a fourth sensor coupled to the one or more processors, the third sensor and the fourth sensor being on another side of another obstruction opposite the operator and being configured to generate a third image and a fourth image of another area on the other side of the other obstruction opposite the operator, the third and the fourth images representing different frequency ranges of radiation received from the other area and each comprised of another plurality of pixels; (b) another display coupled to the one or more processors and visible to the operator; and, wherein, the one or more processors: (i) combine the third and the fourth images by interspersing pixels of each of the third and fourth images to create another composite image; and (ii) transmit the other composite image to the other display so the operator sees the other area on the other side of the other obstruction opposite the operator on the other display.
  • 28. The system according to claim 27, wherein the third sensor generates the third image in the visible frequency range and the fourth sensor generates the fourth image in the infrared frequency range.
  • 29. The system according to claim 27, wherein the other display is attached to the other obstruction.
  • 30. The system according to claim 27, wherein the one or more processors alternately intersperse a pixel block of each of the third and fourth images to create the other composite image.
  • 31. A system for eliminating a blind spot for an operator of a vehicle caused by an obstruction comprising: (a) one or more processors; (b) a steering sensor configured to transmit a signal to the one or more processors; (c) a sensor coupled to the one or more processors, the sensor being on a side of the obstruction opposite the operator and being configured to generate an image of an area on a side of the obstruction opposite the operator; (d) a display visible to the operator and coupled to the one or more processors; and, wherein the one or more processors: (i) determine a direction of turn of the vehicle based on the signal; (ii) generate at least one image of the area on the side of the obstruction opposite the operator; (iii) transmit that at least one image to the display; and (iv) display that at least one image on the display only when the signal indicates that the vehicle is turning in a direction that corresponds to the side of the vehicle that the obstruction is on.
  • 32. The system according to claim 31, further comprising a controller area network bus coupled to the steering sensor and the one or more processors.
  • 33. The system according to claim 32, wherein the controller area network bus transmits the signal to the one or more processors.
  • 34. The system according to claim 31, wherein the steering sensor is adapted to be disposed adjacent to a steering chain of a vehicle.
  • 35. The system according to claim 34, wherein the steering sensor includes a vehicle wheel angle sensor.
  • 36. The system according to claim 31, wherein the display is connected to the obstruction by placing the display over the obstruction.
  • 37. The system according to claim 36, wherein the display is built into the obstruction.
  • 38. The system according to claim 31, further comprising: (a) another sensor coupled to the one or more processors, the other sensor being on the side of another obstruction opposite the operator and being configured to generate at least one other image of another area on a side of the other obstruction opposite the operator; (b) another display visible to the operator and coupled to the one or more processors; and wherein, the one or more processors: (i) generate the at least one other image of the other area on the side of the other obstruction opposite the operator; (ii) transmit the at least one other image to the other display; and (iii) display the at least one other image on the other display only when the signal indicates that the vehicle is turning in a direction that corresponds to the side of the vehicle that the other obstruction is on.
  • 39. The system according to claim 38, wherein the other display is built into the other obstruction.
  • 40. The system according to claim 31, wherein the sensor generates the image in a visible frequency range and is comprised of a plurality of pixels that is transmitted to the one or more processors.
  • 41. The system according to claim 40, further comprising: another sensor coupled to the one or more processors, the other sensor being on the side of the obstruction opposite the operator and being configured to generate another image of the area on the side of the obstruction opposite the operator; wherein, the one or more processors: (i) generate at least another image of the area on the side of the obstruction opposite the operator; (ii) transmit the at least one other image to the display; and (iii) display the at least one image and the at least one other image on the display only when the signal indicates that the vehicle is turning in a direction that corresponds to the side of the vehicle that the obstruction is on.
  • 42. The system according to claim 41, wherein the other sensor generates the at least one other image in an infrared frequency range and is comprised of another plurality of pixels that is transmitted to the one or more processors.
  • 43. The system according to claim 42, wherein the one or more processors create a composite image by interspersing the pixels of each of the at least one image and the at least one other image.