1. Related Applications
This application is related to U.S. patent application with an Attorney Docket Number of US49177 and a title of VEHICLE ASSISTANCE DEVICE AND METHOD, which has the same assignee as the current application and was concurrently filed.
2. Technical Field
The present disclosure relates to vehicle assistance devices, and particularly, to a vehicle assistance device capable of automatically turning on lights of a vehicle and a related method.
3. Description of Related Art
Usually, a driver decides whether to turn on the lights of a vehicle according to visibility. The light emitted by the lights not only increases the visibility of the driver, but also helps others, such as the drivers of other vehicles or passers-by, to watch for the vehicle. Accordingly, there is a need for a vehicle assistance device capable of automatically turning on the lights of a vehicle when the distance between the vehicle and another vehicle or a passer-by is less than a safe distance.
The components of the drawings are not necessarily drawn to scale, the emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Moreover, in the drawings, like reference numerals designate corresponding parts throughout several views.
The embodiments of the present disclosure are now described in detail, with reference to the accompanying drawings.
In the embodiment, the number of the cameras 3 is two. The cameras 3 are respectively arranged on the front and the rear of the first vehicle 2, and respectively capture the surroundings in front of and behind the first vehicle 2 to generate surroundings images. Each captured surroundings image includes distance information indicating the distance between one camera 3 and each object in the field of view of the camera 3. In the embodiment, each camera 3 is a Time of Flight (TOF) camera. In the embodiment, there are two pairs of lights 5 respectively arranged on the front and the rear of the first vehicle 2. The surroundings image captured by each camera 3 can be used to control the turning on of one pair of lights 5. For example, the surroundings image captured by the front camera 3 can be used to control the front pair of lights 5.
The vehicle assistance device 1 includes a processor 10, a storage unit 20, and a vehicle assistance system 30. In the embodiment, the vehicle assistance system 30 includes an image obtaining module 31, a model creating module 32, a detecting module 33, a determining module 34, and an executing module 35. One or more programs of the above function modules may be stored in the storage unit 20 and executed by the processor 10. In general, the word “module”, as used herein, refers to logic embodied in hardware or firmware, or to a collection of software instructions written in a programming language. The software instructions in the modules may be embedded in firmware, such as in an erasable programmable read-only memory (EPROM) device. The modules described herein may be implemented as software and/or hardware modules and may be stored in any type of computer-readable medium or other storage device.
In the embodiment, the storage unit 20 further stores a number of 3D specific vehicle models and a number of 3D specific person models. Each 3D specific vehicle model has a unique name and includes a number of characteristic features. Each 3D specific person model likewise has a unique name and a number of characteristic features. The 3D specific vehicle models and the 3D specific person models may be created based on a number of specific vehicle images or specific person images pre-collected by the camera 3, together with the distances between the camera 3 and the specific vehicles or specific persons recorded in those pre-collected images.
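The stored model records described above can be sketched as a simple data structure. This is a hypothetical illustration only; the names `Model3D`, `name`, and `features`, and the example feature values, are assumptions and not part of the disclosure:

```python
from dataclasses import dataclass, field

@dataclass
class Model3D:
    """A stored 3D specific vehicle or person model (hypothetical structure)."""
    name: str                                              # unique name of the model
    features: list = field(default_factory=list)           # characteristic features

# A storage unit holding specific vehicle models and specific person models.
vehicle_models = [Model3D("sedan", [0.9, 0.4]), Model3D("truck", [1.8, 1.2])]
person_models = [Model3D("pedestrian", [0.5, 1.7])]
```

Keeping each model's unique name alongside its characteristic features lets later matching steps report which model an object matched.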
The image obtaining module 31 obtains the surroundings image captured by each camera 3.
The model creating module 32 creates a 3D surroundings model corresponding to each camera 3 according to the obtained surroundings image captured by each camera 3 and the distance between the corresponding camera 3 and each object recorded in the obtained surroundings image.
The detecting module 33 determines whether or not one or more second vehicles or passers-by appear in at least one created 3D surroundings model. In detail, the detecting module 33 extracts object data corresponding to the shape of the one or more objects appearing in each created 3D surroundings model from each created 3D surroundings model. The detecting module 33 then compares the extracted object data with the characteristic features of each of the 3D specific vehicle models and each of the 3D specific person models, to determine whether or not one or more second vehicles or passers-by appear in at least one created 3D surroundings model. If the extracted object data does not match the characteristic features of any of the 3D specific vehicle models and the 3D specific person models, the detecting module 33 determines that no second vehicle or passer-by appears in the created 3D surroundings models. If the extracted object data matches the characteristic features of one or more of the 3D specific vehicle models or the 3D specific person models, the detecting module 33 determines that one or more second vehicles or passers-by appear in the at least one created 3D surroundings model.
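The matching performed by the detecting module 33 can be sketched as below. The disclosure does not specify how "matching" is computed, so the Euclidean-distance threshold test and the helper names `matches` and `detect` are illustrative assumptions:

```python
import math

def matches(object_features, model_features, tolerance=0.1):
    """Return True if extracted object data is close enough to a model's
    characteristic features (assumed Euclidean-distance test)."""
    dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(object_features, model_features)))
    return dist <= tolerance

def detect(extracted_objects, stored_models, tolerance=0.1):
    """Return the extracted objects that match at least one stored 3D model."""
    return [obj for obj in extracted_objects
            if any(matches(obj, m, tolerance) for m in stored_models)]

models = [[0.9, 0.4], [0.5, 1.7]]       # stored characteristic features
objects = [[0.92, 0.41], [3.0, 3.0]]    # object data from a 3D surroundings model
print(detect(objects, models))          # only the first object matches a model
```

An empty result from `detect` corresponds to the "no second vehicle or passer-by appears" branch; a non-empty result corresponds to the branch where detection succeeds.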
The determining module 34 determines the distance between the camera 3 and the second vehicle or passer-by appearing in each created 3D surroundings model as the distance between the first vehicle 2 and that second vehicle or passer-by. The determining module 34 then determines the shortest distance between the first vehicle 2 and the second vehicles or passers-by appearing in the at least one created 3D surroundings model, and determines whether or not that shortest distance is less than the safe distance. In the embodiment, the safe distance is a default value or is input by the driver through an input unit 6 connected to the vehicle assistance device 1. In detail, if only one second vehicle or passer-by appears in the at least one created 3D surroundings model, the determining module 34 determines that the shortest distance is the distance between the first vehicle 2 and that second vehicle or passer-by. If more than one second vehicle or passer-by appears in the at least one created 3D surroundings model, the determining module 34 determines that the shortest distance is the shortest among the distances between the first vehicle 2 and the second vehicles or passers-by.
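The shortest-distance selection and safe-distance comparison above amount to a minimum over the measured distances followed by a threshold test. A minimal sketch, with function names of my own choosing:

```python
def shortest_distance(distances):
    """Shortest distance between the first vehicle and any detected
    second vehicle or passer-by (the single-detection case is just min of one)."""
    return min(distances)

def breaches_safe_distance(distances, safe_distance):
    """True if the shortest distance is less than the safe distance."""
    return shortest_distance(distances) < safe_distance

# Three detections; the nearest one (4.5) is inside the 5.0 safe distance.
print(breaches_safe_distance([12.0, 4.5, 30.2], safe_distance=5.0))
```

Because `min` handles a one-element list the same way as a longer one, the two cases distinguished in the text (exactly one detection versus several) need no separate code paths.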
The executing module 35 determines the at least one created 3D surroundings model in which the shortest distance between the first vehicle 2 and the second vehicle or the passer-by is less than the safe distance. The executing module 35 then determines the at least one camera 3 corresponding to the determined at least one created 3D surroundings model, and controls the driving device 4 to turn on the at least one pair of lights 5 corresponding to the determined at least one camera 3, to alert the driver of the second vehicle or the passer-by to watch for the first vehicle 2.
In the embodiment, the vehicle assistance device 1 is further connected to a detecting device 7. The detecting device 7 detects the movement speed of the first vehicle 2. The storage unit 20 further stores a first table. The first table records a relationship between movement speed ranges of the first vehicle 2 and the safe distances. Each movement speed range of the first vehicle 2 corresponds to one safe distance.
The vehicle assistance system 30 further includes a setting module 17. The setting module 17 obtains the movement speed of the first vehicle 2 detected by the detecting device 7, determines which movement speed range the movement speed of the first vehicle 2 falls in, and determines the safe distance corresponding to that movement speed range according to the relationship, recorded in the first table, between the movement speed ranges of the first vehicle 2 and the safe distances.
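The first-table lookup performed by the setting module 17 can be sketched as a range search. The speed ranges and safe-distance values below are invented for illustration; the disclosure does not give concrete numbers:

```python
# Hypothetical first table: (speed range in km/h) -> safe distance in metres.
FIRST_TABLE = [
    ((0, 30), 5.0),
    ((30, 60), 15.0),
    ((60, 120), 30.0),
]

def safe_distance_for(speed):
    """Look up the safe distance for the speed range the current speed falls in."""
    for (low, high), distance in FIRST_TABLE:
        if low <= speed < high:
            return distance
    return FIRST_TABLE[-1][1]  # speeds beyond the table use the largest range

print(safe_distance_for(45))  # falls in the 30-60 km/h range
```

Half-open ranges (`low <= speed < high`) ensure each speed maps to exactly one safe distance, matching the table's one-to-one correspondence.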
In other embodiments, the storage unit 20 further stores a second table. The second table records a relationship between the movement speed range of the first vehicle 2, the driving levels of the driver, and the safe distances. Each movement speed range of the first vehicle 2 corresponds to a number of driving levels of the driver and a number of safe distances. Each movement speed range of the first vehicle 2 and each driving level of the driver correspond to one safe distance. In the embodiment, the driving level of the driver is preset by the driver through the input unit 6.
The setting module 17 obtains the movement speed of the first vehicle 2 detected by the detecting device 7, and determines which movement speed range the movement speed of the first vehicle 2 falls in. The setting module 17 then obtains the preset driving level of the driver, and determines the safe distance corresponding to the determined movement speed range and the preset driving level according to the relationship, recorded in the second table, between the movement speed ranges of the first vehicle 2, the driving levels of the driver, and the safe distances.
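The second-table lookup extends the first with the driver's preset driving level as a second key. Again the ranges, level names, and distances below are illustrative assumptions only:

```python
# Hypothetical second table: (speed range in km/h, driving level) -> safe distance in metres.
SECOND_TABLE = {
    ((0, 30), "novice"): 8.0,
    ((0, 30), "expert"): 5.0,
    ((30, 60), "novice"): 20.0,
    ((30, 60), "expert"): 15.0,
}

def safe_distance_for(speed, driving_level):
    """Look up the safe distance by speed range and the driver's preset level."""
    for (low, high), level in SECOND_TABLE:
        if low <= speed < high and level == driving_level:
            return SECOND_TABLE[((low, high), level)]
    raise KeyError("no matching speed range / driving level")

print(safe_distance_for(45, "novice"))  # 30-60 km/h range, novice driver
```

A less experienced driver is given a larger safe distance at the same speed, which is the point of keying the table on both values.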
In step S301, the image obtaining module 31 obtains a surroundings image captured by each camera 3.
In step S302, the model creating module 32 creates a 3D surroundings model corresponding to each camera 3 according to the surroundings image captured by each camera 3 and the distances between each object recorded in the obtained surroundings image and the corresponding camera 3.
In step S303, the detecting module 33 determines whether or not one or more second vehicles or passers-by appear in at least one created 3D surroundings model. If one or more second vehicles or passers-by appear in the at least one created 3D surroundings model, the procedure goes to step S304. If no second vehicle or passer-by appears in the created 3D surroundings models, the procedure goes to step S301. In detail, the detecting module 33 extracts object data corresponding to the shape of the one or more objects appearing in each created 3D surroundings model from each created 3D surroundings model. The detecting module 33 then compares the extracted object data with the characteristic features of each of the 3D specific vehicle models and each of the 3D specific person models, to determine whether or not one or more second vehicles or passers-by appear in at least one created 3D surroundings model. If the extracted object data does not match the characteristic features of any of the 3D specific vehicle models and the 3D specific person models, the detecting module 33 determines that no second vehicle or passer-by appears in the created 3D surroundings models. If the extracted object data matches the characteristic features of one or more of the 3D specific vehicle models or the 3D specific person models, the detecting module 33 determines that one or more second vehicles or passers-by appear in at least one created 3D surroundings model.
In step S304, the determining module 34 determines the distance between the camera 3 and the second vehicle or passer-by appearing in each created 3D surroundings model as the distance between the first vehicle 2 and that second vehicle or passer-by. The determining module 34 then determines the shortest distance between the first vehicle 2 and the second vehicles or passers-by appearing in the at least one created 3D surroundings model, and determines whether or not that shortest distance is less than the safe distance. If the shortest distance is less than the safe distance, the procedure goes to step S305. If the shortest distance is not less than the safe distance, the procedure goes to step S301.
In step S305, the executing module 35 determines the at least one created 3D surroundings model in which the shortest distance between the first vehicle 2 and the second vehicle or the passer-by is less than the safe distance, determines the at least one camera 3 corresponding to the determined at least one created 3D surroundings model, and controls the driving device 4 to turn on the at least one pair of lights 5 corresponding to the determined at least one camera 3, to alert the driver of the second vehicle or the passer-by to watch for the first vehicle 2.
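Steps S301 through S305 can be sketched as one pass of a control loop. The per-camera dictionary layout and the `assistance_loop` name are assumptions made for the sketch; the detection and distance values would in practice come from the TOF cameras and the detecting module:

```python
def assistance_loop(cameras, safe_distance):
    """One pass of steps S301-S305: for each camera's 3D surroundings model,
    compare the shortest detected distance with the safe distance and
    collect the pairs of lights that should be turned on."""
    lights_to_turn_on = []
    for camera in cameras:
        distances = camera["detected_distances"]   # S303: matched second vehicles/passers-by
        if not distances:
            continue                               # nothing detected for this camera
        if min(distances) < safe_distance:         # S304: shortest distance vs safe distance
            lights_to_turn_on.append(camera["lights"])  # S305: lights for this camera
    return lights_to_turn_on

cameras = [
    {"lights": "front pair", "detected_distances": [4.0, 9.0]},
    {"lights": "rear pair",  "detected_distances": [12.0]},
]
print(assistance_loop(cameras, safe_distance=5.0))  # only the front pair is triggered
```

Only the front camera's shortest detection (4.0 m) is inside the 5.0 m safe distance, so only the front pair of lights is selected, mirroring the per-camera pairing of cameras 3 and lights 5 in the embodiment.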
Although the present disclosure has been specifically described on the basis of the exemplary embodiment thereof, the disclosure is not to be construed as being limited thereto. Various changes or modifications may be made to the embodiment without departing from the scope and spirit of the disclosure.
Number | Date | Country | Kind
---|---|---|---
102114567 | Apr 2013 | TW | national