The disclosure of Japanese Patent Application No. 2007-091323, filed on Mar. 30, 2007, including the specification, drawings, and abstract is incorporated herein by reference in its entirety.
1. Related Technical Fields
Related technical fields include feature information collecting apparatuses, methods, and programs that collect information about features included in image information of the vicinity of a vehicle.
2. Description of the Related Art
Conventional navigation apparatuses correct the position of a vehicle by recognizing features included in image information of the vicinity of the vehicle (see, e.g., Japanese Patent Application Publication No. JP-A-H09-243389). The image information is acquired by an imaging device or the like mounted on the vehicle. The navigation apparatus detects an intersection by recognizing an intersection symbol in the image information and finds the distance from the vehicle position to the intersection symbol. A vehicle position correcting device corrects the vehicle position to a point on the traveled road that deviates by that distance from the intersection position. The intersection symbol denotes, for example, a traffic signal, a pedestrian crossing, or the white lines of the center divider.
In Japanese Patent Application Publication No. JP-A-2000-293670, a database that relates to such features is disclosed. Instead of recognizing features manually, features are photographed while a specialized measurement vehicle travels, and the features are automatically recognized from the photographed images.
Japanese Patent Application Publication No. JP-A-2006-038558 discloses a navigation apparatus that collects information about features based on the results of image recognition of target features that are included in image information about the vicinity of the vehicle. In Japanese Patent Application Publication No. JP-A-2006-038558, traffic signs and road traffic information display boards that are provided along the road serve as target features. In addition, feature information such as the sign information extracted from the recognition results is stored in a map database in association with position information and section information. At this time, the position information and the section information that are associated with the feature information are determined based on information from a GPS receiver, a gyroscope, a vehicle speed sensor, or the like that are generally used in navigation apparatuses.
However, the position information that is acquired by a GPS receiver, a gyroscope, a vehicle speed sensor or the like that are used in a navigation apparatus generally includes an error from several meters to several tens of meters. Thus, there is a possibility that the feature information stored in the map database includes such error.
However, as disclosed in Japanese Patent Application Publication No. JP-A-2000-293670, collecting high accuracy feature information for all roads by using a specialized measurement vehicle is not practical due to time and cost.
High accuracy feature information may therefore be prepared for roads that have many users, such as arterial roads. For a particular user, high accuracy feature information may be prepared for the roads on which that user frequently travels, so that the user can enjoy a variety of detailed services, including navigation. In this way, irrespective of whether a road is an arterial road, high accuracy feature information may be prepared at low cost in conformity with the usage patterns of the user. In addition, on roads such as mountain roads, for which few standard road markings and the like are provided on the road surface, the collection of feature information in which such road markings are the target objects is difficult.
Exemplary implementations of the broad principles described herein provide feature information collecting apparatuses, methods, and programs that enable the collection of accurate feature information at low cost depending on road usage conditions by a user of roads for which feature information has not been prepared.
Exemplary implementations provide apparatuses, methods, and programs that acquire vehicle position information that represents the current position of a vehicle, acquire image information for a vicinity of the vehicle, and carry out image recognition processing on recognition target objects that are included in the image information. The apparatuses, methods, and programs store recognition information in a memory that represents a result of the image recognition of the recognition target objects in association with information for the recognition position of the recognition target objects, the recognition position determined based on the vehicle position information. The apparatuses, methods, and programs extract, as learned features, recognition target objects that can be repeatedly recognized by image recognition based on a plurality of sets of recognition information related to the same position, the plurality of sets of recognition information being stored due to the image information for the same position being recognized a plurality of times by image recognition.
Exemplary implementations will now be described with reference to the accompanying drawings, wherein:
Each functional unit of the navigation apparatus 1 that is shown in
The map database DB1 is a database in which map information M, which is divided into predetermined segments, is recorded.
The region classification information is information about the regional classification when a region is divided into a plurality of segments that are provided with roads that correspond to the links k. Such segments include, for example, regions such as the Kanto region or the Kansai region, or administrative divisions, such as Tokyo metropolis, prefectures, cities, towns, and villages. The attribute information of these links k corresponds to road attribute information Rb (refer to
The feature database DB2 is a database on which information about various features that are provided on the road and in the vicinity of the road, that is, the feature information F, is stored. As shown in
The initial feature information Fa denotes the feature information F for a plurality of features that have been prepared and recorded in advance in the feature database DB2. Such initial feature information Fa is prepared only for a portion of regions and roads, such as for large city areas and arterial roads and the like, among all of the regions for which the map information M that includes the road information Ra is prepared. In contrast, as will be described below, the learned feature information Fb is the feature information F that is stored in the feature database DB2 as a result of learning by using the image recognition results for the target features that have been obtained by the image recognition unit 18. Note that, in the following explanation, the term “feature information F” used alone is a term that encompasses both the initial feature information Fa and the learned feature information Fb.
Road markings (e.g., painted markings) that are provided on the surface of the road are included in the features for which feature information F (in particular, the initial feature information Fa) is stored in the feature database DB2.
The feature information F includes feature position information for each of the features and feature attribute information that is associated therewith. The feature position information includes information about the position (coordinates) on the map of representative points for each of the features that is associated with a link k and a node n that structure the road information Ra, and information about the orientation of each of the features. The representative points in the initial feature information Fa, for example, are set in proximity to the center portion of each of the features in the lengthwise direction and the widthwise direction. The representative points for the learned feature information Fb will be explained below. The feature attribute information for the initial feature information Fa includes feature class information and feature contour information, such as the shape, size, color, and the like, of the features.
Specifically, the feature class information in the initial feature information Fa is information that represents the type of features that have fundamentally identical contours, such as “pedestrian crossings,” “stop lines,” or “speed markings (for example, “30 km/h”).” The feature class information for the learned feature information Fb will be explained below. In the present example, the feature information F includes associated information that represents the relationship with other approaching features and feature distance information that represents the distance between one feature and another feature. The associated information is information that allows the anticipation of features that are present in the forward direction once one feature has been recognized by image recognition while a vehicle 50 travels forward along the road. In addition, the feature distance information is information for accurately anticipating the distance from the vehicle 50 to such a feature that is present in the forward direction.
The learning database DB3 is a database that stores recognition position information Aa that has been derived by the learned feature extracting unit 31 in a form in which the recognition target objects that correspond to each of the sets of the recognition position information Aa can be distinguished. In order to render the recognition target objects that correspond to each of the sets of the recognition position information Aa distinguishable, each of the sets of the recognition position information Aa and the feature attribute information Ab for the recognition target objects that correspond thereto are stored in association with each other. The specific content of the recognition position information Aa and the feature attribute information Ab that are stored in this learning database DB3 will be explained in detail below.
The image information acquiring unit 12 functions as an image information acquiring device that acquires image information G of the vicinity of the vehicle that has been photographed by the imaging device 11. The imaging device 11 is, for example, a vehicle-mounted camera that is provided with an image-pickup element, and is provided at a position that enables photographing at least the surface of the road in the vicinity of the vehicle 50. It is possible to use, for example, a back camera that photographs the road surface behind the vehicle 50, as shown in
The vehicle position information acquiring unit 16 acquires vehicle position information P that shows the current position of the vehicle 50. The vehicle position information acquiring unit 16 is connected to a GPS receiver 13, a direction sensor 14, and a distance sensor 15. The GPS receiver 13 is an apparatus that receives GPS signals from GPS (Global Positioning System) satellites. These GPS signals are normally received at one-second intervals and output to the vehicle position information acquiring unit 16. In the vehicle position information acquiring unit 16, the signals from the GPS satellites that have been received by the GPS receiver 13 are analyzed, and it is thereby possible to acquire information about the current position (latitude and longitude) of the vehicle 50, the forward travel direction, the movement speed, the time, and the like.
The direction sensor 14 is a sensor that detects the forward direction of the vehicle 50 and changes in the direction thereof. This direction sensor 14 is structured, for example, by a gyroscopic sensor, a geomagnetic sensor, an optical rotation sensor or a rotation resistance sensor that is attached to a rotation portion of the steering wheel, or an angle sensor that is attached to a wheel portion, or the like. In addition, the direction sensor 14 outputs the detected result to the vehicle position information acquiring unit 16. The distance sensor 15 is a sensor that detects the vehicle speed and the movement distance of the vehicle 50. The distance sensor 15 is structured, for example, by a vehicle speed pulse sensor that outputs a pulse signal each time the drive shaft or the wheels of the vehicle rotate by a specified amount, or by a sensor that detects the acceleration of the vehicle 50 and a circuit that integrates the detected acceleration. In addition, the distance sensor 15 outputs information about the vehicle speed and the movement distance as the detected results to the vehicle position information acquiring unit 16.
The vehicle position information acquiring unit 16 carries out the calculation in which the vehicle position is specified by a well-known method based on the outputs from the GPS receiver 13, the direction sensor 14, and the distance sensor 15. The vehicle position information acquiring unit 16 acquires the road information Ra in the vicinity of the position of the vehicle that has been extracted from the map database DB1, and also carries out corrections in which the vehicle is positioned on the road that is shown in the road information Ra by carrying out well-known map matching based thereon. In this manner, the vehicle position information acquiring unit 16 acquires the vehicle position information P that includes information about the current position of the vehicle 50, which is represented by latitude and longitude, and information about the forward travel direction of the vehicle 50.
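The combination of dead reckoning from the sensor outputs and map matching described above can be sketched as follows. This is an illustrative flat-plane approximation in Python; the function names, the (x, y) coordinate model, and the nearest-point map matching are assumptions for illustration, not the disclosed method, which uses latitude/longitude and well-known map-matching techniques.

```python
import math

def dead_reckon(position, heading_deg, distance_m):
    # Advance an (x, y) position (meters) by a travelled distance along a
    # compass heading (0 deg = +y, 90 deg = +x). A real unit would fuse the
    # GPS receiver 13, direction sensor 14, and distance sensor 15 outputs.
    x, y = position
    theta = math.radians(heading_deg)
    return (x + distance_m * math.sin(theta), y + distance_m * math.cos(theta))

def map_match(position, road_shape_points):
    # Snap the dead-reckoned position to the nearest point of the road shape
    # from the road information Ra -- a crude stand-in for well-known
    # map-matching processing.
    return min(road_shape_points, key=lambda p: math.dist(p, position))
```

For example, a vehicle at the origin heading due east (90 degrees) that travels 10 m dead-reckons to roughly (10, 0), which map matching then snaps onto the stored road shape.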
The feature information acquiring unit 17 extracts and acquires the feature information F about features that are present in the vicinity of the vehicle 50 from the feature database DB2 based on the vehicle position information P and the like that has been acquired by the vehicle position information acquiring unit 16. Both the initial feature information Fa and the learned feature information Fb are included in the feature information F. In the present example, based on the vehicle position information P, the feature information acquiring unit 17 extracts, from the feature database DB2, the feature information F about target features that are present in the interval from the current position of the vehicle 50, which is shown in the vehicle position information P, to the end point of the link k, which represents the road on which the vehicle 50 is traveling. The term “target feature” denotes a feature that is the target object of position correction image recognition processing (to be explained in detail below) by the image recognition unit 18. The feature information F that has been acquired by the feature information acquiring unit 17 is then output to the image recognition unit 18 and the vehicle position information correction unit 19.
In the present example, in addition to various road markings that are provided on the surface of the road, such as pedestrian crossings, stop lines, and vehicle speed markings and the like, which have been prepared in advance as initial features, target features also include stains, grime, and cracks on the road surface, links between pavement sections, and manhole covers, and the like that have been collected as learned features. In addition, various types of road markings that are not registered as initial features because they are infrequently used as target features may serve as learned features. Furthermore, recognition target objects such as shadows, which appear when predetermined conditions such as a particular time and weather occur together, also become target features as limited learned features that are valid only when these conditions are satisfied.
Note that the feature information acquiring unit 17 also functions as a presence determining device that refers to the feature database DB2 based on the vehicle position information P and determines whether target features that have been stored as feature information F are present in the vicinity of the vehicle 50. When it has been determined that target features are not present in the vicinity of the vehicle 50, as will be explained below, information collection image recognition processing is executed. Specifically, when it has been determined that target features are not present in the vicinity of the vehicle 50, it is determined that the position of the vehicle 50 is not recognized with high accuracy (i.e., not in a “high accuracy vehicle position recognition state,” the “high accuracy vehicle position recognition state” being described later), and information collection image recognition processing is executed in order to collect learned feature information Fb. Moreover, when the feature information acquiring unit 17 is functioning as the presence determining device, the feature information F may include the learned feature information Fb, or may be limited to the initial feature information Fa. In the present example, both are included.
In the present example, the image recognition unit 18 carries out two types of image recognition processing: (1) position correction image recognition processing that is used in the correcting of the vehicle position information P and (2) information collection image recognition processing for learning the image recognition results for features and reflecting these results in the feature database DB2. Specifically, for the position correction image recognition processing, the image recognition unit 18 functions as an image recognition device that carries out image recognition processing of target features that are shown by the feature information F, which is included in the image information G, based on the feature information F about the target features that are present in the vicinity of the vehicle 50 and that have been extracted from the feature database DB2 based on the vehicle position information P. In addition, for the information collection image recognition processing, the image recognition unit 18 functions as an image recognition device that carries out image recognition processing for the recognition target objects that are included in the image information G that has been acquired by the image information acquiring unit 12.
In the present example, the image recognition unit 18 carries out position correction image recognition processing when it has been determined that target features, which have been stored as feature information F, are present in the vicinity of the vehicle 50 by the feature information acquisition unit 17. In contrast, the image recognition unit 18 carries out information collection image recognition processing when it has been determined that target features, which have been stored as feature information F, are not present in the vicinity of the vehicle 50 by the feature information acquisition unit 17. When the driver's vehicle 50 is traveling along a road for which feature information F has already been prepared, the correction processing of the vehicle position information P is executed based on the image recognition result obtained by the position correction image recognition processing and the feature information F. In contrast, when the vehicle 50 is traveling along a road for which feature information F has not been prepared, information collection image recognition processing is carried out, and feature collection processing is executed that extracts recognition target objects that are included in the image information G as learned features.
In the position correction image recognition processing, the image recognition unit 18 carries out image recognition processing on the target features that are shown by the feature information F, which is included in the image information G, based on the feature information F for the target features that are present in the vicinity of the vehicle 50 and that have been acquired from the feature database DB2 based on the vehicle position information P. At this time, the image recognition unit 18 sets a predetermined recognition area in which recognition of the target features shown in the feature information F is to be carried out, and image recognition processing of the target features is carried out on the image information G that is within this recognition area. The recognition area is set as the position range in the link k shown by the road information Ra, in which it is anticipated that the target features are present.
The photographed area of the actual road included in each set of image information G can be found based on the vehicle position information P by using the positional relationship between the photographed area and the vehicle position that is calculated in advance based on the installation position and installation angle of the imaging device 11 on the vehicle 50, the angle of view, and the like. Thus, based on the information of the photographed areas for each set of image information G that has been found in this manner, the image recognition unit 18 extracts the image information G that corresponds to the recognition area that has been set for each of the target features, and carries out the image recognition processing. When the target features are initial features, the image recognition unit 18 carries out well-known binarization processing or edge detection processing or the like on the image information G, and extracts contour information of the target features (road markings) that are included in the image information G. In addition, target features that are included in the image information G are recognized by carrying out pattern matching on the extracted contour information and the shapes of the target features. When the target features are learned features, as will be explained below, the target features are recognized by providing processing that is similar to the image recognition processing for recognition target objects during learning. In addition, the image recognition results for the target features obtained by this position correction image recognition processing are used in the correction of the vehicle position information P by the vehicle position information correction unit 19.
In the information collection image recognition processing, the image recognition unit 18 carries out the image recognition processing for the recognition target objects that are included in the image information G. This image recognition processing carries out the image recognition processing for recognition target objects without using the feature information F that is stored in the feature database DB2, and therefore, without setting the target features and the recognition area described above. The term “recognition target object” denotes a characteristic that is included in the image information, and includes one or more of any among noise, edges, predetermined colors, and predetermined shapes. In addition, the recognition results for these recognition target objects are characteristic amounts that are obtained by carrying out a predetermined image recognition processing on the recognition target objects. For example, when there are no features on the road surface, the image information for the road surface is uniform. However, when there is some sort of feature present, the uniformity is disrupted. If the cause of the disruption of the uniformity is noise, and if this noise is used as a recognition target object, the amount of the noise is recognized as a characteristic amount. Because such noise is a high-frequency component in the spatial frequency of the image information G, it is possible to extract the noise amount by filtering processing by using, for example, a high pass filter.
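The extraction of a noise amount as a characteristic amount can be sketched as follows. This is an illustrative Python sketch over a single scan line of road-surface intensities: summing second differences acts as a crude high-pass filter, standing in for the filtering processing mentioned above. The function name and the one-dimensional model are assumptions for illustration.

```python
def noise_amount(scan_line):
    # Sum of absolute second differences along one row of grayscale
    # road-surface intensities. A uniform surface yields 0; noise or any
    # disruption of uniformity yields a positive characteristic amount.
    return sum(abs(scan_line[i - 1] - 2 * scan_line[i] + scan_line[i + 1])
               for i in range(1, len(scan_line) - 1))
```

A perfectly uniform road surface thus produces a characteristic amount of zero, while a stain or other feature disrupting the uniformity produces a positive amount that can be compared against a learning condition threshold.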
Examples of features that can become learned features and that are present on the road surface include road markings provided on the surface of the road, stains on the surface of the road, grime on the surface of the road, cracks in the surface of the road, links between pavement sections, manhole covers, shadows on the surface of the road, and the like. The boundary between stains, grime, or cracks and the surrounding road surface can be extracted as edge components by providing well-known Gaussian filter processing on the image information G. If this edge component is used as a recognition target object, the number of extracted edge components is recognized as a characteristic amount. In addition, it is possible to extract the color components of the road markings that are painted in white, yellow, and orange by providing well-known window comparator processing on the image information G. If these colors are used as recognition target objects, the type of color and the number of extractions thereof are recognized as characteristic amounts. In addition, pattern matching processing may be provided in which predetermined shapes such as triangles, circles, rectangles, and numbers and the like are used as recognition target objects, and their conformity may be recognized as characteristic amounts. The recognition results (characteristic amounts) of the recognition target objects that are obtained by the information collection image recognition processing are associated with information for the recognition position thereof, and stored in the learning database DB3.
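The window comparator processing for color extraction can be sketched as follows. This is an illustrative Python sketch that counts pixels whose RGB components all fall within a color window; the function name and the example threshold values are assumptions, not values from the disclosure.

```python
def window_comparator(pixels, lower, upper):
    # Count pixels whose (R, G, B) components all lie inside the window
    # [lower, upper] -- a simple stand-in for window comparator processing
    # used to extract white/yellow/orange road-marking colors.
    def inside(pixel):
        return all(lo <= c <= hi for c, lo, hi in zip(pixel, lower, upper))
    return sum(1 for pixel in pixels if inside(pixel))
```

For example, with a hypothetical "white" window of (200, 200, 200) to (255, 255, 255), the number of matching pixels becomes the characteristic amount for the white color recognition target object.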
The vehicle position information correcting unit 19 corrects the vehicle position information P based on the results of the image recognition processing for the target features that are obtained by the image recognition unit 18 and the position information for the target features that is included in the feature information F for the target features. In the present example, the vehicle position information correcting unit 19 corrects the vehicle position information P along the forward travel direction of the vehicle 50 by using the image recognition results for the target features that are obtained by the position correction image recognition processing based on the feature information F in the image recognition unit 18 and the position information for the target features that is included in the feature information F.
Specifically, first, the vehicle position information correcting unit 19 calculates the positional relationships between the vehicle 50 and the target features during the acquisition of the image information G that includes the image of the target features, based on the result of the position correction image recognition processing obtained by the image recognition unit 18 and the installation position and installation angle of the imaging device 11, the angle of view, and the like. Next, based on the calculated result of the positional relationships between the vehicle 50 and the target features, and the position information for the target features that are included in the feature information F for the target features, the vehicle position information correcting unit 19 calculates and acquires high accuracy position information for the vehicle 50 based on the position information (feature information F) of the target features in the forward travel direction of the vehicle 50. In addition, based on the high accuracy position information for the vehicle 50 that has been acquired in this manner, the vehicle position information correcting unit 19 corrects the information about the current position of the vehicle 50 in the forward travel direction, which is included in the vehicle position information P that has been acquired by the vehicle position information acquiring unit 16. As a result, the vehicle position information acquiring unit 16 acquires the high accuracy vehicle position information P after such correction, and the position of the vehicle 50 is recognized with high accuracy (i.e., in the “high accuracy vehicle position recognition state”).
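The correction along the forward travel direction can be sketched as follows. This is an illustrative one-dimensional Python sketch in which positions are coordinates along the link k; the function name and the 1-D model are assumptions for illustration, not the disclosed calculation.

```python
def correct_vehicle_position(sensor_pos, feature_pos, measured_dist_to_feature):
    # sensor_pos: along-link position from the vehicle position information P.
    # feature_pos: along-link position of the target feature from the
    #   feature information F.
    # measured_dist_to_feature: distance from the vehicle to the feature
    #   ahead, calculated from the image recognition result and the imaging
    #   device geometry.
    corrected = feature_pos - measured_dist_to_feature
    error = corrected - sensor_pos  # how far off the sensor-based position was
    return corrected, error
```

For example, if the feature information F places a stop line at 150.0 m along the link and image recognition measures the feature as 40.0 m ahead, the vehicle is at 110.0 m, so a sensor-based position of 103.0 m carries a 7.0 m error that is corrected.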
The navigation calculating unit 20 is a calculation processing device that operates according to an application program 23 for executing navigation functions such as the vehicle position display, the retrieval of a route from the departure point to the destination point, the course guidance to the destination point, and the destination retrieval and the like. For example, the navigation calculating unit 20 acquires map information M for the vicinity of the vehicle 50 from the map database DB1 based on the vehicle position information P and displays an image of the map in a display input apparatus 21, and at the same time, based on the vehicle position information P, carries out processing in which the vehicle position mark is superimposed and displayed on the image of the map.
The navigation calculating unit 20 carries out the retrieval of routes from a predetermined departure point to a destination point based on the map information M that is contained in the map database DB1. Furthermore, the navigation calculating unit 20 carries out course guidance for the driver by using one or both of the display input apparatus 21 and an audio output apparatus 22 based on the route from the departure point to the destination point that has been retrieved and the vehicle position information P. In the present example, the navigation calculating unit 20 is connected to the display input apparatus 21 and the audio output apparatus 22. The display input apparatus 21 is one in which a display apparatus, such as a liquid crystal display, and an input apparatus, such as a touch panel, are integrated. The audio output apparatus 22 is structured so as to include a speaker and the like. In the present example, the navigation calculating unit 20, the display input apparatus 21, and the audio output apparatus 22 function as a guidance information output device 24 in the present invention.
The learned feature extracting unit 31 functions as a learned feature extracting device that extracts, as learned features, recognition target objects that can be repeatedly recognized by image recognition, based on a plurality of sets of recognition information that relate to the same place and that are stored in the learning database DB3 as a result of the image information G of the same place being recognized by image recognition a plurality of times. The learned feature extracting unit 31 outputs these learned features along with the position information of the learned features. Below, the details of the processing that is carried out by the learned feature extracting unit 31 will be explained with reference to
The learned feature extracting unit 31 derives the recognition position for the recognition target objects as information that represents the positions of the recognition target objects on the road, using the vehicle position information P as reference. Specifically, the learned feature extracting unit 31 calculates the positional relationship between the vehicle 50 and the recognition target objects during the acquisition of the image information G including the recognition target objects, based on the image recognition result for the recognition target objects obtained by the image recognition unit 18, the installation position and the installation angle of the imaging device 11, the angle of view, and the like. Next, based on the calculation result for the positional relationship between the vehicle 50 and the recognition target objects and the vehicle position information P during the acquisition of the image information G, the learned feature extracting unit 31 calculates the position of the recognition target objects, using the vehicle position information P as reference. In this example, the learned feature extracting unit 31 finds this position as a position in the direction along the link k that represents the road along which the vehicle 50 is traveling. In addition, the learned feature extracting unit 31 derives the position that has been calculated in this manner as the recognition position of the recognition target objects. Because this recognition position is derived based on the vehicle position information P during the acquisition of the image information G including the recognition target objects, it includes the error contained in the vehicle position information P.
Thus, the position information is determined based on the recognition result (recognition information) relating to the same place that is stored in the learning database DB3 due to the image information G of the same place being recognized by image recognition a plurality of times.
As explained above, the learned feature extracting unit 31 derives the recognition positions of the recognition target objects as information that represents the positions of the recognition target objects on the road, using the vehicle position information P as a reference. The learned feature extracting unit 31 adds the learning value, which will be described below, to the predetermined position range to which each recognition position belongs, and stores the result in the learning database DB3. The term "learning value" denotes a value that is allocated when the recognition result (characteristic amount) of the recognition target object satisfies predetermined learning conditions. For example, when the recognition target object is "a color that differs from a predetermined color of a road surface due to a stain" and the characteristic amount is the number of color extractions resulting from the color extraction, a predetermined learning condition is that the number of color extractions is equal to or greater than a predetermined threshold value. Specifically, the learning condition is a condition by which it can be determined whether a recognition target object is fully characterized with respect to the quantitatively represented recognition result (characteristic amount).
In the present example, a predetermined position range denotes a range that has been set by partitioning the road into units having a predetermined distance in the direction along the link k, which represents the road. For example, the predetermined position range may be a range that has been partitioned into 0.5 m units in the direction along the link k. In addition, the term "learning value" denotes a value that is added to the position range in the learning database DB3 to which the recognition position of the recognition target object belongs. For example, the learning value may be set to one point for each successful image recognition. Specifically, in this example, the recognition position information Aa comprises the information that represents the position range in which the recognition position of the recognition target object is included and the information about the learning value (for example, "1"). As has been explained above, the recognition position includes the error that is contained in the vehicle position information P because the recognition position is derived using, as a reference, the vehicle position information P at the time the image information G including the recognition target object was acquired.
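The accumulation of learning values in 0.5 m position ranges can be sketched as follows. This is an illustrative sketch only; the dictionary representation of the learning database DB3 and the function names are assumptions.

```python
# Illustrative sketch of the recognition position information Aa:
# link k is partitioned into 0.5 m position ranges, and a learning
# value (one point by default) is added to the range that contains
# each derived recognition position. The dict stands in for DB3.

RANGE_WIDTH = 0.5  # predetermined distance along link k, in meters

def add_learning_value(learning_db, recognition_position, points=1):
    """Accumulate a learning value in the position range to which the
    recognition position belongs; returns the range index used."""
    range_index = int(recognition_position // RANGE_WIDTH)
    learning_db[range_index] = learning_db.get(range_index, 0) + points
    return range_index
```

Because each noisy recognition position still usually falls in or near the same range, repeated recognitions of the same place concentrate learning values in one range even though each individual position inherits the error of the vehicle position information P.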
In addition, as will be described below, when the learning value of the position range a4 becomes equal to or greater than a predetermined learned threshold value T1, the learned feature information Fb for the recognition target object is generated by the feature information generating unit 36 by setting the position that represents the position range a4 as a representative point, and is stored in the feature database DB2. By applying statistical processing in this manner, even a large error in the individual positions that satisfy the learning conditions is suppressed. In the example in
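The threshold check and the choice of a representative point can be sketched as below. The value of T1 and the use of the range midpoint as the representative point are assumptions for illustration.

```python
# Minimal sketch of the learned-threshold check: once the learning
# value of a position range reaches T1, a point representing that
# range (here, its midpoint) is taken as the estimated position pg
# for the generated learned feature information Fb.

RANGE_WIDTH = 0.5
T1 = 5  # predetermined learned threshold value (assumed)

def extract_learned_position(learning_db):
    """Return the representative point of the first position range
    whose learning value is >= T1, or None while learning is incomplete."""
    for range_index, learning_value in sorted(learning_db.items()):
        if learning_value >= T1:
            return (range_index + 0.5) * RANGE_WIDTH  # range midpoint
    return None
```

The statistical effect is that outlier recognitions, which land in neighboring ranges, never accumulate enough learning value to reach T1, so they do not influence the representative point.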
As explained above, according to the present example, not only distinct objects such as road markings, but also indistinct objects such as indistinct markings or features on the road surface, can be recognized as learned features. Therefore, it is possible to collect learned features at a low cost. Of course, more accurate image recognition may be carried out, shapes such as stains, grime, and cracks may be specified based on the results of color extraction and edge extraction, and learned features whose indistinctness has been reduced may be collected.
As explained above, road markings that are provided on the surface of a road are included as learned features that are features present on the road surface, and the type thereof may also be identical to the initial features. In addition, naturally, various types of road markings that are not registered as initial feature information because, for example, their frequency of usage is low, the freedom of shape is large, or the area with respect to the recognition object area is wide, are also included.
Note that a feature that is provided with a characteristic that is used as a recognition target object is not limited to stains, grime, cracks, or the like that are normally present on the road surface, but can also include features that appear only when predetermined conditions occur together, such as a particular time and weather. For example, shadows are also included among such features. In this case, the recognition information is stored in the learning database DB3 in association with at least one set of information about the recognition time and the weather during the recognition. In addition, the learned feature extracting unit 31 extracts a recognition target object (the characteristic of the shadow) that is associated with at least one of the recognition time and the weather during the recognition, and that can be repeatedly recognized by image recognition under predetermined conditions, as a limited learned feature that is valid only when these conditions are satisfied.
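A limited learned feature record of this kind can be sketched as follows. The field names, the hour range, and the weather set are all illustrative assumptions, not the actual data layout.

```python
# Sketch of a "limited learned feature": the stored record carries
# limiting conditions (time and/or weather), and the feature is only
# treated as valid when the current conditions satisfy them.

def is_limited_feature_valid(feature, current_hour, current_weather):
    """A shadow-like feature is usable only under the conditions in
    which it was repeatedly recognized; ordinary features are always valid."""
    cond = feature.get("limiting_conditions")
    if cond is None:
        return True  # not a limited learned feature
    hour_ok = cond["hour_range"][0] <= current_hour <= cond["hour_range"][1]
    weather_ok = current_weather in cond["weather"]
    return hour_ok and weather_ok
```

At matching time, a limited learned feature that fails this check is simply skipped as a recognition target, so its time- and weather-dependent appearance does not cause recognition failures.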
Specifically, it is possible to use this recognition target object as a limited learned feature by including these limiting conditions in the feature attribute information Ab, which will be described below. A feature such as a shadow has a strong characteristic as a recognition target object because the contrast with the road surface is high, but the point that the appearance thereof is influenced by the time and weather is a cause of instability. However, features can be used as advantageous learned features by using such features as limited learned features that take into consideration predetermined conditions such as the time and the weather during which such features appear.
In this connection, the learned feature extracting unit 31 stores the generated recognition position information Aa in association with the feature attribute information Ab, which represents each of the attributes of the recognition target object, in order to make a recognition target object that has been successfully recognized by image recognition distinguishable from other recognition target objects. The attributes of the recognition target object that are included in the feature attribute information Ab may be any attributes that distinguish one recognition target object from another. Therefore, for example, the feature attribute information Ab is provided with one or more sets of information selected from among: the type of characteristic that serves as a recognition target object, that is, a classification such as color, edges, or noise; the specific shape and size thereof, in the case that shape recognition has been carried out; the link ID of the link k on which the recognition target object is present; and the like. In addition, the limiting conditions for the limited learned features, such as the time and weather, may also be included in the feature attribute information Ab. The information that structures the feature attribute information Ab for such features is generated based on, for example, the image recognition result for the recognition target object obtained by the image recognition unit 18 and the vehicle position information P during the acquisition of the image information G used for the image recognition processing.
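The pairing of Aa and Ab can be illustrated with a simple record constructor. Every field name here is a hypothetical stand-in; the source does not specify the storage format.

```python
# Illustrative sketch of associating the recognition position
# information Aa with the feature attribute information Ab, so that
# one recognition target object is distinguishable from another.

def make_recognition_record(range_index, learning_value, characteristic,
                            link_id, shape=None, limiting_conditions=None):
    """Build one learning-database entry: Aa (position range + learning
    value) associated with Ab (distinguishing attributes)."""
    aa = {"position_range": range_index, "learning_value": learning_value}
    ab = {"characteristic": characteristic,  # e.g. color, edges, noise
          "link_id": link_id,
          "shape": shape,                    # present if shape recognition ran
          "limiting_conditions": limiting_conditions}  # for limited features
    return {"Aa": aa, "Ab": ab}
```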
The estimated position determining unit 34 functions as an estimated position determining device that determines the estimated position pg (refer to
A feature information managing unit 35 carries out the management of the feature information F that has been stored in the feature database DB2 based on the learned results for the target features that have been stored in the learning database DB3. In the present example, this feature information managing unit 35 is provided with a feature information generating unit 36 and a feature-to-feature distance determining unit 37. These will be explained separately below.
The feature information generating unit 36 generates learned feature information Fb based on the learned results for features that have been stored in the learning database DB3. Specifically, the feature information generating unit 36 generates learned feature information Fb in which the position information that represents the estimated position pg for each recognition target object, as determined by the estimated position determining unit 34, is associated with the feature attribute information based on the image recognition result for that recognition target object obtained by the image recognition unit 18. The feature attribute information that structures the learned feature information Fb is generated by using the content of the feature attribute information Ab that has been stored in the learning database DB3 in association with the recognition position information Aa for the recognition target object. Thus, the learned feature information Fb, similar to the initial feature information Fa, is generated as position information together with feature attribute information associated therewith. In addition, the learned feature information Fb that has been generated by the feature information generating unit 36 is stored in the feature database DB2. In the present example, learned feature information Fb, which includes the attributes of features such as classifications (stains, grime, cracks, road markings (paint)) and the limiting conditions for limited learned features, is generated by the feature information generating unit 36 and stored in the feature database DB2.
The feature-to-feature distance determining unit 37 determines the feature distance D, which represents the distance between two learned features, based on a plurality of sets of recognition position information Aa for the two learned features. In the present example, the feature-to-feature distance determining unit 37 finds the feature distance D between the two learned features by calculating the distance between the estimated position pg for one learned feature and the estimated position pg for the other learned feature, using the information for the estimated positions pg of the two features that have been determined by the estimated position determining unit 34. In the example shown in
Next, an exemplary feature information collecting method will be explained with reference to
As shown in
When it has been determined that there is feature information F (step #4=YES), the vehicle 50 is traveling along a road for which the feature database DB2 has been prepared. Therefore, the position correction image recognition processing described above is carried out by the image recognition unit 18 (step #5). Specifically, the image recognition unit 18 carries out image recognition processing by setting recognition areas for each of the target features shown by each of the sets of feature information F that have been acquired. When a target feature inside its recognition area has been recognized by image recognition, the correction of the vehicle position information P is carried out by the vehicle position information correcting unit 19 based on the image recognition result for the target feature and the position information for the target feature, which is included in the feature information F. Thus, the position of the vehicle 50 is recognized with a high accuracy (i.e., in the "high accuracy vehicle position recognition state").
In contrast, when it has been determined that there is no feature information F (step #4=NO), the navigation apparatus 1 executes feature information collecting image recognition processing (step #6). First, the image recognition processing for the recognition target objects that are included in the image information G is executed by the image recognition unit 18. When a recognition target object has been recognized by image recognition (step #7=YES), it is determined that there is a feature that will become a candidate for a learned feature, and when no recognition target object has been recognized by image recognition (step #7=NO), the processing ends.
When a recognition target object has been recognized by image recognition (step #7=YES), the recognition position of the feature is derived by the learned feature extracting unit 31 based on the vehicle position information P that has been acquired in step #1. In addition, the position information for the recognition target object and the learning value are stored in the learning database DB3 (step #8). Specifically, the recognition position information Aa that represents the derived recognition position of the feature is generated as a learning value for a predetermined position range to which the recognition position belongs, and as shown in
Next, it is determined whether the learning value that is stored in the learning database DB3 is equal to or greater than the predetermined learned threshold value T1 (step #9). When the learning value is less than the predetermined learned threshold value T1 (step #9=NO), the processing ends. In contrast, when the learning value, which has been stored in the learning database DB3 and serves as the recognition position information Aa for the target feature, is equal to or greater than the predetermined learned threshold value T1 (step #9=YES), the estimated position pg of the feature is determined by the estimated position determining unit 34. Subsequently, the learned feature information Fb, in which the estimated position pg of the recognition target object is associated with the feature attribute information based on the image recognition result, is generated by the feature information generating unit 36 (step #10). The generated learned feature information Fb is stored in the feature database DB2 (step #11).
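Steps #8 through #11 can be combined into one end-to-end sketch. The value of T1, the dictionary/list stand-ins for the learning database DB3 and the feature database DB2, and the midpoint representative point are all assumptions for illustration.

```python
# End-to-end sketch of steps #8-#11: derive the position range for a
# recognition, add a learning value (step #8), check the learned
# threshold T1 (step #9), and when it is reached, generate learned
# feature information Fb at the estimated position pg and store it in
# the feature database (steps #10-#11).

RANGE_WIDTH = 0.5
T1 = 3  # predetermined learned threshold value (assumed)

def process_recognition(learning_db, feature_db, recognition_position,
                        attributes):
    range_index = int(recognition_position // RANGE_WIDTH)
    learning_db[range_index] = learning_db.get(range_index, 0) + 1  # step #8
    if learning_db[range_index] >= T1:                              # step #9
        pg = (range_index + 0.5) * RANGE_WIDTH                      # step #10
        feature_db.append({"pg": pg, "attributes": attributes})     # step #11
```

Running this over three noisy recognitions of the same stain shows the learning values converging on one range and a single learned feature being emitted.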
While various features have been described in conjunction with the examples outlined above, various alternatives, modifications, variations, and/or improvements of those features and/or examples may be possible. Accordingly, the examples, as set forth above, are intended to be illustrative. Various changes may be made without departing from the broad spirit and scope of the underlying principles.
For example, in the example described above, as shown in
Note that this method can be used to determine whether learning should be carried out. For example, when water drops adhere to the lens of the imaging device 11 during inclement weather, the image information G deteriorates. As a result, the distribution of the recognition result will not change sufficiently, and thus the recognition results at this time may be excluded from the stored target objects for learning.
Road markings that are not registered as initial feature information due to having a low use frequency but have a simple shape can be used as features that have a characteristic as a recognition target object. It is possible to set the shape of such a road marking to a predetermined shape and set this predetermined shape as a recognition target object. For example, a triangular mark that shows “there is a through street ahead” and characters that are drawn on the road surface as instruction indicators and the like correspond to such road markings.
A characteristic as a recognition target object may be set to "the proportion of white on the road surface," and based on the result of carrying out the extraction processing for a predetermined color, the possibility that a feature having a characteristic as a recognition target object is a road marking may be determined. Then, when the possibility that the feature is a road marking is high, a shape recognition algorithm for road markings that have a high frequency of appearance (pedestrian crossings, stop lines, and the like) may be executed, and higher accuracy features may be collected.
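This two-stage idea, a cheap white-proportion screen followed by expensive shape recognition only when warranted, can be sketched as below. The threshold value is an assumption for illustration.

```python
# Hedged sketch: estimate the proportion of white pixels on the road
# surface from the color extraction result, and only treat the feature
# as a probable road marking (triggering shape recognition) when that
# proportion is high. The threshold is an illustrative assumption.

WHITE_RATIO_THRESHOLD = 0.15  # assumed

def likely_road_marking(white_pixel_count, total_road_pixels):
    """Return True when the extracted white proportion suggests a
    painted road marking rather than a stain, grime, or a crack."""
    if total_road_pixels <= 0:
        return False
    return white_pixel_count / total_road_pixels >= WHITE_RATio_THRESHOLD if False else white_pixel_count / total_road_pixels >= WHITE_RATIO_THRESHOLD
```

The design choice is a cost one: the cheap ratio test gates the expensive per-marking shape recognition algorithms so they run only on promising image regions.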
In the example described above, the recognition position of a recognition target object that is represented by the recognition position information Aa is derived as information that represents the position of the feature on the road, based on the vehicle position information P. However, for example, setting the recognition position that is represented by the recognition position information Aa to the position of the vehicle 50 itself, as represented by the vehicle position information P during the acquisition of the image information G including the recognition target object, is also possible. In this case, when the learned feature information Fb is generated by the feature information generating unit 36, it is advantageous to calculate, relative to the position of the vehicle 50, the position of the feature on the road that is the recognition target object in the image information G, and to use that on-road position of the feature as the position information included in the learned feature information Fb.
In the example described above, a learning value of one point is added each time that image recognition is successful. However, a number of points that differs according to the recognition results may be added instead. For example, as explained using
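A recognition-dependent allocation of learning values can be sketched as follows. The specific weighting tiers are an assumption for illustration; the source only states that the number of points may differ according to the recognition results.

```python
# Sketch of recognition-dependent learning values: stronger recognition
# results (larger characteristic amounts) earn more points, so clearly
# characterized features reach the learned threshold T1 sooner. The
# tiers below are illustrative assumptions.

def learning_points(characteristic_amount, condition_threshold):
    """Allocate learning points based on how far the characteristic
    amount exceeds the predetermined learning condition threshold."""
    if characteristic_amount < condition_threshold:
        return 0  # learning condition not satisfied; nothing stored
    if characteristic_amount >= 2 * condition_threshold:
        return 3  # strongly characterized recognition result
    return 1      # ordinary successful recognition
```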
In the example described above, the entire structure of the feature information collecting apparatus 2 is mounted in the vehicle 50. However, as shown in
In each of the examples described above, the image recognition apparatus 3 and the vehicle position recognition apparatus 4 are used in the navigation apparatus 1. However, the range of application of the present invention is not limited to this, and it is naturally possible for these apparatuses to be used for other applications, such as a vehicle travel control apparatus.
Number | Date | Country | Kind |
---|---|---|---|
2007-091323 | Mar 2007 | JP | national |