1. Field of the Invention
The present invention is directed to image processing generally and image classification and object recognition specifically.
2. Description of the Related Art
Object identification based on image data typically involves applying known image processing techniques to enhance certain image characteristics and to match the enhanced characteristics to a template. For example, in edge matching, edge detection techniques are applied to identify edges, and edges detected in the image are matched to a template. The problem with edge matching is that edge detection discards a lot of useful information. Greyscale matching tries to overcome this by matching the results of greyscale analysis of an image to templates. Alternatively, image gradients, histograms, or results of other image enhancement techniques may be compared to templates. These techniques can be used in combination. Alternative methods use feature detection such as the detection of surface patches, corners and linear edges. Features are extracted from both the image and the template object to be detected, and then these extracted features are matched.
The existing techniques suffer from various shortcomings, such as an inability to deal well with natural variations in objects, for example variations arising from changes in viewpoint, size and scale, or from translation and rotation of objects. Accordingly, an improved method of object detection is needed.
It is an object to provide a novel system and method for object identification that obviates or mitigates at least one of the above-identified disadvantages of the prior art.
According to an aspect, a method of object classification at a computing device can comprise:
The method can further comprise classifying the object based on the objective measures. The method can further comprise maintaining the parameters, connectivities and sub-objects as a primary multi-dimensional data structure and maintaining the objective measures as a secondary multi-dimensional data structure. The method can also comprise decomposing the sub-objects until each sub-object is a primitive object.
Decomposing can be repeated on the sub-objects for n times where n is an integer >1. The parameters can comprise one or more of sensory data measures and derived physical measures. The sensory data measures can comprise one or more of tone, texture and gray value gradient. The data can be received from a sensing device. The data can also be received from non-imaging sources. Generating of the objective measures can include determining an occurrence or co-occurrence of sub-objects, parameters and connectivities.
Generating at least one objective measure can further comprise:
Linking can be performed based on the connectivities. The connectivities can include one or more of a spatial, temporal or functional relationship between a plurality of sub-objects. The classification can be based on a rule based association of the objective measures. Generating of the objective measures can include pattern analysis of the parameters.
The method can further comprise:
Generating at least one objective measure can be further based on the environment parameters. The environment sub-objects and the sub-objects can be linked and at least one of the at least one objective measure can be based on the linkage between the sub-objects and the environment sub-objects.
Another aspect provides a computing device for object classification. The computing device typically comprises a processor configured to:
The processor can be further configured to classify the object based on the objective measures. The processor can also be configured to decompose the sub-objects until each sub-object is a primitive object. The processor can also be configured to:
The processor can be further configured to:
The processor can be further configured to:
maintain said parameters, connectivities and sub-objects as a primary multi-dimensional data structure; and
maintain said objective measures as a secondary multi-dimensional data structure.
These together with other aspects and advantages which will be subsequently apparent, reside in the details of construction and operation as more fully hereinafter described and claimed, reference being had to the accompanying drawings forming a part hereof, wherein like numerals refer to like parts throughout.
Referring back to
In variations, data corresponding to a data collection area can be obtained from other data sources 105 besides a sensing device. For example, the data can be manually derived to correspond to an area, such as in the case of a drawing or a tracing, or can be represented by any other graphical data, such as data stored within a geo-spatial information system. In other variations, sources producing non-image, non-graphical data can be used, such as an array of measurements of area and/or object dimensions, or of other spatially distributed or non-spatially recorded material properties. In other variations, data can be derived from the results of a number of processing steps performed on original data collected. In further variations, data can be derived from statistical or other alphanumerical data stored in array form that has been derived from real objects. It will now occur to those of skill in the art that there are various other sources of data that can be used with system 100.
A data collection area can be any area, microscopic or macroscopic, corresponding to which data can be collected, derived or consolidated. Accordingly, an area may be comprised of portions of land, sea, air and space, as well as areas within structures such as areas within rooms, stadiums, swimming pools and others. An area may be comprised of portions of a man made structure such as portions of a building, a bridge or a vehicle. An area may also be comprised of portions of living beings such as a portion of an abdomen, or tree trunk, and may include microscopic areas such as a cell culture or a tissue sample.
An area can contain objects and environments surrounding the objects. For example, an object can be any man-made structure or any part or aggregate of a man-made structure such as a building, a city, a bridge, a road, a railroad, a canal, a vehicle, a ship or a plane, as well as any natural structure or any part or aggregate of natural structures such as an animal, a plant, a tree, a forest, a field, a river, a lake or an ocean. An environment can comprise any entities within the vicinity of the object, and can comprise any man-made or natural structures, or part or aggregate thereof, such as vehicles, buildings, infrastructure or roads, as well as animals, plants, trees, forests, fields, rivers, lakes or oceans.
For example, in an embodiment, an object can be one or more machine parts being used in an assembly line, whereas an environment could consist of additional machine parts, portions of the assembly line and other machines and identifiers within the vicinity of the machine parts that comprise the object. In another embodiment, an object can be any part of a body, such as an organ, a bone, a tumor or a cyst, and an environment could comprise tissues, organs and other body parts within the vicinity of the object. In yet other embodiments, an object can be a cell, a collection of cells or cell organelles, whereas an environment could be the cells and other tissue within the vicinity of the object. In other embodiments, an object can be a single datum, a set of data, or a pattern of data, surrounded by other data in array form. As will now occur to those of skill in the art, a data collection area comprising an object and an environment can include any object and environment at any scale, ranging from microscopic, such as cells, to macroscopic, such as cities.
Data 56 obtained by at least one data source 105 can be transferred to an apparatus 60 for processing and interpreting in accordance with an embodiment of the invention. In variations, apparatus 60 can be integrated with the data sources 105, or located remotely from the data sources 105. In further variations, data 56 can be further processed either prior to receipt by apparatus 60 or by apparatus 60 prior to performing other operations. For example, statistical measures can be taken across the array of data 56 originally recorded. As a further example, in the case of a radar image derived data set, statistical data sets derived from the original radar image pixel values can be generated for transfer to apparatus 60 for processing and interpreting. Other variations will now occur to those of skill in the art.
Apparatus 60 can be based on any suitable computing environment, and the type is not particularly limited so long as apparatus 60 is capable of receiving data 56 and is generally operable to interpret data 56 and to identify object 40 and environment 44. In the present embodiment apparatus 60 is a server, but can be a desktop computer, client, terminal, personal digital assistant, smartphone, tablet or any other computing device. Apparatus 60 comprises a tower 64, connected to an output device 68 for presenting output to a user and one or more input devices 72 for receiving input from a user. In the present embodiment, output device 68 is a monitor, and input devices 72 include a keyboard 72a and a mouse 72b. Other output devices and input devices will occur to those of skill in the art. Tower 64 is also connected to a storage device 76, such as a hard-disc drive or redundant array of inexpensive discs (“RAID”), which contains reference data for use in interpreting data 56, further details of which will be provided below. Tower 64 typically houses at least one central processing unit (“CPU”) coupled to random access memory via a bus. In the present embodiment, tower 64 also includes a network interface card and connects to a network 80, which can be an intranet, the Internet or any other type of network for interconnecting a plurality of computers, as desired. Apparatus 60 can output results generated by apparatus 60 to network 80 and/or apparatus 60 can receive data, in addition to data 56, to be used to interpret data 56.
Referring now to
Beginning first at 205, data is received from a data source corresponding to a data collection area. A data collection area can be any area, microscopic or macroscopic, or other form of two or multi-dimensional arrangement of original data, regarding which data can be collected, created and consolidated. Referring to
Continuing with the example embodiment shown in
Sensing devices can produce a variety of data type outputs, such as images derived from the electromagnetic spectrum, including optical, infrared, radar and others. Data can also, for example, be derived from magnetic or gravitational sensors. Additionally, data produced or derived can be two or three dimensional, such as three dimensional relief data from LIDAR, or of higher dimensionality, such as n-dimensional data sets in array form, where n is a positive integer.
It will now occur to a person of skill in the art that sensing devices 52 can be operationally located in various locations, remotely or proximally, around and within area 48. For example, for macroscopic scale areas 48, sensing devices 52 can be located on structures operated in space, such as satellites, in air, such as planes and balloons, on land, such as cars, buildings or towers, on water, such as boats or buoys, and in water, such as divers or submarines. Sensing devices 52 can also be operationally located on natural structures such as animals, birds, trees and fish. For smaller or microscopic scale areas 48, sensor devices can be operationally located on imaging analysis systems such as microscopes, within rooms such as MRI suites, on robotic manipulators and on other machines such as in manufacturing assemblies. Other locations will now occur to those of skill in the art.
Continuing with the example embodiment, data 56 is received at apparatus 60 from device 52. In the present example embodiment, data 56 includes a photographic image of area 48, but in other variations, as will occur to those of skill in the art, data 56 can include additional or different types of imaging data or data corresponding to other representations of area 48, alone or in combination. In variations where multiple types or sets of data are present, the different types or sets of data can be combined prior to performing the other portions of method 200, or can be treated separately and combined, as appropriate, at various points of method 200.
Next, at step 210, an object is detected by processing the data. In an embodiment, object detection can result in a distinct pattern of elements, or an object data signature, on the basis of determining a boundary for the data representing the object. In a variation, the detected object can be extracted from the data 56 enabling, for example, reduced data storage and processing requirements. Referring back to
Object detection can be performed either automatically or manually. In an embodiment, apparatus 60 is operable to apply to data 56, various data and image processing operations, alone or in combination, such as edge detection, image filtering and segmentation to perform automatic object detection. The specific operations and methods used for automatic object detection can vary, and alternatives will now occur to those of skill in the art.
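By way of a non-limiting illustration only, one simple form of automatic detection, thresholding followed by connected-component segmentation, can be sketched as below. The array values, threshold and function names are hypothetical and are not part of the claimed method:

```python
import numpy as np

def detect_objects(image, threshold):
    """Segment an image into candidate objects by thresholding
    followed by 4-connected component labelling (BFS flood fill)."""
    mask = image > threshold
    labels = np.zeros(image.shape, dtype=int)
    current = 0
    for seed in zip(*np.nonzero(mask)):
        if labels[seed]:
            continue
        current += 1
        stack = [seed]
        labels[seed] = current
        while stack:
            r, c = stack.pop()
            for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                if (0 <= nr < mask.shape[0] and 0 <= nc < mask.shape[1]
                        and mask[nr, nc] and not labels[nr, nc]):
                    labels[nr, nc] = current
                    stack.append((nr, nc))
    return labels, current

# Hypothetical 5x5 "image" with two bright regions.
img = np.array([[9, 9, 0, 0, 0],
                [9, 9, 0, 0, 0],
                [0, 0, 0, 0, 0],
                [0, 0, 0, 8, 8],
                [0, 0, 0, 8, 8]], dtype=float)
labels, n = detect_objects(img, threshold=5)
print(n)  # number of distinct object data signatures found
```

Each labelled region can then serve as an object data signature for the subsequent parameterization and decomposition steps.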
Manual detection of an object 40 can be performed by an operator using input devices 72 to segment object 40 by identifying, for example, the pixels comprising object 40, or by drawing an outline around object 40, or simply clicking on object 40. The specific operations and methods used for object detection can vary, and alternatives will now occur to those of skill in the art.
In a variation, detection can be assisted based on pre-processing data 56. Pre-processing can generate sets of enhanced or derived data that can replace or accompany data 56. For example, data 56 can be enhanced in preparation for object detection. In other variations, data 56 can be filtered. In yet other variations, imaging measures can be performed such as texture, color and gradient, as well as physical measures on basic shapes such as shape, size and compactness. Accordingly, object detection can be performed based on the pre-processed data.
Next, at 212, object 40 is parameterized. To accomplish parameterization apparatus 60 is operable to calculate measures for object 40, on the basis of object data signature 40′ for example. For example, apparatus 60 can derive certain physical measures such as size and compactness for object 40 based on object data signature 40′. In one variation, an object 40 can be characterized, where appropriate, as one of a basic geometric shape such as a circle, rectangle, trapezoid, multi-sided, irregular, sphere, doughnut, and others that will now occur to a person of skill in the art. Once an object 40 is characterized as a basic shape, certain physical measures can be derived such as radius, length of sides, ratio of side lengths, area, volume, size, compactness and others that will now occur to a person of skill in the art. In other variations, measures can be calculated based on sensory data characteristics that can be derived for an object 40 from the modality of data 56. For example, for photographic images, the object can be translated, through image processing, into composition of color, gray value gradients, tone measures, texture measures and others that will now be apparent to those of skill in the art.
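As a non-limiting sketch of deriving physical measures from an object data signature, the following computes area, perimeter and a compactness measure from a binary mask. Taking compactness as 4πA/P² (1.0 for an ideal circle), and the function and key names, are illustrative assumptions:

```python
import numpy as np

def parameterize(mask):
    """Derive simple physical measures for an object data signature
    given as a binary mask: area, perimeter and compactness."""
    area = int(mask.sum())
    # Perimeter: count exposed edges of foreground pixels against a
    # zero-padded border, one sweep per direction.
    padded = np.pad(mask.astype(int), 1)
    perimeter = 0
    for shift in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        rolled = np.roll(padded, shift, axis=(0, 1))
        perimeter += int(((padded - rolled) == 1).sum())
    return {"area": area,
            "perimeter": perimeter,
            "compactness": 4 * np.pi * area / perimeter ** 2}

# A 4x4 solid square as a stand-in object data signature.
square = np.ones((4, 4), dtype=bool)
p = parameterize(square)
print(p["area"], p["perimeter"])  # 16 16
```

Measures such as these can populate the parameter entries used later when generating objective measures.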
Continuing with method 200, at 215, sub-objects of an object are detected. In an embodiment, an analysis of the previously detected object data signature can be performed to determine whether the object can be further decomposed into a second level of sub-objects, i.e. whether the object is a higher-level object. Accordingly, an object is either identified as a primitive object, which does not have any detectable sub-objects, or a higher-level object which does have detectable sub-objects. The identification of an object as a primitive object or as a higher-level object can be accomplished automatically or manually using various data and image processing algorithms, alone or in combination, such as edge detection, image filtering and segmentation to perform sub-object detection. The specific operations and methods used for object detection can vary, and alternatives will now occur to those of skill in the art.
In a variation, detection of sub-objects can be assisted based on pre-processing the detected object or object data signature. Pre-processing can generate sets of enhanced or derived data that can replace or accompany the object and its data signature. For example, the object data signature can be enhanced in preparation for sub-object detection. In other variations, the object data signature can be filtered. In yet other variations, when the object is part of a digital image, imaging measures can be performed such as texture, color, gradient, histogram, or other measures, such as statistical measures, as well as physical measures on basic shapes such as shape, size and compactness. In variations, such pre-processing can be applied to any data in, for example, an array form representing the object, and the results of such pre-processing can be stored and utilized as additional derived data sets accompanying the original data containing the original object during object classification and recognition. Accordingly, sub-object detection can be performed based on the pre-processed data.
Continuing with the example embodiment, to accomplish sub-object detection apparatus 60 is operable to apply to an object data signature various data and image processing algorithms.
Referring now to
At 220, apparatus 60 decomposes object 40 into the detected sub-objects and their connectivities. In the present embodiment, this represents the second level of decomposition and results in the storage of second level sub-object data signatures 440′ in a data structure capable of storing multi-dimensional data structures, either separately, or in combination with data 56. The second level decomposition can be based on object data signature 40′ and/or second level sub-object data signatures 440′.
In general, an object 40 can be decomposed into all of the sub-objects detectable in data 56, or can be decomposed into a subset of the detectable sub-objects to increase the efficiency of the algorithm. The selection of the subset of sub-objects can be based, at least in part, on the type of object being identified, the modality of data 56, or the type of image sensing device 52 or data source 105 used in obtaining data 56, which can thus be of imaging or non-imaging type including any type of data derived from data 56, so as to increase the accuracy of object identification. For example, in some variations, sub-objects that are frequently found in most objects can be avoided to increase the efficiency of the algorithm without reducing accuracy, since their contribution to object identification can be relatively small.
Sub-object connectivities define how each sub-object is connected or related to other sub-objects in its level, including itself where appropriate. For example, connectivities can define physical connections where second level sub-objects 440 are directly connected to each other, as with hood 440-2 and side panel 440-3. In other variations, connectivities can define relative physical placement in two or three dimensions, such as physical distance between sub-objects, or relative distance, as in the case of sub-object side panel 440-3 and sub-object rear wheel 440-5, which are adjacent to each other, or as in the case of hood 440-2 and rear wheel 440-5, which are separated by one other sub-object. Connectivities can also define how sub-objects are functionally related, including chains of logical interdependencies. For example, in the example shown in
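The spatial form of such connectivities can be illustrated with a short sketch that parameterizes pairwise distances between sub-object centroids; the sub-object names and coordinates below are hypothetical and for illustration only:

```python
import math

# Hypothetical centroids for second-level sub-objects of a vehicle.
centroids = {
    "windshield": (2.0, 3.0),
    "hood":       (1.0, 1.0),
    "side_panel": (4.0, 1.0),
}

def connectivity_matrix(centroids):
    """Parameterize spatial connectivities as a symmetric matrix of
    pairwise Euclidean distances between sub-object centroids."""
    names = sorted(centroids)
    matrix = [[math.dist(centroids[a], centroids[b]) for b in names]
              for a in names]
    return names, matrix

names, matrix = connectivity_matrix(centroids)
# The diagonal is zero: each sub-object is at distance 0 from itself.
print(matrix[0][0])  # 0.0
```

Temporal or functional connectivities could be parameterized analogously, with the matrix entries holding time offsets or logical-dependency weights instead of distances.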
Referring now to Table I, and continuing with the example embodiment of
Continuing with Table I, row 2 shows the relative spatial relationship between sub-object Windshield 440-1 and other sub-objects identified in
At 225, apparatus 60 is operable to parameterize at least some of the second level sub-objects 440 and their connectivities 450. To accomplish parameterization apparatus 60 is operable, for example, to calculate measures on the basis of sub-object data signatures 440′ and connectivities 450. For example, apparatus 60 can derive certain physical measures such as size and compactness for sub-objects 440 based on second level sub-object data signatures 440′. In one variation, a sub-object 440 can be characterized, where appropriate, as one of a basic geometric shape such as a circle, rectangle, trapezoid, multi-sided, irregular, sphere, doughnut, and others that will now occur to a person of skill in the art. Once a sub-object 440 is characterized as a basic shape, certain physical measures can be derived such as radius, length of sides, ratio of side lengths, area, volume, size, compactness and others that will now occur to a person of skill in the art. In other variations, measures can be calculated based on sensory data characteristics that can be derived for each sub-object 440 from the modality of data 56. For example, for photographic images, the sub-objects can be translated, through image processing, into composition of color, gray value gradients, tone measures, texture measures and others that will now be apparent to those of skill in the art.
Referring to
Referring now to Table II, a parameterized form of connectivities 450 is indicated in the form of a matrix that shows the relative logical distance between sub-objects 440, as calculated in the present embodiment.
Although, in the present example embodiment, a table was used to represent parameterized connectivities 450, it will now occur to a person of skill in the art that various other representations, both quantitative and qualitative, and data structures such as multi-dimensional matrices, or databases, or a combination thereof, can also be used to represent and store parameterized connectivities 450 and other parameters. Furthermore, parameterized sub-objects 440, parameterized connectivities 450, sub-object data signatures 440′, and other relevant data can be stored separately, in combination, and in combination with or linked to data 56 and data related to object 40, including object data signature 40′ and parameters derived from it, resulting in a highly multi-dimensional data structure or database corresponding to object 40. Moreover, it will also occur to a person of skill in the art that although in the present embodiment the type of connectivity shown is relative spatial distance, in other variations other types of connectivities can be calculated, represented and stored, including those based on spatial, temporal and functional relationships of sub-objects.
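One possible, purely illustrative representation of such a primary multi-dimensional structure is a nested mapping that keeps parameters, sub-objects and connectivities together across decomposition levels; all names and values below are hypothetical:

```python
# A minimal sketch of a "primary" multi-dimensional structure:
# parameters, connectivities and sub-objects kept together, with
# nesting depth mirroring the decomposition level.
primary = {
    "object": "vehicle",
    "parameters": {"shape": "rectangle", "compactness": 0.62},
    "sub_objects": {
        "windshield": {"parameters": {"shape": "trapezoid"}},
        "wheel": {"parameters": {"shape": "circle", "radius": 0.35},
                  "sub_objects": {"bolt": {"parameters": {"shape": "circle"}}}},
    },
    # Connectivities as (sub_object_a, sub_object_b) -> relative distance.
    "connectivities": {("windshield", "wheel"): 2},
}

def levels(node, depth=0):
    """Walk the structure, yielding (depth, parameters) pairs,
    mirroring a level-by-level traversal of the decomposition."""
    yield depth, node["parameters"]
    for child in node.get("sub_objects", {}).values():
        yield from levels(child, depth + 1)

print(max(d for d, _ in levels(primary)))  # deepest decomposition level: 2
```

In practice such a structure could equally be realized as a multi-dimensional matrix or a relational or graph database, as the passage above notes.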
Referring back to
Referring back to
At 220 apparatus 60 decomposes sub-object 440-6 into the detected sub-objects and connectivities. Continuing with the example embodiment of
At 225, apparatus 60 is operable to parameterize at least some of the first, or lowest, level sub-objects 4440 and connectivities 4450. In the present example embodiment, parameterization is accomplished by apparatus 60 by calculating measures on the basis of sub-objects 4440 as well as connectivities 4450. Referring to
Referring now to Table IV, a parameterized form of connectivities 4450 is indicated in the form of a matrix that shows the relative logical distance between sub-objects 4440, as calculated in the present example embodiment.
Referring back to
Referring now to
In the present embodiment, method 200 is performed by apparatus 60 until all detected sub-objects have been decomposed into primitive sub-objects; namely until all detected higher-level objects have been decomposed into primitive objects. In a variation, the decomposition can be repeated until a predetermined number “n” of iterations of the algorithm has been reached. Where n is set to one, an object is decomposed once into its immediate sub-objects, namely the second level of sub-objects. Where n is set to an integer greater than one an object and its sub-objects will iterate through method 200 n times, as long as there are higher-level sub-objects available, generating n-level decomposition of the object. In a further variation, the object 40 can be decomposed only to a level of decomposition that matches the decomposition level of a stored decomposed object that is used as a reference for the decomposition and processing.
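The level-by-level decomposition with an iteration limit n can be sketched as follows; the detector function and part names are hypothetical stand-ins for the sub-object detection step at 215:

```python
def decompose(obj, detect_sub_objects, n):
    """Recursively decompose an object until every leaf is primitive
    (detect_sub_objects returns an empty list) or until n levels of
    decomposition have been generated."""
    if n == 0:
        return {"object": obj, "sub_objects": []}
    subs = detect_sub_objects(obj)
    return {"object": obj,
            "sub_objects": [decompose(s, detect_sub_objects, n - 1)
                            for s in subs]}

# Hypothetical detector: a vehicle yields wheels, a wheel yields a
# bolt, and bolts are primitive (no detectable sub-objects).
parts = {"vehicle": ["wheel", "wheel"], "wheel": ["bolt"]}
tree = decompose("vehicle", lambda o: parts.get(o, []), n=2)
print(len(tree["sub_objects"]))  # 2
```

With n set to 1 only the immediate (second level) sub-objects would be produced; with larger n the recursion continues while higher-level sub-objects remain, as described above.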
Referring now to Fig. 5, a method for object recognition or identification is shown generally at 500. In order to assist in the explanation of the method, it will be assumed that method 500 is operated using system 100 as shown in
At 505 a decomposed object is received by apparatus 60. The received object can be represented by one or more data structures and, as described above, can include representation of each object, all or some of its sub-objects, connectivities and parameters derived from all or some of its sub-objects and connectivities of sub-objects.
Continuing with
Objective measures include data that represents occurrence or co-occurrence of sub-objects, connectivities and related measures, either individually or as combinations, and can be maintained as entries within a data storage matrix, such as a multi-dimensional database. Objective measures can further include results of additional calculations and abstractions performed on the parametric measures, objects, sub-objects and corresponding data signatures and connectivities related to those sub-objects. In a variation, the sub-objects and connectivities recorded during the object decomposition can be entered into the “primary” custom designed multi-dimensional database as patterns of database entries and connectivity networks. In a further variation, a classification measure can be the decomposed object data structure received for the sub-objects used.
A set of objective measures can be represented as a set within a secondary multi-dimensional data structure such as a multi-dimensional matrix, representing a multi-dimensional feature space. It will now occur to those of skill in the art that various other operations and calculations, such as inference analysis, can be performed on decomposed object data structure to generate additional data for use as part of a classification measure, and that the resulting set of objective measures can be stored, either at storage 76 or other storage operably connected to apparatus 60, for example, through network 80, using various representations and data structures including multi-dimensional matrices or data structures.
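As an illustrative sketch of generating objective measures, the following counts occurrences and co-occurrences of sub-object types per decomposition level; the sub-object names are hypothetical, and the pair of Counters stands in for entries of the secondary multi-dimensional structure:

```python
from collections import Counter
from itertools import combinations

def objective_measures(decomposition_levels):
    """Generate simple objective measures: occurrence counts of
    sub-object types, and co-occurrence counts of unordered pairs
    of types appearing in the same decomposition level."""
    occurrence, co_occurrence = Counter(), Counter()
    for level in decomposition_levels:
        occurrence.update(level)
        co_occurrence.update(frozenset(p)
                             for p in combinations(sorted(set(level)), 2))
    return occurrence, co_occurrence

# Hypothetical decomposition of the vehicle example, one list per level.
decomposition = [["windshield", "hood", "wheel", "wheel"],
                 ["bolt", "nut", "bolt"]]
occ, co = objective_measures(decomposition)
print(occ["wheel"], co[frozenset({"bolt", "nut"})])  # 2 1
```

Parameterized connectivities and parametric measures could be folded into the same structure as additional dimensions, yielding the multi-dimensional feature space described above.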
Continuing with the example object 40 of the example embodiment, a set of objective measures, starting at the lowest level of decomposition, which in the present embodiment is the second decomposition level, includes the co-occurrence of at least two of the three sub-objects 4440, the measures generated for each sub-object 4440, radius and circumference, and the parameterized connectivities 4450 of Table IV.
Referring back to
In other variations, other classifications, as they will now occur to a person of skill in the art can be performed. For example, objective measures related to different objects that are typically part of a database stored either at storage 76 or other storage operably connected to apparatus 60 through, for example, network 80 can be retrieved. Once the reference objective measures are retrieved, they can be compared against the calculated objective measures for the object currently being identified.
In an embodiment, the comparison can be a simple comparison of each objective measure for occurrence or co-occurrence. In a variation where all classification measures are quantitative, a vector operation on multiple classification measures can constitute the comparison. In a further variation, when stored as a pattern, the co-occurrence of elements, as well as the connectivities, can be described as abstract patterns such that patterns of co-occurrence of elements and across connectivities become apparent. The comparison in these variations can comprise analyzing patterns across the different dimensions, and determining sets of comparison results characterizing these patterns, for example as vectors characterizing those patterns.
It will now occur to those of skill in the art that the comparison can include many different operations performed on various multi-dimensional sets including quantitative or qualitative elements.
The result of the recognition can be an inference indicative of the degree of confidence on the basis of classification and recognition. In the present embodiment, the results of the comparison are indicated as a 0 or a 1, with 1 indicating highest confidence and 0 indicating no confidence. In variations, probabilities can be generated to indicate the degree of confidence. In yet other variations, vectors of results can be generated, each element of which indicates various dimensions of confidence, such as confidence in sub-object presence, connectivities, pattern matching results and/or other measures. It will now occur to those of skill in the art that the comparison results can include many different results, including quantitative or qualitative results.
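One of the variations above, generating a graded degree of confidence from quantitative objective measures, might be sketched as a vector comparison. Treating the measures as occurrence counts and using cosine similarity are illustrative assumptions, not the claimed comparison:

```python
import math

def confidence(measured, reference):
    """Compare an object's objective-measure vector against a stored
    reference by cosine similarity, yielding a confidence in [0, 1].
    The 0-or-1 result of the embodiment corresponds to thresholding
    this graded value."""
    keys = sorted(set(measured) | set(reference))
    a = [measured.get(k, 0) for k in keys]
    b = [reference.get(k, 0) for k in keys]
    dot = sum(x * y for x, y in zip(a, b))
    norm = (math.sqrt(sum(x * x for x in a))
            * math.sqrt(sum(y * y for y in b)))
    return dot / norm if norm else 0.0

# Hypothetical measures: sub-object occurrence counts.
observed = {"wheel": 4, "windshield": 1, "hood": 1}
vehicle_ref = {"wheel": 4, "windshield": 1, "hood": 1, "bumper": 0}
print(round(confidence(observed, vehicle_ref), 3))  # 1.0
```

A vector of such values, one per dimension of confidence (sub-object presence, connectivities, pattern matches), would realize the multi-dimensional variation mentioned above.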
In further variations, classification and recognition can be applied to each sub-object, and the results of such operations, as well as any recognition results, stored. Accordingly, during reiteration of method 500, the recognized sub-objects, their objective measures, their recognition results and other corresponding data can be used for generation of additional objective measures at 510, and subsequently in the classification and recognition of the entire object through the rest of method 500.
Continuing with method 500, at 520, a determination is performed as to whether an object can be classified and recognized. The identification is typically based on the confidence results. In a further variation, recognition at 520 can be delayed until a number of, or all, decomposition levels, as well as the object, are analyzed. In the present embodiment, it will be assumed, for illustrative purposes, that the comparison result is a 0 and that accordingly, method 500 advances to step 522.
At 522 a determination is made as to whether a higher decomposition is available where sub-objects at a higher level of integration are present. The determination is yes if current-level sub-objects form higher-level sub-objects, and accordingly, the current level sub-objects can be linked to form higher level or more highly integrated sub-objects. If the determination is no, accordingly, the highest level of decomposition, namely the greatest level of integration (in this example embodiment the object itself) has been reached and thus the object is not recognized as determined at 535. Since, in accordance with the present example, sub-objects of higher level integration exist, method 500 advances to 525.
At 525, parametric measures associated with sub-objects 4440 are linked on the basis of connectivities to obtain linked objective measures. In a variation, the linking can include all sub-objects 4440 and correlated data, or can be reduced to re-combining the intermediate processing results from just several sub-objects 4440 to generate additional linked objective measures.
Advancing to 510, apparatus 60 generates objective measures based on the sub-objects of the next decomposition level, namely sub-objects 440. In variations, sub-objects from other levels or from a mixture of levels can also be used. In addition, additional classification measures can be generated on the basis of linked parametric measures. In a further variation, classification measures can be linked on the basis of connectivities of the sub-objects 4440 to generate linked classification measures.
Next, at 515, classification and recognition is performed for sub-objects 440. Assuming now that the classification and recognition at 520 yields high confidence recognition, above that of a predetermined threshold, method 500 terminates by identifying the example object as a vehicle at 530.
In the example embodiment, identification is assumed to have occurred when all decomposition levels were analyzed in an iterative manner, one level at a time. In a variation, all sub-objects can be analyzed at once. In other variations, recognition can occur earlier, or only at the primitive level. In further variations, even if recognition occurs at a lower level of decomposition (for example, at the level of nuts and bolts in the example), method 500 can continue to iterate through sub-objects with a higher level of integration (for example, wheels in the example) to further increase confidence in the classification and recognition results. Indeed, in some variations, each iteration of method 500 through sub-objects with a higher level of integration can serve to strengthen confidence.
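The level-by-level iteration described above can be sketched as a simple loop. All names, the list-of-levels representation, and the confidence model below are assumptions for illustration, not the claimed method.

```python
def recognize(levels, threshold=0.8):
    """levels: list of decomposition levels, most primitive first;
    each level is a list of (label, confidence) classification results
    for the sub-objects at that level.

    Returns (label, confidence) once confidence exceeds the threshold,
    or None if the highest level is exhausted without recognition.
    """
    best = None
    for level in levels:
        # Classify the current level; retain the most confident result.
        for label, conf in level:
            if best is None or conf > best[1]:
                best = (label, conf)
        # Recognition can occur before all levels are analyzed ...
        if best and best[1] >= threshold:
            return best
        # ... otherwise proceed to the next, more integrated level.
    return None  # highest level reached without recognition

# Hypothetical example echoing the vehicle illustration
levels = [
    [("fastener", 0.3), ("disc", 0.4)],  # e.g. nuts and bolts
    [("wheel", 0.6)],                    # intermediate integration
    [("vehicle", 0.95)],                 # the object itself
]
```

With the default threshold, recognition only occurs at the level of the object itself; lowering the threshold allows recognition at an intermediate level, matching the variation in which recognition can occur earlier.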
Although methods 200 and 500 were presented in a specific order, they do not have to be performed in exactly the manner presented. In variations, elements from each method can be mixed, and elements within each method can be performed in an order different from that shown. For example, in one variation, an object or individual sub-objects can be classified and recognized at different points in the methods.
Referring now to
Referring to
In another variation of methods 200 and 500, objective measures can be generated as an object is decomposed, and recognition can be determined at each decomposition level before decomposing the object any further.
Referring now to
Referring to
In further variations of methods 200, 500, 600 and 700, not all detected sub-objects are used in the decomposition or recognition processes. Accordingly, even when the data 56 does not allow for detection of all sub-objects, identification can still be accomplished. In further variations, detection of objects and sub-objects can be performed at different resolutions, allowing the methodology to be applied to objects with varying degrees of complexity. In yet other variations, limiting the storage of object and sub-object data to data signatures and parameterized sets of data can reduce the amount of storage needed by abstracting the objects and sub-objects away from the image data. In additional variations, each identified sub-object can be iterated through methods 200 and 500, one by one, resulting in recognized sub-objects that can then be used in the recognition process of the object.
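One way the storage-reduction variation could work is to reduce each parameterized set of data to a fixed-size signature. The signature scheme below (canonical JSON serialization hashed with SHA-256) is purely an assumption for illustration; the variation itself does not prescribe a particular signature function.

```python
import hashlib
import json

def signature(parameters):
    """Reduce a sub-object's parameterized data set to a fixed-size
    signature, abstracting it away from the underlying image data."""
    # Sorting keys gives a canonical serialization, so equivalent
    # parameter sets always produce the same signature.
    canonical = json.dumps(parameters, sort_keys=True).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest()

# Hypothetical parameterized set for a wheel sub-object
params = {"type": "wheel", "radius": 12.0, "centroid": [104, 233]}
sig = signature(params)  # 64-character hex digest, regardless of image size
```

Because the digest length is constant, storage per sub-object no longer grows with the amount of image data behind it.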
In further variations, data 56 can also be analyzed to detect the environment objects 44 surrounding object 40. Accordingly, each environment object can be identified using methods 200, 500, 600 and/or 700 as described above, or variations thereof, and the results of this identification can be used to further improve the identification of object 40. In an embodiment, environment parameters can be generated for environment objects and can be used in generating additional objective measures during object identification. For example, the location and positioning of an object 40 in relation to environment objects 44 can further inform the identification of object 40. In a further variation, environment objects and their sub-objects can be linked to object 40 or to sub-objects of object 40, and these links can be used in determining objective measures.
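An additional objective measure derived from an object's position relative to an environment object could, for instance, capture distance and bearing between the two. This is a hypothetical sketch; the function name and the choice of distance and bearing as environment parameters are assumptions.

```python
import math

def relative_position_measure(obj_centroid, env_centroid):
    """Distance and bearing from object 40 to an environment object 44;
    such environment parameters can feed additional objective measures."""
    dx = env_centroid[0] - obj_centroid[0]
    dy = env_centroid[1] - obj_centroid[1]
    distance = math.hypot(dx, dy)
    bearing = math.degrees(math.atan2(dy, dx))
    return distance, bearing

# e.g. a candidate vehicle (object 40) near a road segment (environment object 44)
d, b = relative_position_measure((100, 100), (130, 140))
```

A small distance to an object identified as a road, for example, could raise confidence that object 40 is a vehicle rather than, say, a boat.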
Once an object 40 and its environment objects 44 are classified, recognized or otherwise identified, they can be indicated on a graphical output device as part of a representation of area 48. The indication can take the form of graphical indicators, text indicators or a combination of both. The representation of area 48 can take the form of a digital map, a photograph, an illustration, or other graphical representation of area 48 that will now occur to those of skill in the art. For example, objects can be outlined or otherwise indicated on a digital map or a digital photograph of area 48 using certain colors for different types of objects 40 or environment objects 44. In this example, one color can be used for indicating objects 40 and environment objects 44 identified as man-made structures, another for objects 40 and environment objects 44 identified as natural structures, and other color-object combinations that will now occur to a person of skill in the art. Further colors or hues can be used to differentiate between different types of man-made structures or natural structures. In this case, dark blue can be used to indicate rivers and light blue, seas, for example. Textual descriptions of the identified objects 40 and environment objects 44 can also be included as part of the graphical representation of area 48. The textual descriptions, such as vehicle, river and others, can appear superimposed on top of the identified objects 40 and environment objects 44, near the identified objects 40 or environment objects 44, or can appear or disappear after a specific trigger action such as a mouse-over, or a specific key or key-sequence activation. It will now be apparent to those of skill in the art that different types of coloring, shading and other graphical or textual schemes can be used to represent identified objects 40 and environment objects 44 within a representation of area 48.
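The color scheme described above can be sketched as a simple lookup from object category to indicator color. The category names and hex colors below are illustrative assumptions; any mapping consistent with the scheme (one color per category, dark blue for rivers, light blue for seas) would serve.

```python
# One color per category of identified object, per the example scheme
COLOR_SCHEME = {
    "man-made": "#d95f02",  # man-made structures
    "natural": "#1b9e77",   # natural structures
    "river": "#08306b",     # dark blue for rivers
    "sea": "#9ecae1",       # light blue for seas
}

def indicator(label, category):
    """Return the graphical and textual indicator for an identified
    object, to be rendered on the representation of area 48."""
    color = COLOR_SCHEME.get(category, "#808080")  # grey for unknown types
    return {"label": label, "color": color}

marker = indicator("vehicle", "man-made")
```

The textual `label` can then be superimposed on or near the object, or shown on a trigger action such as a mouse-over, as described above.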
The many features and advantages of the invention are apparent from the detailed specification and, thus, it is intended by the appended claims to cover all such features and advantages of the invention that fall within its true spirit and scope. Further, since numerous modifications and changes will readily occur to those skilled in the art, it is not desired to limit the invention to the exact construction and operation illustrated and described, and accordingly all suitable modifications and equivalents may be resorted to as falling within the scope of the invention.