The present invention relates generally to the field of digital image processing. In particular, various embodiments of the present invention pertain to the use of scene capture metadata associated with digital image files to provide additional context to the records.
Since the advent of photography, photographers have been capturing interesting subjects and scenes with their cameras. These photographs capture a moment in time at a particular location with specific content. To ensure that this contextual information about the photograph is preserved, photographers have traditionally performed some sort of manual operation. With film-based cameras and photographic prints, a handwritten record was often created by scribing information on the back of the print or perhaps in a notebook. This process is tedious, and many photographers avoid it, leaving countless photographs without the information needed to adequately understand their content.
With the advent of digital photography, the problem remains. While physically scribing on a digital image is impossible, “tagging” an image with ASCII text is supported by many digital image management software programs. Tagging is the process of associating and storing textual information with a digital image so that the textual information is preserved with the digital image file. While this may seem less tedious than writing on the back of a photographic print, it is relatively cumbersome and time-consuming, and is therefore avoided by many digital photographers.
Other digital technologies have been applied to provide scene capture metadata for digital images. Many digital capture devices record the time of capture, which is then included in the digital image file. Technologies such as the Global Positioning System (GPS) and cellular phone networks have been used to determine the photographer's physical location at the time a digital photograph is taken, which is likewise included in the digital image file. Time and location are key pieces of contextual information, but they lack the context a photographer is capable of adding. For example, the time and location (08-12-07 14:02:41 UTC, 42° 20′ 19.92″ N 76° 55′ 39.58″ W) may be recorded with the digital image by the digital capture device. However, such information, by itself, often is not very helpful for photographers.
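Geo-location metadata of the kind shown above is typically recorded in degrees-minutes-seconds (DMS) notation, while the boundary comparisons described later operate on decimal degrees. A minimal sketch of the conversion is shown below; the function name and the exact coordinate string format it accepts are illustrative assumptions, not part of the described system.

```python
import re

def dms_to_decimal(dms: str) -> float:
    """Convert a DMS coordinate such as 42° 20′ 19.92″ N to decimal degrees.

    Hypothetical helper: assumes the degree/minute/second symbols and a
    trailing hemisphere letter (N, S, E, or W).
    """
    m = re.match(r"""(\d+)°\s*(\d+)[′']\s*([\d.]+)[″"]\s*([NSEW])""", dms.strip())
    if m is None:
        raise ValueError(f"unrecognized coordinate: {dms!r}")
    deg, minutes, seconds, hemi = m.groups()
    value = float(deg) + float(minutes) / 60.0 + float(seconds) / 3600.0
    # Southern and western hemispheres are conventionally negative.
    return -value if hemi in "SW" else value
```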
In U.S. Pat. No. 6,914,626 Squibbs teaches a user-assisted process for determining location information for digital images using an independently-recorded location database associated with a set of digital images.
In U.S. Patent Application Publication No. 2004/0183918 Squilla, et al. teach using geo-location information to produce enhanced photographic products using supplemental content related to the location of captured digital images. However, no provision is made for enabling users to access context information for their images.
Accordingly, improved techniques for providing and improving the usefulness of contextual information associated with digital images are needed.
The above described problem is addressed and a technical solution is achieved in the art by systems and methods for processing geo-location information associated with a digital image file, the method implemented at least in part by a data processing system and comprising:
a) receiving a digital image file having at least associated geo-location information relating to the digital image file;
b) providing a venue database that stores geographic boundaries for a plurality of venues;
c) identifying a venue where the digital image file was captured, the venue being identified by at least comparing the geo-location information to the geographic boundaries stored in the venue database; and
d) adding a metadata tag to the digital image file, the metadata tag providing an indication of the identified venue.
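Steps a) through d) above can be sketched as follows. This is an illustrative outline only: the `DigitalImageFile` class, the bounding-box representation of venue boundaries, and the `"Venue"` tag name are assumptions made for the example, and a real venue database would support the richer boundary shapes (circles, polygons) described later.

```python
from dataclasses import dataclass, field

@dataclass
class DigitalImageFile:
    # (lat, lon) scene capture geo-location in decimal degrees (step a)
    geo_location: tuple
    metadata: dict = field(default_factory=dict)

# Hypothetical venue database (step b):
# name -> geographic boundary as (min_lat, min_lon, max_lat, max_lon)
VENUE_DATABASE = {
    "Upstate Racetrack": (42.33, -76.94, 42.35, -76.91),
}

def identify_and_tag_venue(image: DigitalImageFile) -> DigitalImageFile:
    """Steps c) and d): compare geo-location to stored boundaries, then tag."""
    lat, lon = image.geo_location
    for name, (min_lat, min_lon, max_lat, max_lon) in VENUE_DATABASE.items():
        if min_lat <= lat <= max_lat and min_lon <= lon <= max_lon:
            image.metadata["Venue"] = name  # step d) add a metadata tag
            break
    return image
```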
According to some embodiments, the present invention provides a method for providing a service that obtains contextual information for a user's digital image files. The method is implemented at least in part by a data processing system and includes receiving a digital image file; using the scene capture geo-location information from the file to identify the venue in which the image was captured; and storing an indication of the capture venue in computer memory. In some embodiments, the indication of the capture venue is associated with the digital image file and the association stored in computer memory.
According to another embodiment of the present invention, a message is transmitted to a computer system relating to the identified capture venue of a digital image file. This message can, in some embodiments, be an advertisement related to the venue. The digital image files themselves can be modified to include the capture venue in other embodiments.
According to a further embodiment of the present invention, a portion of the venue can be identified using the scene capture geo-location information from the digital image file. In these embodiments a message or advertisement can be transmitted that is related to just the identified portion of the venue.
According to still another embodiment of the present invention, the scene capture time is used in conjunction with the geo-location information to identify both the venue and a specific event occurring at the venue at the time of scene capture. A message can be transmitted to a computer system indicating the capture event of a digital image file. This message can, in some embodiments, be an advertisement related to the event. The digital image files themselves can be modified to include the capture event in other embodiments.
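The event-identification embodiment above amounts to a lookup keyed by the identified venue and the scene capture time. A minimal sketch under stated assumptions: the schedule structure, venue name, and event name are hypothetical placeholders, and a deployed system would draw the schedule from an event database rather than a hard-coded table.

```python
from datetime import datetime

# Hypothetical event schedule: venue name -> list of (start, end, event name)
EVENT_SCHEDULE = {
    "Upstate Racetrack": [
        (datetime(2007, 8, 12, 13, 0),
         datetime(2007, 8, 12, 16, 0),
         "Stock Car Finals"),
    ],
}

def identify_event(venue: str, capture_time: datetime):
    """Return the event under way at the venue at scene capture time, if any."""
    for start, end, name in EVENT_SCHEDULE.get(venue, []):
        if start <= capture_time <= end:
            return name
    return None
```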
In some embodiments, orientation-of-capture information for the scene is used in conjunction with the geo-location information to identify both the location of capture and the field-of-view captured. The field-of-view can then be used in the process of identifying the venue or the portion of the venue.
In addition to the embodiments described above, further embodiments will become apparent by reference to the drawings and by study of the following detailed description.
The present invention will be more readily understood from the detailed description of exemplary embodiments presented below considered in conjunction with the attached drawings, of which:
Some embodiments of the present invention utilize digital image file scene capture information in a manner that provides much greater context for describing and tagging digital records. Some embodiments of the invention provide contextual information specific not only to the time and location of the capture of digital image files but also derive information pertaining to the specific venue, event, or both where the content was captured.
The invention is inclusive of combinations of the embodiments described herein. References to “a particular embodiment” and the like refer to features that are present in at least one embodiment of the invention. Separate references to “an embodiment” or “particular embodiments” or the like do not necessarily refer to the same embodiment or embodiments; however, such embodiments are not mutually exclusive, unless so indicated or as are readily apparent to one of skill in the art. The use of singular and/or plural in referring to the “method” or “methods” and the like is not limiting.
The phrase, “digital image file”, as used herein, refers to any digital image file, such as a digital still image or a digital video file. It should be noted that, unless otherwise explicitly noted or required by context, the word “or” is used in this disclosure in a non-exclusive sense.
The data processing system 110 includes one or more data processing devices that implement the processes of the various embodiments of the present invention, including the example processes of
The processor-accessible memory system 140 includes one or more processor-accessible memories configured to store information, including the data and instructions needed to execute the processes of the various embodiments of the present invention, including the example processes of
The phrase “processor-accessible memory” is intended to include any processor-accessible data storage device, whether volatile or nonvolatile, electronic, magnetic, optical, or otherwise, including but not limited to, registers, floppy disks, hard disks, Compact Discs, DVDs, flash memories, ROMs, and RAMs.
The phrase “communicatively connected” is intended to include any type of connection, whether wired or wireless, between devices, data processors, or programs in which data can be communicated. Further, the phrase “communicatively connected” is intended to include a connection between devices or programs within a single data processor, a connection between devices or programs located in different data processors, and a connection between devices not located in data processors at all. In this regard, although the processor-accessible memory system 140 is shown separately from the data processing system 110, one skilled in the art will appreciate that the processor-accessible memory system 140 can be stored completely or partially within the data processing system 110. Further in this regard, although the peripheral system 120 and the user interface system 130 are shown separately from the data processing system 110, one skilled in the art will appreciate that one or both of such systems can be stored completely or partially within the data processing system 110.
The peripheral system 120 can include one or more devices configured to provide digital image files to the data processing system 110. For example, the peripheral system 120 can include digital video cameras, cellular phones, digital still-image cameras, or other data processors. The data processing system 110, upon receipt of digital image files from a device in the peripheral system 120 can store such digital image files in the processor-accessible memory system 140.
The user interface system 130 can include a mouse, a keyboard, another computer, or any device or combination of devices from which data is input to the data processing system 110. In this regard, although the peripheral system 120 is shown separately from the user interface system 130, the peripheral system 120 can be included as part of the user interface system 130. The user interface system 130 also can include a display device, a processor-accessible memory, or any device or combination of devices to which data is output by the data processing system 110. In this regard, if the user interface system 130 includes a processor-accessible memory, such memory can be part of the processor-accessible memory system 140 even though the user interface system 130 and the processor-accessible memory system 140 are shown separately in
Referring to
Referring back to
In one embodiment of the present invention, identify venue information step 215 works by comparing the geo-location information 210 to each venue in the venue database 220 until a matching venue is identified (or until it is determined that no matching venues are in the database). To determine whether the geo-location information 210 matches a particular venue, the geo-location information 210 is compared to the appropriate geometric description of the venue.
For example, when the venue is represented as a circle in the venue database 220, the venue can be described as center point with a radius of defined length representing the approximate geographic boundary of the venue. A determination of whether the image capture location is inside the circle is made by measuring the distance from the image capture location to the center point of the venue circle using a distance measure such as Haversine Distance. If the distance from the image capture location to the center point is less than or equal to the radius of the venue circle, the venue is identified. When the venue is represented as a rectangle, the venue can be described as a pair of vertices representing diagonal corners of the approximate geographic boundary of the venue. A determination of whether the image capture location is inside the venue is made by comparing the image capture location with the vertices of the rectangle. Likewise, when the venue is represented as a closed polygon, a determination of whether the location is inside the polygon can be made using a standard geometric technique commonly known to those skilled in the art.
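The three boundary tests described above can be sketched as follows. The Haversine distance, rectangle containment, and ray-casting point-in-polygon tests are standard geometric techniques; the function names and the (lat, lon) tuple convention are choices made for this example.

```python
import math

EARTH_RADIUS_KM = 6371.0  # mean Earth radius used by the Haversine formula

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two points in decimal degrees."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(a))

def in_circle(point, center, radius_km):
    """Circle case: capture location within the venue radius of the center point."""
    return haversine_km(point[0], point[1], center[0], center[1]) <= radius_km

def in_rectangle(point, corner1, corner2):
    """Rectangle case: corner1/corner2 are diagonal vertices of the boundary."""
    lat, lon = point
    lo_lat, hi_lat = sorted((corner1[0], corner2[0]))
    lo_lon, hi_lon = sorted((corner1[1], corner2[1]))
    return lo_lat <= lat <= hi_lat and lo_lon <= lon <= hi_lon

def in_polygon(point, vertices):
    """Polygon case: ray-casting test over closed polygon of (lat, lon) vertices."""
    lat, lon = point
    inside = False
    n = len(vertices)
    for i in range(n):
        lat1, lon1 = vertices[i]
        lat2, lon2 = vertices[(i + 1) % n]
        if (lon1 > lon) != (lon2 > lon):  # edge crosses the test longitude
            t = (lon - lon1) / (lon2 - lon1)
            if lat < lat1 + t * (lat2 - lat1):
                inside = not inside
    return inside
```

Note that the planar rectangle and polygon tests are adequate for venue-scale areas; venues spanning the antimeridian would need special handling.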
Venue information 225 identified by the identify venue information step 215 can take many different forms. In one embodiment of the present invention, venue information 225 is a text string providing a name for the identified venue. For example, the text string could be “Washington Monument” or “Yellowstone National Park” or “Upstate Racetrack.” Alternatively, the venue can be identified by other means such as an ID number corresponding to an entry in the venue database 220.
Store venue information step 215 is used to store the venue information 225 in the processor-accessible memory system 140. In a preferred embodiment of the present invention, the venue information 225 is stored as an additional metadata tag in the digital image file 205. For example, the venue information 225 can be stored as a custom venue metadata tag in accordance with the well-known EXIF image file format. Preferably, the custom venue metadata tag is a text string providing the name of the identified venue. Alternatively, the venue information 225 can be stored in many other forms, such as a separate data file associated with the digital image file 205, or in a database that stores information about multiple digital image files.
In another example, if the venue is identified to be a national park, a travel agency may transmit a message offering to book hotel rooms near that particular national park, or near other national parks. Alternately, a message may be transmitted offering framed photographs of the national park taken by professional photographers. In this case, the message may include photographs of the venue showing the product offerings.
In response to the product offering, the user may choose to order the product or service using place order step 265. In response the vendor will then fulfill the order with fulfill order step 270.
In another embodiment of the present invention, venues can be comprised of a plurality of portions, with each portion representing an identifiable area of the venue. In
The identified venue information 225 and event information 245 can then be associated with the digital image file 205 and stored in the processor-accessible memory system 140 (
An image field-of-view (FOV) 510 with a field-of-view border 513 can be defined by the image capture location 507, image distance 514, and horizontal angle-of-view (HAOV) 516. The FOV is bisected by the center-of-view line 512. The HAOV (in degrees) can be defined by the following equation:

HAOV = 2·arctan(Ws/(2·F))
where Ws is the sensor width (given by the sensor size metadata 530) and F is the focal length (given by the focal length metadata 528) of the capture device lens system. The image distance 514 can be equal to the focus distance given by the focus distance metadata 532 or some arbitrary amount larger than the focus distance to account for image content in the background of the captured image. Once an image FOV 510 has been established for a captured image it can be determined if a venue (or venue portion) 505 intersects and thus identifies the venue or portion of the venue. Geometric techniques (known to those skilled in the art) can be used to determine the intersection of the image FOV 510 with the venue 505 using either the lines defining the FOV border 513 or the center-of-view line 512. An indication of the identified venue or portion of the venue can then be stored in the processor-accessible memory system 140 (
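The HAOV equation and the FOV intersection test can be sketched as follows. The angle-of-view formula follows directly from the sensor width and focal length definitions above; the intersection test, however, is only one of the geometric techniques alluded to, here a simplified bearing-and-distance check against the venue center point using a local planar approximation, with all function names chosen for this example.

```python
import math

def horizontal_angle_of_view(sensor_width_mm: float, focal_length_mm: float) -> float:
    """HAOV in degrees from sensor width Ws and lens focal length F."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

def venue_in_field_of_view(capture, azimuth_deg, haov_deg,
                           image_distance_km, venue_center):
    """Rough FOV test: venue center within azimuth ± HAOV/2 of the
    center-of-view line and no farther than the image distance.
    Uses a flat-earth approximation valid over venue-scale distances."""
    lat, lon = capture
    vlat, vlon = venue_center
    # Local planar offsets: ~111 km per degree latitude, scaled by cos(lat)
    # for longitude.
    dy = (vlat - lat) * 111.0
    dx = (vlon - lon) * 111.0 * math.cos(math.radians(lat))
    distance = math.hypot(dx, dy)
    bearing = math.degrees(math.atan2(dx, dy)) % 360.0  # clockwise from north
    diff = abs((bearing - azimuth_deg + 180.0) % 360.0 - 180.0)
    return distance <= image_distance_km and diff <= haov_deg / 2.0
```

A 36 mm full-frame sensor with an 18 mm lens, for instance, yields a 90° horizontal angle of view, so a venue center up to 45° off the capture azimuth would fall inside the FOV.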
It is to be understood that the exemplary embodiment(s) is/are merely illustrative of the present invention and that many variations of the above-described embodiment(s) can be devised by one skilled in the art without departing from the scope of the invention. It is therefore intended that all such variations be included within the scope of the following claims and their equivalents.