This application is related to U.S. Ser. No. 09/742,028, entitled TIMELINE-BASED GRAPHICAL USER INTERFACE FOR EFFICIENT IMAGE DATABASE BROWSING AND RETRIEVAL, filed Dec. 20, 2000 in the name of Elizabeth Rosenzweig et al.; U.S. Ser. No. 09/745,025, entitled COMPREHENSIVE, MULTI-DIMENSIONAL GRAPHICAL USER INTERFACE USING PICTURE METADATA FOR NAVIGATING AND RETRIEVING PICTURES IN A PICTURE DATABASE, filed Dec. 20, 2000 in the name of Elizabeth Rosenzweig et al.; U.S. Ser. No. 09/745,028, entitled GRAPHICAL USER INTERFACE ADAPTED TO ALLOW SCENE CONTENT ANNOTATION OF GROUPS OF PICTURES IN A PICTURE DATABASE TO PROMOTE EFFICIENT DATABASE BROWSING, filed Dec. 20, 2000 in the name of Prasad Prabhu et al.; U.S. Pat. No. 6,351,556 entitled METHOD FOR AUTOMATICALLY COMPARING CONTENT OF IMAGES FOR CLASSIFICATION INTO EVENTS issued Feb. 26, 2002 in the name of Loui et al.; U.S. Pat. No. 6,606,409 entitled FADE-IN AND FADE-OUT TEMPORAL SEGMENTS issued Aug. 12, 2003 in the name of Warnick et al.; U.S. Pat. No. 6,606,411 entitled METHOD FOR AUTOMATICALLY CLASSIFYING IMAGES INTO EVENTS issued Aug. 12, 2003 in the name of Loui et al.; U.S. Pat. No. 6,810,146 entitled METHOD AND SYSTEM FOR SEGMENTING AND IDENTIFYING EVENTS IN IMAGES USING SPOKEN ANNOTATIONS issued Oct. 26, 2004 in the name of Loui et al.; U.S. Pat. No. 6,847,733 entitled RETRIEVAL AND BROWSING OF DATABASE IMAGES BASED ON IMAGE EMPHASIS AND APPEAL issued Jan. 25, 2005 in the name of Savakis et al.; U.S. Pat. No. 6,865,297 entitled METHOD FOR AUTOMATICALLY CLASSIFYING IMAGES INTO EVENTS IN A MULTIMEDIA AUTHORING APPLICATION issued Mar. 8, 2005 in the name of Loui et al.; U.S. Pat. No. 6,915,011 entitled EVENT CLUSTERING OF IMAGES USING FOREGROUND/BACKGROUND SEGMENTATION issued Jul. 5, 2005 in the name of Loui et al.; U.S. Pat. No. 6,937,273 entitled INTEGRATED MOTION-STILL CAPTURE SYSTEM WITH INDEXING CAPABILITY issued Aug. 30, 2005 in the name of Loui; U.S. Patent Application Publication No. 2002/0075329 entitled PICTURE DATABASE GRAPHICAL USER INTERFACE UTILIZING MAP-BASED METAPHORS FOR EFFICIENT BROWSING AND RETRIEVING OF PICTURES published Jun. 20, 2002 in the name of Prabhu et al.; U.S. Patent Application Publication No. 2003/0009493 entitled USING DIGITAL OBJECTS ORGANIZED ACCORDING TO A HISTOGRAM TIMELINE published Jan. 9, 2003 in the name of Parker et al.; U.S. Patent Application Publication No. 2003/0059107 entitled METHOD AND SYSTEM FOR AUTOMATED GROUPING OF IMAGES published Mar. 27, 2003 in the name of Sun et al.; U.S. Patent Application Publication No. 2003/0198390 entitled METHOD FOR AUTOMATICALLY CLASSIFYING IMAGES INTO EVENTS published Oct. 23, 2003 in the name of Loui et al.; U.S. Patent Application Publication No. 2004/0208365 entitled METHOD FOR AUTOMATICALLY CLASSIFYING IMAGES INTO EVENTS published Oct. 21, 2004 in the name of Loui et al.; and U.S. Patent Application Publication No. 2005/0010602 entitled SYSTEM AND METHOD FOR ACQUISITION OF RELATED GRAPHICAL MATERIAL IN A DIGITAL GRAPHICS ALBUM published Jan. 13, 2005 in the name of Loui et al.
The present invention relates to methods, systems and graphical user interfaces adapted for accessing digital image, audio, and video content in digital picture databases.
Recent advances in the quality and ease of use of digital devices that capture, produce and/or generate digital still images, digital video images, audio recordings, animations, and other types of audio and/or visual content data files have allowed the creation of large collections of content data files. Such collections of content data files can be stored in a common storage location or can be distributed across a wide variety of storage locations. Further, collections of content data files can be formed in an ad hoc manner using content searching tools that are adapted to search large network systems, such as the Internet, for particular content. Accordingly, it is becoming increasingly common for users of a display device to be faced with the challenge of navigating through a large number of content data files to locate and access a content data file of interest. One way to help a user do this is to provide a graphical metaphor, presented by the display device, that provides a visual structure facilitating navigation through the content data files of the collection. Such a graphical metaphor is usually referred to as a graphical user interface (GUI). Such a GUI conveniently organizes and groups digital content in a collection and allows a user to browse such organized and grouped content using one or more displayed screens.
A number of recently introduced GUIs provide users of content data file collections with different methods for navigating among content data files in the collection. Some navigation methods may work better than others, depending on the circumstances of use and the nature of the content in a collection. It would therefore be desirable to provide automatic methods for selecting, from among available navigational and organizational methods, those that better enable user-friendly and efficient navigation.
In one aspect of the invention, a method for selecting a method for categorizing content data files is provided. The method comprises the steps of: identifying a collection of content data files, each content data file comprising image, video, or audio data; determining a number of groups (G1) of content data files that will be generated when a first categorization method is applied to the identified collection of content data files; determining a range of representations (R) that can be presented on a display, each representation being associated with a group of content data files from the identified collection; selecting the first categorization method when the number of groups (G1) is within the range of representations (R); and selecting a different categorization method when the number of groups generated by the first categorization method is not within the range of representations (R).
In another aspect of the invention, a method for operating a display device is provided. The method comprises the steps of: accessing a plurality of digital images; determining how many groups of images would be formed from the plurality of images as a result of the application of each of a plurality of categorization methods; determining a range of representations (R) that can be displayed on an imaging device, each representation being associated with one of the groups of images; and selecting a categorization method from the plurality of categorization methods that forms a number of groups (G) of images from the plurality of images when the number of groups (G) is within the range of representations (R).
In yet another aspect of the invention, a display device is provided. The display device comprises: a source of content data files; a user input adapted to receive a user request for content data files; and a controller that receives the user request for content data files and accesses a collection of content data files in response thereto, the controller further being operable to determine groups of the content data files generated using a plurality of different content data file categorization methods; wherein the controller selects one of the categorization methods from the plurality of different categorization methods based upon whether application of that method will yield a number of groups of content data files that is within a determined range of representations of such groups.
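By way of illustration only, the selection logic common to these aspects can be sketched in code. The sketch below is a minimal model and not the claimed implementation: representing a categorization method as a function that predicts its own group count, and the helper names, are assumptions introduced here for clarity.

```python
from typing import Callable, Iterable, Optional

# Model a categorization method as a callable that, given a collection of
# content data files, predicts how many groups (G) it would form.
CategorizationMethod = Callable[[list], int]

def select_categorization_method(
    collection: list,
    methods: Iterable[CategorizationMethod],
    r_min: int,
    r_max: int,
) -> Optional[CategorizationMethod]:
    """Return the first categorization method whose group count G falls
    within the range of representations R = [r_min, r_max], else None."""
    for method in methods:
        g = method(collection)  # G for this method, e.g. G1 for the first
        if r_min <= g <= r_max:
            return method       # G is within R: select this method
    return None                 # no method yields a group count within R
```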
Features and advantages of the present invention will become apparent to those skilled in the art from the description below, with reference to the following drawing figures, in which:
Controller 12 cooperates with user interface 14 to allow display device 10 to interact with a user. User interface 14 can comprise any form of transducer or other device capable of receiving an input from a user and converting this input into a form that can be used by controller 12 in operating display device 10. For example, user interface 14 can comprise a touch screen input, a touch pad input, a 4-way switch, a 6-way switch, an 8-way switch, a stylus system, a mouse, a keyboard, a keypad, a trackball system, a joystick system, a voice recognition system, a gesture recognition system, an affective sensing system, or other such systems.
Controller 12 also cooperates with display 16 to cause display 16 to present image content such as still images, sequences of still images, text, symbols, graphics, animations, streams of image information or other forms of video signals. Display 16 can comprise, for example, a color liquid crystal display (LCD), an organic light emitting diode (OLED) display, also known as an organic electroluminescent display (OELD), or another type of video display. Display 16 can be fixed to display device 10. Display 16 can also be separable from or separate from display device 10. In embodiments where display 16 is separable from or separate from display device 10, display device 10 and display 16 will each incorporate communication modules (not shown) capable of exchanging information that will allow controller 12 to control what is displayed on display 16. In other alternative embodiments, display device 10 can have more than one display 16. In the embodiment of
Display device 10 can also have other displays 28, such as a segmented LCD or LED display or other visible display device, which can also permit controller 12 to provide information to a user. This capability is used for a variety of purposes such as establishing modes of operation, indicating control settings and user preferences, and providing warnings and instructions to a user of display device 10. Other systems, such as known systems and actuators for generating audio signals, vibrations, haptic feedback and other forms of signals, can also be incorporated into display device 10 for use in providing information, feedback and warnings to the user of display device 10. Using display 16 and/or other displays 28, display device 10 can present image content as well as information such as the status and mode of operation of display device 10.
Display device 10 is adapted to receive content data files and to present visual and audio signals in a human perceptible manner. As used hereinafter, the term content data file can comprise any form of digital data that can be used to generate human perceptible visual and/or audio signals, including but not limited to graphics, text, still images, sequences of still images, video streams and/or audio signals.
Content data files can be supplied to display device 10 by way of a content source 20. In the embodiment shown in
Image capture system 22 comprises lens system 30 and an image sensing system 32. In operation, light from a scene is focused by lens system 30 and forms an image at image sensing system 32. Lens system 30 can have one or more elements, be of a fixed focus type or can be manually or automatically adjustable. Lens system 30 is optionally adjustable to provide a variable zoom that can be varied manually or automatically. Other known arrangements can be used for lens system 30.
Image sensing system 32 converts the light focused by lens system 30 into image signals representing an image of the scene. Image sensing system 32 can use, for example, an image sensor (not shown) having a charge coupled device (CCD), a complementary metal oxide semiconductor (CMOS) sensor, or any other electronic image sensor known to those of ordinary skill in the art.
Signal processor 18 receives the image signals from image sensing system 32 and processes these image signals to form image content. The image content can comprise one or more still images, multiple still images and/or a stream of apparently moving images such as a video segment. Where image content comprises a stream of apparently moving images, the image content can comprise image data stored in an interleaved or interlaced image form, a sequence of still images, and/or other forms known to those of skill in the art of digital video.
Signal processor 18 can apply various image processing algorithms to the image signals when forming image content. These algorithms can include, but are not limited to, color and exposure balancing, interpolation and compression. Where the image signal is in the form of an analog signal, signal processor 18 can also convert the analog signals into a digital form.
An optional audio system 38 is provided. Audio system 38 can include a microphone (not shown) and conventional amplification and analog-to-digital conversion circuits known for converting sonic energy into digital audio signals. Digital audio signals captured by audio system 38 are provided to signal processor 18. Signal processor 18 converts these audio signals into audio content in digital form. Where the audio content is captured in association with image content, signal processor 18 automatically associates the image and audio content in a common digital file. Audio system 38 can also include a speaker system and/or an audio output port to which a speaker or amplifier system can be joined for reproducing captured audio inputs and for reproducing, in audio form, audio from accessed content data files.
Captured content data files are stored in a memory 40. Memory 40 can include conventional memory devices including solid state, magnetic, optical or other data storage devices. Memory 40 can be fixed within display device 10 or it can be removable in part or in whole. In the embodiment of
Content source 20 can provide content data files that are captured by other devices and transferred to display device 10. In the embodiment of
Communication interface 24 can be an optical, radio frequency or other transducer that converts image and other data into a form that can be conveyed to display device 10 by way of an optical signal, radio frequency signal or other form of signal. Examples of communication interface 24 include, but are not limited to, a cellular telephone transceiver, an 802.11 interface, a so-called Bluetooth transceiver, and an infrared communication transceiver. Communication interface 24 can also be used to acquire a digital image and other information from a host computer or network (not shown). Communication interface 24 can also optionally be adapted to acquire image and/or audio content from sources such as conventional radio and television signals and from digital radio and television signals. Communication interface 24 can receive such content wirelessly or using wired connections such as audio/video cables carrying image and/or audio content.
Communication interface 24 can also receive signals containing information and instructions for execution by controller 12 including but not limited to, signals from a remote control device (not shown) and can operate display device 10 in accordance with such signals.
Similarly, content data files that are captured or otherwise provided by another device can be stored in the form of files on a removable memory 46 with removable memory 46 being operatively joined to memory interface 26. Memory interface 26 can comprise a port controlled by controller 12 to access digital imagery, either through a storage device such as a Compact Flash card, or through an interface connection such as a Universal Serial Bus (USB) connection. Controller 12 and memory interface 26 are operable using techniques known in the art to extract content data files from a removable card.
It will be appreciated that, in the embodiment of
Content data files that are obtained from content source 20 are then stored in internal memory 42. Internal memory 42 and removable memory 46 can consist of any of a number of rewritable memories, for example a solid-state memory such as a CompactFlash card, or a non-solid-state memory such as a miniature disk drive or an optical drive.
Content data files can comprise data that is stored in a compressed form. For example, where a content data file comprises a still image or a sequence of still images, the still image(s) can be stored in a compressed form such as by using the JPEG (Joint Photographic Experts Group) ISO 10918-1 (ITU-T T.81) standard. This JPEG compressed image data is stored using the so-called “Exif” image format defined in the Exchangeable Image File Format version 2.2 published by the Japan Electronics and Information Technology Industries Association (JEITA) as CP-3451. Similarly, other compression systems such as the MPEG-4 (Moving Picture Experts Group) or Apple QuickTime™ standard can be used to store digital image data in a video form. Other image compression and storage forms can be used.
As is illustrated in
Internal memory 42 has a capacity for data storage including content data files. In some embodiments, the storage capacity of internal memory 42 may be quite large; however, in other embodiments it can be somewhat lower. In either type of embodiment, display device 10 can optionally be associated with an archival storage device 60 that can receive content data files from display device 10, store vast quantities of such content data files using, for example, a mass memory such as an optical drive, RAID array, or semiconductor memory, and supply content data files to display device 10 as requested, such as by way of communication interface 24. In still other embodiments, communication interface 24 can be used to access collections of personal, commercial or other remote content that is stored on a separate device, a combination of separate devices, or a communication network such as the Internet that permits display device 10 to access content data files stored on a vast number of interconnected devices, many of which have local storage.
Such collections can be pre-existing or they can be collected in an ad hoc manner, for example in response to one or more search requests submitted by the user of display device 10 that cause controller 12 to initiate a search of a large compilation of data, such as the Internet, for content data files having similar content and to create an ad hoc collection. Processor 34 can cause such accessed content to be loaded into memory 40, or can access such content by downloading such content data files or providing thumbnails, summaries or links to such content data files as necessary.
Accordingly, there is a need for a display device 10 that can be operated in a manner that facilitates convenient navigation through a large quantity of content data files and that can be used to facilitate navigation through a variety of different databases, sources or storage locations.
After selection of the collection (step 72), controller 12 accesses the desired collection (step 74) and determines a number of groups (G1) that will be generated by a first categorization method when that categorization method is applied to the collection (step 76). Such a determination can be made by actually applying the first categorization method to the collection, or it can be made by using algorithms to predict the number of groups that will be formed when the first categorization method is applied. For example, certain time-based categorization methods will form a particular number of groups based upon the overall time period over which the content data files were captured or otherwise obtained; that number can be determined without actually categorizing all of the content data files to form such groups. The number of groups (G1) of content data files that application of the first categorization method will form can be predicted in similar fashion for a variety of non-timeline-based categorization methods. The first categorization method can take any known form of categorization method, some of which are described in greater detail below.
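As a hedged sketch of such a prediction for a time-based method (assuming, for illustration, that each content data file carries a capture timestamp), the group count can be estimated from the overall time span alone, without assigning any file to a group:

```python
import math
from datetime import datetime, timedelta

def predict_time_based_group_count(timestamps: list[datetime],
                                   bucket: timedelta) -> int:
    """Estimate how many groups a fixed-width, time-based categorization
    method would form, using only the overall capture-time span."""
    if not timestamps:
        return 0
    span = max(timestamps) - min(timestamps)
    # One group per bucket-width interval across the whole span; the
    # files themselves are never actually sorted into groups here.
    return max(1, math.ceil(span / bucket))
```

For example, predict_time_based_group_count(stamps, timedelta(days=7)) estimates the number of weekly groups.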
In the embodiment of
In some display devices 10, the range of representations (R) can be made constant and can be preset or set in accordance with user preferences. However, in other display devices 10 the range of representations (R) can vary depending on the type of use to which the display device 10 is being put, the personal preferences of a user of display device 10 and other factors. Factors that might influence the selection of a range of representations in either a generally fixed or a variable selection embodiment can include the image resolution of display 16, the physical size of display 16, the nature of the content within each group, the shape of display 16, the proportion of display 16 that is available for such presentation, and a general understanding of human visual acuity and human short-term memory. The range of representations (R) can be calibrated so that each representation provides sufficient visual information to permit a user to observe the representations and have a general understanding of the way in which content data files are grouped. Typically, the range of representations (R) is selected to allow the user to observe every representation on display 16 at one time, so that a glance at a single display level presenting such representations gives a user a basic understanding of the general scope of content in the collection and a group structure that can easily guide the user of display device 10 to such content.
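One plausible way to derive such a range from display geometry is sketched below; the thumbnail size, usable screen fraction, and lower bound are illustrative assumptions, not prescribed values:

```python
def representation_range(display_w_px: int, display_h_px: int,
                         thumb_px: int = 96,
                         usable_fraction: float = 0.8,
                         r_min: int = 2) -> tuple[int, int]:
    """Derive a range of representations R = (r_min, r_max): r_max is the
    number of thumb_px-square representations that fit in the usable
    portion of display 16, so every representation is visible at once."""
    cols = int(display_w_px * usable_fraction) // thumb_px
    rows = int(display_h_px * usable_fraction) // thumb_px
    return r_min, max(r_min, cols * rows)
```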
As is also illustrated in both
In the embodiment of
In the embodiment of
As is illustrated in
Controller 12 can use any of a variety of ways of choosing which of a plurality of available content categorization methods to use as a first, second, third or other method. In one embodiment, a preferred sequence can be used to choose which of the available content categorization methods are to be used as a first, second or other subsequent method. Controller 12 can make such a choice based upon the type of collection, the size of the collection, user preference or the number of groups obtained using previously applied content categorization methods. Further, it will be appreciated that there are typically a variety of ways in which any categorization method can be applied to a collection of content data files to organize the content data files into groups. Accordingly, as noted above, controller 12 can be adapted to select different ways of applying a first categorization method and a second (or other additional) categorization method. Alternatively, controller 12 can be adapted to apply only one variant of a categorization method and to forgo use of that categorization method where that variant does not create the requisite number of groups.
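The preferred-sequence behavior described above might look like the following sketch; the ordered method list and the notion of per-method variants (for example, day, week or month buckets of a time-based method) are modeling assumptions for illustration:

```python
def select_by_preference(collection, preferred_methods, r_min, r_max):
    """Try each categorization method, and each of its variants, in a
    preferred order; return the first (method, variant) pair whose
    predicted group count falls within the range of representations."""
    for method in preferred_methods:
        for variant in method.variants:  # assumed attribute, e.g. buckets
            g = variant.predict_group_count(collection)
            if r_min <= g <= r_max:
                return method, variant
    # Every method's variants missed the range: forgo automatic selection.
    return None
```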
In either of the embodiments of
It will be appreciated that such groups can be formed in a variety of fashions. For example, in a typical case, each different content data file can be grouped in the collection with a representation by establishing virtual links therebetween so that the representation can be used as a convenient point for locating all content data files that are associated with the representation. Alternatively, a collection can be stored or otherwise electronically reorganized for storage in a manner that is consistent with the selected method of categorization. Such storage can involve moving or reorganizing such content data files on remote devices if permissible, or copying such content data files to a new location, such as in memory 40, wherein such content data files can be stored in a manner that is in accordance with the grouping established by the categorization method.
Each formed group is then associated with a representation that can be presented on display 16 (step 92). The representation can have an appearance that is based upon the content itself, metadata associated with the content, the date of capture or generation of the content, the date upon which the content was provided to display device 10 or other such factors. For example, the appearance of a representation can incorporate or otherwise be based upon a single still image from a still image type of content data file or a so-called key frame from a video type of image content. The representation can also have an appearance that is based at least in part upon the categorization method and/or other factors involved in organizing the content. Finally, the representation can also have an appearance that is based at least in part upon some type of data or metadata associated with one or more of the content data files of the group associated with that representation.
The representations are then presented on display 16 (step 94). Typically, such representations are presented in an ordered fashion that helps a user to understand the nature of such representations. For example, in
It will be appreciated that other organizational metaphors 114 can be provided and usefully applied in similar fashion with other representations.
It will also be appreciated that in other embodiments, controller 12 can select a categorization method using variations of the steps illustrated as steps 76-84 in the embodiment of
A selection between more than one available categorization method can also be made by automatically selecting an available categorization method that requires a user to take the minimum number of steps to access any of the grouped images. For example, some categorization methods may categorize content data files into groups and subgroups within such groups. As is illustrated in
In contrast as is illustrated in
Accordingly, in one embodiment of the invention, controller 12 can be adapted to perform the step of selecting between available categorization methods by selecting a categorization method that requires a user to take a minimum number of steps to access any of the grouped images.
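Counting the steps a user must take can be modeled as the depth of the grouping hierarchy. A minimal sketch, assuming each candidate grouping is represented as nested dicts whose leaves are lists of content data files:

```python
def access_depth(grouping) -> int:
    """Number of navigation steps to reach a file: each nested dict level
    (group, subgroup, ...) costs one step; a leaf list costs none."""
    if isinstance(grouping, dict) and grouping:
        return 1 + max(access_depth(v) for v in grouping.values())
    return 0

def select_fewest_steps(candidate_groupings: dict):
    """Among candidate groupings keyed by categorization method name,
    pick the method whose grouping needs the fewest steps."""
    return min(candidate_groupings,
               key=lambda name: access_depth(candidate_groupings[name]))
```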
Returning now to
Content Data File Categorization Methods
There are a variety of categorization methods that can be used to define groups of content data files within a collection. Accordingly, the first categorization method, second categorization method or any other categorization method described above can comprise any known categorization method. The following provides a sampling of categorization methods that can be used. This sampling is not limiting.
One example of such a categorization method is a method that groups the content data files into timestamp groups based upon a timestamp associated with each content data file, such as a date of capture, a date of creation or a date of acquisition of a content data file. Such a method can, for example, organize the content into groups of content that represent different ranges of time within a time frame. For example, such a categorization method can group images into timestamp groups with each group representing a day, week or month of capture.
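A minimal sketch of such timestamp grouping, assuming each file is represented as a (path, capture_datetime) pair; the ISO calendar is used for the week buckets:

```python
from collections import defaultdict

def group_by_timestamp(files, granularity="week"):
    """Group (path, captured) pairs into timestamp groups, each group
    representing a day, ISO week, or month of capture."""
    groups = defaultdict(list)
    for path, captured in files:
        if granularity == "day":
            key = captured.date().isoformat()          # e.g. 2005-07-04
        elif granularity == "week":
            year, week, _ = captured.isocalendar()
            key = f"{year}-W{week:02d}"                # e.g. 2005-W27
        else:
            key = captured.strftime("%Y-%m")           # e.g. 2005-07
        groups[key].append(path)
    return dict(groups)
```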
Another example categorization method can organize the content data files according to the file folders or other storage locations in which the content data files are stored. Another example categorization method can examine file names for content data files or file paths leading to content data files and can build groups based upon similarities or patterns in such file names or file paths, as in the sketch below.
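A sketch of such storage-location grouping, under the assumption that the folder holding a file is a useful proxy for how the user filed it:

```python
from collections import defaultdict
from pathlib import PurePath

def group_by_parent_folder(paths):
    """Group content data files by the folder that holds them; files
    sharing a parent directory fall into the same group."""
    groups = defaultdict(list)
    for p in paths:
        groups[str(PurePath(p).parent)].append(p)
    return dict(groups)
```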
Still another example categorization method can organize groups of content data files according to content metadata, which can comprise any information that is associated with the content data files but that does not necessarily comprise part of the data representing the content.
Such metadata can be stored in the content data files or otherwise associated therewith. The metadata can describe the scene in a still image or video, such as a caption, and can also provide, in a straightforward manner, information such as the date and time the picture was captured, the location from which the picture was captured, the identity of people or objects in the picture, and information regarding format and data structure.
Many prior art digital cameras can be programmed to automatically store metadata, such as the date and time at which the content was captured or edited, along with the captured content. More advanced digital cameras can also be programmed to automatically store, along with the actual image, the location of picture capture by harnessing automatic location systems. For example, the Global Positioning System (GPS) is a well-known means of pinpointing the location of a special GPS receiver with a fairly high degree of accuracy. Other methods include the use of Radio Triangulation (RT) systems. Using such an approach, a GPS receiver can be either incorporated in the hardware of the digital camera, or located nearby. A subsequent image file will then contain not only the raw image data, but also a date and time stamp, along with header information related to the location of the GPS receiver when the image was captured.
Where content data files are associated with location information, a categorization method can be used that organizes content data files into groups using location information such as GPS-type global positioning data or other location information that represents a location of capture, storage or acquisition of a content data file.
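A deliberately simple sketch of such location-based grouping: coordinates are snapped to a coarse grid cell, a stand-in for whatever spatial clustering a given embodiment uses, and the cell size is an illustrative assumption:

```python
from collections import defaultdict

def group_by_location(files, cell_degrees=0.1):
    """Group (path, lat, lon) records into coarse geographic cells by
    rounding coordinates; roughly 11 km cells at cell_degrees=0.1."""
    groups = defaultdict(list)
    for path, lat, lon in files:
        cell = (round(lat / cell_degrees), round(lon / cell_degrees))
        groups[cell].append(path)
    return dict(groups)
```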
Still more advanced digital cameras may contain pattern recognition software for identifying objects and people in an image and for converting such information to metadata, and histogram software for generating bar charts or other such displays representing color illumination values within an image.
In yet another example of a categorization method, the content of the content data files is analyzed to identify particular characteristics of the content of still image, video and/or audio data stored therein. For example, image analysis techniques can be used to identify particular image types such as landscape images, facial images, group images, macro images, night images, flash enabled images, or images having particular subject matter such as cars, airplanes, etc. Similarly, audio signal recognition can be used to identify particular sounds. The presence of such subject matter can also be determined based upon metadata such as user-entered annotations.
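A sketch of such content-based grouping; classify_image here is a hypothetical stand-in for whatever image-analysis routine (scene classification, face detection, audio recognition, and so on) an embodiment supplies:

```python
from collections import defaultdict

def group_by_content_type(paths, classify_image):
    """Group files by the label a supplied content analyzer assigns;
    classify_image(path) -> str is assumed, e.g. "landscape" or "faces"."""
    groups = defaultdict(list)
    for p in paths:
        groups[classify_image(p)].append(p)
    return dict(groups)
```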
Aspects of the processing of the content data files can also be used as a discriminator in particular categorization methods. For example, the organization of groups in some categorization methods can be based, at least in part, upon detecting content data files that are stored in particular formats, that have been subject to modifications, or that have had special effects applied, such as overlays, added text, color tone modifications (for example the imposition of black and white, grayscale, or sepia tone scales), or particular audio modifications.
Aspects of the content data files that indicate the quality of the content in such data files can also be used as a discriminating factor in other categorization methods. Examples of such quality measurements include analysis of image content for focus characteristics, contrast characteristics, color characteristics, noise levels, signal-to-noise ratio, or undesirable image artifacts, or the analysis of audio data for sound quality metrics such as noise levels, the number of audio channels and characteristics of the sampling used for such audio signals.
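As one concrete, deliberately simple instance of a quality discriminator, image contrast can be estimated as the grayscale standard deviation using the Pillow library; the cutoff value is an illustrative assumption, not a recommended threshold:

```python
from PIL import Image, ImageStat

def contrast_score(path: str) -> float:
    """Grayscale standard deviation as a crude contrast metric;
    higher values suggest a richer tonal range."""
    with Image.open(path) as img:
        return ImageStat.Stat(img.convert("L")).stddev[0]

def group_by_quality(paths, threshold=40.0):
    """Split files into high- and low-contrast groups using the crude
    metric above; threshold is an arbitrary illustrative cutoff."""
    groups = {"high contrast": [], "low contrast": []}
    for p in paths:
        key = "high contrast" if contrast_score(p) >= threshold else "low contrast"
        groups[key].append(p)
    return groups
```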
Still more methods that can be used to categorize content data files include, but are not limited to, the methods described in any of the following cross-referenced U.S. patents and patent applications, each of which is hereby incorporated by reference: U.S. Pat. No. 6,351,556 entitled METHOD FOR AUTOMATICALLY COMPARING CONTENT OF IMAGES FOR CLASSIFICATION INTO EVENTS issued Feb. 26, 2002 in the name of Loui et al.; U.S. Pat. No. 6,606,409 entitled FADE-IN AND FADE-OUT TEMPORAL SEGMENTS issued Aug. 12, 2003 in the name of Warnick et al.; U.S. Pat. No. 6,606,411 entitled METHOD FOR AUTOMATICALLY CLASSIFYING IMAGES INTO EVENTS issued Aug. 12, 2003 in the name of Loui et al.; U.S. Pat. No. 6,810,146 entitled METHOD AND SYSTEM FOR SEGMENTING AND IDENTIFYING EVENTS IN IMAGES USING SPOKEN ANNOTATIONS issued Oct. 26, 2004 in the name of Loui et al.; U.S. Pat. No. 6,847,733 entitled RETRIEVAL AND BROWSING OF DATABASE IMAGES BASED ON IMAGE EMPHASIS AND APPEAL issued Jan. 25, 2005 in the name of Savakis et al.; U.S. Pat. No. 6,865,297 entitled METHOD FOR AUTOMATICALLY CLASSIFYING IMAGES INTO EVENTS IN A MULTIMEDIA AUTHORING APPLICATION issued Mar. 8, 2005 in the name of Loui et al.; U.S. Pat. No. 6,915,011 entitled EVENT CLUSTERING OF IMAGES USING FOREGROUND/BACKGROUND SEGMENTATION issued Jul. 5, 2005 in the name of Loui et al.; U.S. Pat. No. 6,937,273 entitled INTEGRATED MOTION-STILL CAPTURE SYSTEM WITH INDEXING CAPABILITY issued Aug. 30, 2005 in the name of Loui; U.S. Patent Application Publication No. 2002/0075329 entitled PICTURE DATABASE GRAPHICAL USER INTERFACE UTILIZING MAP-BASED METAPHORS FOR EFFICIENT BROWSING AND RETRIEVING OF PICTURES published Jun. 20, 2002 in the name of Prabhu et al.; U.S. Patent Application Publication No. 2002/0168108 entitled EVENT CLUSTERING OF IMAGES USING FOREGROUND/BACKGROUND SEGMENTATION published Nov. 14, 2002 in the name of Loui et al.; U.S. Patent Application Publication No. 2003/0009493 entitled USING DIGITAL OBJECTS ORGANIZED ACCORDING TO A HISTOGRAM TIMELINE published Jan. 9, 2003 in the name of Parker et al.; U.S. Patent Application Publication No. 2003/0059107 entitled METHOD AND SYSTEM FOR AUTOMATED GROUPING OF IMAGES published Mar. 27, 2003 in the name of Sun et al.; U.S. Patent Application Publication No. 2003/0198390 entitled METHOD FOR AUTOMATICALLY CLASSIFYING IMAGES INTO EVENTS published Oct. 23, 2003 in the name of Loui et al.; U.S. Patent Application Publication No. 2004/0208365 entitled METHOD FOR AUTOMATICALLY CLASSIFYING IMAGES INTO EVENTS published Oct. 21, 2004 in the name of Loui et al.; and U.S. Patent Application Publication No. 2005/0010602 entitled SYSTEM AND METHOD FOR ACQUISITION OF RELATED GRAPHICAL MATERIAL IN A DIGITAL GRAPHICS ALBUM published Jan. 13, 2005 in the name of Loui et al.
The invention has been described in detail with particular reference to certain preferred embodiments thereof, but it will be understood that variations and modifications can be effected within the spirit and scope of the invention.
Number | Name | Date | Kind |
---|---|---|---
4396903 | Habicht et al. | Aug 1983 | A |
4567610 | McConnell | Jan 1986 | A |
5083860 | Miyatake et al. | Jan 1992 | A |
5157511 | Kawai et al. | Oct 1992 | A |
5164831 | Kuchta et al. | Nov 1992 | A |
5339385 | Higgins | Aug 1994 | A |
5382974 | Soeda et al. | Jan 1995 | A |
5418895 | Lee | May 1995 | A |
5424945 | Bell | Jun 1995 | A |
5485611 | Astle | Jan 1996 | A |
5493677 | Balogh et al. | Feb 1996 | A |
5495538 | Fan | Feb 1996 | A |
5539841 | Huttenlocher et al. | Jul 1996 | A |
5576759 | Kawamura et al. | Nov 1996 | A |
5579471 | Barber et al. | Nov 1996 | A |
5594807 | Liu | Jan 1997 | A |
5598557 | Doner et al. | Jan 1997 | A |
5717613 | Nakajima | Feb 1998 | A |
5717643 | Iwanami et al. | Feb 1998 | A |
5719643 | Nakajima | Feb 1998 | A |
5748771 | Fujiwara | May 1998 | A |
5751378 | Chen et al. | May 1998 | A |
5754227 | Fukuoka | May 1998 | A |
5778108 | Coleman, Jr. | Jul 1998 | A |
5805746 | Miyatake et al. | Sep 1998 | A |
5809161 | Auty et al. | Sep 1998 | A |
5809202 | Gotoh et al. | Sep 1998 | A |
5842194 | Arbuckle | Nov 1998 | A |
5852823 | De Bonet | Dec 1998 | A |
5862519 | Sharma et al. | Jan 1999 | A |
5872859 | Gur et al. | Feb 1999 | A |
5875265 | Kasao | Feb 1999 | A |
5911139 | Jain et al. | Jun 1999 | A |
5937136 | Sato | Aug 1999 | A |
5953451 | Syeda-Mahmood | Sep 1999 | A |
5959697 | Coleman, Jr. | Sep 1999 | A |
5963670 | Lipson et al. | Oct 1999 | A |
5978016 | Lourette et al. | Nov 1999 | A |
5982369 | Sciammarella et al. | Nov 1999 | A |
5982984 | Inuiya | Nov 1999 | A |
6005613 | Endsley et al. | Dec 1999 | A |
6005679 | Haneda | Dec 1999 | A |
6011595 | Henderson et al. | Jan 2000 | A |
6012091 | Boyce | Jan 2000 | A |
6021231 | Miyatake et al. | Feb 2000 | A |
6061497 | Sasaki | May 2000 | A |
6072904 | Desai et al. | Jun 2000 | A |
6097389 | Morris et al. | Aug 2000 | A |
6161108 | Ukigawa et al. | Dec 2000 | A |
6195458 | Warnick et al. | Feb 2001 | B1 |
6204840 | Petelycky et al. | Mar 2001 | B1 |
6246790 | Huang et al. | Jun 2001 | B1 |
6250928 | Poggio et al. | Jun 2001 | B1 |
6272461 | Meredith et al. | Aug 2001 | B1 |
6278446 | Liou et al. | Aug 2001 | B1 |
6282317 | Luo et al. | Aug 2001 | B1 |
6285995 | Abdel-Mottaleb et al. | Sep 2001 | B1 |
6301586 | Yang et al. | Oct 2001 | B1 |
6311189 | deVries et al. | Oct 2001 | B1 |
6332122 | Ortega et al. | Dec 2001 | B1 |
6335742 | Takemoto | Jan 2002 | B1 |
6345274 | Zhu et al. | Feb 2002 | B1 |
6351556 | Loui et al. | Feb 2002 | B1 |
6360237 | Schulz et al. | Mar 2002 | B1 |
6396963 | Shaffer et al. | May 2002 | B2 |
6408301 | Patton et al. | Jun 2002 | B1 |
6477491 | Chandler et al. | Nov 2002 | B1 |
6486896 | Ubillos | Nov 2002 | B1 |
6486898 | Martino et al. | Nov 2002 | B1 |
6487531 | Tosaya et al. | Nov 2002 | B1 |
6490407 | Niida | Dec 2002 | B2 |
6519000 | Udagawa | Feb 2003 | B1 |
6545660 | Shen et al. | Apr 2003 | B1 |
6563911 | Mahoney | May 2003 | B2 |
6564209 | Dempski et al. | May 2003 | B1 |
6567980 | Jain et al. | May 2003 | B1 |
6606409 | Warnick et al. | Aug 2003 | B2 |
6606411 | Loui et al. | Aug 2003 | B1 |
6629104 | Parulski et al. | Sep 2003 | B1 |
6683649 | Anderson | Jan 2004 | B1 |
6701063 | Komoda et al. | Mar 2004 | B1 |
6701293 | Bennett et al. | Mar 2004 | B2 |
6707939 | Weinholz et al. | Mar 2004 | B1 |
6734909 | Terane et al. | May 2004 | B1 |
6738075 | Torres et al. | May 2004 | B1 |
6741963 | Badt et al. | May 2004 | B1 |
6751343 | Ferrell et al. | Jun 2004 | B1 |
6784925 | Tomat et al. | Aug 2004 | B1 |
6810146 | Loui et al. | Oct 2004 | B2 |
6810149 | Squilla et al. | Oct 2004 | B1 |
6819796 | Hong et al. | Nov 2004 | B2 |
6847733 | Savakis et al. | Jan 2005 | B2 |
6865297 | Loui et al. | Mar 2005 | B2 |
6950989 | Rosenzweig et al. | Sep 2005 | B2 |
7054870 | Holbrook | May 2006 | B2 |
7149961 | Harville et al. | Dec 2006 | B2 |
7281216 | Bauer et al. | Oct 2007 | B2 |
7296032 | Beddow | Nov 2007 | B1 |
7415662 | Rothmuller et al. | Aug 2008 | B2 |
7508437 | Suzuki | Mar 2009 | B2 |
7739276 | Lee et al. | Jun 2010 | B2 |
7753789 | Walker et al. | Jul 2010 | B2 |
20020075310 | Prabhu et al. | Jun 2002 | A1 |
20020075322 | Rosenzweig et al. | Jun 2002 | A1 |
20020075329 | Prabhu et al. | Jun 2002 | A1 |
20020075330 | Rosenzweig et al. | Jun 2002 | A1 |
20020168108 | Loui et al. | Nov 2002 | A1 |
20030007688 | Ono | Jan 2003 | A1 |
20030009493 | Parker et al. | Jan 2003 | A1 |
20030012557 | Tingey et al. | Jan 2003 | A1 |
20030051022 | Sogabe et al. | Mar 2003 | A1 |
20030059107 | Sun et al. | Mar 2003 | A1 |
20030084065 | Lin et al. | May 2003 | A1 |
20030198390 | Loui et al. | Oct 2003 | A1 |
20040005923 | Allard et al. | Jan 2004 | A1 |
20040114904 | Sun et al. | Jun 2004 | A1 |
20040158862 | Nam et al. | Aug 2004 | A1 |
20040177319 | Horn | Sep 2004 | A1 |
20040208365 | Loui et al. | Oct 2004 | A1 |
20040208377 | Loui et al. | Oct 2004 | A1 |
20050010602 | Loui et al. | Jan 2005 | A1 |
20050050043 | Pyhalammi et al. | Mar 2005 | A1 |
20050091596 | Anthony et al. | Apr 2005 | A1 |
20050102637 | Suzuki | May 2005 | A1 |
20050192924 | Drucker et al. | Sep 2005 | A1 |
20050200912 | Yamakado et al. | Sep 2005 | A1 |
20050225644 | Shibuya et al. | Oct 2005 | A1 |
20050240865 | Atkins et al. | Oct 2005 | A1 |
20060026529 | Paulsen et al. | Feb 2006 | A1 |
20060227992 | Rathus et al. | Oct 2006 | A1 |
20060259863 | Obrador et al. | Nov 2006 | A1 |
20070005581 | Arrouye et al. | Jan 2007 | A1 |
20070094251 | Lu et al. | Apr 2007 | A1 |
20070118802 | Gerace et al. | May 2007 | A1 |
20080306921 | Rothmuller et al. | Dec 2008 | A1 |
Number | Date | Country |
---|---|---|
WO 2004049206 | Jun 2004 | WO |
Number | Date | Country | |
---|---|---
20070185890 A1 | Aug 2007 | US |