This application is based on Japanese Patent Application No. 2009-197041 filed on Aug. 27, 2009, the contents of which are hereby incorporated by reference.
1. Field of the Invention
The present invention relates to an image display device which displays an image.
2. Description of Related Art
A digital image taking device, which stores taken images (including moving images and still images) on a storage medium as data instead of on film, is in widespread use. In such an image taking device, the number of storable image data items is limited by the capacity of the storage medium. However, because large-capacity storage media have become available in recent years, a user is able to take and store a large number of image data items without concern.
However, when the number of image data items stored on the storage medium becomes very large, it becomes difficult to search the storage medium for desired image data.
Therefore, there has been proposed an image display device which displays reduced images of the image data items together with a calendar, to thereby enable the user to search for the desired image data by following a clue such as the day, the week, or the month when the image was taken. Further, when there exist a plurality of image data items on the same day, the same week, or the same month, the image display device displays as many reduced images as can be displayed.
However, in the image display device described above, when there exist a large number of image data items on the same day, the same week, or the same month, the reduced image of the desired image data is not necessarily displayed, and if the reduced image of the desired image data is not displayed, the search for the desired image data becomes difficult.
An image display device according to the present invention includes a display unit which displays corresponding images corresponding to image data items classified into categories, in which the display unit preferentially displays a corresponding image corresponding to an image data item which belongs to a representative category selected from the categories.
Significance and effects of the present invention become apparent from the following description of an embodiment. Note that, the following embodiment is merely one embodiment of the present invention, and the meanings of the terms used to describe the present invention and the components thereof are not limited to those described in the following embodiment.
<<Overall Configurations of Image Display Device and Image Taking Device>>
Hereinafter, the embodiment of the present invention is described with reference to the drawings. First, overall configurations of an image display device and an image taking device are described with reference to the drawings.
The image display device 1 includes an image analysis unit 2, a tag generation unit 3, a tag writing unit 4, a storage unit 5, an image taking information extraction unit 6, a tag extraction unit 7, an operation unit 8, a display control unit 9, and a display unit 10.
“Tag” mainly indicates the category to which the image data belongs. “Category” refers to classification in accordance with subjects in the image data, such as food, a train, a cat, a dog, or a portrait (adult, child, man, woman, or a particular person). “Image taking information” mainly refers to information which indicates a situation (for example, image taking date/time or image taking place) at a time when the image data is obtained by an image taking operation.
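As a rough sketch of how an image data item might carry such metadata, consider the following hypothetical record; the field names and types are illustrative assumptions, since the device actually writes the tag and the image taking information into a predetermined region of the image data itself (for example, a header region).

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional, Tuple

@dataclass
class ImageRecord:
    """Hypothetical per-image record: the pixel data plus the metadata
    described above (tag = category, image taking information)."""
    pixels: bytes                         # encoded image data
    tag: Optional[str] = None             # category, e.g. "train" or "cat"
    taken_at: Optional[datetime] = None   # image taking date/time
    taken_place: Optional[Tuple[float, float]] = None  # (latitude, longitude)

# Example: a photo of a train taken on Aug. 13, 2009
record = ImageRecord(pixels=b"...",
                     tag="train",
                     taken_at=datetime(2009, 8, 13, 10, 30),
                     taken_place=(35.68, 139.77))
```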
Note that, the image analysis unit 2, the tag generation unit 3, the tag writing unit 4, and the storage unit 5 may be regarded as a block of a storage system, and the storage unit 5, the image taking information extraction unit 6, the tag extraction unit 7, the operation unit 8, the display control unit 9, and the display unit 10 may be regarded as a block of a display system.
Further, the image taking device 20 includes an image taking unit 21, an image memory 22, a display unit 23, an image taking date/time information generation unit 24, an image taking place information generation unit 25, and an image taking information writing unit 26.
The image data output from the image taking information writing unit 26 may be temporarily stored in a storage unit (not shown) and then transferred to the image display device 1.
Note that, the storage unit of the image taking device 20 may be detached from the image taking device 20 and connected to the image display device 1, to thereby input the image data to the image display device 1.
Further, although the image taking device 20 including both the image taking date/time information generation unit 24 and the image taking place information generation unit 25 is described above, the image taking device 20 may include only one of the image taking date/time information generation unit 24 and the image taking place information generation unit 25 (for example, only the image taking date/time information generation unit 24). However, for concrete description, the case where the image taking device 20 includes both units is described below.
Next, operations of the image taking device 20 and the image display device 1 are described with reference to the drawings. First, the operation of the image taking device 20 is described.
When the user instructs the image taking device 20 to perform an image taking operation, the image taking unit 21 takes an image of a subject, to thereby generate the image data.
After the image data is generated in the image taking unit 21, the image data is temporarily stored in the image memory 22. The user may check the taken image, by displaying the image data stored in the image memory 22 on the display unit 23. Further, the image taking information writing unit 26 acquires the image taking date/time information from the image taking date/time information generation unit 24, and also acquires the image taking place information from the image taking place information generation unit 25. After that, the image taking information writing unit 26 writes those pieces of image taking information into the predetermined region in the image data. In this manner, the image data is generated by the image taking device 20.
Next, the operation of the image display device 1 is described with reference to the drawings. First, an operation of the storage system is described with reference to the drawing.
First, the image data is input to the image display device 1, and is supplied to the image analysis unit 2 (STEP 1).
The image analysis unit 2 analyzes an image represented by the image data (hereinafter, also simply referred to as image), and automatically determines the category to which the image data belongs (STEP 2). Details of an analysis method for the image and an automatic determination method for the category of the image data by the image analysis unit 2 are described later. Note that, in addition to (or instead of) the automatic determination of the category of the image data by the image analysis unit 2 performed in STEP 2, manual designation of the category of the image data by the user may be performed. Further, categories to be automatically determined by the image analysis unit 2 may be designated by the user.
The tag generation unit 3 generates a tag indicating the category which is automatically determined by the image analysis unit 2 (or manually designated). Then, the tag writing unit 4 writes the tag generated by the tag generation unit 3 into the predetermined region of the image data (STEP 3), and stores the image data in the storage unit 5 (STEP 4). In this manner, the operation of the storage system is completed.
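A minimal sketch of this storage-system flow (STEPs 1 to 4), with a placeholder classifier standing in for the image analysis unit 2 and a plain list standing in for the storage unit 5; all names here are illustrative assumptions.

```python
storage = []  # stand-in for the storage unit 5

def classify(image):
    """Stand-in for the image analysis unit 2 (STEP 2); a real
    implementation would compare feature amounts as described later."""
    return "train"  # placeholder decision

def store_image(image, user_category=None):
    """STEPs 1-4: take the input image data, tag it, and store it."""
    category = user_category or classify(image)  # manual designation wins
    image["tag"] = category                      # STEP 3: write the tag
    storage.append(image)                        # STEP 4: store the data

store_image({"pixels": b"...", "taken_at": "2009-08-13"})
```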
Next, an operation of the display system, in particular, the operation of generating the display image by the display control unit 9, is described with reference to the drawing.
The display control unit 9 starts the operation of generating the display image when, for example, an instruction is input from the user through the operation unit 8.
Next, the image taking information extraction unit 6 extracts the image taking information from the predetermined region (for example, header region) of the image data which is stored in the storage unit 5. Further, similarly, the tag extraction unit 7 extracts the tag from the predetermined region of the image data which is stored in the storage unit 5 (STEP 11). The extracted image taking information and tag are input to the display control unit 9. Note that, at this time, the display control unit 9 may read out other data (for example, data of frame of display image) which may be necessary to generate the display image from the storage unit 5.
The display image includes sections in which corresponding images of the image data items are displayed. Note that, the corresponding images displayed in the sections are only the corresponding images selected by the display control unit 9, and there may be sections in which no corresponding image is displayed. Details of the display image and a method of selecting the corresponding images to be displayed in the sections are described later.
“Corresponding image” refers to, for example, a thumbnail image attached to the image data or an image obtained by adjusting the image of the image data (for example, a reduced image of a still image or a reduced image of one frame contained in a moving image). Note that, the corresponding image is not limited to the images described above, and may include, for example, a character or an icon, or may be an image obtained by combining the character and the icon with the images described above.
Further, “section” refers to, for example, a temporal section such as a day, a week, a month, a year, or a predetermined day of the week on a calendar, a spatial section such as a village, a town, a city, a prefecture, a region, a country, or a predetermined distance area on a map, or a section of a combination thereof. Note that, the type and the number of the sections included in one display image may be set based on an instruction from the user input through the operation unit 8 when the display image is generated, may be set by the user in advance, or may be set automatically by the display control unit 9. Note that, for concrete description, a case where the corresponding images are displayed in the temporal sections is mainly described below.
The display control unit 9 selects one section (STEP 12). Then, the display control unit 9 selects the corresponding image which is to be displayed in the section, and reads out the corresponding image from the storage unit 5 (STEP 13). The display control unit 9 generates the display image by displaying the read-out corresponding image in the section.
In STEP 13, the display control unit 9 determines, based on the image taking information on the image data, whether or not the corresponding image is an image which may be displayed in the section. In addition, the display control unit 9 determines whether or not to display the corresponding image in the display image based on the tag of the image data and the representative category thereof.
After the corresponding image to be displayed in the section is selected and read out in STEP 13, the display control unit 9 checks whether or not there is an unselected section (STEP 14). When there is an unselected section (NO of STEP 14), the process returns to STEP 12 to select the unselected section. On the other hand, when selection of all the sections is completed (YES of STEP 14), the operation of the display system is completed.
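The loop of STEPs 12 to 14 may be sketched as follows; the dictionary representation of the display image and the callback for the selection rule (detailed later) are illustrative assumptions.

```python
def generate_display_image(images, sections, select_for_section):
    """Sketch of STEPs 12 to 14: visit every section once and choose
    the corresponding images to be displayed there."""
    display = {}
    for section in sections:                         # STEP 12
        candidates = [im for im in images            # displayable here,
                      if im["section"] == section]   # per taking info
        display[section] = select_for_section(candidates)  # STEP 13
    return display                                   # STEP 14: all visited

# Example: the days of one month as temporal sections, one image per day
images = [{"section": 13, "name": "train.jpg"}]
print(generate_display_image(images, range(1, 32), lambda c: c[:1]))
```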
The display unit 10 displays the display image generated by the display control unit 9. At this time, when a new instruction is input from the user through the operation unit 8, the display control unit 9 performs adjustment or regeneration of the display image in response to the instruction.
<<Image Analysis Unit>>
Next, an example of the automatic determination method for the category of the image data by the image analysis unit 2 is described with reference to the drawing.
In this automatic determination method, a feature amount S is calculated from the image, and the feature amount S is compared with a feature amount M which is set in advance for each category. When the feature amount S is close to the feature amount M of a certain category (for example, when a distance between the feature amounts is small), the image data is determined to belong to that category.
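Under this reading, the automatic determination reduces to a nearest-reference rule; the sketch below assumes a Euclidean distance and dummy reference vectors, since neither is fixed by the description above.

```python
import numpy as np

# Feature amounts M, one per category (dummy 4-dimensional values here;
# real reference features would be derived as described below).
reference_features = {
    "train": np.array([0.9, 0.1, 0.3, 0.2]),
    "cat":   np.array([0.2, 0.8, 0.5, 0.1]),
}

def determine_category(s):
    """Return the category whose feature amount M is closest to the
    feature amount S of the image (smaller distance = better match)."""
    return min(reference_features,
               key=lambda c: np.linalg.norm(s - reference_features[c]))

print(determine_category(np.array([0.85, 0.15, 0.25, 0.2])))  # -> train
```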
<Feature Amount Calculation Example>
Further, the feature amount S may be a “feature vector”. Hereinafter, a method of calculating the “feature vector” is described with reference to the drawings.
The feature vector is calculated from an image 100 to be analyzed. A certain pixel in the image 100 is set as a focused pixel 101, and filters 111 to 115, which are edge extracting filters, are caused to function on a small region having the focused pixel 101 at the center thereof.
The filters 111, 112, 113, and 114 extract edges extending in the horizontal direction, the vertical direction, a right oblique direction, and a left oblique direction of the image 100, respectively, and output filter output values indicating intensity of the extracted edges. The filter 115 extracts an edge extending in a direction not classified into any of the horizontal direction, the vertical direction, the right oblique direction, and the left oblique direction, and outputs a filter output value indicating intensity of the extracted edge.
The intensity of the edge represents a gradient magnitude of a pixel signal (for example, luminance signal). For example, when there is an edge extending in the horizontal direction of the image 100, a relatively large gradient occurs in the pixel signal in the vertical direction which is orthogonal to the horizontal direction. Further, for example, when spatial filtering is performed by causing the filter 111 to function on the small region having the focused pixel 101 at the center thereof, the gradient magnitude of the pixel signal along the vertical direction of the small region having the focused pixel 101 at the center thereof is obtained as the filter output value. Note that, this is common to the filters 112 to 115.
In a state where a certain pixel in the image 100 is determined as the focused pixel 101, the filters 111 to 115 are caused to function on the small region having the focused pixel 101 at the center thereof, to thereby obtain five filter output values. Among the five filter output values, the maximum filter output value is extracted as an adopted filter value. When the maximum filter output value is the filter output value obtained from one of the filters 111 to 115, the adopted filter value is called one of a first adopted filter value to a fifth adopted filter value. Therefore, for example, when the maximum filter output value is the filter output value from the filter 111, the adopted filter value is the first adopted filter value, and when the maximum filter output value is the filter output value from the filter 112, the adopted filter value is the second adopted filter value.
The position of the focused pixel 101 is caused to move from one pixel to another in the horizontal direction and the vertical direction in the image 100, for example. In each movement, the filter output values of the filters 111 to 115 are obtained, to thereby determine the adopted filter value. After the adopted filter values with respect to all the pixels in the image 100 are determined, histograms 121 to 125 of the first to fifth adopted filter values are created.
The histogram 121 of the first adopted filter value is a histogram of the first adopted filter value obtained from the image 100, and the same applies to the histograms 122 to 125 of the second to fifth adopted filter values. For example, when the number of bins of each histogram is 16, 80 frequency data items may be obtained from the histograms 121 to 125. A vector (for example, 80-dimensional vector) having the frequency data items obtained from the histograms as elements thereof is obtained as an edge vector HE.
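A sketch of the edge-vector computation described above; the five 2×2 kernels of the MPEG-7 edge histogram descriptor are assumed here as the filters 111 to 115 (MPEG-7 is noted below as one option), with 16 bins per histogram.

```python
import numpy as np
from scipy.ndimage import convolve

# Five edge extracting kernels (MPEG-7 edge histogram descriptor style,
# assumed here for the filters 111 to 115): horizontal, vertical,
# right oblique, left oblique, and non-directional.
s2 = np.sqrt(2.0)
KERNELS = [
    np.array([[1.0, 1.0], [-1.0, -1.0]]),   # filter 111: horizontal edge
    np.array([[1.0, -1.0], [1.0, -1.0]]),   # filter 112: vertical edge
    np.array([[s2, 0.0], [0.0, -s2]]),      # filter 113: right oblique
    np.array([[0.0, s2], [-s2, 0.0]]),      # filter 114: left oblique
    np.array([[2.0, -2.0], [-2.0, 2.0]]),   # filter 115: other directions
]

def edge_vector(image, bins=16):
    """Histogram the adopted (maximum) filter value per pixel into one
    histogram per filter, then flatten to obtain the edge vector HE."""
    responses = np.stack([np.abs(convolve(image, k)) for k in KERNELS])
    adopted_idx = responses.argmax(axis=0)   # winning filter per pixel
    adopted_val = responses.max(axis=0)      # the adopted filter value
    top = float(adopted_val.max()) + 1e-9
    hists = [np.histogram(adopted_val[adopted_idx == i],
                          bins=bins, range=(0.0, top))[0]
             for i in range(len(KERNELS))]   # histograms 121 to 125
    return np.concatenate(hists).astype(float)  # HE: 5 x bins elements

he = edge_vector(np.random.rand(64, 64))     # 80-dimensional for 16 bins
```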
In addition, color histograms representing a state of color in the image 100 are created. For example, when pixel signals in each pixel forming the image 100 include an R signal representing intensity of red color, a G signal representing intensity of green color, and a B signal representing intensity of blue color, a histogram HSTR of an R signal value, a histogram HSTG of a G signal value, and a histogram HSTB of a B signal value in the image 100 are created as the color histograms of the image 100. For example, when the number of bins of each color histogram is 16, 48 frequency data items may be obtained from the color histograms HSTR, HSTG, and HSTB. A vector (for example, 48-dimensional vector) having the frequency data items obtained from the color histograms as elements thereof is obtained as a color vector HC.
When the feature vector of the image 100 is expressed by H, the feature vector H is obtained by an expression “H=kC×HC+kE×HE”, where kC and kE denote predetermined coefficients (note that, kC≠0 and kE≠0). Therefore, the feature vector H of the image 100 represents the feature amounts in accordance with a shape and color of an object in the image 100.
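Because HC (48 elements in the example above) and HE (80 elements) generally differ in dimension, the expression H=kC×HC+kE×HE is read here as a concatenation of the scaled vectors; this interpretation, like the helper names, is an assumption on our part.

```python
import numpy as np

def color_vector(rgb_image, bins=16):
    """Color vector HC: concatenated histograms HSTR, HSTG, and HSTB of
    the R, G, and B signal values (48 elements for 16 bins)."""
    hists = [np.histogram(rgb_image[..., c], bins=bins, range=(0, 256))[0]
             for c in range(3)]
    return np.concatenate(hists).astype(float)

def feature_vector(hc, he, k_c=1.0, k_e=1.0):
    """H = kC x HC + kE x HE, read here as concatenation of the scaled
    vectors because HC and HE differ in length (interpretation assumed)."""
    return np.concatenate([k_c * hc, k_e * he])

img = np.random.randint(0, 256, (64, 64, 3))
h = feature_vector(color_vector(img), np.zeros(80))  # 128-dimensional H
```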
Note that, in the moving picture experts group (MPEG) 7 standard, the derivation of a feature vector H (feature amount) of an image is performed by using five edge extracting filters, and those five edge extracting filters may be used as the filters 111 to 115. In addition, the feature vector H (feature amount) of the image 100 may be derived by applying a method standardized in MPEG-7 to the image 100. Further, the feature vector H may be calculated by using only one of the shape feature amount and the color feature amount.
Further, in addition to (or instead of) the feature vector described above, the feature amount may be calculated based on existence of people (particularly, number of people) in the image. The existence of people in the image may be determined by, for example, using various known technologies for face detection. Specifically, for example, by using a weak learner which applies a weight table created from a large number of teacher samples (face and non-face sample images) by using Adaboost (Yoav Freund, Robert E. Schapire, “A decision-theoretic generalization of on-line learning and an application to boosting”, European Conference on Computational Learning Theory, Sep. 20, 1995.), a face may be detected from the image.
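As one concrete way to obtain such a people-count feature, the sketch below uses the Haar-cascade face detector bundled with the opencv-python distribution, itself a cascade of boosted classifiers in the spirit of the AdaBoost-based detection cited above; it is an illustrative substitute, not the detector the text prescribes.

```python
import cv2

# Haar-cascade face detector shipped with opencv-python: a cascade of
# boosted weak classifiers, related in spirit to the cited AdaBoost method.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def people_count(bgr_image):
    """Feature amount based on the number of faces detected in the image."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return len(faces)
```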
In addition, the feature amount may be calculated based on existence of a particular person in the image. The existence of a particular person in the image may be determined by, for example, using various known technologies for face recognition. Specifically, for example, the determination may be performed by comparing a sample image of a particular person stored in advance with a face of a person detected from the image by face detection.
Similarly, the sex (male or female) or the age group (for example, adult or child) of a person detected from the image may be determined, to thereby calculate the feature amount based on the determination result.
Further, the above-mentioned feature vector may be calculated from a background region which is a region excluding a person region from the entire image. At this time, the person region may be a region in which a person is assumed to be contained based on a location and a size of a face region detected by face detection. When a person is not contained in the image, the entire image may be the background region.
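A sketch of carving out such a background region, assuming the person region is approximated by widening each detected face rectangle and extending it downward by fixed factors; the factors are illustrative assumptions.

```python
import numpy as np

def background_mask(image_shape, face_boxes, widen=1.5, body_factor=4.0):
    """Boolean mask that is True on the background region: start from the
    whole image and carve out a person region estimated from each detected
    face box (x, y, w, h); the expansion factors are illustrative."""
    h_img, w_img = image_shape[:2]
    mask = np.ones((h_img, w_img), dtype=bool)
    for (x, y, w, h) in face_boxes:
        cx = x + w / 2.0
        half = widen * w / 2.0                      # widen sideways
        x0, x1 = max(int(cx - half), 0), min(int(cx + half), w_img)
        y1 = min(int(y + body_factor * h), h_img)   # face plus torso below
        mask[y:y1, x0:x1] = False
    return mask  # feature amounts are then computed where mask is True

print(background_mask((100, 100), [(40, 10, 20, 20)]).sum())
```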
<<Display Control Unit>>
<Basic Operation>
Next, an operation of generating the display image, which is a basic operation of the display control unit 9, is described with reference to the drawing.
The display control unit 9 refers to the image taking date/time among the pieces of image taking information of the image data so as to generate the display image 200, in which the corresponding images are displayed in sections of respective days of a calendar. Specifically, the display control unit 9 determines, based on the image taking date/time, whether or not the corresponding image of each image data item may be displayed on a certain day.
Further, among the images determined to be the corresponding images which may be displayed on the certain day, the display control unit 9 selects and displays the corresponding image belonging to the representative category preferentially.
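A minimal sketch of this date-based placement, assuming each image data item carries its image taking date/time as a Python datetime and that the display image covers one calendar month; the dictionary-based grouping is an illustrative choice.

```python
from collections import defaultdict
from datetime import datetime

def images_per_day(images, year, month):
    """Group image data items into the day sections of one calendar month
    based on their image taking date/time."""
    by_day = defaultdict(list)
    for im in images:
        t = im["taken_at"]
        if t.year == year and t.month == month:
            by_day[t.day].append(im)
    return by_day

photos = [{"taken_at": datetime(2009, 8, 13, 10, 0), "tag": "train"},
          {"taken_at": datetime(2009, 8, 13, 11, 0), "tag": "cat"}]
print(sorted(images_per_day(photos, 2009, 8)))  # -> [13]
```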
Details of a method of selecting the corresponding image to be displayed in the display image are described with reference to the drawing.
Assume that corresponding images 210 to 214 may be displayed in a certain section (13th day), that the corresponding images 210 and 211 correspond to image data items belonging to the category “train”, and that the representative category is “train”.
In this example, the corresponding images 210 and 211, whose image data items belong to the category “train”, which is the representative category, are displayed preferentially. Note that, when the number of corresponding images whose image data items belong to the representative category (train), that is, the number (two) of the corresponding images 210 and 211, is larger than the number of corresponding images (one) which may be displayed in the certain section (13th day), the corresponding image 210 of the image data that better matches the representative category (that is, is more “train-like”) may be selectively displayed.
The image data that better matches the representative category (more train-like) is, for example, image data having a smaller distance between its feature amount S and the feature amount M of the representative category described above (hereinafter, such image data is referred to as being “high in score”).
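Combining the two rules above (prefer the representative category, then prefer higher score), the selection for one section might look as follows; the helper names and data layout are assumptions.

```python
import numpy as np

def score(image, reference_m):
    """Higher score = smaller distance between the feature amount S of
    the image and the feature amount M of the representative category."""
    return -float(np.linalg.norm(image["feature"] - reference_m))

def select_for_section(candidates, representative, reference_m, slots=1):
    """Prefer images tagged with the representative category; among those,
    display the highest-scoring (most 'train-like') ones first."""
    preferred = [im for im in candidates if im["tag"] == representative]
    pool = preferred if preferred else candidates
    pool = sorted(pool, key=lambda im: score(im, reference_m), reverse=True)
    return pool[:slots]
```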
The representative category of the display image 200 may be changed. For example, when an instruction to change the representative category to “cat” is input by the user through the operation unit 8 while the display image 200 is displayed, the display control unit 9 regenerates the display image so that corresponding images 221 of the image data items belonging to the category “cat” are preferentially displayed, and the display unit 10 displays the resulting display image 220.
With the configuration as described above, in the display images 200 and 220 displayed on the display unit 10, the corresponding images 201 and 221 belonging to the representative categories are preferentially displayed, respectively. Therefore, the user may easily and rapidly search for the desired image data by determining the category of the image data which is desired by the user as the representative category.
Note that, in the display images 200 and 220, one corresponding image is displayed in each section, but a plurality of corresponding images may be displayed in each section.
Further, in the display images 200 and 220, all the sections are displayed. However, only some of the sections may be displayed, and the remaining sections may be hidden. For example, the representative category of a display image 230 is “train”, and in the display image 230, only the sections in which there exist image data items that may be displayed as the corresponding images are displayed.
With this configuration, for example, it is possible to selectively display corresponding images of the image data items in sections in which the image taking operation has been frequently performed. Further, for example, when the user recognizes the section of the desired image data, the corresponding images of the image data items in the section may be selectively displayed. Further, by hiding the sections unnecessary for search, larger display regions of the sections necessary for search may be secured. Therefore, the user may search for the desired image data more easily and rapidly.
<Other Operation Examples>
Next, various operation examples of the display control unit 9 are described. Note that, the above-mentioned basic operation and each operation example described below may be executed in combination as appropriate unless contradiction occurs.
[Automatic Selection of Representative Category]
First, an example of a method of automatically selecting the representative category by the display control unit 9 is described. In this example, the automatic selection method is a method of selecting a category having a high determination (designation) frequency as the representative category.
For example, the category which has the largest number of image data items belonging thereto may be selected as the representative category. In this case, in order to select the representative category, the display control unit 9 may refer to all the image data items stored in the storage unit 5, or may refer to only the image data items in certain sections (for example, the sections included in the display image, that is, one month in the case of the display image 200).
Further, for example, in each section, a category (section category) which has the largest number of image data items belonging thereto, which may be displayed as the corresponding images, may be obtained, to thereby select a category which exhibits the highest count among the obtained section categories as the representative category. Specifically, for example, when the display image 200 is generated, the section category is obtained for each day included in the display image 200, and the category obtained as the section category on the largest number of days is selected as the representative category.
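Both variants described above may be sketched as follows, with Counter doing the tallying; tie-breaking is not specified in the text and is left to Counter's ordering here.

```python
from collections import Counter

def representative_by_count(images):
    """Variant 1: the category with the most image data items overall."""
    return Counter(im["tag"] for im in images).most_common(1)[0][0]

def representative_by_sections(images_per_section):
    """Variant 2: take each section's majority category (the section
    category), then the category that wins the most sections."""
    winners = [Counter(im["tag"] for im in ims).most_common(1)[0][0]
               for ims in images_per_section.values() if ims]
    return Counter(winners).most_common(1)[0][0]
```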
With this configuration, the category to which the image data desired by the user is highly likely to belong may be automatically selected as the representative category. Therefore, the search for the image data may be performed more easily and rapidly.
Note that, it is preferred that the automatic determination of the category of the image data be performed in the image analysis unit 2, because this requires no instruction from the user with respect to the category of the image data, while the corresponding images of the image data items which the user is likely to desire may still be displayed.
[Image Data Search]
Next, an example of a method of searching for image data by the display control unit 9 is described with reference to the drawings.
In the search method in this example, the user first selects image data which is similar to the desired image data from the corresponding images included in the display image 240, and designates the image data of the selected corresponding image as a query through the operation unit 8.
Specifically, search is performed for the image data similar to the image data serving as the query. Whether or not the image data items are similar to each other may be determined by using, for example, the feature amounts described above (for example, image data items whose feature amounts are close to each other may be determined to be similar to each other).
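A sketch of this query search, assuming the same feature vectors serve as the similarity measure and that a smaller distance means a higher similarity; the function name is ours.

```python
import numpy as np

def search_similar(query, images, top_k=None):
    """Rank image data items by similarity (feature distance) to the
    image data serving as the query, most similar first."""
    ranked = sorted(images, key=lambda im:
                    float(np.linalg.norm(im["feature"] - query["feature"])))
    return ranked[:top_k] if top_k is not None else ranked
```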
Further, the display control unit 9 may generate a display image in which the corresponding images of the image data items obtained through the search are displayed arranged in descending order of similarity to the image data serving as the query.
With the configuration described above, the search may be performed by using the image data corresponding to the designated corresponding image as the query. Therefore, the query may be designated intuitively and easily. As a result, easy and effective search may be performed.
Further, by displaying the corresponding images of the image data items obtained through the search in descending order of similarity to the image data serving as the query, the corresponding images may be displayed in order from the image data which the user is most likely to desire. Therefore, the search for the desired image data may be performed easily and rapidly.
Note that, the image data items to be searched may be all the image data items stored in the storage unit 5, or only the image data items belonging to the same category as the image data serving as the query. However, searching the image data items widely enables effective search. In particular, it is preferred to search the image data items widely without considering the sections included in the display image 240. Further, a plurality of image data items may serve as the query.
[Switching of Corresponding Image]
Next, an example of a method of switching the corresponding image by the display control unit 9 is described with reference to the drawing.
Switching is performed as follows. The user inputs a switching instruction to the display control unit 9 through the operation unit 8. For example, when the corresponding image of the image data desired by the user is not displayed in the display image 200, the user inputs the switching instruction, and the display control unit 9 generates a new display image 270 in response thereto.
After the switching instruction is input, the representative category and the sections are maintained without being changed, but the corresponding image 201 displayed in each section is changed. For example, displayed in each section is a corresponding image 271 of image data which is the second highest (or second lowest) in score after the image data to which the corresponding image 201 displayed before the switching corresponds.
Note that, in a section in which the number of image data items which belong to the representative category and may be displayed as the corresponding images is equal to or smaller than the number of corresponding images displayed at a time (one) (for example, the 3rd to 6th days in the display image 270), the corresponding image displayed before the switching may be maintained without being changed.
With this configuration, even if the number of corresponding images displayed at a time in each section is small, by sequentially performing the switching, a large number of corresponding images may be displayed. Therefore, the corresponding images of the image data items belonging to the representative category may be viewed easily by the user.
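A sketch of this switching, keeping a per-section cursor into each section's score-ranked list of candidate images; the cursor bookkeeping (including wrapping around to the first image) is an illustrative choice, not something the text specifies.

```python
def switch_once(ranked_per_section, cursors):
    """Advance every section to its next-ranked corresponding image;
    ranked_per_section maps section -> images sorted by score, and
    cursors keeps each section's current position between calls."""
    shown = {}
    for section, ranked in ranked_per_section.items():
        if len(ranked) <= 1:              # nothing else to switch to:
            shown[section] = ranked[:1]   # keep the current image
            continue
        cursors[section] = (cursors.get(section, 0) + 1) % len(ranked)
        shown[section] = [ranked[cursors[section]]]
    return shown
```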
Further, the switching may be performed only for one or a plurality of sections designated by the user. With this configuration, when the user remembers the image taking date/time of the desired image data with near certainty, unnecessary switching in the other sections may be prevented.
[Generation of Display Image in Spatial Sections]
Referring to the display images 200, 220, 230, 240, and 270, the cases where the corresponding images are displayed in the temporal sections have been described above. However, the corresponding images may also be displayed in the spatial sections.
In a display image 300, the corresponding images are displayed in spatial sections (for example, prefectures on a map). The display control unit 9 refers to the image taking place among the pieces of image taking information of the image data, to thereby determine whether or not the corresponding image of each image data item may be displayed in a certain prefecture.
In addition, among the images determined as the corresponding images which may be displayed in the certain prefecture, the display control unit 9 selects and displays the corresponding image belonging to the representative category preferentially.
Also in the case of displaying corresponding images 301 in the spatial sections, the display image 300 displayed on the display unit 10 is an image in which the corresponding images 301 belonging to the representative category are displayed preferentially. Accordingly, the corresponding images 301 of the image data items belonging to the same category as the image data desired by the user are displayed preferentially, and the user may search for the desired image data easily and rapidly.
Note that, the various display methods and selection methods described above as applied to the temporal sections may be applied to the spatial sections as well. Further, the sections may be both temporal and spatial. For example, sections may be defined by temporally dividing each spatial section of the display image 300.
<<Modified Example>>
In the image display device 1 according to the embodiment of the present invention, the operation of the display control unit 9 may be executed by a control device such as a microcomputer. In addition, all or some of the functions implemented by such a control device may be written as a program, and by running the program on a program executing device (for example, a computer), all or some of those functions may be implemented.
Further, the present invention is not limited to the above-mentioned case, and the image display device 1 may be incorporated in another device, such as the image taking device 20.
The embodiment of the present invention has been described above. However, the scope of the present invention is not limited thereto, and the present invention may be implemented with various modifications without departing from the gist of the present invention.
The present invention is applicable to an image display device which displays an image, as typified by a display unit of an image taking device or a viewer.