1. Field of Invention
The present disclosure relates to an image searching method. More particularly, the present disclosure relates to an image searching method for dynamically searching related images and a user interface thereof.
2. Description of Related Art
In modern society, people keep searching for efficient solutions to problems in daily life. For example, handheld devices, such as mobile phones, tablet computers or personal digital assistants (PDAs), are useful tools with powerful functions, compact sizes and great operability. Furthermore, users expect such devices to intelligently determine which function they intend to launch and to provide related information accordingly. This intelligent judgment is especially important for common applications, such as viewing photos or videos stored on mobile phones or tablet computers.
Recently, camera functions implemented in mobile phones or tablet computers have become highly developed, and the physical storage space or cloud storage space of the mobile phones or tablet computers has increased rapidly. Therefore, users tend to take pictures with their handheld devices and also manage/review existing photos (such as searching for their individual photos, group photos with their families/friends, photos at a specific location during a trip, and photos with their pets). However, it is usually hard to classify and search existing data on handheld devices. In order to find an old photo file stored on a handheld device, the user has to scroll through the screen of the mobile device repeatedly to locate the target file.
On the other hand, display interfaces on handheld devices are usually smaller than those of traditional desktop computers. Thumbnails are displayed at a relatively small size while the user is searching for a target photo, such that it is hard to identify the target file correctly and efficiently among the many thumbnails displayed on the compact screen.
In general, a selection behavior is usually detected through contact or induction between a finger/stylus and a touch sensor. In embodiments of this disclosure, users can assign/select contents of interest by a touch gesture while viewing an image on an electronic device. In response, the electronic device immediately searches a database built in the electronic device and displays related images within the database, so as to improve the efficiency of the electronic device.
An aspect of the disclosure provides a related image searching method suitable for searching a database storing a plurality of image files. The related image searching method includes the following steps. A context-of-interest (COI) area is selected from a displayed image according to a touch input event. A content characteristic in the COI area is analyzed. An implication attribute of the content characteristic is determined. The database is searched according to the content characteristic and the implication attribute, so as to identify at least one image file from the database with the same content characteristic or the same implication attribute.
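For illustration only, the searching step of this flow can be sketched as follows; the data model (an `ImageFile` with pre-extracted characteristic and attribute sets) and all names are hypothetical and not mandated by the disclosure:

```python
from dataclasses import dataclass, field

@dataclass
class ImageFile:
    path: str
    characteristics: set = field(default_factory=set)  # e.g., {"person:Brian"}
    attributes: set = field(default_factory=set)       # e.g., {"male", "adult"}

def related_image_search(database, content_characteristic, implication_attribute):
    """Identify image files sharing the content characteristic or the
    implication attribute (the claimed searching step)."""
    return [f for f in database
            if content_characteristic in f.characteristics
            or implication_attribute in f.attributes]
```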
According to an embodiment of the disclosure, the related image searching method further includes the following steps. The image files within the database are analyzed to obtain a plurality of existing content characteristics of the image files. A plurality of existing content characteristic labels is established according to the existing content characteristics. The existing content characteristic labels corresponding to each of the image files are recorded according to the existing content characteristics within each of the image files.
According to an embodiment of the disclosure, after the step of analyzing the content characteristic in the COI area, the related image searching method further includes the following steps. Whether the content characteristic matches the existing content characteristics corresponding to the existing content characteristic labels is determined. If the content characteristic matches one of the existing content characteristics, the database is searched according to the corresponding one of the existing content characteristic labels, so as to identify at least one image file from the database with the same existing content characteristic label.
According to an embodiment of the disclosure, in the step of selecting the context-of-interest (COI) area according to the touch input event, the touch input event includes a plurality of touch points or a touch track, and the related image searching method further includes the following steps. A first COI area and a second COI area are selected together from the displayed image according to the touch points or the touch track of the touch input event. A first content characteristic in the first COI area and a second content characteristic in the second COI area are analyzed. A first implication attribute of the first content characteristic and a second implication attribute of the second content characteristic are determined. The database is searched according to a logical set formed by the first content characteristic, the first implication attribute, the second content characteristic and the second implication attribute, so as to identify at least one image file from the database with the same logical set.
According to an embodiment of the disclosure, the logical set is a conjunction set, a disjunction set or a complement set among the first content characteristic, the first implication attribute, the second content characteristic and the second implication attribute.
According to an embodiment of the disclosure, the content characteristic is a specific person, a specific object, a specific location, a specific scene or a specific pet. In the aforesaid embodiment, the content characteristic is analyzed by an algorithm combination, and the algorithm combination is selected from the group consisting of a face recognition algorithm, an object recognition algorithm, a scene recognition algorithm and a pet recognition algorithm. In the aforesaid embodiment, the implication attribute determined from the content characteristic includes at least one of a character category, an item category, a location category, a background category and a pet category.
According to an embodiment of the disclosure, the step of searching the database according to the content characteristic and the implication attribute further includes the following steps. The database is preferentially searched for at least one image file with the same content characteristic. If no image file with the same content characteristic is found, the database is searched for at least one image file with the same implication attribute.
Another aspect of the disclosure provides a user interface controlling method, which includes the following steps. A displayed image is shown. A context-of-interest (COI) area is selected from the displayed image according to a touch input event. A content characteristic in the COI area is analyzed. An implication attribute of the content characteristic is determined. A database is searched according to the content characteristic and the implication attribute, so as to identify at least one image file from the database with the same content characteristic or the same implication attribute. The at least one image file is displayed.
It is to be understood that both the foregoing general description and the following detailed description are by way of example, and are intended to provide further explanation of the disclosure as claimed.
The disclosure can be more fully understood by reading the following detailed description of the embodiment, with reference made to the accompanying drawings as follows:
Although the present disclosure has been described in considerable detail with reference to certain embodiments thereof, other embodiments are possible. Therefore, the spirit and scope of the appended claims should not be limited to the description of the embodiments contained herein.
A practical example is disclosed in the following paragraphs for demonstration.
Via a touch input interface (e.g., a resistive touch sensor, a capacitive touch sensor, an optical touch sensor or an acoustic touch sensor) on the electronic device, the user can trigger/form a touch input event at a specific location in the area corresponding to the displayed image 100. Forming a touch input event at different locations on a screen is a common technique in conventional touch sensing, so it is not explained further herein.
In the embodiment, the touch input event formed in the area corresponding to the displayed image 100 includes at least one touch point or at least one touch track. In the practical example, the touch input event is one of the touch points (e.g., the touch point Ta or Tb) or a touch track (e.g., the touch track Tc).
The related image searching method is suitable for searching a database storing multiple image files, so as to identify related image files corresponding to the context-of-interest area COIa, COIb, COIc or COId selected by the user. In other words, the user can dynamically assign a target of interest from the displayed image 100, and the related image searching method will automatically retrieve related image files within the database (e.g., a digital album, a photo data folder, a multimedia data folder or an image storage space within the electronic device).
For example, when the user touches the touch point Ta with a finger, the context-of-interest area COIa is selected from the displayed image 100; when the user touches the touch point Tb, the context-of-interest area COIb is selected from the displayed image 100, and so forth.
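A minimal sketch of how a touch point might be mapped to a context-of-interest area, assuming the displayed image has already been segmented into candidate regions with known bounding boxes (the region list and coordinates below are hypothetical):

```python
def select_coi_area(touch_point, regions):
    """Return the name of the first region whose bounding box contains
    the touch point, or None if the touch lands outside every region."""
    x, y = touch_point
    for name, (left, top, right, bottom) in regions:
        if left <= x <= right and top <= y <= bottom:
            return name
    return None

# Hypothetical layout: touching Ta inside the first box selects COIa.
regions = [("COIa", (40, 30, 90, 90)), ("COIb", (120, 50, 170, 110))]
print(select_coi_area((60, 50), regions))  # -> "COIa"
```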
Afterward, step S120 is executed for analyzing a content characteristic in the context-of-interest area COIa, COIb, COIc or COId. The analysis in step S120 is achieved by a single algorithm or a combination of multiple algorithms.
For example, the algorithm for the analysis includes a face recognition algorithm. When the touch input event is the touch point Ta, the context-of-interest area COIa is selected from the displayed image 100, and the content characteristic in the context-of-interest area COIa is a face belonging to a specific person (e.g., a specific male adult in this practical example).
In some embodiments, the content characteristic is a specific person, a specific object, a specific location, a specific scene or a specific pet. In this case, the analysis in step S120 is achieved by a combination of multiple algorithms. The algorithm combination is selected from the group consisting of a face recognition algorithm, an object recognition algorithm, a scene recognition algorithm and a pet recognition algorithm.
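One possible way to realize such an algorithm combination is to try each recognizer in turn and keep the first confident result. The recognizer stubs below are placeholders, since the disclosure does not name any particular recognition library:

```python
def analyze_content(coi_pixels, recognizers):
    """Run each (kind, recognize) pair over the selected area and return
    the first non-None result, e.g., ("face", "person:Brian")."""
    for kind, recognize in recognizers:
        result = recognize(coi_pixels)
        if result is not None:
            return kind, result
    return None

# Placeholder recognizers standing in for real face/object/scene/pet models;
# here the "pixels" are a set of tags purely for demonstration.
recognizers = [
    ("face",   lambda px: "person:Brian" if "face" in px else None),
    ("object", lambda px: None),
    ("scene",  lambda px: "scene:mountain" if "mountain" in px else None),
    ("pet",    lambda px: None),
]
print(analyze_content({"face"}, recognizers))  # -> ("face", "person:Brian")
```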
Afterward, step S140 is executed for determining an implication attribute of the content characteristic. In the aforesaid step S120, the content characteristic in the context-of-interest area COIa, COIb, COIc or COId is a specific person, a specific item, a specific location, a specific scene or a specific pet; that is, the content characteristic is related to a particular object. Step S140 is executed for abstracting the implication attribute (i.e., the schematic meaning) of the content characteristic.
For example, the context-of-interest area COIa is analyzed in step S120 as a specific face belonging to a specific person, and is further processed in step S140 to obtain the implication attribute, such as non-specific male, non-specific adult or non-specific face of any person.
The context-of-interest area COIb is analyzed in step S120 as a specific face belonging to another specific person, and is further processed in step S140 to obtain the implication attribute, such as non-specific kid, non-specific person wearing glasses or non-specific face of any person.
The context-of-interest area COIc is analyzed in step S120 as a specific scene at a specific location, and is further processed in step S140 to obtain the implication attribute, such as non-specific mountain or non-specific outdoor location.
In other words, the implication attribute determined in step S140 includes at least one of a character category, an item category, a location category, a background category and a pet category.
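The abstraction from a specific content characteristic to its non-specific categories can be sketched as a simple lookup table; the entries below merely restate the examples above (a male adult, a kid wearing glasses, a mountain scene) and are hypothetical:

```python
# Hypothetical mapping from a specific characteristic to the broader
# implication attributes (character / item / location / background / pet).
IMPLICATION_TABLE = {
    "person:adult-male":       {"male", "adult", "face"},     # character category
    "person:kid-with-glasses": {"kid", "wearing-glasses", "face"},
    "scene:mountain":          {"mountain", "outdoor"},       # location category
}

def determine_implication(content_characteristic):
    """Step S140: abstract a specific characteristic into categories."""
    return IMPLICATION_TABLE.get(content_characteristic, set())

print(determine_implication("scene:mountain"))  # -> {"mountain", "outdoor"}
```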
Afterward, step S160 is executed for searching the database according to the content characteristic and the implication attribute, so as to identify at least one image file from the database with the same content characteristic or the same implication attribute.
In addition, step S160 may include sub-steps S161 and S162. Step S161 is executed for searching the database for at least one image file with the same content characteristic, such that image files with high relevance (having the same content characteristic) are identified first. When the searching result does not reveal the real targets of the user's interest, step S162 is executed for searching in a broader scope, so as to identify the image file(s) with the same implication attribute.
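A sketch of sub-steps S161/S162, under the assumption that each stored file carries pre-extracted characteristic and attribute sets (the field names are hypothetical):

```python
def search_database(database, characteristic, attributes):
    """Step S161: prefer exact matches on the content characteristic.
    Step S162: fall back to the broader implication attributes."""
    exact = [f for f in database if characteristic in f["chars"]]
    if exact:
        return exact
    return [f for f in database if attributes & f["attrs"]]

db = [
    {"path": "a.jpg", "chars": {"person:Brian"}, "attrs": {"male", "adult"}},
    {"path": "b.jpg", "chars": {"person:Alex"},  "attrs": {"kid"}},
]
print([f["path"] for f in search_database(db, "person:Brian", {"male"})])  # S161 hit
print([f["path"] for f in search_database(db, "person:Carol", {"kid"})])   # S162 fallback
```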
Furthermore, the aforesaid related image searching method can be extended to select multiple context-of-interest areas together. In step S400, a first context-of-interest area and a second context-of-interest area (e.g., the context-of-interest areas COIa and COIb) are selected together from the displayed image according to a plurality of touch points (e.g., the touch points Ta and Tb) of the touch input event. On the other hand, the touch input event may include a touch track with a large selection area (e.g., the touch track Tc), such that the multiple context-of-interest areas covered by the touch track are selected together.
However, the disclosure is not limited to selecting the context-of-interest areas COIa and COIb. In practical applications, multiple context-of-interest areas are selected according to the user's interests and serve as references/conditions in the following search.
Afterward, step S420 is executed for analyzing a first content characteristic in the first context-of-interest area and a second content characteristic in the second context-of-interest area. Step S440 is executed for determining a first implication attribute of the first content characteristic and a second implication attribute of the second content characteristic. The behaviors and operations of steps S420 and S440 are similar to those of steps S120 and S140 in the aforesaid embodiments. The main difference is that two (or more than two) context-of-interest areas are analyzed and determined in steps S420 and S440.
Afterward, step S460 is executed for searching the database according to a logical set, which is formed by the first content characteristic, the first implication attribute, the second content characteristic and the second implication attribute, so as to identify at least one image file from the database with the same logical set. In this embodiment, the logical set is a conjunction set, a disjunction set or a complement set among the first content characteristic, the first implication attribute, the second content characteristic and the second implication attribute.
For example, suppose the first content characteristic in the first context-of-interest area is a specific person, Brian, and the second content characteristic in the second context-of-interest area is another specific person, Alex.
Step S460 is executed to generate different outcomes by searching the database according to different logical sets (formed by the first content characteristic, the first implication attribute, the second content characteristic and the second implication attribute). For example, the outcomes can be group photos involving Brian and Alex (i.e., a conjunction set between the first content characteristic and the second content characteristic), photos with Brian and without Alex (i.e., a complement set between the first content characteristic and the second content characteristic), photos with Brian or Alex (i.e., a disjunction set between the first content characteristic and the second content characteristic), photos with Brian and non-specific kid (i.e., a conjunction set between the first content characteristic and the second implication attribute), photos with Brian and non-specific person wearing glasses (i.e., a conjunction set between the first content characteristic and the second implication attribute), photos with non-specific male adult and Alex (i.e., a conjunction set between the first implication attribute and the second content characteristic), etc.
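These logical sets map directly onto ordinary set operations over per-condition result sets; a toy sketch with hypothetical file names:

```python
# Suppose each search condition has already been resolved to a set of paths.
brian = {"a.jpg", "b.jpg", "d.jpg"}   # files matching the first content characteristic
alex  = {"b.jpg", "c.jpg"}            # files matching the second content characteristic

group_photos = brian & alex           # conjunction set: Brian and Alex together
without_alex = brian - alex           # complement set: Brian without Alex
either_one   = brian | alex           # disjunction set: Brian or Alex
print(group_photos, without_alex, either_one)
```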
Based on the aforesaid embodiment, the related image searching method can be utilized to search for photos with Brian at a specific location (e.g., a specific mountain), photos with Brian and a specific item, photos with Brian and a non-specific female, or individual photos of a non-specific female. The aforesaid searching results can be achieved by selecting different context-of-interest areas in step S400 and setting different logical sets in step S460.
In addition, the related image searching method disclosed in the aforesaid embodiments may further include steps for establishing existing content characteristics in advance. The image files within the database are analyzed to obtain a plurality of existing content characteristics of the image files, a plurality of existing content characteristic labels is established according to the existing content characteristics, and the existing content characteristic labels corresponding to each of the image files are recorded according to the existing content characteristics within each of the image files.
In this case, the existing image files stored within the database have corresponding existing content characteristic labels. Afterward, step S604 is executed for selecting a context-of-interest area from a displayed image according to a touch input event (referring to step S100 or S400 in the aforesaid embodiments). Step S620 is executed for analyzing a content characteristic in the context-of-interest area (referring to step S120 or S420 in the aforesaid embodiments).
After the content characteristic in the context-of-interest area is analyzed in step S620, step S630 is executed for determining whether the content characteristic matches the existing content characteristics corresponding to the existing content characteristic labels.
If the content characteristic matches one of the existing content characteristics in step S630, step S640 is executed for searching the database according to the corresponding one of the existing content characteristic labels, so as to identify at least one image file from the database with the same existing content characteristic label.
Therefore, by comparing the content characteristic of the context-of-interest area with the existing content characteristic labels, the method avoids the complex computation involved in comparing the content characteristic against all image contents within the database. The related image searching method can identify the image files with a corresponding existing content characteristic label as the searching result, so as to reduce the computational load of image comparison.
On the other hand, if the content characteristic of the context-of-interest area does not match any one of the existing content characteristic labels, the steps of determining an implication attribute of the content characteristic and searching the database accordingly can be executed in this case (referring to steps S140, S160, S440 and S460 in the aforesaid embodiments).
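A sketch of this label shortcut, assuming a pre-built inverted index from existing content characteristic labels to file paths (all names are hypothetical):

```python
label_index = {  # existing content characteristic label -> labeled files
    "person:Brian":   {"a.jpg", "b.jpg"},
    "scene:mountain": {"c.jpg"},
}

def search_with_labels(characteristic, full_search):
    """Step S630: check the pre-recorded labels first.
    Step S640: on a match, answer from the index without re-comparing
    image contents; otherwise fall back to the full method."""
    if characteristic in label_index:
        return label_index[characteristic]
    return full_search(characteristic)

print(search_with_labels("person:Brian", lambda c: set()))  # -> {"a.jpg", "b.jpg"}
```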
In addition, the disclosure also provides a user interface controlling method, in which the displayed image is shown, the aforesaid related image searching method is performed, and the at least one identified image file is displayed.
Based on the aforesaid embodiments, users can assign/select contents of their interest by a touch gesture while viewing an image on the electronic device. In response, the electronic device immediately searches a database built in the electronic device and displays related images within the database, so as to improve the efficiency of the electronic device.
It will be apparent to those skilled in the art that various modifications and variations can be made to the structure of the present disclosure without departing from the scope or spirit of the disclosure. In view of the foregoing, it is intended that the present disclosure cover modifications and variations of this disclosure provided they fall within the scope of the following claims.