Historically, online searching has been conducted by having a user enter search terms in the form of text. The results of the search were highly dependent on the search terms entered by the user. If a user had little familiarity with a subject, the search terms the user supplied were often not the terms that would produce a useful result.
Moreover, as computing devices have become more advanced, consumers have begun to rely more heavily on mobile devices. These mobile devices often have small screens and small user input interfaces, such as keypads. Thus, it can be difficult for a consumer to search via the mobile device because the small size of the characters on the display screen makes entered text difficult to read and/or the keypad is difficult or time consuming to use.
Implementations described and claimed herein address the foregoing problems by providing image-based text extraction and searching. In accordance with one implementation, an image can be selected by a user, and the associated image data and proximate textual data can be extracted in response to the image selection. For example, image data and textual data can be extracted from a web page by receiving a gesture input from a user who has selected an image on the web page (e.g., by circling the image using a finger or stylus on a touch screen interface). The system then identifies the associated image data and the textual data located proximate to the selected image.
In accordance with another implementation, the extracted image data and textual data can be utilized to perform a computerized search. For example, one or more search options can be presented to a user based on the extracted image data and the extracted proximate textual data. The system can determine one or more database search terms based on the textual data and generate at least a first search query proposal related to the image data and the textual data.
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
Other implementations are also described and recited herein.
Users of computing devices can use textual entry to conduct a search. For example, a search query can be formed by a sequence of textual words entered into a browser's text search field. The browser can then execute the search on a computer network and return the results of the text search to the user. Such a system works adequately when the consumer knows what he or she is looking for, but it can be less helpful when the user does not know a lot about the subject or item being searched. For example, the user may be searching for an article of clothing that he or she saw in a magazine advertisement but that is not readily identifiable by name. Moreover, the consumer may be searching for an item that the consumer cannot adequately describe.
Also, the data content that is presented to consumers is increasingly image-based data. Moreover, such data content is often presented to consumers via their mobile devices, such as mobile phones, tablets, and other devices with surface-based user interfaces. The user interfaces on these devices, particularly mobile phones, can be very difficult for the consumer to use when entering text. Entering text can be difficult because of the size of the keypads, and mistakes in spelling or punctuation can be difficult to catch because of the small size of the displays on these mobile devices. Thus, text searching can be inconvenient and sometimes difficult.
For example, a user can use a gesture referred to as a lasso to encircle an image displayed on a device. The computing device associated with the display treats the lasso as a gesture input that is selecting the displayed image, which can be accomplished, for example, using a surface-based user interface.
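For illustration only, the following sketch shows one way a lasso stroke might be mapped to a selected image: the stroke is treated as a polygon, and the on-screen image whose bounding-box corners mostly fall inside it is taken as the selection. The function names, the containment heuristic, and the 0.6 threshold are assumptions rather than features of the described implementation.

```python
# Hypothetical sketch: deciding which on-screen image a "lasso" gesture selects.

def point_in_polygon(x, y, polygon):
    """Ray-casting test: is the point (x, y) inside the closed polygon?"""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            # x-coordinate where the polygon edge crosses the horizontal ray at y
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def select_lassoed_image(lasso_points, image_rects):
    """Return the id of the image whose corners and center mostly fall inside the lasso stroke."""
    best_id, best_score = None, 0.0
    for image_id, (left, top, width, height) in image_rects.items():
        probes = [(left, top), (left + width, top),
                  (left, top + height), (left + width, top + height),
                  (left + width / 2, top + height / 2)]
        score = sum(point_in_polygon(px, py, lasso_points) for px, py in probes) / len(probes)
        if score > best_score:
            best_id, best_score = image_id, score
    return best_id if best_score >= 0.6 else None

# Example: a rough lasso drawn around the second image.
lasso = [(100, 100), (300, 90), (320, 300), (110, 310)]
images = {"img-1": (400, 50, 120, 120), "img-2": (150, 140, 100, 100)}
print(select_lassoed_image(lasso, images))  # -> "img-2"
```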
A database 104 is also shown in the accompanying figures.
In one alternative implementation, the lasso input can be used to surround both an image and textual data. Additional textual data can also be extracted from outside the boundary of the lasso. The search to locate additional attributes can weight information related to the lassoed text more heavily than information related to text outside the lasso.
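One way such weighting could be realized is sketched below: terms tokenized from inside the lasso receive a larger score than terms from the surrounding page, and the combined scores rank candidate search terms. The weights, stopword list, and tokenizer are illustrative assumptions, not part of the described implementation.

```python
# A minimal sketch: text inside the lasso counts more than text outside it.
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "for", "and", "of", "with", "in", "on"}

def score_terms(lassoed_text, surrounding_text, inside_weight=3.0, outside_weight=1.0):
    """Return candidate search terms ranked so lassoed text dominates the score."""
    def tokens(text):
        return [t for t in re.findall(r"[a-z0-9']+", text.lower()) if t not in STOPWORDS]

    scores = Counter()
    for term in tokens(lassoed_text):
        scores[term] += inside_weight
    for term in tokens(surrounding_text):
        scores[term] += outside_weight
    return scores.most_common()

print(score_terms("red leather sandal",
                  "Summer sale on sandal styles and other footwear for women"))
# 'sandal' ranks first because it appears both inside and outside the lasso.
```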
Once the selected image and the surrounding contextual data have been determined, the system 200 can generate one or more possible search queries. The search queries can be generated based on the extracted data and the selected image, or the extracted data and the image can first be used to generate additional search terms for the text search queries.
An extraction operation 220 performs entity extraction based on the contextual data generated by the contextual operation 212. The entity extraction operation 220 can utilize the textual data that was proximate to the selected image and a lexicon database 224 to determine additional possible search terms. For example, if the word “sandal” was published proximate to an image of a sandal, the entity extraction operation 220 may utilize the text “sandal” and the database 224 to generate alternative keywords, such as “summer footwear.” Thus, rather than proposing a search for sandals, the system 200 could propose a search for summer footwear.
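As a rough sketch of this step, the lexicon can be modeled as a mapping from literal terms to broader or alternative keywords; the entries below are illustrative only and do not reflect the actual contents of the lexicon database 224.

```python
# Hypothetical lexicon mapping literal terms to broader or alternative keywords.
LEXICON = {
    "sandal": ["summer footwear", "open-toe shoes"],
    "sneaker": ["athletic shoes", "trainers"],
    "parka": ["winter jacket", "outerwear"],
}

def expand_terms(proximate_terms, lexicon=LEXICON):
    """Return the original terms plus any alternative keywords the lexicon suggests."""
    expanded = []
    for term in proximate_terms:
        expanded.append(term)
        expanded.extend(lexicon.get(term.lower(), []))
    return expanded

print(expand_terms(["sandal"]))
# -> ['sandal', 'summer footwear', 'open-toe shoes']
```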
Similarly, the selected image data can be sent to an image database to attempt to locate and further identify the selected image. Such a search can be performed in an image database 232. Once an image is detected in the image database 232, similar images can be located in the database. For example, if the user is searching for red shoes, the database can return not only the closest match to the user-selected image but also images corresponding to similar red shoes by other manufacturers. Such results might be used to form proposed search queries for searching for different models of red shoes.
In accordance with one implementation, a scalable image indexing and searching algorithm is based on a visual vocabulary tree (VT). The VT is constructed by performing hierarchical K-means clustering on a set of training feature descriptors representative of the database. A total of 50,000 visual words can be extracted from 10 million sampled dense scale-invariant feature transform (SIFT) descriptors, which are then used to build a vocabulary tree with 6 levels of branches and 10 nodes/sub-branches for each branch. The storage for the vocabulary tree in cache can be about 1.7 MB, with 168 bytes for each visual word. The VT index scheme provides a fast and scalable mechanism suitable for large-scale and expansible databases. Besides the VT, one may also incorporate the image context around a user-specified region of interest into the indexing scheme. One might utilize a large database with tens of millions of images. The dataset could be derived from two parts, for example: a first part from Flickr, which includes at least 700,000 images from 200 popular landmarks in ten countries, each image associated with its metadata (title, description, tags, and summarized user comments); and a second part from a collection of local businesses from Yelp, which includes 350,000 user-uploaded images (e.g., food, menus, etc.) associated with 16,819 restaurants in twelve cities.
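The following sketch, offered only as an illustration, builds a small vocabulary tree by recursive (hierarchical) K-means clustering and quantizes a descriptor by walking the tree. It uses random vectors in place of real SIFT descriptors and tiny branch/level counts so it runs quickly, whereas the implementation described above uses a branching factor of 10 and 6 levels.

```python
# Illustrative vocabulary-tree construction via hierarchical k-means (not the patented code).
import numpy as np
from sklearn.cluster import KMeans

def build_vocabulary_tree(descriptors, branch_factor, levels):
    """Recursively cluster descriptors; the leaves of the tree act as 'visual words'."""
    if levels == 0 or len(descriptors) < branch_factor:
        return {"center": descriptors.mean(axis=0), "children": []}
    kmeans = KMeans(n_clusters=branch_factor, n_init=10, random_state=0).fit(descriptors)
    children = []
    for label in range(branch_factor):
        subset = descriptors[kmeans.labels_ == label]
        child = build_vocabulary_tree(subset, branch_factor, levels - 1)
        child["center"] = kmeans.cluster_centers_[label]
        children.append(child)
    return {"center": descriptors.mean(axis=0), "children": children}

def quantize(descriptor, node, path=()):
    """Walk the tree greedily to find the visual-word path for one descriptor."""
    while node["children"]:
        dists = [np.linalg.norm(descriptor - c["center"]) for c in node["children"]]
        best = int(np.argmin(dists))
        path += (best,)
        node = node["children"][best]
    return path

rng = np.random.default_rng(0)
train = rng.random((600, 128)).astype(np.float32)   # placeholder for sampled SIFT descriptors
tree = build_vocabulary_tree(train, branch_factor=3, levels=2)
print(quantize(rng.random(128).astype(np.float32), tree))  # e.g. (2, 0)
```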
In addition to performing a search for an image and generating an output of possible images, the characteristics of those images can be utilized to propose a search query. For example, if all the images located in the search are shoes for women, the ultimate search query might be focused on items for women rather than on items for both men and women. As such, the system 200 can not only extract data located proximate to an image but can also utilize search results on the extracted data and search results based on the selected image to identify further data for use in a proposed search query.
Thus, in accordance with one implementation, different analyses can be performed to facilitate search query generation. For example, “context validation” allows extraction of the valid product specific attributes, and a large-scale image search allows similar images to be found in order to understand properties of a product from a visual perspective. Also, attribute mining allows attributes, such as the gender of a product, brand name, category name, etc. to be discovered from the prior two analyses.
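A minimal sketch of the attribute-mining idea follows: pool the metadata of visually similar results and keep only the attribute values that a clear majority of the results agree on. The metadata field names and the 0.6 support threshold are assumptions made for illustration.

```python
# Illustrative attribute mining over the metadata of visually similar results.
from collections import Counter

def mine_attributes(result_metadata, min_support=0.6):
    """Return attribute values (gender, brand, category, ...) shared by most similar images."""
    mined = {}
    for field in {key for meta in result_metadata for key in meta}:
        values = [meta[field] for meta in result_metadata if field in meta]
        value, count = Counter(values).most_common(1)[0]
        if count / len(result_metadata) >= min_support:
            mined[field] = value
    return mined

similar_images = [
    {"gender": "women", "category": "shoes", "brand": "Acme"},
    {"gender": "women", "category": "shoes", "brand": "Bolt"},
    {"gender": "women", "category": "shoes"},
    {"gender": "men", "category": "shoes", "brand": "Acme"},
]
print(mine_attributes(similar_images))
# 'gender' and 'category' are mined; 'brand' is dropped for lack of majority support.
```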
After additional keywords and possible images are generated in this example, a suggestion operation 234 formulates and suggests one or more possible search queries that the user might want to make. For example, the system 200 might take a user-selected image of a tennis shoe and surrounding text data that indicated terms relating to tennis and use that data to generate proposed search queries for different brands of shoes for tennis. Thus, the system 200 might propose a search query to the consumer of “Search for shoes for tennis made by Nike?” or “Search for shoes for tennis made by Adidas?” or just “Search for shoes for tennis?”
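The proposal step could be sketched as simple template filling over a mined category, contextual terms, and candidate brands, as below; the templates themselves are illustrative assumptions rather than the actual formulation logic of the suggestion operation 234.

```python
# Illustrative assembly of proposed search queries from mined attributes and context terms.
def propose_queries(category, context_terms, brands=None, max_proposals=3):
    """Build human-readable query proposals from a category, context words, and optional brands."""
    base = f"{category} for {' '.join(context_terms)}" if context_terms else category
    proposals = [f"Search for {base}?"]
    for brand in (brands or []):
        proposals.append(f"Search for {base} made by {brand}?")
    return proposals[:max_proposals]

print(propose_queries("shoes", ["tennis"], brands=["Nike", "Adidas"]))
# -> ['Search for shoes for tennis?',
#     'Search for shoes for tennis made by Nike?',
#     'Search for shoes for tennis made by Adidas?']
```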
Once the proposed search queries have been generated, a reformulation operation 240 presents the suggestions to the user and allows the user to reformulate the searches, if appropriate. Thus, the user may reformulate one of the search queries listed above to read: “Search for shoes for racquetball made by Nike.” Alternatively, the user could simply select one or more of the formulated search queries if it was satisfactory for the user's intended purpose.
The proposed search queries can be formulated with image data as well. Thus, for example, one or more images might be used to shop for a particular article of clothing. The image can be displayed to the user along with the proposed search query.
The selected search query can be executed against the appropriate database(s). For example, an image search can be conducted in the image database, and a textual search can be conducted in a text database. A search operation 236 performs a contextual image search after the user directs the selected or modified search to take place. In order to save time, all searches might be conducted while the user is thinking about which proposed search query to select. Then, the corresponding results can be displayed for the selected search query.
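The idea of searching while the user is still deciding can be sketched with a thread pool that issues every proposed query speculatively and returns only the chosen result set; run_search here is a stand-in for the real image and text database calls, not part of the described system.

```python
# Illustrative speculative execution of proposed queries while the user decides.
from concurrent.futures import ThreadPoolExecutor
import time

def run_search(query):
    time.sleep(0.1)                      # simulate database latency
    return [f"result for '{query}' #{i}" for i in range(3)]

def speculative_search(proposed_queries, wait_for_user_choice):
    with ThreadPoolExecutor(max_workers=len(proposed_queries)) as pool:
        futures = {q: pool.submit(run_search, q) for q in proposed_queries}
        chosen = wait_for_user_choice(proposed_queries)   # searches keep running meanwhile
        return futures[chosen].result()

queries = ["shoes for tennis", "shoes for tennis made by Nike"]
print(speculative_search(queries, lambda options: options[1]))
```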
Once the user has selected a search query and the search results 244 for that search query have been generated, the search results can be sorted further. The search results 244 may be rearranged in other fashions as well (e.g., re-grouping, filtering, etc.).
For example, if the user is searching for an article of clothing, the search results can provide a recommendation 248 for various sites where the item of clothing can be purchased. In such an example, the task recommendation 248 is for the user to purchase the item from the site that offers the article of clothing for the lowest price.
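That recommendation could be as simple as the following sketch, which selects the lowest-priced offer; the offer structure and site names are placeholders introduced only for illustration.

```python
# Illustrative purchase recommendation: pick the site with the lowest price.
def recommend_cheapest(offers):
    """Pick the offer with the lowest price from a list of (site, price) pairs."""
    return min(offers, key=lambda offer: offer[1])

offers = [("shop-a.example", 79.99), ("shop-b.example", 64.50), ("shop-c.example", 72.00)]
print(recommend_cheapest(offers))   # -> ('shop-b.example', 64.5)
```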
Thus, as can be seen from the foregoing, the system can take a user's image selection and the surrounding textual context, convert them into one or more proposed search queries, execute the selected query, and present organized results, all without requiring the user to supply text-based search terms.
In one alternative implementation, a user can select an image, and the image is searched in an image database. The top result of the search is expected to be the selected image itself. Regardless of whether it is, the metadata for the search result is examined to extract keywords. Those keywords can then be projected onto a pre-computed dictionary. For example, the Okapi BM25 ranking function may be used. The text-based retrieval result may then be re-ranked.
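For reference, the Okapi BM25 function mentioned above can be sketched as follows, scoring candidate documents against the keywords extracted from the top image match; k1 and b use common default values, and the toy documents are illustrative only.

```python
# Okapi BM25 scoring sketch for re-ranking text results against extracted keywords.
import math
from collections import Counter

def bm25_scores(query_terms, documents, k1=1.5, b=0.75):
    """Score each document (a list of terms) against the query terms with Okapi BM25."""
    N = len(documents)
    avgdl = sum(len(doc) for doc in documents) / N
    doc_freq = Counter()
    for doc in documents:
        doc_freq.update(set(doc))

    scores = []
    for doc in documents:
        tf = Counter(doc)
        score = 0.0
        for term in query_terms:
            n_q = doc_freq[term]
            idf = math.log((N - n_q + 0.5) / (n_q + 0.5) + 1.0)
            freq = tf[term]
            score += idf * freq * (k1 + 1) / (freq + k1 * (1 - b + b * len(doc) / avgdl))
        scores.append(score)
    return scores

docs = [["red", "leather", "sandal", "women"],
        ["running", "shoe", "men"],
        ["sandal", "summer", "footwear"]]
print(bm25_scores(["sandal", "footwear"], docs))  # the third document scores highest
```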
A search operation 406 initiates a text-based search as a result of the gesture input without the need for the user to supply any user-generated search terms. A formulation operation 408 formulates a computerized search using the image selected by the user's gesture and at least a portion of textual data determined to be associated with the selected image.
A generating operation 608 uses the image data and textual data to generate in a computing device at least a first search query proposal that is related to the image data and to the textual data. In many instances, multiple different search queries can be generated to provide different search query options to the user. A presenting operation 610 presents the one or more proposed search query options to a user (e.g., via a user interface on a computing device).
A receiving operation 612 receives a signal from the user (e.g., via a user interface of the computing device), which can be utilized as an input to indicate that the user has selected the first search query proposal. If multiple search queries are proposed to the user, the signal may indicate which of the multiple queries the user selected.
Alternatively, the user can modify a proposed search query. The modified search query can be returned and indicated to be the search query that the user wants to search.
A search operation 614 conducts a computer-implemented search corresponding to the selected search query. Once the search results from the selected search query are received, as shown by a receiving operation 616, the search results can be reorganized, as shown by a reorganizing operation 618. For example, the search results can be reorganized based on the original image data and original textual data. Moreover, the search results may be reorganized based on the enhanced data generated from the original image data and the original textual data. The search results may even be reorganized based upon a trend noted in the search results and the original search information. For example, if the original search information indicates a search for a particular type of shoe but does not indicate the likely gender associated with the shoe and if the search results returned from the search indicate that most of the search results are for women's shoes, the search results can be reorganized to place the results that are for men's shoes further down the result list, as representing results that are less likely being of interest to the user.
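A hedged sketch of this reorganization follows: if the returned results show a strong trend in an attribute the original query left unspecified (gender, in the example above), results that do not match the trend are moved toward the end of the list. The result structure and the 0.6 threshold are assumptions made for illustration.

```python
# Illustrative re-ranking of results based on a trend observed in the result set.
from collections import Counter

def reorder_by_trend(results, attribute="gender", threshold=0.6):
    """Stable-sort results so values matching the dominant attribute trend come first."""
    values = [r[attribute] for r in results if attribute in r]
    if not values:
        return results
    dominant, count = Counter(values).most_common(1)[0]
    if count / len(values) < threshold:
        return results                       # no clear trend, leave the order alone
    return sorted(results, key=lambda r: r.get(attribute) != dominant)

results = [
    {"title": "Men's court shoe", "gender": "men"},
    {"title": "Women's clay-court shoe", "gender": "women"},
    {"title": "Women's all-court shoe", "gender": "women"},
    {"title": "Women's trail shoe", "gender": "women"},
]
print([r["title"] for r in reorder_by_trend(results)])
# the women's results move ahead of the single men's result
```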
A presenting operation 620 presents the search results to the user (e.g., via a user interface of a computing device). For example, image data for each result in the set of organized search results can be presented via a graphical display to the user. This presentation facilitates selection of one of the search results or conveyed images by the user on the mobile device. In accordance with one implementation, the selection by the user might be for the user to purchase the displayed result or to perform further comparison-shopping for the displayed result.
A search formulation module 720 can take the selected image data and the extracted textual data to formulate at least one search query as described above. The one or more search queries can be presented via the computing device 704 for selection by a user. The selected search query can then be executed in database 728.
The system bus 23 may be any of several types of bus structures including a memory bus or memory controller, a peripheral bus, a switched fabric, point-to-point connections, and a local bus using any of a variety of bus architectures. The system memory may also be referred to as simply the memory, and includes read only memory (ROM) 24 and random access memory (RAM) 25. A basic input/output system (BIOS) 26, containing the basic routines that help to transfer information between elements within the computer 20, such as during start-up, is stored in ROM 24. The computer 20 further includes a hard disk drive 27 for reading from and writing to a hard disk, not shown, a magnetic disk drive 28 for reading from or writing to a removable magnetic disk 29, and an optical disk drive 30 for reading from or writing to a removable optical disk 31 such as a CD ROM, DVD, or other optical media.
The hard disk drive 27, magnetic disk drive 28, and optical disk drive 30 are connected to the system bus 23 by a hard disk drive interface 32, a magnetic disk drive interface 33, and an optical disk drive interface 34, respectively. The drives and their associated tangible computer-readable media provide nonvolatile storage of computer-readable instructions, data structures, program modules and other data for the computer 20. It should be appreciated by those skilled in the art that any type of tangible computer-readable media which can store data that is accessible by a computer, such as magnetic cassettes, flash memory cards, digital video disks, random access memories (RAMs), read only memories (ROMs), and the like, may be used in the example operating environment.
A number of program modules may be stored on the hard disk, magnetic disk 29, optical disk 31, ROM 24, or RAM 25, including an operating system 35, one or more application programs 36, other program modules 37, and program data 38. A user may enter commands and information into the personal computer 20 through input devices such as a keyboard 40 and pointing device 42. Other input devices (not shown) may include a microphone (e.g., for voice input), a camera (e.g., for a natural user interface (NUI)), a joystick, a game pad, a satellite dish, a scanner, or the like. These and other input devices are often connected to the processing unit 21 through a serial port interface 46 that is coupled to the system bus, but may be connected by other interfaces, such as a parallel port, game port, or a universal serial bus (USB). A monitor 47 or other type of display device is also connected to the system bus 23 via an interface, such as a video adapter 48. In addition to the monitor, computers typically include other peripheral output devices (not shown), such as speakers and printers.
The computer 20 may operate in a networked environment using logical connections to one or more remote computers, such as remote computer 49. These logical connections are achieved by a communication device coupled to or a part of the computer 20; the implementations are not limited to a particular type of communications device. The remote computer 49 may be another computer, a server, a router, a network PC, a client, a peer device, or other common network node, and typically includes many or all of the elements described above relative to the computer 20, although only a memory storage device 50 has been illustrated. The logical connections include a local area network (LAN) 51 and a wide area network (WAN) 52.
When used in a LAN-networking environment, the computer 20 is connected to the local network 51 through a network interface or adapter 53, which is one type of communications device. When used in a WAN-networking environment, the computer 20 typically includes a modem 54, a network adapter, or any other type of communications device for establishing communications over the wide area network 52. The modem 54, which may be internal or external, is connected to the system bus 23 via the serial port interface 46. In a networked environment, program engines depicted relative to the personal computer 20, or portions thereof, may be stored in the remote memory storage device. It is appreciated that the network connections shown are examples, and other means of, and communications devices for, establishing a communications link between the computers may be used.
A variety of applications lend themselves to image-based searching. For example, image-based searching is expected to be particularly useful for shopping. It should also be useful for identifying landmarks and for providing information about cuisine. These are but a few examples.
In an example implementation, software or firmware instructions for providing a user interface, extracting textual data, formulating searches, and reorganizing search results, as well as other hardware/software blocks, are stored in memory 22 and/or storage devices 29 or 31 and processed by the processing unit 21. The search results, image data, textual data, lexicon, image database, and other data may be stored in memory 22 and/or storage devices 29 or 31 as persistent datastores.
Some embodiments may comprise an article of manufacture. An article of manufacture may comprise a tangible storage medium to store logic. Examples of a storage medium may include one or more types of computer-readable storage media capable of storing electronic data, including volatile memory or non-volatile memory, removable or non-removable memory, erasable or non-erasable memory, writeable or re-writeable memory, and so forth. Examples of the logic may include various software elements, such as software components, programs, applications, computer programs, application programs, system programs, machine programs, operating system software, middleware, firmware, software modules, routines, subroutines, functions, methods, procedures, software interfaces, application program interfaces (API), instruction sets, computing code, computer code, code segments, computer code segments, words, values, symbols, or any combination thereof. In one embodiment, for example, an article of manufacture may store executable computer program instructions that, when executed by a computer, cause the computer to perform methods and/or operations in accordance with the described embodiments. The executable computer program instructions may include any suitable type of code, such as source code, compiled code, interpreted code, executable code, static code, dynamic code, and the like. The executable computer program instructions may be implemented according to a predefined computer language, manner or syntax, for instructing a computer to perform a certain function. The instructions may be implemented using any suitable high-level, low-level, object-oriented, visual, compiled and/or interpreted programming language.
The implementations described herein are implemented as logical steps in one or more computer systems. The logical operations may be implemented (1) as a sequence of processor-implemented steps executing in one or more computer systems and/or (2) as interconnected machine or circuit modules within one or more computer systems. The implementation is a matter of choice, dependent on the performance requirements of the computer system being utilized. Accordingly, the logical operations making up the implementations described herein are referred to variously as operations, steps, objects, or modules. Furthermore, it should be understood that logical operations may be performed in any order, unless explicitly claimed otherwise or a specific order is inherently necessitated by the claim language.
The above specification, examples, and data provide a complete description of the structure and use of exemplary implementations. Since many implementations can be made without departing from the spirit and scope of the claimed invention, the claims hereinafter appended define the invention. Furthermore, structural features of the different examples may be combined in yet another implementation without departing from the recited claims.