The embodiments discussed herein are related to document searching using salience.
The information age has brought an ocean of information that is difficult to organize, filter, and rank. There are many different systems that organize large data sets. For instance, search engines organize webpages using a number of different algorithms and may return content based on the popularity of the content and the search term provided by the user. Some systems organize content based on semantic processing that focuses on the interrelationship of words within a document. And yet other systems organize content based on the popularity of words within the content. There are many other systems that use a number of techniques to organize, rank, and search data.
The subject matter claimed herein is not limited to embodiments that solve any disadvantages or that operate only in environments such as those described above. Rather, this background is only provided to illustrate one example technology area where some embodiments described herein may be practiced.
According to an aspect of an embodiment, a system includes a display, an eye tracking subsystem, a physiological sensor subsystem, and a controller. The display may display a document having content embedded within the document. The eye tracking subsystem may record viewing angle data corresponding to a number of viewing angles of an eye (or both eyes) over time as a user views at least a portion of the content within the document on the display. The physiological sensor subsystem may record a physiological response of the user over time as the user views the content within the document on the display. The controller may be coupled with the display, the eye tracking subsystem, and the physiological sensor subsystem. The controller may be configured to provide the document to the display for displaying to the user, associate at least a portion of the viewing angle data with a location of the content within the document, and associate the physiological response of the user with the content in the document using the viewing angle data.
The object and advantages of the embodiments will be realized and achieved at least by the elements, features, and combinations particularly pointed out in the claims.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory and are not restrictive of the invention, as claimed.
Example embodiments will be described and explained with additional specificity and detail through the use of the accompanying drawings in which:
There are many systems that rank, filter, or cluster documents based on the content of the documents. Search engines are a good example. These systems, however, do not associate the salience and/or focus of users viewing the content in the documents in the filtering, ranking, or clustering of documents. The various embodiments described herein, among other things, may include systems and methods that associate salience and/or predicted salience with documents and use the salience and/or predicted salience data for ranking, filtering, and/or clustering of documents.
The salience of an item is the state or quality by which it stands out relative to its neighbors. Generally speaking, salience detection may be an attentional mechanism that facilitates learning and survival by enabling organisms to focus their limited perceptual and cognitive resources on the most pertinent subset of the available sensory data. Salience may also indicate the state or quality of content relative to other content based on a user's subjective interests in the content. Salience in document organization may enable organization based on how pertinent the document is to the user and/or how interested the user is in content found within the document.
The focus of a user on content may be related to salience. Focus may include the amount of time the user spends viewing content relative to other content as well as the physiological or emotional response of the user to the content.
Salience and/or focus may be measured indirectly. For instance, the salience may be measured at least in part by using devices that relate to a user's physiological and/or emotional response to the content, for example, those devices described below. The salience and/or focus may relate to how much or how little the user cares about or is interested in what they are looking at. Such data, in conjunction with eye tracking data and/or keyword data, may suggest the relative importance or value of the content to the user. The focus may similarly be measured based in part on the user's physiological and/or emotional response and in part on the amount of time the user views the content using, for example, eye tracking data. A salience score may be a numerical value that is a function of physiological data recorded from one or more physiological sensors and/or eye tracking data recorded from an eye tracking subsystem.
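By way of illustration only, and not by way of limitation, the following Python sketch shows one possible way such a salience score might be computed from one physiological reading and one eye tracking reading. The choice of sensor (galvanic skin response), the normalization ranges, and the weights are assumptions invented for this example and are not required by any embodiment.

```python
# Illustrative sketch only: one possible salience score combining a
# normalized physiological reading with normalized fixation time.
# The ranges and weights below are assumptions, not requirements.

def normalize(value, lo, hi):
    """Clamp and scale a raw reading into the range 0.0-1.0."""
    if hi == lo:
        return 0.0
    return max(0.0, min(1.0, (value - lo) / (hi - lo)))

def salience_score(gsr_microsiemens, fixation_ms,
                   gsr_range=(1.0, 20.0), fixation_range=(0.0, 2000.0),
                   w_physio=0.6, w_gaze=0.4):
    """Return a 0-100 salience score from one physiological reading
    (here, galvanic skin response) and one eye tracking reading
    (here, fixation duration on a content item)."""
    physio = normalize(gsr_microsiemens, *gsr_range)
    gaze = normalize(fixation_ms, *fixation_range)
    return round(100 * (w_physio * physio + w_gaze * gaze))

# Example: a strong skin response during a long fixation
print(salience_score(gsr_microsiemens=12.0, fixation_ms=1500))  # 65
```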
Embodiments of the present invention will be explained with reference to the accompanying drawings.
In at least one embodiment described herein, the controller 105 may be electrically coupled with and control the operation of each component of the system 100. For instance, the controller 105 may execute a program that displays a document stored in the memory 120 on the display 110 and/or through speakers or another output device in response to input from a user through the user interface 115. The controller 105 may also receive input from the physiological sensor 130 and the eye tracking subsystem 140.
As described in more detail below, the controller 105 may execute a process that associates inputs from one or more of an EEG system, the eye tracking subsystem 140, and/or other physiological sensors 130 with content within a document displayed in the display 110 and may save such data in the memory 120. Such data may be converted and/or saved as salience and/or focus data (or scores) in the memory 120. The controller 105 may alternately or additionally execute or control the execution of one or more other processes described herein.
The physiological sensor 130 may include, for example, a device that performs or measures functional magnetic resonance imaging (fMRI), positron emission tomography, magnetoencephalography, nuclear magnetic resonance spectroscopy, electrocorticography, single-photon emission computed tomography, near-infrared spectroscopy (NIRS), galvanic skin response (GSR), electrocardiography (EKG), pupillary dilation, electrooculography (EOG), facial emotion encoding, reaction times, and/or event-related optical signals. The physiological sensor 130 may also include a heart rate monitor, a pupil dilation tracker, a thermal monitor, or a respiration monitor.
The eye tracking subsystem 140 may include an illumination system 210, an imaging system 215, a buffer 230, and a controller 225. The controller 225 may control the operation and/or function of the buffer 230, the imaging system 215, and/or the illumination system 210. The controller 225 may be the same controller as the controller 105 or a separate controller. The illumination system 210 may include one or more light sources of any type that direct light, for example, infrared light, toward the eye 205. Light reflected from the eye 205 may be recorded by the imaging system 215 and stored in the buffer 230. The imaging system 215 may include one or more imagers of any type. The data recorded by the imaging system 215 and/or stored in the buffer 230 may be analyzed by the controller 225 to extract, for example, eye rotation data from changes in the reflection of light off the eye 205. In at least one embodiment described herein, corneal reflection (often called the first Purkinje image) and the center of the pupil may be tracked over time. In other embodiments, reflections from the front of the cornea (the first Purkinje image) and the back of the lens (often called the fourth Purkinje image) may be tracked over time. In other embodiments, features from inside the eye may be tracked such as, for example, the retinal blood vessels. In yet other embodiments, eye tracking techniques may use the first Purkinje image, the second Purkinje image, the third Purkinje image, and/or the fourth Purkinje image singularly or in any combination to track the eye. In at least one embodiment described herein, the controller 225 may be an external controller.
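By way of illustration only, the following sketch shows the arithmetic behind the pupil-center/corneal-reflection ("pupil-CR") technique described above: because the corneal reflection (the first Purkinje image) moves little as the eye rotates, the vector from it to the pupil center changes chiefly with eye rotation and may be converted into approximate viewing angles. The image coordinates and per-axis gains below are assumptions invented for this example.

```python
# Illustrative sketch only: the pupil-center/corneal-reflection technique.
# The vector between the pupil center and the first Purkinje image changes
# chiefly with eye rotation, making it robust to small head movements.
# Coordinates and gains below are assumptions for this example.

def pupil_cr_vector(pupil_center, corneal_reflection):
    """Return the pupil-CR vector (pixels) from two image-plane points."""
    px, py = pupil_center
    cx, cy = corneal_reflection
    return (px - cx, py - cy)

def vector_to_angles(vector, gain_deg_per_px=(0.05, 0.05)):
    """Convert a pupil-CR vector into approximate horizontal/vertical
    viewing angles in degrees, using per-axis gains from calibration."""
    return (vector[0] * gain_deg_per_px[0], vector[1] * gain_deg_per_px[1])

v = pupil_cr_vector(pupil_center=(412.0, 300.5),
                    corneal_reflection=(398.0, 310.0))
print(vector_to_angles(v))  # approximately (0.7, -0.475) degrees
```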
In at least one embodiment described herein, the eye tracking subsystem 140 may be coupled with the display 110. The eye tracking subsystem 140 may also analyze the data recorded by the imaging system 215 to determine the eye position relative to a document displayed on the display 110. In this way, the eye tracking subsystem 140 may determine the amount of time the eye viewed specific content items within a document on the display 110. In at least one embodiment described herein, the eye tracking subsystem 140 may be calibrated with the display 110 and/or the eye 205.
The eye tracking subsystem 140 may be calibrated in order to use viewing angle data to determine the portion (or content items) of a document viewed by a user over time. The eye tracking subsystem 140 may return viewing angle data that may be converted into locations on the display 110 that the user is viewing. This conversion may be performed using calibration data that associates viewing angles with positions on the display.
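By way of illustration only, the following sketch fits such calibration data as an affine mapping from viewing angles to display positions, using a few points at which the user fixated known on-screen targets. The specific calibration targets and display resolution are assumptions invented for this example.

```python
# Illustrative sketch only: fitting an affine calibration that maps viewing
# angles (degrees) to display positions (pixels) from calibration points
# where the user fixated known on-screen targets.
import numpy as np

# (horizontal_angle, vertical_angle) observed while the user fixated ...
angles = np.array([[-10.0, -6.0], [10.0, -6.0], [-10.0, 6.0],
                   [10.0, 6.0], [0.0, 0.0]])
# ... these known display positions (x, y) in pixels.
targets = np.array([[100.0, 100.0], [1820.0, 100.0], [100.0, 980.0],
                    [1820.0, 980.0], [960.0, 540.0]])

# Solve targets ~= [angles, 1] @ A for the 3x2 affine matrix A.
design = np.hstack([angles, np.ones((len(angles), 1))])
A, *_ = np.linalg.lstsq(design, targets, rcond=None)

def angle_to_position(h_deg, v_deg):
    """Convert a viewing angle into a display position via calibration."""
    return np.array([h_deg, v_deg, 1.0]) @ A

print(angle_to_position(5.0, 3.0))  # roughly (1390, 760) on a 1920x1080 display
```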
The EEG system 300 may include a plurality of electrodes 305 that are configured to be positioned on the scalp of a user. The electrodes 305 may be coupled with a headset, hat, or cap.
The electrodes 305 may be electrically coupled with an electrode interface 310. The electrode interface 310 may include any number of components that condition the various electrode signals. For example, the electrode interface 310 may include one or more amplifiers, analog-to-digital converters, filters, etc. coupled with each electrode. The electrode interface 310 may be coupled with buffer 315, which stores the electrode data. The controller 320 may access the data and/or may control the operation and/or function of the electrode interface 310, the electrodes 305, and/or the buffer 315. The controller 320 may be a standalone controller or the controller 105.
The EEG data recorded by the EEG system 300 may include EEG rhythmic activity, which may be used to determine a user's salience when consuming content within a document. For example, theta band EEG signals (4-7 Hz) and/or alpha band EEG signals (8-12 Hz) may indicate a drowsy, idle, or relaxed user, and may result in a low salience score for the user while consuming the content. On the other hand, beta band EEG signals (13-30 Hz) may indicate an alert, busy, active, thinking, and/or concentrating user, and may result in a high salience score for the user while consuming the content.
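By way of illustration only, the following sketch estimates band power with a Welch periodogram and maps the ratio of beta power to theta-plus-alpha power into a 0-100 score, consistent with the bands described above. The channel, window length, and score mapping are assumptions invented for this example, not a clinically validated measure.

```python
# Illustrative sketch only: EEG band power via a Welch periodogram, with
# the beta/(theta+alpha) ratio squashed into a 0-100 salience score.
import numpy as np
from scipy.signal import welch

def band_power(signal, fs, lo_hz, hi_hz):
    """Approximate power of `signal` within [lo_hz, hi_hz]."""
    freqs, psd = welch(signal, fs=fs, nperseg=min(len(signal), 2 * fs))
    mask = (freqs >= lo_hz) & (freqs <= hi_hz)
    return psd[mask].sum()

def eeg_salience_score(signal, fs=256):
    """Score alert/concentrating activity (beta, 13-30 Hz) against
    drowsy/relaxed activity (theta, 4-7 Hz, and alpha, 8-12 Hz)."""
    theta = band_power(signal, fs, 4.0, 7.0)
    alpha = band_power(signal, fs, 8.0, 12.0)
    beta = band_power(signal, fs, 13.0, 30.0)
    ratio = beta / (theta + alpha + 1e-12)
    return round(100 * ratio / (1.0 + ratio))  # squash ratio into 0-100

# Example: a synthetic one-second window dominated by 20 Hz (beta) activity
t = np.arange(0, 1.0, 1.0 / 256)
window = np.sin(2 * np.pi * 20 * t) + 0.2 * np.sin(2 * np.pi * 10 * t)
print(eeg_salience_score(window, fs=256))  # high score, e.g. > 80
```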
The term “content item” refers to one of the advertisement 505, the text 510, the image 515, and the video 520; the term may also refer to other content that may be present in a document. The term “content item” may also refer to a single content item such as music, video, flash, text, a PowerPoint presentation, an animation, an HTML document, a podcast, a game, etc. Moreover, the term “content item” may also refer to a portion of a content item, for example, a paragraph in a document, a sentence in a paragraph, a phrase in a paragraph, a portion of an image, a portion of a video (e.g., a scene, a cut, or a shot), etc. Moreover, a content item may include sound, media or interactive material that may be provided to a user through a user interface that may include speakers, a keyboard, touch screen, gyroscopes, a mouse, heads-up display, instrumented “glasses”, and/or a hand held controller, etc. The document 500 shall be used to describe various embodiments described herein.
At block 615 physiological data is received. Physiological data may be received, for example, from the EEG system 300 as physiological data recorded over time. Various additional or different physiological data may be received. The physiological data may be converted or normalized into salience data (and/or focus data). At block 620 the salience data and the eye tracking data may be associated with the content in document 500 based on the time the data was collected. Table 1, shown below, is an example of eye tracking data and salience data associated with the content in document 500.
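By way of illustration only, the following sketch shows one possible way block 620 might perform this association: each time-stamped gaze sample is tested against the bounding box of each content item, and viewing time and a time-weighted average salience are accumulated per item. The layout rectangles and the sample stream are assumptions chosen so that the totals match Table 1.

```python
# Illustrative sketch only: associating time-stamped gaze positions and
# salience samples with the content items they fall on.

# Content item bounding boxes on the display: (left, top, right, bottom).
LAYOUT = {
    "advertisement 505": (0, 0, 300, 250),
    "text 510": (320, 0, 1200, 600),
    "image 515": (320, 620, 1200, 1000),
    "video 520": (1220, 0, 1900, 400),
}

def item_at(x, y):
    """Return the content item whose bounding box contains (x, y)."""
    for name, (l, t, r, b) in LAYOUT.items():
        if l <= x <= r and t <= y <= b:
            return name
    return None

def associate(samples):
    """samples: iterable of (duration_s, x, y, salience) tuples recorded
    over time. Returns {item: (total_seconds, average_salience)}."""
    totals = {}
    for duration, x, y, salience in samples:
        name = item_at(x, y)
        if name is None:
            continue
        seconds, weighted = totals.get(name, (0.0, 0.0))
        totals[name] = (seconds + duration, weighted + salience * duration)
    return {name: (secs, weighted / secs)
            for name, (secs, weighted) in totals.items()}

samples = [(10, 150, 100, 40), (10, 150, 120, 52),  # advertisement 505: 20 s, avg 46
           (210, 700, 300, 85),                     # text 510: 210 s, avg 85
           (385, 700, 800, 63),                     # image 515: 385 s, avg 63
           (35, 1500, 200, 45)]                     # video 520: 35 s, avg 45
print(associate(samples))
```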
The first column of Table 1 is an example of an amount of time a user spent viewing the content items listed in the second column before moving to the next content item. Note that the user moves between content items and views some content items multiple times. Summing the amount of time the user spends viewing specific content items, the user views the advertisement 505 for a total of 20 seconds, the text 510 for a total of 210 seconds, the image 515 for a total of 385 seconds, and the video 520 for a total of 35 seconds. Thus, the user spends most of the time viewing the image 515. This data is useful in describing how long the user looks at the content, but does not reflect how interested, salient, or focused the user is when viewing the content in document 500.
The third column lists the average salience score of the content. In this example, the salience score is normalized so that a salience score of one hundred represents high salience and/or focus and a salience score of zero represents little salience and/or focus. The salience score listed in Table 1 is the average salience score over the time the user was viewing the listed content item. The average salience score for both times the user viewed the advertisement 505 is 46, the average salience score for the text 510 is 85, the average salience score for the image 515 is 63, and the average salience score for the video 520 is 45. Thus, in this example, the text 510 has the highest salience score even though the user viewed the text 510 for the second longest period of time, and the image 515 has the second highest salience score even though it was viewed for the longest period of time.
As shown in Table 1, process 600 may associate specific content items of document 500 with salience data based on the eye tracking data. Furthermore, process 600 may also associate specific content with the amount of time the content was viewed by the user. The salience data and the time data associated with the content may be used in a number of ways. For example, metadata may be stored with document 500 or as a separate metadata file that tags the specific content with either or both the salience data and/or the time the content was viewed. This metadata may also associate keywords or other semantic information with the content in document 500.
Process 600 may be used, for example, to tag the content in document 500 with eye tracking data and/or salience data. For example, the advertisement 505 may be tagged with a salience score of 46, the text 510 may be tagged with a salience score of 85, the image 515 may be tagged with a salience score of 63, and the video 520 may be tagged with a salience score of 45. In at least one embodiment described herein, the content may also be tagged with the amount of time the user views each content item or the percentage of time the user views each content item relative to the amount of time the user views document 500. In at least one embodiment described herein, the content may be tagged with a score that is a combination of the salience and the time the user viewed the content. The content may be tagged in a separate database or file, or embedded within the document 500.
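By way of illustration only, the following sketch stores such tags in a separate metadata file accompanying the document, using the salience scores and viewing times from Table 1. The file layout and field names are assumptions invented for this example.

```python
# Illustrative sketch only: writing salience and viewing-time tags to a
# separate JSON metadata file that accompanies the document.
import json

tags = {
    "document": "document 500",
    "content_items": {
        "advertisement 505": {"salience": 46, "seconds_viewed": 20},
        "text 510": {"salience": 85, "seconds_viewed": 210},
        "image 515": {"salience": 63, "seconds_viewed": 385},
        "video 520": {"salience": 45, "seconds_viewed": 35},
    },
}

with open("document_500.salience.json", "w") as handle:
    json.dump(tags, handle, indent=2)
```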
In some embodiments, the content items within document 500 may be highlighted based on their salience score. For example, content items above a certain threshold may be highlighted. In this example, if the threshold is 50, then both the text 510 and the image 515 may be highlighted. As another example, the intensity, brightness, color, etc. of the content items may vary based on the salience score.
Highlighting of a content item may include any type of change in the content item, other content items, or the document that distinguishes the content item from other content items or indicates the significance of the content item. For example, highlighting may include circling the content item, bordering all or portions of the content item, flashing all or portions of the content item, changing the color of all or portions of the content item, changing the brightness of all or portions of the content item, changing the contrast of all or portions of the content item, changing all or portions of the content item to look like they have been marked with a highlighter, fading out all or portions of content items that are not being highlighted, changing the volume of portions of the content item, starting a time-based content item (e.g., a video or audio) at a different place in time, outlining all or portions of the content item, etc.
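By way of illustration only, the following sketch selects a highlight treatment per content item from its salience score, using the threshold of 50 from the example above and scaling highlight intensity with the score. The style fields are assumptions invented for this example.

```python
# Illustrative sketch only: threshold-based highlighting, with highlight
# intensity scaled by how far the salience score exceeds the threshold.

def highlight_style(salience, threshold=50):
    """Map a 0-100 salience score to a display treatment."""
    if salience >= threshold:
        return {"highlight": True, "intensity": (salience - threshold) / 50}
    return {"highlight": False, "intensity": 0.0}

for name, score in [("advertisement 505", 46), ("text 510", 85),
                    ("image 515", 63), ("video 520", 45)]:
    print(name, highlight_style(score))
# Only the text 510 and the image 515 are highlighted at threshold 50.
```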
In some embodiments, a salience score may be determined for one or more content items within document 500. The content items may or may not be highlighted within the document based on the associated salience score. As another example, the salience score may be associated with content items in metadata. When document 500 is viewed at some later time the content items may or may not be highlighted based on the salience scores stored within metadata. In this way the content items that were found to have the highest salience by the user during one viewing may be identified for the user at some later viewing to aid the user in identifying content items that may be of interest.
Furthermore, the process 600 may be repeated with any number of documents. For instance, each of these documents may be provided to the user and associated with eye tracking data and/or physiological data as the user views each document, which may then be stored in a database.
At block 710, keywords may be associated with each content item within the document using any type of keyword generation and/or indexing technique. Keywords may be assigned to content items using any number of techniques such as, for example, semantic indexing, statistical techniques, natural language indexing, keyword optimization techniques, latent semantic indexing, content type indexing, subject matter indexing, document parsing, natural language processing, etc. The content may also be labeled based on the type of content, such as text, video, image, advertisement, game, poll, flash, etc. For example, the metadata may identify the advertisement 505 as an advertisement, the text 510 as text, the image 515 as an image, and/or the video 520 as a video. Some content, such as advertisements, flash, etc., may include different types of content; such content may be labeled with one or more content type identifiers. Keywords may also be drawn from the text 510 itself, representing the various concepts described in the text.
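By way of illustration only, the following sketch assigns keywords to content items using term frequency-inverse document frequency (TF-IDF) weighting, one family of techniques among those listed above. The corpus snippets are invented stand-ins for the text of several content items.

```python
# Illustrative sketch only: keyword assignment with TF-IDF weighting.
from sklearn.feature_extraction.text import TfidfVectorizer

corpus = [
    "kayak paddling on whitewater rapids of the Colorado River",
    "family sightseeing and rafting trips in Idaho",
    "paddling safety and the personal flotation device",
]

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(corpus)
terms = vectorizer.get_feature_names_out()

# Top keywords for the first content item, ranked by TF-IDF weight.
weights = matrix.toarray()[0]
top = sorted(zip(terms, weights), key=lambda tw: -tw[1])[:4]
print([term for term, weight in top if weight > 0])
```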
The keywords from the various different content items 505, 510, 515, and 520 within the document 500 may be consolidated to form keywords for the document 500. At block 715, the keywords may be ranked or weighted based on the salience data associated with the content.
For example, the advertisement 505 may be associated with the keywords: rafting, family sightseeing, and Idaho. In the document these keywords may be ranked based on the advertisement 505's salience score of 46. The text 510 may be associated with the following keywords: kayak, whitewater, paddling, and Colorado River. In the document these keywords may be ranked based on the text 510's salience score of 85. The image 515 may be associated with the keywords: image, whitewater, and Payette River. In the document these keywords may be ranked based on the image 515's salience score of 63. The video 520 may be associated with the keywords: video, paddling safety, American Whitewater, and personal flotation device. In the document these keywords may be ranked based on the video 520's salience score of 45. In this example, the document 500 includes keywords in the following ranked order: kayak, whitewater, paddling, Colorado River, image, whitewater, Payette River, rafting, family sightseeing, Idaho, video, paddling safety, American Whitewater, and personal flotation device.
As another example, the keywords associated with each content item may also be ranked based on the relevance of the keywords to the content. Table 2 illustrates how the content keyword scores may be combined with the salience scores of each content item in the document 500 to produce a combined score. The first column lists the keywords associated with each content item listed in the second column. The content keyword score is listed in the third column; it is a normalized value (100 being the highest score and zero the lowest) that depicts the relevance of the keyword listed in the first column to the content listed in the second column. Any number of techniques may be used to determine the content keyword score, for example, term frequency-inverse document frequency (TF-IDF) techniques. The fourth column lists the overall average salience score of the content, and the last column lists the combined score. In this example, the combined score is an average of the content keyword score and the salience score; any other mathematical function that combines the two may be used. The combined score may also be a function of the amount of time the user spent viewing the content. The combined score may weight either the content keyword score or the salience score more heavily, or may weight them equally, and it may incorporate other data known about the content item, the keywords, and/or the document. Process 700 may rank the keywords of all the documents in the database using the same technique or using different techniques.
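By way of illustration only, the following sketch computes the combined score as the (optionally weighted) average described above, reproducing the "kayak" example from Table 2. The optional weights are assumptions invented for this example.

```python
# Illustrative sketch only: combining a content keyword score with the
# content item's salience score via a (weighted) average.

def combined_score(keyword_score, salience_score,
                   w_keyword=0.5, w_salience=0.5):
    """Average (or weighted average) of the two 0-100 scores."""
    return round(w_keyword * keyword_score + w_salience * salience_score)

# "kayak" appears in the text 510 (salience 85) with keyword score 65:
print(combined_score(65, 85))  # 75, as in the example above
```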
At block 720, a search term may be received. The search term may then be used at block 725 to return a document or a set of documents based on the salience. For instance, if the search term provided at block 720 is "kayak," then the document 500 would likely be a relevant document based on the keywords in the document and the salience score because of the combined score of 75. Without the salience score, the search term "kayak" would be less relevant because the keyword score is only 65. Similarly, if the search term is "safety," then the document 500 will be less relevant based on the combined score because the salience score pulled the content keyword score down from 85 to a combined score of 65. In this example, incorporating the salience score makes the document more or less relevant to a given search term. The scores in this example are meaningful in comparison with the combined scores of other documents in the database. In this way, process 700 uses the salience of the content to return documents that not only contain a keyword associated with a search term, but that the user is also interested in, based on the salience of the document. A search may thereby provide results that are user specific.
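By way of illustration only, the following sketch ranks documents for a search term by their combined keyword/salience scores. The index structure and the comparison document ("document 501") are assumptions invented for this example.

```python
# Illustrative sketch only: ranking documents for a search term by the
# combined keyword/salience score. The index below is a stand-in for a
# database of per-document combined scores for each keyword.

INDEX = {
    "kayak": {"document 500": 75, "document 501": 60},
    "safety": {"document 500": 65, "document 501": 70},
}

def search(term):
    """Return (document, combined score) pairs, best match first."""
    scores = INDEX.get(term.lower(), {})
    return sorted(scores.items(), key=lambda item: -item[1])

print(search("kayak"))   # document 500 ranks first
print(search("safety"))  # document 500 ranks lower
```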
The embodiments described herein may include the use of a special purpose or general purpose computer including various computer hardware or software modules, as discussed in greater detail below.
Embodiments described herein may be implemented using computer-readable media for carrying or having computer-executable instructions or data structures stored thereon. Such computer-readable media may be any available media that may be accessed by a general purpose or special purpose computer. By way of example, and not limitation, such computer-readable media may include non-transitory computer-readable storage media including Random Access Memory (RAM), Read-Only Memory (ROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Compact Disc Read-Only Memory (CD-ROM) or other optical disk storage, magnetic disk storage or other magnetic storage devices, flash memory devices (e.g., solid state memory devices), or any other storage medium which may be used to carry or store desired program code in the form of computer-executable instructions or data structures and which may be accessed by a general purpose or special purpose computer. Combinations of the above may also be included within the scope of computer-readable media.
Computer-executable instructions may include, for example, instructions and data which cause a general purpose computer, special purpose computer, or special purpose processing device (e.g., one or more processors) to perform a certain function or group of functions. Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.
As used herein, the terms "module" or "component" may refer to specific hardware implementations configured to perform the operations of the module or component and/or software objects or software routines that may be stored on and/or executed by general purpose hardware (e.g., computer-readable media, processing devices, etc.) of the computing system. In at least one embodiment described herein, the different components, modules, engines, and services described herein may be implemented as objects or processes that execute on the computing system (e.g., as separate threads). While some of the systems and methods described herein are generally described as being implemented in software (stored on and/or executed by general purpose hardware), specific hardware implementations or a combination of software and specific hardware implementations are also possible and contemplated. In this description, a "computing entity" may be any computing system as previously defined herein, or any module or combination of modules running on a computing system.
All examples and conditional language recited herein are intended for pedagogical objects to aid the reader in understanding the invention and the concepts contributed by the inventor to furthering the art, and are to be construed as being without limitation to such specifically recited examples and conditions. Although embodiments of the present invention have been described in detail, it should be understood that various changes, substitutions, and alterations could be made hereto without departing from the spirit and scope of the invention.