1. Field
This application relates generally to identifying implicit social relationships from digital communication and biological responses (bioresponse) to digital communication, and more specifically to a system and method for generating an implicit social graph from biological responses to digital communication.
2. Related Art
Biological response (bioresponse) data is generated by monitoring a person's biological reactions to visual, aural, or other sensory stimuli. Bioresponse may entail rapid simultaneous eye movements (saccades), eyes focusing on a particular word or graphic for a certain duration, hand pressure on a device, galvanic skin response, or any other measurable biological reaction. Bioresponse data may further include or be associated with detailed information on what prompted a response. Eye-tracking systems, for example, may indicate a coordinate location of a particular visual stimulus—like a particular word in a phrase or figure in an image—and associate the particular stimulus with a certain response. This association may enable a system to identify specific words, images, portions of audio, and other elements that elicited a measurable biological response from the person experiencing the multimedia stimuli. For instance, a person reading a book may quickly read over some words while pausing at others. Quick eye movements, or saccades, may then be associated with the words the person was reading. When the eyes pause and focus on a certain word for a longer duration than other words, this response may then be associated with the particular word the person was reading. This association of a particular word and bioresponse may then be analyzed.
Bioresponse data may be used for a variety of purposes ranging from general research to improving viewer interaction with text, websites, or other multimedia information. In some instances, eye-tracking data may be used to monitor a reader's responses while reading text. The bioresponse to the text may then be used to improve the reader's interaction with the text by, for example, providing definitions of words that the user appears to have trouble understanding.
Bioresponse data may be collected from a variety of devices and sensors that are becoming more and more prevalent today. Laptops frequently include microphones and high-resolution cameras capable of monitoring a person's facial expressions, eye movements, or verbal responses while viewing or experiencing media. Cellular telephones now include high-resolution cameras, proximity sensors, accelerometers, and touch-sensitive screens (galvanic skin response) in addition to microphones and buttons, and these “smartphones” have the capacity to expand the hardware to include additional sensors. Moreover, high-resolution cameras are decreasing in cost, making them prolific in a variety of applications ranging from user devices like laptops and cell phones to interactive advertisements in shopping malls that respond to mall patrons' proximity and facial expressions. The capacity to collect biological responses from people interacting with digital devices is thus increasing dramatically.
Interaction with digital devices has become more prevalent concurrently with a dramatic increase in online social networks that allow people to connect, communicate, and collaborate through the internet. Social networking sites have enabled users to interact through a variety of digital devices including traditional computers, tablet computers, and cellular telephones. Information about users from their online social profiles has allowed for highly targeted advertising and rapid growth of the utility of social networks to provide meaningful data to users based on user attributes. For instance, users who report an affinity for certain activities like mountain biking or downhill skiing may receive highly relevant advertisements and other suggestive data based on the fact that these users enjoy specific activities. In addition, users may be encouraged to connect and communicate with other users based on shared interests, adding further value to the social networking site, and causing users to spend additional time on the site, thereby increasing advertising revenue.
A social graph may be generated by social networking sites to define a user's social network and personal attributes. The social graph may then enable the site to provide highly relevant content for a user based on that user's interactions and personal attributes as demonstrated in the user's social graph. The value and information content of existing social graphs is limited, however, by the information users manually enter into their profiles and the networks to which users manually subscribe. There is therefore a need and an opportunity to improve the quality of social graphs and enhance user interaction with social networks by improving the information attributed to given users beyond what users manually add to their online profiles.
Thus, a method and system are desired for using bioresponse data collected from prolific digital devices to generate an implicit social graph—including enhanced information automatically generated about users—to improve beyond existing explicitly generated social graphs that are limited to information manually entered by users.
In one exemplary embodiment, a computer-implemented method of generating an implicit social graph includes receiving eye-tracking data associated with a word. The eye-tracking data is received from a user device. The word is a portion of a digital document. The eye-tracking data comprises at least one fixation period of substantially seven-hundred and fifty milliseconds and at least one regression from another portion of the digital document to the word. A comprehension difficulty of the word is determined based on the eye-tracking data. One or more attributes are assigned to a user of the user device, by one or more processors, based on the comprehension difficulty, wherein the one or more attributes are determined based on a meaning of the word. An implicit social graph is generated based on the one or more attributes.
Optionally, the method can further include providing a suggestion to the user, based on the implicit social graph. At least one of a suggestion of another user, a product, or an offer can be provided. A targeted advertisement can be provided to the user, based on the implicit social graph.
In another exemplary embodiment, a computer-implemented method of generating an implicit social graph includes receiving eye-tracking data associated with a word, wherein the eye-tracking data is received from a user device. The word is a portion of a digital document. The eye-tracking data includes an initial fixation period of substantially twice a mean period of a specified number of preceding words. A comprehension difficulty of the word is determined based on the eye-tracking data. One or more attributes are assigned to a user of the user device based on the comprehension difficulty. The one or more attributes are determined based on a meaning of the word. An implicit social graph is generated based on the one or more attributes.
Optionally, the eye-tracking data can further include a regressive fixation from another portion of the digital document to the word. The regressive fixation can occur at least five-hundred milliseconds after a termination of the initial fixation duration. The regressive fixation can occur at least one second after a termination of the initial fixation duration. The specified number of preceding words can include three words of at least four characters each.
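By way of illustration only, the following Python sketch shows how the trigger conditions recited in the two embodiments above might be checked against a stream of per-word fixations. The `Fixation` record, the helper name, and the treatment of regressions are assumptions for this sketch; the filter requiring preceding words of at least four characters is omitted for brevity.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class Fixation:
    word: str              # the word fixated upon
    duration_ms: float     # fixation duration in milliseconds
    is_regression: bool    # True if the eye returned here from later text

def has_comprehension_difficulty(fixations, target, preceding_window=3):
    """Flag a comprehension difficulty for `target` under either trigger:
    (a) a fixation of substantially 750 ms plus at least one regression
        from another portion of the document back to the word, or
    (b) an initial fixation substantially twice the mean duration of the
        preceding words."""
    target_fixes = [f for f in fixations if f.word == target]
    if not target_fixes:
        return False
    # Trigger (a): long fixation plus a regressive fixation to the word.
    if any(f.duration_ms >= 750 for f in target_fixes) and \
       any(f.is_regression for f in target_fixes):
        return True
    # Trigger (b): initial fixation >= 2x the mean of the preceding words.
    idx = next(i for i, f in enumerate(fixations) if f.word == target)
    preceding = [f.duration_ms for f in fixations[max(0, idx - preceding_window):idx]]
    return bool(preceding) and fixations[idx].duration_ms >= 2 * mean(preceding)
```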
The present application can be best understood by reference to the following description taken in conjunction with the accompanying figures, in which like parts may be referred to by like numerals.
The figures described above are a representative set and are not exhaustive with respect to embodying the invention.
The following description is presented to enable a person of ordinary skill in the art to make and use the various embodiments. Descriptions of specific devices, techniques, and applications are provided only as examples. Various modifications to the examples described herein will be readily apparent to those of ordinary skill in the art, and the general principles defined herein may be applied to other examples and applications without departing from the spirit and scope of the various embodiments. Thus, the various embodiments are not intended to be limited to the examples described herein and shown, but are to be accorded the scope consistent with the claims.
Disclosed are a system, method, and article of manufacture for generating an implicit social graph with bioresponse data. Although the present embodiments have been described with reference to specific example embodiments, it will be evident that various modifications and changes may be made to these embodiments without departing from the broader spirit and scope of the various claims.
In step 120 of process 100, the significance of the bioresponse data is determined. In one embodiment, the received bioresponse data may be associated with portions of the visual, aural, or other sensory stimuli. For example, in the above eye-tracking embodiment, the eye-tracking data may associate the amount of time spent viewing each word, the pattern of eye movement, or the like with each word in the text message. This association may be used to determine a cultural significance of the word, a comprehension or lack of comprehension of the word, or the like.
In step 130 of process 100, an attribute is assigned to the user. The attribute may be determined based on the bioresponse data. For example, in the above eye-tracking embodiment, comprehension of a particular word may be used to assign an attribute to the user. For example, if the user understands the word “Python” in the text message “I wrote the code in Python,” then the user may be assigned the attribute of “computer programming knowledge.”
In step 140 of process 100, an implicit social graph is generated using the assigned attributes. Users are linked according to the attributes assigned in step 130. For example, all users with the attribute “computer programming knowledge” may be linked in the implicit social graph.
In step 150 of process 100, a suggestion may be provided to the user based on the implicit social graph. For example, the implicit social graph may be used to suggest contacts to a user, to recommend products or offers the user may find useful, or other similar suggestions. In one embodiment, a social networking site may communicate a friend suggestion to users who share a certain number of links or attributes. In another embodiment, a product, such as a book on computer programming, may be suggested to the users with a particular attribute, such as the “computer programming knowledge” attribute. One of skill in the art will recognize that suggestions are not limited to these embodiments. Information may be retrieved from the implicit social graph and used to provide a variety of suggestions to a user.
A hypergraph 200 of users 210-217 may be used to generate an implicit social graph 280, as shown in FIG. 2.
In some embodiments, the implicit social graph 280 may be a weighted graph, where edge weights are determined by such values as the bioresponse data that indicates a certain user attribute (e.g., eye-tracking data that indicates a familiarity (or a lack of familiarity) with a certain concept or entity represented by a visual component). One exemplary quantitative metric for determining an edge weight between two user nodes with bioresponse data may include measuring the number of common user attributes shared between the two users as determined by an analysis of the bioresponse data. For example, users with two common attributes, such as users 210 and 211, may have a stronger weight for edge 290 than users with a single common attribute, such as users 210 and 212. In this embodiment, the weight of edge 292 may be lower than the weight of edge 290. In another example, a qualitative metric may be used to determine an edge weight. For example, a certain common attribute (e.g., eye-tracking data indicating a user recognizes an obscure actor) may have a greater weight than a different common attribute (e.g., eye-tracking data that indicates a sports team preference of the user). In this embodiment, the weight of edge 294, indicating users 212 and 214 both recognize an obscure actor, may be weighted more heavily than the weight of edge 296, indicating users 211 and 217 are both San Francisco Giants fans.
It should be noted that, in addition to bioresponse data, other values may also be used to construct the implicit social graph. For example, edge weights of the implicit social graph may be weighted by the frequency, recency, or direction of interactions between users and other contacts, groups in a social network, or the like. In some example embodiments, context data of a mobile device of a user may also be used to weight the edges of the implicit social graph. Also, in some embodiments, the content of a digital document (e.g., common term usage, common argot, common context data of a context-enriched message) may be analyzed to generate an implicit social graph. Further, the implicit social graph may change and evolve over time as more data is collected from the user. For example, at one point in time, a user may not be a San Francisco Giants fan. However, some time later, the user may move to San Francisco and begin to follow the team. At this point, the user's preferences may change and the user may become a San Francisco Giants fan. In this example, the implicit social graph may change to include this additional attribute.
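As a rough sketch of the edge weighting just described, and not a definitive implementation, the following Python fragment links users who share attributes and combines the quantitative metric (the number of common attributes) with optional qualitative per-attribute weights. All names and weight values are illustrative.

```python
from itertools import combinations

def build_implicit_graph(user_attributes, attribute_weights=None):
    """Link users who share attributes; an edge weight sums the qualitative
    weights of the shared attributes (default 1.0 each), so more shared
    attributes and rarer attributes both strengthen the edge."""
    attribute_weights = attribute_weights or {}
    edges = {}
    for u, v in combinations(user_attributes, 2):
        shared = user_attributes[u] & user_attributes[v]
        if shared:
            edges[(u, v)] = sum(attribute_weights.get(a, 1.0) for a in shared)
    return edges

users = {
    "user210": {"recognizes_obscure_actor", "sf_giants_fan"},
    "user211": {"recognizes_obscure_actor", "sf_giants_fan"},
    "user212": {"recognizes_obscure_actor"},
}
# Recognizing an obscure actor is weighted above a sports-team preference.
graph = build_implicit_graph(users, {"recognizes_obscure_actor": 3.0,
                                     "sf_giants_fan": 1.0})
```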
Returning to FIG. 1, in step 110 of process 100, bioresponse data is received. When a user is viewing data on a user device, bioresponse data may be collected. The viewed data may take the form of a text message, webpage element, instant message, email, social networking status update, micro-blog post, blog post, video, image, or any other digital document. The bioresponse data may be eye-tracking data, heart rate data, hand pressure data, galvanic skin response data, or the like. A webpage element may be any element of a web page document that is perceivable by a user with a web browser on the display of a computing device.
In some embodiments, eye-tracking module 340 may utilize an eye-tracking method to acquire the eye movement pattern. In one embodiment, an example eye-tracking method may include an analytical gaze estimation algorithm that employs the estimation of the visual direction directly from selected eye features such as irises, eye corners, eyelids, or the like to compute a gaze 360 direction. If the positions of any two of the nodal point, the fovea, the eyeball center, or the pupil center can be estimated, the visual direction may be determined.
In addition, a light may be included on the front side of user device 310 to assist detection of any points hidden in the eyeball. Moreover, the eyeball center may be estimated indirectly from other viewable facial features. In one embodiment, the method may model an eyeball as a sphere and hold the distances from the eyeball center to the two eye corners to be a known constant. For example, the distance may be fixed to 13 mm. The eye corners may be located (for example, by using a binocular stereo system) and used to determine the eyeball center. In one exemplary embodiment, the iris boundaries may be modeled as circles in the image using a Hough transformation.
The center of the circular iris boundary may then be used as the pupil center. In other embodiments, a high-resolution camera and other image processing tools may be used to detect the pupil. It should be noted that, in some embodiments, eye-tracking module 340 may utilize one or more eye-tracking methods in combination. Other exemplary eye-tracking methods include: a 2D eye-tracking algorithm using a single camera and Purkinje image, a real-time eye-tracking algorithm with head movement compensation, a real-time implementation of a method to estimate gaze 360 direction using stereo vision, a free-head-motion remote eye-gaze tracking (REGT) technique, or the like. Additionally, any combination of any of these methods may be used.
When implemented using the OpenCV library, if no previous eye position from preceding frames is known, the input image may first be scanned for possible circles using an appropriately adapted Hough algorithm. To speed up operation, an image of reduced size may be used in this step. In one embodiment, limiting the Hough parameters (for example, the radius) to a reasonable range provides additional speedup. Next, the detected candidates may be checked against further constraints like a suitable distance between the pupils and a realistic roll angle between them. If no matching pair of pupils is found, the image may be discarded. For successfully matched pairs of pupils, sub-images around the estimated pupil centers may be extracted for further processing. Especially due to interlace effects, but also caused by other influences, the pupil center coordinates found by the initial Hough algorithm may not be sufficiently accurate for further processing. For exact calculation of gaze 360 direction, however, this coordinate should be as accurate as possible.
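One hedged sketch of this OpenCV scan is shown below. The thresholds, radius bounds, inter-pupil distance range, and roll-angle limit are assumptions chosen for illustration rather than values taken from this disclosure.

```python
import math
import cv2

def find_pupil_candidates(frame_bgr, scale=0.5):
    """Scan a reduced-size grayscale image for circle candidates with a
    Hough transform, then keep a pair that satisfies plausible distance
    and roll-angle constraints; return None to discard the frame."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    small = cv2.resize(gray, None, fx=scale, fy=scale)  # speedup: reduced size
    circles = cv2.HoughCircles(small, cv2.HOUGH_GRADIENT, dp=1, minDist=20,
                               param1=100, param2=20,      # Canny / accumulator
                               minRadius=3, maxRadius=15)  # limited radius range
    if circles is None:
        return None
    pts = [(x / scale, y / scale) for x, y, _ in circles[0]]
    for i in range(len(pts)):
        for j in range(i + 1, len(pts)):
            (x1, y1), (x2, y2) = pts[i], pts[j]
            dist = math.hypot(x2 - x1, y2 - y1)
            roll = math.degrees(math.atan2(y2 - y1, x2 - x1))
            if 40 <= dist <= 400 and abs(roll) <= 30:  # plausible pupil pair
                return pts[i], pts[j]
    return None  # no matching pair of pupils; discard the image
```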
One possible approach for obtaining a usable pupil center estimation is actually finding the center of the pupil in an image. However, the invention is not limited to this embodiment. In another embodiment, for example, pupil center estimation may be accomplished by finding the center of the iris, or the like. While the iris provides a larger structure and thus higher stability for the estimation, it is often partly covered by the eyelid and thus not entirely visible. Also, its outer bound does not always have a high contrast to the surrounding parts of the image. The pupil, however, can be easily spotted as the darkest region of the (sub-)image.
Using the center of the Hough-circle as a base, the surrounding dark pixels may be collected to form the pupil region. The center of gravity for all pupil pixels may be calculated and considered to be the exact eye position. This value may also form the starting point for the next cycle. If the eyelids are detected to be closed during this step, the image may be discarded. The radius of the iris may now be estimated by looking for its outer bound. This radius may later limit the search area for glints. An additional sub-image may be extracted from the eye image, centered on the pupil center and slightly larger than the iris. This image may be checked for the corneal reflection using a simple pattern matching approach. If no reflection is found, the image may be discarded. Otherwise, the optical eye center may be estimated and the gaze 360 direction may be calculated. It may then be intersected with the monitor plane to calculate the estimated viewing point. These calculations may be done for both eyes independently. The estimated viewing point may then be used for further processing. For instance, the estimated viewing point can be reported to the window management system of a user's device as mouse or screen coordinates, thus providing a way to connect the eye-tracking method discussed herein to existing software.
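A minimal sketch of the center-of-gravity refinement follows, assuming an 8-bit grayscale sub-image already extracted around the rough Hough-circle center; the darkness threshold is an illustrative assumption.

```python
import numpy as np

def refine_pupil_center(sub_img, seed_xy, dark_thresh=40):
    """Collect the dark pixels of the sub-image as the pupil region (the
    pupil is typically the darkest area) and return their center of
    gravity as the refined pupil center."""
    ys, xs = np.nonzero(sub_img <= dark_thresh)
    if xs.size == 0:
        return seed_xy  # nothing dark enough; keep the Hough estimate
    return float(xs.mean()), float(ys.mean())
```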
A user's device may also include other eye-tracking methods and systems such as those included and/or implied in the descriptions of the various eye-tracking operations described herein. In one embodiment, the eye-tracking system may include an external system (e.g., a Tobii T60 XL eye tracker, Tobii TX 300 eye tracker, or similar eye-tracking system) communicatively coupled (e.g., with a USB cable, with a short-range Wi-Fi connection, or the like) with the device. In other embodiments, eye-tracking systems may be integrated into the device. For example, the eye-tracking system may be integrated as a user-facing camera with concomitant eye-tracking utilities installed in the device.
In one embodiment, the specification of the user-facing camera may be varied according to the resolution needed to differentiate the elements of a displayed message. For example, the sampling rate of the user-facing camera may be increased to accommodate a smaller display. Additionally, in some embodiments, more than one user-facing camera (e.g., for binocular tracking) may be integrated into the device to acquire more than one eye-tracking sample. The user device may include image processing utilities necessary to integrate the images acquired by the user-facing camera and then map the eye direction and motion to the coordinates of the digital document on the display. In some embodiments, the user device may also include a utility for synchronization of gaze data with data from other sources, e.g., accelerometers, gyroscopes, or the like. In some embodiments, the eye-tracking method and system may include other devices to assist in eye-tracking operations. For example, the user device may include a user-facing infrared source that may be reflected from the eye and sensed by an optical sensor such as a user-facing camera.
Irrespective of the particular eye-tracking methods and systems employed, and even if bioresponse data other than eye-tracking data is collected for analysis, the bioresponse data may be transmitted in a format similar to the exemplary bioresponse data packet 500 illustrated in FIG. 5.
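Packet 500's actual layout is defined by the figure, so the following sketch is purely illustrative; every field name below is a hypothetical placeholder for the kind of information such a packet might carry.

```python
from dataclasses import dataclass, asdict
import json
import time

@dataclass
class BioresponsePacket:
    user_id: str        # hypothetical: identifies the responding user
    document_id: str    # hypothetical: the digital document being viewed
    component: str      # hypothetical: word/image that evoked the response
    response_type: str  # hypothetical: e.g. "fixation", "regression", "gsr"
    value: float        # hypothetical: e.g. fixation duration in ms
    timestamp: float    # hypothetical: when the response was recorded

packet = BioresponsePacket("u42", "msg-1001", "Python",
                           "fixation", 760.0, time.time())
payload = json.dumps(asdict(packet))  # serialized for transmission to a server
```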
Additionally, one of ordinary skill in the art will appreciate that the significance of eye-tracking data or any bioresponse data may extend beyond comprehension of terms and images and may signify numerous other user attributes. For instance, bioresponse data may indicate an affinity for a particular image and its corresponding subject matter, a preference for certain brands, a preferred pattern or design of visual components, and many other attributes. Accordingly, bioresponse data, including eye-tracking data, may be analyzed to determine the significance, if any, of a user's biological response to viewing various visual components.
Returning again to FIG. 1, in step 130 of process 100, one or more attributes may be assigned to the user based on the significance of the bioresponse data determined in step 120. For example, as noted above, comprehension of a particular term may indicate that the user possesses knowledge associated with the meaning of that term.
In other examples, eye-tracking data may be obtained for argot terms of certain social and age groups, jargon for certain professions, non-English language words, regional terms, or the like. A user's in-group status may then be assumed from the existence or non-existence of a comprehension difficulty for the particular term. In still other examples, eye-tracking data for images of certain persons, such as a popular sports figure, may be obtained. The eye-tracking data may then be used to determine a familiarity or lack of familiarity with the person. If a familiarity is determined for the athlete, then, for example, the user may be assigned the attribute of a fan of the particular athlete's team. However, the embodiments are not limited by these specific examples. One of ordinary skill in the art will recognize that there are other ways to determine attributes for users.
Further, in another embodiment, other types of bioresponse data besides eye-tracking may be used. For example, while viewing a digital document, galvanic skin response may be measured. In one embodiment, the galvanic skin response may measure skin conductance, which may provide information related to excitement and attention. If a user is viewing a digital document such as a video, the galvanic skin response may indicate a user's interest in the content of the video. If the user is excited or very interested in a video about, for example, computer programming, the user may then be assigned the attribute “computer programming knowledge.” If a user is not excited or pays little attention to the video, the user may not be assigned this attribute.
In some embodiments, the operations of process 100 may be performed in a different order, and some operations may be optional.
Generate an Implicit Social Graph using the Assigned Attributes
Returning again to FIG. 1, in step 140 of process 100, the implicit social graph is generated using the assigned attributes. FIG. 8 illustrates an exemplary process 800 for generating an implicit social graph.
After a set of users is collected in step 810 of process 800, the set of users may be linked according to their attributes in step 820 to generate a hypergraph, such as hypergraph 200 described above in accordance with FIG. 2.
Not all steps described in process 800 are necessary to practice an exemplary embodiment of the invention. Many of the steps are optional, including, for example, steps 830 and 840. Moreover, step 840 may be practiced without requiring step 830, and the order of the steps as depicted in FIG. 8 may be varied.
Furthermore, information provided by one or more sensors on the user's device may be used to provide suggestions or advertisements to the user. For example, in one embodiment, a barometric pressure sensor may be used to detect if it is raining or about to rain. This information may be combined with the implicit social network to provide a suggestion to the user. For example, a suggestion for a store selling umbrellas or a coupon for an umbrella may be provided to the user. The store may be selected by determining the shopping preferences of the users who share several attributes with the user. One of ordinary skill in the art will recognize that the invention is not limited to this embodiment. Many other sensors and combinations of sensors may be used to provide a suggestion to a user.
Therefore, bioresponse data may signify culturally significant attributes that may be used to generate an implicit social graph that, alone or in combination with other information sources, may be used to provide suggestions to a user.
Profile information in member database 1054 may include, for example, a unique member identifier, name, age, gender, location, hometown, or the like. One of ordinary skill in the art will recognize that profile information is not limited to these embodiments. For example, profile information may also include references to image files, listings of interests, attributes, or the like. Relationship database 1055 may store information defining first degree relationships between members. In addition, the contents of member database 1054 may be indexed and optimized for search, and may be stored in search database 1056. Member database 1054, relationship database 1055, and search database 1056 may be updated to reflect inputs of new member information and edits of existing member information that are made through computers 1070.
The application server 1051 may also manage the information exchange requests that it receives from the remote devices 1070. The graph servers 1052 may receive a query from the application server 1051, process the query and return the query results to the application server 1051. The graph servers 1052 may manage a representation of the social network for all the members in the member database. The graph servers 1052 may have a dedicated memory device, such as a random access memory (RAM), in which an adjacency list that indicates all first degree relationships in the social network is stored. The graph servers 1052 may respond to requests from application server 1051 to identify relationships and the degree of separation between members of the online social network.
The graph servers 1052 may include an implicit graphing module 1053. Implicit graphing module 1053 may obtain bioresponse data (such as eye-tracking data, hand-pressure data, galvanic skin response data, or the like) from a bioresponse module (such as, for example, attentive messaging module 1318 of FIG. 13).
A bioresponse module may be any module in a computing device that can obtain a user's bioresponse to a specific component of a digital document such as a text message, email message, web page document, instant message, microblog post, or the like. A bioresponse module may include a parser that parses the digital document into separate components and may indicate a coordinate of the component on a display of devices 1070. The bioresponse module may then map the bioresponse to the digital document component that evoked the bioresponse. For example, in one embodiment, this may be performed with eye-tracking data that determines which digital document component is the focus of a user's attention when a particular bioresponse was recorded by a biosensor(s) (e.g., an eye-tracking system) of the devices 1070. This data may be communicated to the implicit graphing module 1053, the bioresponse data server 1072, or the like.
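As an illustrative sketch of this mapping step, the following Python fragment assigns a gaze coordinate to the parsed component whose bounding box contains it. The component tuple format and function name are assumptions for this sketch.

```python
def map_gaze_to_component(gaze_x, gaze_y, components):
    """Return the document component whose display bounding box contains
    the gaze point; `components` holds (text, left, top, right, bottom)
    tuples as produced by a parser."""
    for text, left, top, right, bottom in components:
        if left <= gaze_x <= right and top <= gaze_y <= bottom:
            return text
    return None  # gaze fell outside every component (e.g. whitespace)

components = [("I", 10, 10, 18, 30), ("wrote", 24, 10, 70, 30),
              ("Python", 160, 10, 215, 30)]
evoking_component = map_gaze_to_component(170, 20, components)  # -> "Python"
```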
Implicit graphing module 1053 may use the bioresponse data and the concomitant digital document components to generate the set of user attributes obtained from a plurality of users of the various devices communicatively coupled to the system 1050. In some embodiments, the graph servers 1052 may use the implicit social graph to respond to requests from application server 1051 to identify relationships and the degree of separation between members of an online social network.
The digital documents may originate from other users, and user bioresponse data may be obtained by implicit graphing module 1053 to dynamically create the implicit social graph from the users' current attributes. In one embodiment, implicit graphing module 1053 may send specific types of digital documents with terms, images, or the like designed to test a user for a certain attribute to particular user devices to acquire particular bioresponse data from the user. Additionally, implicit social graphing module 1053 may also communicate instructions to a bioresponse module to monitor certain terms, images, classes of terms or images, or the like.
In some embodiments, communication network 1076 may support protocols used by wireless and cellular phones, personal email devices, or the like. Furthermore, in some embodiments, communication network 1060 may include an internet-protocol (IP) based network such as the Internet. A cellular network may include a radio network distributed over land areas called cells, each served by at least one fixed-location transceiver known as a cell site or base station. A cellular network may be implemented with a number of different digital cellular technologies. Cellular radiotelephone systems offering mobile packet data communications services may include GSM with GPRS systems (GSM/GPRS), CDMA/1xRTT systems, Enhanced Data Rates for Global Evolution (EDGE) systems, EV-DO systems, Evolution For Data and Voice (EV-DV) systems, High Speed Downlink Packet Access (HSDPA) systems, High Speed Uplink Packet Access (HSUPA) systems, 3GPP Long Term Evolution (LTE), or the like.
Bioresponse data server 1072 may receive bioresponse and other relevant data (such as, for example, mapping data that may indicate the digital document component associated with the bioresponse, and user information) from the various bioresponse modules of the user devices described herein.
Through client devices 1110-1111, users 1104-1105 may communicate over network 1100 with each other and with other systems and devices coupled to network 1100, such as server device 1140, remote sensors, smart devices, third-party servers, or the like. Remote sensor 1130 may be a client device that includes a sensor 1131. Remote sensor 1130 may communicate with other systems and devices coupled to network 1100 as well. In some embodiments, remote sensor 1130 may be used to acquire bioresponse data, client device context data, or the like.
Similar to client devices 1110-1111, server device 1140 may include a processor coupled to a computer-readable memory. Client processors 1121 and the processor for server device 1140 may be any of a number of well-known microprocessors. Memory 1120 and the memory for server 1140 may contain a number of programs, such as the components described in connection with the invention. Server device 1140 may additionally include a secondary storage element 1150, such as a database. For example, server device 1140 may include one or more of the databases shown in FIG. 10.
Client devices 1110-1111 may be any type of computing platform that may be connected to a network and that may interact with application programs. In some example embodiments, client devices 1110-1111, remote sensor 1130 and/or server device 1140 may be virtualized. In some embodiments, remote sensor 1130 and server device 1140 may be implemented as a network of computers and/or computer processors.
Eye-tracking data may be obtained with an eye-tracking system and communicated over a network to the eye-tracking server 1250. Device 1230 GUI data may also be communicated to eye-tracking server 1250. Eye-tracking server 1250 may process the data and map the eye-tracking coordinates to elements of the display. Eye-tracking server 1250 may communicate the mapping data to the attentive messaging server 1270. Attentive messaging server 1270 may determine the appropriate context data to obtain and the appropriate device to query for the context data. Context data may describe an environmental attribute of a user, the device that originated the digital document 1240, or the like. It should be noted that in other embodiments, the functions of the eye-tracking server 1250 may be performed by a module integrated into the device 1230 that may also include digital cameras, other hardware for eye-tracking, or the like.
In one embodiment, the source of the context data may be a remote sensor 1260 on the device that originated the text message 1240. For example, in one embodiment, the remote sensor 1260 may be a GPS receiver located on the originating device. This GPS receiver may send context data related to the position of that device. In addition, attentive messaging server 1270 may also obtain data from third-party server 1280 that provides additional information about the context data. For example, in this embodiment, the third-party server may host a webpage such as a dictionary website, a mapping website, or the like. The webpage may send context data related to the definition of a word in the digital document. One of skill in the art will recognize that the invention is not limited to these examples and that other types of context data, such as temperature, relative location, encyclopedic data, or the like, may be obtained.
Device 1300 may be battery-operated and highly portable so as to allow a user to listen to music, play games or videos, record video, take pictures, place and accept telephone calls, communicate with other people or devices, control other devices, any combination thereof, or the like. In addition, device 1300 may be sized such that it fits relatively easily into a pocket or hand of the user. By being handheld, device 1300 may be relatively small and easily handled and utilized by its user. Therefore, it may be taken practically anywhere the user travels.
In one embodiment, device 1300 may include processor 1302, storage 1304, user interface 1306, display 1308, memory 1310, input/output circuitry 1312, communications circuitry 1314, web browser 1316, and/or bus 1322. Although only one of each component is shown in FIG. 13, device 1300 may include two or more of any given component.
Processor 1302 may include, for example, circuitry for, and be configured to perform, any function. Processor 1302 may be used to run operating system applications, media playback applications, media editing applications, or the like. Processor 1302 may drive display 1308 and may receive user inputs from user interface 1306.
Storage 1304 may be, for example, one or more storage mediums, including, but not limited to, a hard-drive, flash memory, permanent memory such as ROM, semi-permanent memory such as RAM, any combination thereof, or the like. Storage 1304 may store, for example, media data (e.g., music and video files), application data (e.g., for implementing functions on device 1300), firmware, preference information data (e.g., media playback preferences), lifestyle information data (e.g., food preferences), exercise information data (e.g., information obtained by exercise monitoring equipment), transaction information data (e.g., information such as credit card information), wireless connection information data (e.g., information that can enable device 1300 to establish a wireless connection), subscription information data (e.g., information that keeps track of podcasts or television shows or other media a user subscribes to), contact information data (e.g., telephone numbers and email addresses), calendar information data, any other suitable data, any combination thereof, or the like. One of ordinary skill in the art will recognize that the invention is not limited by the examples provided. For example, lifestyle information data may also include activity preferences, daily schedule preferences, budget, or the like. Each of the categories above may likewise represent many various kinds of information.
User interface 1306 may allow a user to interact with device 1300. For example, user interface 1306 may take a variety of forms, such as a button, keypad, dial, a click wheel, a touch screen, any combination thereof, or the like.
Display 1308 may accept and/or generate signals for presenting media information (textual and/or graphic) on a display screen, such as those discussed above. For example, display 1308 may include a coder/decoder (CODEC) to convert digital media data into analog signals. Display 1308 also may include display driver circuitry and/or circuitry for driving display driver(s). In one embodiment, the display signals may be generated by processor 1302 or display 1308. The display signals may provide media information related to media data received from communications circuitry 1314 and/or any other component of device 1300. In some embodiments, display 1308, as with any other component discussed herein, may be integrated with and/or externally coupled to device 1300.
Memory 1310 may include one or more types of memory that may be used for performing device functions. For example, memory 1310 may include a cache, flash, ROM, RAM, one or more other types of memory used for temporarily storing data, or the like. In one embodiment, memory 1310 may be specifically dedicated to storing firmware. For example, memory 1310 may be provided for storing firmware for device applications (e.g., operating system, user interface functions, and processor functions).
Input/output circuitry 1312 may convert (and encode/decode, if necessary) data, analog signals, and other signals (e.g., physical contact inputs, physical movements, analog audio signals, or the like) into digital data, and vice-versa. The digital data may be provided to and received from processor 1302, storage 1304, memory 1310, or any other component of device 1300. Although input/output circuitry 1312 is illustrated as a single component of device 1300, a plurality of input/output circuitry may be included in device 1300. Input/output circuitry 1312 may be used to interface with any input or output component. For example, device 1300 may include specialized input circuitry associated with input devices such as, for example, one or more microphones, cameras, proximity sensors, accelerometers, ambient light detectors, magnetic card readers, or the like. Device 1300 may also include specialized output circuitry associated with output devices such as, for example, one or more speakers, or the like.
Communications circuitry 1314 may permit device 1300 to communicate with one or more servers or other devices using any suitable communications protocol. For example, communications circuitry 1314 may support Wi-Fi (e.g., an 802.11 protocol), Ethernet, Bluetooth™ (which is a trademark owned by Bluetooth SIG, Inc.), high frequency systems (e.g., 900 MHz, 2.4 GHz, and 5.6 GHz communication systems), infrared, TCP/IP (e.g., any of the protocols used in each of the TCP/IP layers), HTTP, BitTorrent, FTP, RTP, RTSP, SSH, any combination thereof, or the like. Additionally, the device 1300 may include a client program, such as web browser 1316, for retrieving, presenting, and traversing information resources on the World Wide Web.
Text message application(s) 1319 may provide applications for the composing, sending and receiving of text messages. Text message application(s) 1319 may include utilities for creating and receiving text messages with protocols such as SMS, EMS, MMS, or the like.
The device 1300 may further include at least one sensor 1320. In one embodiment, the sensor 1320 may be a device that measures, detects, or senses an attribute of the device's environment and then converts the attribute into a machine-readable form that may be utilized by an application. In some embodiments, a sensor 1320 may be a device that measures an attribute of a physical quantity and converts the attribute into a user-readable or computer-processable signal. In certain embodiments, a sensor 1320 may also measure an attribute of a data environment, a computer environment, or a user environment in addition to a physical environment. For example, in another embodiment, a sensor 1320 may also be a virtual device that measures an attribute of a virtual environment such as a gaming environment. Example sensors include global positioning system receivers, accelerometers, inclinometers, position sensors, barometers, WiFi sensors, RFID sensors, near-field communication (NFC) devices, gyroscopes, pressure sensors, pressure gauges, tire pressure gauges, torque sensors, ohmmeters, thermometers, infrared sensors, microphones, image sensors (e.g., digital cameras), biosensors (e.g., photometric biosensors, electrochemical biosensors), eye-tracking components 1330 (which may include digital camera(s), directable infrared lasers, and accelerometers), capacitance sensors, radio antennas, galvanic skin sensors, capacitance probes, or the like. It should be noted that sensor devices other than those listed may also be utilized to ‘sense’ context data and/or user bioresponse data.
In one embodiment, eye-tracking component 1330 may provide eye-tracking data to attentive messaging module 1318. Attentive messaging module 1318 may use the information provided by a bioresponse tracking system to analyze a user's bioresponse to data provided by text messaging application 1319, web browser 1316, or other similar types of applications (e.g., instant messaging, email, or the like) of device 1300. For example, in one embodiment, attentive messaging module 1318 may use information provided by an eye-tracking system, such as eye-tracking component 1330, to analyze a user's eye movements in response to the data provided. However, the invention is not limited to this embodiment, and other systems, such as other bioresponse sensors, may be used to analyze a user's bioresponse.
Additionally, in some embodiments, attentive messaging module 1318 may also analyze visual data provided by web browser 1316 or other instant messaging and email applications. For example, eye-tracking data may indicate that a user has a comprehension difficulty with a particular visual component (e.g., by analysis of a fixation period, gaze regression to the visual component, or the like). In other examples, eye-tracking data may indicate a user's familiarity with a visual component. For example, in one embodiment, eye-tracking data may show that the user exhibited a fixation period on a text message component that is within a specified time threshold. Attentive messaging module 1318 may then provide the bioresponse data (as well as relevant text, image data, user identification data, or the like) to a server such as graph servers 1052 and/or bioresponse data server 1072. In some embodiments, entities such as graph servers 1052 and/or bioresponse data server 1072 of FIG. 10 may perform some or all of this analysis of the bioresponse data.
In step 1702 of process 1700, an attribute profile (e.g. of a user, of a set of users, etc.) can be generated and/or maintained (e.g. by a server process). The attributes can be based on each user's comprehension difficulties and/or lack of comprehension difficulties vis-à-vis a key term and/or key phrase. Attributes for a set of users can be aggregated to determine an attribute for the set of users (e.g. as a sum, a weighted mean, or an arithmetic mean). Each user's comprehension difficulties and/or lack of comprehension difficulties vis-à-vis a key term and/or key phrase can be based on the respective user's eye-tracking data vis-à-vis the key term and/or key phrase. The attribute can be related to a meaning of the key term and/or key phrase. Attributes can be aggregated to generate another attribute. Attributes can be weighted (e.g. a particular comprehension difficulty for a particular key word can be weighted greater than a comprehension of another key word, and the score of each attribute can be used to generate another user attribute).
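A minimal sketch of the aggregation options mentioned above (a sum, a weighted mean, an arithmetic mean); the function name and score scale are illustrative assumptions.

```python
def aggregate_attribute(scores, weights=None, method="mean"):
    """Aggregate per-user scores for one attribute into a set-level score."""
    if method == "sum":
        return sum(scores)
    if method == "weighted_mean" and weights:
        return sum(s * w for s, w in zip(scores, weights)) / sum(weights)
    return sum(scores) / len(scores)  # arithmetic mean

# e.g. three users' scores for a 'computer programming knowledge' attribute
group_score = aggregate_attribute([0.9, 0.4, 0.7], method="mean")
```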
In step 1704, a user cohort can be created and/or maintained based on selected matching attributes of a set of users. The membership of a user cohort can be automatically and/or dynamically updated based on each user's attributes as determined from the respective user's eye-tracking data. For example, a user may not exhibit a comprehension difficulty with respect to the name ‘Rahul Gandhi’ (e.g. substantially smooth eye movement across each word with a fixation of substantially two-hundred (200) milliseconds for each term; the user's fixations for ‘Rahul’ and ‘Gandhi’ are within a threshold of the average of other recent fixations for similar words that signify the same class of word (e.g. proper names of similar length); etc.). The user may be assigned the attribute ‘FAMILIAR_WITH_INDIAN_POLITICS’. This one attribute can then cause the user to be assigned membership in the user cohort ‘FAMILIAR_WITH_CONTEMPORARY_INDIAN_POLITICS’. Later, eye-tracking data can indicate a comprehension difficulty with respect to the name ‘Manmohan Singh’ (e.g. the user's fixations for ‘Manmohan’ and ‘Singh’ are not within a threshold of the average of other recent fixations for similar words that signify the same class of word (e.g. proper names of similar length)). Consequently, the user may be dropped from the ‘FAMILIAR_WITH_CONTEMPORARY_INDIAN_POLITICS’ user cohort. Indeed, in one example, comprehension difficulties with respect to such proper names as ‘Manmohan Singh’ and/or ‘Rahul Gandhi’ can cause the user to be placed in a user cohort of ‘NOT_FAMILIAR_WITH_CONTEMPORARY_INDIAN_POLITICS’ (as well as be assigned attributes such as ‘NOT_FAMILIAR_WITH_INDIAN_POLITICS’, ‘NOT_FAMILIAR_WITH_WORLD_LEADERS’, etc.). It is noted that the user attribute ‘FAMILIAR_WITH_INDIAN_POLITICS’ can be scored/weighted. In the present example, the user's score/weight for this attribute can be decreased by a specified amount and/or set to zero.

Accordingly, in step 1706, user attributes can be updated (e.g. automatically and/or dynamically) when eye-tracking data indicates a user no longer has a comprehension difficulty with respect to one or more key terms and/or key phrases, and/or when eye-tracking data indicates the user has a comprehension difficulty with respect to one or more newly specified key terms and/or key phrases. For example, at some specified point, a new Indian politician, ‘Brad Rao’, can be elected to national office. A user can read the name ‘Brad Rao’ on a news website, and eye-tracking data can indicate a comprehension difficulty within a specified parameter (e.g. one or more regressions to each word and/or a fixation of seven-hundred milliseconds for each word). The phrase ‘Brad Rao’ can be set as a key phrase. When the user reads a digital document, a thread can automatically search the digital document for the key phrase. User eye-tracking data for the key phrase can be obtained. The user can be in the user cohort ‘FAMILIAR_WITH_CONTEMPORARY_INDIAN_POLITICS’. The comprehension difficulty vis-à-vis the new politician's name can cause the user to be dropped from the user cohort. Thus, in step 1708, the user cohort can be modified to remove and/or include users based on updated user attributes.
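The cohort bookkeeping of steps 1704-1708 might look like the sketch below. The data structures, the 'NOT_' naming convention, and the single-required-attribute cohort rule are simplifying assumptions for illustration.

```python
def update_cohorts(user, cohorts, attribute, difficulty_detected):
    """Update a user's attributes from new eye-tracking evidence, then
    recompute cohort memberships; `cohorts` maps cohort name to the single
    attribute required for membership."""
    if difficulty_detected:
        user["attributes"].discard(attribute)
        user["attributes"].add("NOT_" + attribute)
    else:
        user["attributes"].discard("NOT_" + attribute)
        user["attributes"].add(attribute)
    user["cohorts"] = {name for name, required in cohorts.items()
                       if required in user["attributes"]}
    return user

cohorts = {"FAMILIAR_WITH_CONTEMPORARY_INDIAN_POLITICS":
           "FAMILIAR_WITH_INDIAN_POLITICS"}
user = {"attributes": {"FAMILIAR_WITH_INDIAN_POLITICS"}, "cohorts": set()}
# A comprehension difficulty (e.g. for 'Brad Rao') drops the cohort membership.
update_cohorts(user, cohorts, "FAMILIAR_WITH_INDIAN_POLITICS", True)
```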
In one example of process 1700, a user cohort for ‘KNOWLEDGE_OF_SWEDEN’ can be generated. Key words and/or key phrases that are relevant to the cohort can be established. This can be done by an administrator and/or automatically by searching a database of key words and/or phrases and generating a list of terms with definitions that are relevant to ‘KNOWLEDGE_OF_SWEDEN’ within a specified threshold. Digital news (e.g. to obtain current Swedish political figures, actors, etc.), maps (to obtain geographic names of places in Sweden), travel guides, animal and plant text books (e.g. to obtain plant and animal names native and/or unique to Sweden), and the like can also be searched with a search engine to obtain additional key words and/or terms relevant to ‘KNOWLEDGE_OF_SWEDEN’. Additionally, a list of Swedish vocabulary and/or phrases can be maintained, and a user's reading content can be searched to determine if it includes a Swedish word and/or phrase. Swedish words and/or phrases can be automatically included in the list of key words and/or phrases. A user's eye-tracking data for the list of key words and/or phrases can be obtained. In this example, users that did not show a comprehension difficulty vis-à-vis a specified percentage of ‘KNOWLEDGE_OF_SWEDEN’ key words and/or phrases can be included in the ‘KNOWLEDGE_OF_SWEDEN’ cohort. These users can be provided a ‘KNOWLEDGE_OF_SWEDEN’ attribute as well. It is noted that a user's ‘KNOWLEDGE_OF_SWEDEN’ attribute can be scored and/or weighted. In this way, users showing a lack of comprehension difficulty vis-à-vis a greater number of key terms and/or phrases that indicate ‘KNOWLEDGE_OF_SWEDEN’ can be scored higher than users with barely a sufficient number of lacks of comprehension difficulty vis-à-vis terms and/or phrases that indicate ‘KNOWLEDGE_OF_SWEDEN’ cohort membership.

It is noted that a list of various attributes of users migrating into and/or out of the ‘KNOWLEDGE_OF_SWEDEN’ cohort can be generated and maintained. Migrating users can be members of various other cohorts. Probability values that a particular user may migrate to a particular cohort can be calculated based on the gathered information (e.g. the list) and/or other user attributes. These probabilistic values can be assigned to users of the origin cohort. For example, it can be determined that, based on historical migration data in a particular user set, 75% of users in the ‘KNOWLEDGE_OF_MALTA’ cohort who are not in the ‘KNOWLEDGE_OF_SWEDEN’ cohort eventually migrate to the ‘KNOWLEDGE_OF_SWEDEN’ cohort within a three-month period of time. The users in the ‘KNOWLEDGE_OF_MALTA’ cohort can then be assigned a 0.75 probability of migration to the ‘KNOWLEDGE_OF_SWEDEN’ cohort. As another example, historical analysis can indicate that a user with no comprehension difficulties (e.g. at a set eye-tracking metric such as a fixation of equal to or greater than seven-hundred milliseconds and one regression for a term to indicate a comprehension difficulty) for ‘Stockholm’ and ‘Carl Christoffer Gjörwell’ will have a 0.8 probability of also not exhibiting a comprehension difficulty for ‘Sveriges Kungahus’. Not exhibiting a comprehension difficulty for ‘Sveriges Kungahus’ can be a threshold for entry into the user cohort ‘HIGH_KNOWLEDGE_OF_SWEDEN’. User cohorts can also indicate progression of knowledge in a subject. For example, a user can exhibit a lack of comprehension difficulty with respect to ‘Stockholm’ but a comprehension difficulty with respect to ‘Carl Christoffer Gjörwell’ and/or ‘Sveriges Kungahus’.
Later, the user can exhibit a lack of comprehension difficulty with respect to ‘Stockholm’ and ‘Carl Christoffer Gjörwell’ but a comprehension difficulty with respect to ‘Sveriges Kungahus’. A time stamp for each event can be obtained and stored in a database. The user can be placed in a user cohort ‘LEARNING_ABOUT_SWEDEN’. The user's rate of learning can also be assigned a value. For example, the time differences between such events can indicate the rate at which the user stops exhibiting comprehension difficulties for key terms and/or key phrases for a particular topic. In a similar manner, a user's decay of knowledge about a particular topic can also be measured, and the user can be assigned to a user cohort based on a (proportionally) increasing percentage of key terms and/or phrases for a topic for which the user exhibits comprehension difficulties. Comprehension and comprehension difficulties for key words and/or key phrases can be based on different parameters (e.g. such as those variously provided in the embodiments described herein).
In some examples, users can be assigned a particular node in an implicit social network based on a probability of migration to a specified cohort (e.g. greater than a set threshold). In some examples, the probability of migration to a specified cohort can decay as a function of time (e.g. the longer a user does not migrate to another cohort, the lower the probability value becomes). Rates of decay can be set according to past historical patterns and/or a user's score for the particular cohort (e.g. a score that did not reach the threshold for inclusion but was increasing at a certain rate; a score and/or the slope of a score as a function of time can be correlated to a dependent probability variable in a linear regression analysis; node membership in an implicit social network; etc.). In some examples, a user cohort can correspond with a node in an implicit social graph.
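One way to realize such a time-decaying migration probability is sketched below; the exponential model and the half-life value are assumptions, since the disclosure leaves the decay schedule to historical patterns and user scores.

```python
import math

def migration_probability(base_prob, days_elapsed, half_life_days=45.0):
    """Decay a cohort-migration probability the longer the user fails to
    migrate; exponential decay with a historically fitted half-life is
    one possible model."""
    return base_prob * math.exp(-math.log(2) * days_elapsed / half_life_days)

# e.g. the 0.75 KNOWLEDGE_OF_MALTA -> KNOWLEDGE_OF_SWEDEN value after 30 days
p_now = migration_probability(0.75, days_elapsed=30)
```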
In various embodiments, collaborative filtering can include various methods for processing data (e.g. user comprehension-difficulty data and/or lack-of-comprehension-difficulty data obtained from user eye-tracking data) to develop profiles of users who are related by similar comprehension-difficulty profiles and/or recent changes in comprehension-difficulty profiles with respect to certain types of key words. Additionally, in some embodiments, various other recommender algorithms can predict the ‘preference’ a user would give to an item (e.g. music, books, or movies) or social element (e.g. people or groups) they had not yet considered, using a model built from the characteristics of an item (content-based approaches) and/or the user's profile developed from user comprehension-difficulty data and/or lack-of-comprehension-difficulty data and the meaning of associated key words. In some embodiments, a gradient method (e.g. an algorithm to solve problems with the search directions defined by the gradient of the function at the current point) can be utilized with the various processes and systems provided herein. Examples of gradient methods include gradient descent and/or the conjugate gradient method.
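A hedged sketch of one such collaborative-filtering approach over comprehension-difficulty profiles is shown below (one possible realization, not the method required by this disclosure); the profile and rating structures are illustrative.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity of two comprehension-difficulty profiles, each a
    dict mapping key term -> difficulty score (e.g. 1.0 if observed)."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def predict_preference(target, others, item):
    """Predict the target user's 'preference' for an unseen item from the
    similarity-weighted ratings of users with similar profiles."""
    pairs = [(cosine_similarity(target["profile"], o["profile"]),
              o["ratings"][item])
             for o in others if item in o["ratings"]]
    total = sum(s for s, _ in pairs)
    return sum(s * r for s, r in pairs) / total if total else None
```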
It is noted that, in some embodiments, trigger parameters used to indicate a comprehension difficulty can be automatically modified for a user. For example, a time used to indicate a comprehension difficulty and/or a number of regressions back to a word can be modified based on how many times a user has viewed the word during a particular period/event (e.g. a particular reading session on an e-book; a set period of time such as the past hour, the last twenty-four (24) hours, etc.; whether the user has already indicated a reading comprehension difficulty with respect to the word; etc.). For example, the trigger parameters for a reading comprehension difficulty can be 750 ms and one regression for the user's first viewing of the word, and 500 ms and zero regressions for the second and subsequent viewings of the word. In another example, the subsequent trigger parameters can be a function of an average per-word fixation time (e.g. a percentage of the first trigger parameter but greater than the current per-word fixation time; twice the current per-word fixation time; etc.). Conversely, in some embodiments, the values of subsequent trigger parameters can be increased (e.g. from 750 ms to 1000 ms; from one regression to two or more regressions; the fixation time to indicate a comprehension difficulty can increase (or decrease to a fixed lowest threshold in some examples) by a set percentage (e.g. five percent (5%), fifteen percent (15%), etc.) each time the user views the word in the text; etc.). These examples are provided by way of explanation and not by way of limitation. Other trigger parameters (e.g. reading comprehension difficulty trigger parameters) can be utilized in other embodiments.
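A sketch of the adaptive trigger schedules described above, using the exemplary 750 ms/500 ms values; both helper functions and the five-percent growth step are illustrative.

```python
def relaxed_trigger(view_count):
    """750 ms plus one regression on the first viewing of a word;
    500 ms and zero regressions on later viewings."""
    return (750.0, 1) if view_count <= 1 else (500.0, 0)

def tightened_threshold(view_count, base_ms=750.0, step=0.05, cap_ms=1000.0):
    """The opposite variant: grow the fixation threshold by 5% per
    viewing, up to a fixed cap (e.g. 1000 ms)."""
    return min(base_ms * (1 + step) ** max(0, view_count - 1), cap_ms)
```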
At least some values based on the results of the above-described processes can be saved for subsequent use. Additionally, a computer-readable medium (e.g. a non-transitory computer-readable medium) can be used to store (e.g., tangibly embody) one or more computer programs for performing any one of the above-described processes by means of a computer. The computer program may be written, for example, in a general-purpose programming language (e.g., Pascal, C, C++, Java) or some specialized application-specific language.
Although the present embodiments have been described with reference to specific example embodiments, various modifications and changes can be made to these embodiments without departing from the broader spirit and scope of the various embodiments. For example, the various devices, modules, etc., described herein can be enabled and operated using hardware circuitry, firmware, software, or any combination of hardware, firmware, and software (e.g., embodied in a machine-readable medium).
In addition, it will be appreciated that the various operations, processes, and methods disclosed herein can be embodied in a machine-readable medium and/or a machine accessible medium compatible with a data processing system (e.g., a computer system), and can be performed in any order (e.g., including using means for achieving the various operations). Accordingly, the specification and drawings are to be regarded in an illustrative rather than a restrictive sense. In some embodiments, the machine-readable medium can be a non-transitory form of machine-readable medium.
This application is a continuation-in-part of and claims priority from U.S. patent application Ser. No. 13/644,426, filed Oct. 4, 2012. U.S. patent application Ser. No. 13/644,426 is a continuation-in-part of and claims priority from U.S. patent application Ser. No. 13/076,346, filed Mar. 30, 2011. U.S. patent application Ser. No. 13/076,346 claims priority from U.S. Provisional Application No. 61/438,975, filed Feb. 3, 2011. This application also claims priority from U.S. Provisional Application No. 61/696,994, filed Sep. 5, 2012; U.S. Provisional Application No. 61/681,514, filed Aug. 9, 2012; U.S. Provisional Application No. 61/803,139, filed Mar. 19, 2013; U.S. Provisional Application No. 61/809,419, filed Apr. 8, 2013; and U.S. Provisional Application No. 61/811,309, filed Apr. 12, 2013. These applications are hereby incorporated by reference in their entirety for all purposes.