The present invention relates to apparatuses and methods for enabling improved display of textual content on an electronic display.
Communications devices, including PCs, smartphones, tablets, e-readers, etc., continue to grow in popularity and have become an integral part of both personal and business communications. As a result, users spend more time each day using their communications devices to read e-mails, read web sites, send short message service (SMS) messages, etc. The use of a communications device, and particularly a mobile communications device, for such functions, however, may present certain inconveniences to a user. For example, the display area of a mobile communications device may be limited, which may increase the time spent reading an e-mail or web site, as the user may have to scroll through multiple pages to read the entire content. Moreover, despite significant technological advances, the presentation of textual information on electronic displays has not fundamentally changed: textual information is typically displayed in lines such that the reader's eye moves sequentially from word to word.
Rapid Serial Visual Presentation (RSVP) is a method of displaying textual content in which each word of the textual content is displayed in sequential order, one at a time, at a certain display rate, at a fixed location on a display. RSVP was first introduced in the 1970s as a technique for presenting text one word at a time on a display. Many references since then have described the use of RSVP in a variety of applications. Commercially available products based on RSVP include “Zap Reader” (www.zapreader.com/reader) and “Spreeder” (www.spreeder.com). Some prior methods exist for improving the effectiveness of an RSVP display by varying the display time of a word based on word length and word type (see U.S. Pat. No. 6,130,968 to McIan et al. (“McIan”)) and based on word frequency (see WO 02/037256 by Goldstein et al. (“Goldstein 2002”)). While these techniques are beneficial in improving comprehension of the displayed text, new techniques and methods are needed to further increase a user's reading speed and improve the presentation of dense content on electronic displays.
Isolated efforts have also been made to apply RSVP to particular applications (e.g., email application) in mobile communications devices (see, US 2011/0115819 to Hanson). However, the challenges and opportunities for integrating RSVP into user interfaces for increasing the density of displayable content remain largely unexplored.
Previous implementations of Rapid Serial Visual Presentation (RSVP) do not address using RSVP to improve user access to information from a homescreen (e.g., a “homescreen” of a smartphone running a mobile operating system such as iOS™, Android™, or Windows Phone™ or a “desktop” screen of a PC, laptop, etc., running an operating system such as Windows™, or Mac™ OS; or a homescreen/desktop screen of an intermediate portable device such as notepad, touchpad, etc. running a corresponding operating system; all referenced herein as simply a “homescreen” for simplicity) user interface or how to best integrate RSVP into a variety of applications. Given the increasing reliance on small-screen devices (particularly mobile communication devices, but also notebook computers and other highly portable computing devices) for a variety of purposes, there is a growing need to efficiently utilize screen space within user interfaces. Moreover, even with respect to larger screen devices, there are unexplored opportunities to create more efficient interfaces by incorporating RSVP techniques. Embodiments of the invention relate to electronic interfaces that effectively utilize RSVP to improve user access to information.
In one embodiment, a communications device displays a first icon representing a notification event associated with an application by displaying the first icon with an icon representing the application. The first icon further represents a presence of content that is displayable using RSVP. The communications device receives a first user interface action to select the notification event, and in response to the first user interface action, displays textual content associated with the notification event in a designated display area using RSVP. The RSVP content may contain embedded text, a uniform resource locator (URL), or an attachment. If the user selects embedded text, the RSVP content corresponding to the embedded text may be displayed in the designated display area using RSVP. In one embodiment, if the user selects a URL, the contents of the webpage corresponding to the URL may be displayed in the designated display area using RSVP. Alternatively, the webpage may be displayed by a browser application. If the user selects an attachment, an application associated with the attachment may be launched to open the attachment. For example, if the attachment is a photo, a photo viewer application may be launched to open the photo.
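As a non-limiting illustration of the dispatch just described, the following Python sketch shows how a selection made inside RSVP content might be routed to the RSVP display, a browser application, or an associated application. All names (RSVPItem, handle_selection, the rsvp_display/browser/launcher objects, and fetch_page_text) are hypothetical and introduced here only for illustration; they are not part of the disclosed embodiments.

```python
from dataclasses import dataclass


@dataclass
class RSVPItem:
    """One selectable item inside RSVP content: embedded text, a URL, or an attachment."""
    kind: str      # "text", "url", or "attachment"
    payload: str   # the embedded text, the URL string, or a file path


def handle_selection(item, rsvp_display, browser, launcher, prefer_rsvp_for_urls=True):
    """Dispatch a user selection made inside the designated RSVP display area."""
    if item.kind == "text":
        # Embedded text is itself shown in the designated display area using RSVP.
        rsvp_display.show(item.payload)
    elif item.kind == "url":
        if prefer_rsvp_for_urls:
            # One option: present the text of the linked webpage using RSVP.
            rsvp_display.show(fetch_page_text(item.payload))
        else:
            # Alternative option: hand the URL to a browser application.
            browser.open(item.payload)
    elif item.kind == "attachment":
        # Launch the application associated with the attachment (e.g., a photo viewer).
        launcher.open_with_default_app(item.payload)


def fetch_page_text(url):
    # Placeholder only: real code would retrieve and extract the webpage's textual content.
    raise NotImplementedError
```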
In another embodiment, the communications device displays a first icon within an application interface. The first icon represents presence of textual content that is displayable using RSVP. The communications device receives a first user interface action to select the first icon, and in response to the first user interface action, displays the textual content in a designated display area using RSVP. In an alternative to this embodiment, an application interface is configurable so that any selectable item within the application interface that is associated with textual content may have that content displayed using RSVP when the item is selected.
In yet another embodiment, a search query is received by at least one server computer. At least one search result corresponding to the search query is transmitted to a browser application for display in the browser application on a user device. The search result is configured to be displayed by the browser application with an icon representing presence of content, corresponding to the at least one search result, that is displayable using RSVP.
In yet another embodiment, textual information to be displayed as part of an online advertisement is received by at least one server computer. The textual information is configured to be displayable using RSVP. The configured text is transmitted to a user device in response to a request for an online advertisement.
In yet another embodiment, RSVP content may be embedded in a map, photo, diagram, presentation, etc. A map, photo, diagram, or presentation may be displayed by the appropriate application. A user may specify whether to add “global” RSVP content and/or “local” RSVP content. If the user chooses to add “global” (e.g., in reference to a document in its entirety) RSVP content, an interface which allows the user to add the “global” RSVP content may be displayed. If the user chooses to add “local” RSVP content, an interface which allows the user to specify location(s), element(s), and/or text selection, and add the corresponding “local” RSVP content may be displayed. For example, the user may specify locations on photos, maps, etc., specify elements or objects in photos, diagrams and presentations, etc. In one embodiment, the interface may also allow selection of text such that RSVP content may be associated with the selected text. In the case of a photo, in addition to specifying locations on the photo, the user may select areas of the photo, such as, for example a face of a person.
In yet another embodiment, RSVP content may be embedded in a spreadsheet in accordance with one embodiment of the present invention. RSVP content may be embedded as a comment on any cell in a spreadsheet. To embed RSVP content, a cell may be selected. Once selected, an interface which allows the user to input textual content may be displayed. In addition, the user may specify a sequence number for the comment. The user-input text may subsequently be displayable using RSVP. In one embodiment, the content of every cell and/or every comment corresponding to a cell may be displayable using RSVP. Optionally, a notification marker/icon may be displayed indicating the presence of content or a comment including RSVP content. In another embodiment, a similar process may be used to embed RSVP content in a word processing application. For example, text may be selected, and a corresponding comment may be input by a user. The comment may then be embedded in the word processing document, and may subsequently be displayed in a designated RSVP display area (“DRDA”) using RSVP upon user selection of the comment.
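A minimal sketch of the cell-comment bookkeeping described above is shown below. The class and method names are assumptions introduced for illustration only; an actual spreadsheet application would store such comments in its own document model.

```python
class RSVPCommentStore:
    """Illustrative container for RSVP-displayable comments keyed by spreadsheet cell."""

    def __init__(self):
        self._comments = {}  # cell reference (e.g., "B7") -> (sequence number, text)

    def add_comment(self, cell, text, sequence=None):
        # If the user does not specify a sequence number, append at the end.
        if sequence is None:
            sequence = len(self._comments) + 1
        self._comments[cell] = (sequence, text)

    def has_rsvp_content(self, cell):
        # Could drive display of a notification marker/icon on the cell.
        return cell in self._comments

    def ordered_texts(self):
        """Comment texts in sequence order, e.g., for serial presentation using RSVP."""
        return [text for _, (seq, text) in
                sorted(self._comments.items(), key=lambda item: item[1][0])]
```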
These and other embodiments are more fully described below.
The present description is made with reference to the accompanying drawings, in which various example embodiments are shown. However, many different example embodiments may be used, and thus the description should not be construed as limited to the example embodiments set forth herein. Rather, these example embodiments are provided so that this disclosure will be thorough and complete. Various modifications to the exemplary embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments and applications without departing from the spirit and scope of the disclosure. Thus, the present disclosure is not intended to be limited to the embodiments shown, but is to be accorded the widest scope consistent with the principles and features disclosed herein.
Referring now to
End user device 210 includes a display 205. In some embodiments, display 205 may be configured to accept touch input. Computer program product 211 configures device 210 to serially present text in a designated Rapid Serial Visual Presentation (“RSVP”) display area 200 on display 205 (for convenience, referenced herein simply as “DRDA 200”). User device 210 may include any type of electronic device capable of controlling text display. Some examples include desktop computers and portable electronic devices such as mobile phones, smartphones, multi-media players, e-readers, tablet/touchpad, notebook, or laptop PCs, and other communication devices. In some implementations (e.g., a smart phone or e-reader), the display 205 may be packaged together with the rest of device 210. However, in other implementations, a separate display device (e.g., a monitor) may be attached to device 210. While the illustrated embodiment shows a graphical border around DRDA 200, DRDA 200 simply refers to a region (e.g., a window) on display 205 where text is serially presented in accordance with an embodiment of the present invention; in particular implementations, DRDA 200 may or may not be outlined by a graphical border.
In one embodiment, user device 210 has typical computer components including a processor, memory, and an input/output subsystem. In some implementations (e.g., a smart phone or e-reader), user device 210 may include a wireless transceiver and one or more input interfaces including a touch-enabled display, a trackball, keyboard, microphone, etc. In the illustrated embodiment, computer program product 211 is loaded into memory (not separately shown) to configure device 210 in accordance with the present invention. In one embodiment, text data may be loaded into memory for text processing and display processing by device 210 as will be further described herein. Text data loaded into memory for text processing and display processing may be retrieved from persistent storage on a user device such as device 210 and/or may be received from one or more server computers 101 through a connection to Network 102 (e.g., the Internet). One or more server computers 101 may be, for example, one or more advertiser computers, one or more search engine computers, one or more web servers, one or more application servers, etc. In an alternative embodiment, at least some processing/pre-processing of text data for display in accordance with the principles illustrated herein may be carried out by one or more remote computers such as server computers 101 and then sent to end user device 210 for display in DRDA 200 on display 205. In such an alternative, some or all of a computer program product such as computer program product 211 for implementing an embodiment of the present invention may reside on one or more computers such as server computers 101 that are remote from end user device 210. In some embodiments, the entire computer program product may be stored and executed on remote computers and the results presented within a browser application component (e.g., a media player application) of user device 210 (browser application and media player application not separately shown).
In an embodiment of the invention, text (which includes, for example, strings of characters (e.g., letters, numbers, symbols, etc.) which constitute words, numeric figures, and combinations of both with punctuation marks and symbols) is presented serially (for example, one word at a time) within DRDA 200. As referenced herein, a “display element” will refer to a group of text data that is displayed at one time within DRDA 200. In other words, display elements are displayed serially. In the primary embodiment discussed herein, a display element will generally consist of one word. However, in alternative embodiments, two words may be presented as a single display element. Also, in the primary embodiment, two words are sometimes part of a single display element, such as, for example, when a number, e.g., “9,” is displayed together with a unit, e.g., “feet,” so that the text “9 feet” constitutes a single display element and is presented together.
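By way of illustration only, the following Python sketch parses text into display elements and merges a numeric token with an immediately following unit word into a single display element, as described above. The particular unit list and merging rule are assumptions introduced for this example and are not the parsing method of any referenced application.

```python
import re

# Illustrative only: units that may be merged with a preceding number (assumption).
UNITS = {"feet", "ft", "m", "km", "kg", "lbs", "%", "percent"}


def to_display_elements(text):
    """Split text into display elements; normally one word per element."""
    tokens = text.split()
    elements = []
    i = 0
    while i < len(tokens):
        word = tokens[i]
        nxt = tokens[i + 1] if i + 1 < len(tokens) else None
        # Merge "<number> <unit>" (e.g., "9 feet") into one display element.
        if nxt and re.fullmatch(r"[\d.,]+", word) and nxt.strip(".,;:").lower() in UNITS:
            elements.append(word + " " + nxt)
            i += 2
        else:
            elements.append(word)
            i += 1
    return elements
```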
In a conventional RSVP system, each word is centered in the display area, and the optimal fixation position shifts as words of differing lengths are sequentially displayed, resulting in saccade movements as the eyes shift to the optimal fixation position. The reader has to refocus on the display every time a new word appears that is of a different length than the previous word. The reader's eyes will move from one character to the next to find the optimal position, which is also referred to as a recovery saccade. In addition, when a longer word follows a shorter one, the saccadic movement direction will be from right to left. When reading text in lines in a traditional paragraph display, most saccadic movement is from left to right, so the reader is accustomed to this type of eye movement. Only occasionally, if the optimal fixation position is not found directly, will the reader have to move back from right to left. Thus, conventional RSVP forces the reader to make saccades that are not typical of normal reading, and conventional RSVP approaches offer no solution to this problem. In order to prevent or minimize recovery saccades in an RSVP display, it is preferable to display each word such that the optimal fixation position does not shift in the display. The focal point of the reader can then remain fixed on the optimal fixation position, which is a specific point in each word that is determined by the total number of characters or the width of the word. This optimal recognition position, hereinafter referred to as the “ORP,” can be identified in the display such that the reader's eyes are directed to focus there as the words are serially presented. An RSVP which incorporates an ORP is hereinafter referred to as “ORP-RSVP.” With an ORP-RSVP, text can be presented at a faster rate because no saccades occur during the presentation. In addition, the elimination of saccades reduces eye fatigue and makes reading more comfortable, resulting in a better reading experience for the user. Embodiments described herein may be implemented using conventional RSVP or ORP-RSVP.
In addition, words are rarely longer than 13 characters (according to Sigurd, only 0.4% of the words in the English language are longer than 13 characters; see Sigurd, B. et al., “Word Length, Sentence Length and Frequency—ZIPF Revisited”, Studia Linguistica 58 (1), pp. 37-52, Blackwell Publishing Ltd, Oxford, UK, 2004) and therefore, for the vast majority of words, it is preferable to limit the number of characters to the right side of the fixation point to 8 characters. Also, in some embodiments, a word having a length of greater than thirteen characters is divided into first and second display elements such that a first portion of the word is displayed first (along with a hyphen) and then the second portion of the word is displayed next. In some embodiments, an empirically determined ORP of each display element is presented at a fixed location of the DRDA 200. For example, each word of a plurality of words is serially presented and positioned in the display such that the ORP is displayed at a fixed display location within DRDA 200, which enables recognition of each word in succession with minimal saccade by the reader. Determining and displaying the ORP for display elements, and presenting display elements within DRDA 200, is described in more detail in co-pending U.S. application Ser. No. 13/547,982, now U.S. Pat. No. 8,903,174, which is hereby incorporated by reference in its entirety. Research has demonstrated that it is possible to get information about a word from up to 4 characters from the left side of the fixation position and up to 15 characters to the right side, resulting in a perceptual span of 20 characters, and that the maximum character length of a word without saccade movement is 20 characters. The DRDA 200 can accommodate text of up to 20 characters in length without saccades, although it is preferred to limit the display to 13 characters for improved comprehension.
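The specific manner of determining and displaying the ORP is described in the incorporated U.S. application Ser. No. 13/547,982. The following sketch merely illustrates the kind of heuristic that could be used, together with the splitting of words longer than thirteen characters into two hyphenated display elements; the particular index rule shown is an assumption for illustration only.

```python
def orp_index(word):
    """Heuristic ORP: a character index slightly left of the word's center.
    This rule is an illustrative assumption, not the method of the incorporated application.
    It keeps at most 4 characters to the left of the fixation point."""
    length = len(word)
    if length <= 1:
        return 0
    if length <= 5:
        return 1
    return min(length // 3, 4)


def split_long_word(word, max_len=13):
    """Split a word longer than max_len into two display elements,
    hyphenating the first portion, as described above."""
    if len(word) <= max_len:
        return [word]
    cut = max_len - 1  # leave room for the hyphen in the first portion
    return [word[:cut] + "-", word[cut:]]
```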
One embodiment of the present invention provides a method for serially displaying text on an electronic display comprising identifying an ORP for a plurality of words to be displayed and serially displaying the plurality of words such that the ORP of each word is displayed at a fixed display location on the electronic display. In one embodiment, the ORP is identified as a character in the word. In another embodiment, the ORP is identified as a proportionate position relative to the width of the word in pixels. In some embodiments, visual aids are used to mark the fixed display location (see e.g., hash marks 504 in
In one embodiment, configuring text content for RSVP display comprises parsing text into a plurality of display elements, inserting blank elements at the end of a sentence, and determining a multiplier for each display element that can be used, along with user-selected settings and/or other display parameters, to determine a display time for each display element. While, in alternative embodiments, it is possible to display each element for the same amount of time, it has been demonstrated empirically that a longer display time is beneficial for comprehension of longer words. It has also been demonstrated empirically that a longer pause between sentences is beneficial for comprehension of longer sentences. Further details of certain exemplary systems and methods for preparing and displaying text using RSVP are described in co-pending U.S. application Ser. No. 13/547,982 referenced above.
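As an illustration of how a per-element display time might be derived from a user-selected base rate and a length-based multiplier, with a longer pause represented by a blank element at the end of a sentence, consider the following sketch. The multiplier values and pause factor are assumptions and are not those of the referenced co-pending application.

```python
def display_schedule(elements, words_per_minute=300):
    """Return (element, display_time_ms) pairs for serial RSVP presentation.
    Illustrative values only: longer elements get proportionally more time,
    and a blank element (sentence pause) gets twice the base time."""
    base_ms = 60000.0 / words_per_minute
    schedule = []
    for element in elements:
        if element == "":  # blank element inserted at the end of a sentence
            schedule.append((element, base_ms * 2.0))
            continue
        length = len(element)
        multiplier = 1.0 if length <= 6 else 1.0 + 0.1 * (length - 6)
        schedule.append((element, base_ms * multiplier))
    return schedule
```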
User device 210 includes a desktop/homescreen manager 206 to control various elements to be displayed on a homescreen (e.g., a “homescreen” of a smartphone running a mobile operating system such as iOS™, Android™, or Windows Phone™ or a “desktop” display of a PC, laptop, etc., running an operating system such as Windows™, or Mac™ OS; or a homescreen/desktop screen of an intermediate portable device such as notepad, touchpad, etc. running a corresponding operating system; all referenced herein as simply a “homescreen” for simplicity). For example, desktop/homescreen manager 206 may control the icons, widgets, tiles, windows, folders, etc. and other information that may be displayed on a desktop or homescreen. An input manager 212 manages inputs received from one or more input mechanisms such as a touch-screen, trackball, keyboard, mouse, microphone, eye-tracking, a gesture detector, or other natural interface input detector, etc. For example, text input may be provided using a virtual (i.e., touch screen) or physical keyboard, mouse, trackball, etc. Alternatively, or in addition, a user may provide voice/speech input via a microphone, which may then be converted to text. Various applications 208 (including, for example, applications 208a, 208b, and 208c) may run on the device and may provide data to be displayed through desktop/homescreen manager 206.
Various messages (e.g., email, SMS) may be received over a network such as a wireless communications network connected to the Internet, via a wireless interface (not shown). Information received from the network, such as from one or more remote servers, may be provided to the applications 208 by event manager 202, and information may be passed from the applications 208 back to the network. Event manager 202 may manage notification events that are presented to a user, e.g., through display 205. For example, event manager 202 may receive notification events from the wireless network. Notification events may include, for example, receipt of text messages, emails, voicemails, social network updates, file transfers, etc. The event manager 202 may in turn forward the notification events to corresponding applications. For example, an email notification may be forwarded to the email application. The application may then instruct the desktop or homescreen manager 206 to display status or notification information to alert the user.
As will be described in further detail below, RSVP library 204 allows user device 210 to present display elements using RSVP. In accordance with an exemplary embodiment, applications 208a, 208b, and 208c, which may be a word processing application, a spreadsheet application, a photo application, a map application, a webpage editor, a browser application, etc., may communicate with RSVP library 204 through an RSVP application programming interface (API), such as API 214. As will be apparent to one of skill in the art, an API is an interface used by software components to communicate with each other. In one embodiment, each application 208a, 208b, and 208c may include application-specific RSVP software, such as RSVP software 209a, 209b, and 209c, respectively, which may allow applications 208a, 208b, and 208c to detect the presence of RSVP content. Upon detecting RSVP content, RSVP software, such as RSVP software 209a, 209b, and/or 209c, may call RSVP library 204 via the API. In response, RSVP library 204 may display RSVP notification markers/icons, display DRDA 200, display RSVP content in DRDA 200, etc. In an alternate embodiment, the application-specific RSVP software, such as RSVP software 209a, 209b, and/or 209c, may instead be included in RSVP library 204. In one embodiment, application-specific RSVP software, such as RSVP software 209a, 209b, and 209c, may additionally include logic to allow embedding of textual content within a file. Embedding textual content in a file is described in more detail in the description of
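The division of labor between the application-specific RSVP software (e.g., RSVP software 209a-c) and RSVP library 204 reached through API 214 might be organized along the following lines. This is a hypothetical sketch; none of the class or method names correspond to an actual API defined in this specification.

```python
class RSVPLibrary:
    """Hypothetical shared library reached through an API such as API 214."""

    def show_notification_marker(self, anchor):
        ...  # e.g., overlay a marker/icon at or near the anchored content

    def open_drda(self):
        ...  # display the designated RSVP display area (DRDA)

    def present(self, text):
        ...  # serially display the text in the DRDA using RSVP


class AppRSVPSoftware:
    """Hypothetical application-specific RSVP software."""

    def __init__(self, library: RSVPLibrary):
        self.library = library

    def on_content_loaded(self, document):
        # Application-specific logic: detect whether the document carries RSVP content.
        if getattr(document, "rsvp_text", None):
            self.library.show_notification_marker(anchor=document)

    def on_marker_selected(self, document):
        # Hand the content to the shared library for RSVP presentation.
        self.library.open_drda()
        self.library.present(document.rsvp_text)
```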
In yet another embodiment, for certain applications, notification markers/icons may not be displayed. For example, for SMS and/or email applications, where notification events typically include textual content, RSVP display may be enabled by default such that content associated with all notification events for these applications may be displayed using RSVP upon user selection of a notification event or upon user interaction with a selectable item (e.g., an email or a text message).
In the example illustrated in
Upon user selection of notification marker/icon 1002, DRDA 1010 may be displayed as shown in
However, if the notification event is determined to be displayable using RSVP, a notification marker/icon may be displayed in step 1208. In one embodiment, the notification marker/icon may be overlaid on top of the icon representing the corresponding application (see,
If it is determined in step 1303 that a URL is included in the RSVP content, an indication of the presence of a URL is displayed in step 1306 (as shown in
If it is determined in step 1303 that embedded text is included in the RSVP content, an indication of the presence of embedded text is displayed in step 1316 (as shown in
If it is determined that the file contains no RSVP content, the application may proceed as normal in step 1406. For example, if the file is a photo, and it does not contain RSVP content, the application may simply display the photo normally. If, however, it is determined that the file contains RSVP content, the application requests display of notification markers/icons in appropriate locations in step 1408. In the example of a photo, the notification markers/icons may need to be displayed in specific locations on the photo. In one embodiment, the application may request, for example, RSVP library 204 to display the notification markers/icons. In another embodiment, the application may request the operating system to display the notification markers/icons. In yet another embodiment, the application may obtain the notification markers/icons from RSVP library 204, and display the notification markers/icons. In an alternate embodiment, step 1408 of displaying notification markers/icons is optional. A user device may be configured (e.g., using RSVP settings 308 in
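A simplified sketch of this flow (checking a file for RSVP content and, if any is present, requesting markers at the stored locations) is given below. The metadata layout and the function and attribute names are assumptions introduced for illustration only.

```python
def open_file_with_rsvp_support(app, rsvp_library, file_obj, show_markers=True):
    """Illustrative flow corresponding to steps 1406/1408 described above."""
    annotations = file_obj.metadata.get("rsvp")  # assumed metadata layout
    if not annotations:
        app.display(file_obj)  # no RSVP content: proceed as normal
        return
    app.display(file_obj)
    if show_markers:  # marker display may be disabled via user settings
        for entry in annotations.get("local", []):
            # e.g., for a photo, place a marker at the stored (x, y) location
            rsvp_library.show_notification_marker(anchor=entry["location"])
```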
Continuing with the description of
In addition to display advertisements, RSVP content may also be incorporated in, for example, sponsored search results.
In some embodiments, users may embed RSVP content into files such as, for example, photos, presentations, maps, etc.
If the user chooses to add “global” RSVP content, an interface which allows the user to input textual content, which will be embedded as “global” RSVP content, is displayed in step 1706. In the embodiment described above, where “global” content includes a description of the file, the “global” content may be provided in the metadata of the file. For example, a user may right-click the file icon, select an option to enter an RSVP description for the file, and input the textual content.
User input including textual content may be received in step 1708. In embodiments where text input is required, text input may be provided using a virtual (i.e., touch-screen) or physical keyboard, mouse, trackball, etc. Alternatively, or in addition, the user may provide voice/speech input via a microphone. The voice/speech input may then be converted to text.

If the user chooses to add “local” RSVP content, an interface which allows the user to specify location(s), element(s), and/or text selection, and to add the corresponding textual content, which will be embedded as “local” RSVP content, is displayed in step 1710. For example, the user may specify locations on photos, maps, etc., or specify elements or objects in photos, diagrams, presentations, etc. User input including the specified location(s), element(s), and/or text selection, and the corresponding textual content, may be received in step 1712. In one embodiment, the interface may also allow selection of text such that RSVP content may be associated with the selected text. In the case of a photo, in addition to specifying locations on the photo, the user may select areas of the photo, such as, for example, a face of a person. The location(s), element(s), and/or text selection information and the corresponding textual content may be saved in step 1714. In one embodiment, the location(s), element(s), and/or text selection information and the corresponding textual content may be saved in the metadata of the file. For example, in the case of a photo, this information may be saved in the EXIF data. In one embodiment, the textual content may be saved as plain text, in which case the textual data may be converted to RSVP content (e.g., text that is configured to be displayed using RSVP) prior to displaying. In other embodiments, the textual content may be converted to RSVP content, and the RSVP content may be saved in the metadata of the file.
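As an illustration of how such annotations might be organized before being written into a file's metadata, the following sketch serializes “global” and “local” RSVP content to JSON. The layout is an assumption introduced for this example, and actually writing the resulting bytes into, e.g., a photo's EXIF data would depend on the particular image library and is not shown.

```python
import json


def build_rsvp_metadata(global_text=None, local_annotations=None):
    """Serialize global/local RSVP annotations for storage in file metadata (assumed layout)."""
    payload = {"version": 1}
    if global_text:
        payload["global"] = {"text": global_text}
    if local_annotations:
        # Each annotation: a target (e.g., an x/y location on a photo, an element
        # identifier, or a text selection) plus the textual content to present via RSVP.
        payload["local"] = [
            {"target": ann["target"], "text": ann["text"]}
            for ann in local_annotations
        ]
    return json.dumps(payload)


# Example usage (hypothetical data):
metadata = build_rsvp_metadata(
    global_text="Family trip to the coast, July 2012.",
    local_annotations=[{"target": {"x": 120, "y": 88}, "text": "Grandpa Joe"}],
)
```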
While the present invention has been particularly described with respect to the illustrated embodiments, it will be appreciated that various alterations, modifications and adaptations may be made based on the present disclosure, and are intended to be within the scope of the present invention. While the invention has been described in connection with what are presently considered to be the most practical and preferred embodiments, it is to be understood that the present invention is not limited to the disclosed embodiments but, on the contrary, is intended to cover various modifications and equivalent arrangements included within the scope of the appended claims.
This application is a continuation of U.S. patent application Ser. No. 13/973,835 filed on Aug. 22, 2013 which will issue on Apr. 25, 2017 as U.S. Pat. No. 9,632,661 and which was a divisional of U.S. patent application Ser. No. 13/730,163 filed on Dec. 28, 2012. The content of both of those applications is incorporated herein by reference in its entirety.
Number | Name | Date | Kind |
---|---|---|---|
4845645 | Matin et al. | Jul 1989 | A |
4932416 | Rosenfeld | Jun 1990 | A |
5137027 | Rosenfeld | Aug 1992 | A |
5873109 | High | Feb 1999 | A |
5995101 | Clark et al. | Nov 1999 | A |
6018774 | Mayle et al. | Jan 2000 | A |
6067069 | Krause | May 2000 | A |
6078935 | Nielsen | Jun 2000 | A |
6098085 | Blonder et al. | Aug 2000 | A |
6113394 | Edgar | Sep 2000 | A |
6130968 | McIan et al. | Oct 2000 | A |
6292176 | Reber et al. | Sep 2001 | B1 |
6353824 | Boguraev et al. | Mar 2002 | B1 |
6409513 | Kawamura et al. | Jun 2002 | B1 |
6515690 | Back et al. | Feb 2003 | B1 |
6553373 | Boguraev et al. | Apr 2003 | B2 |
6568939 | Edgar | May 2003 | B1 |
6652283 | Van Schaack et al. | Nov 2003 | B1 |
6686928 | Reber et al. | Feb 2004 | B1 |
6704698 | Paulsen, Jr. et al. | Mar 2004 | B1 |
6816174 | Tiongson et al. | Nov 2004 | B2 |
6865572 | Boguraev et al. | Mar 2005 | B2 |
6925613 | Gibson | Aug 2005 | B2 |
7139006 | Wittenburg et al. | Nov 2006 | B2 |
7159172 | Bentley et al. | Jan 2007 | B1 |
7173621 | Reber et al. | Feb 2007 | B2 |
7272787 | Nakamura et al. | Sep 2007 | B2 |
7365741 | Chincholle et al. | Apr 2008 | B2 |
7395500 | Whittle et al. | Jul 2008 | B2 |
7404142 | Tischer | Jul 2008 | B1 |
7454717 | Hinckley et al. | Nov 2008 | B2 |
7475334 | Kermani | Jan 2009 | B1 |
7478322 | Konttinen | Jan 2009 | B2 |
7549114 | Bederson et al. | Jun 2009 | B2 |
7598977 | Ryall et al. | Oct 2009 | B2 |
7607088 | Bertram et al. | Oct 2009 | B2 |
7613731 | Larson | Nov 2009 | B1 |
7627590 | Boguraev et al. | Dec 2009 | B2 |
7647345 | Trepess et al. | Jan 2010 | B2 |
7710411 | Reber et al. | May 2010 | B2 |
7730397 | Tischer | Jun 2010 | B2 |
7783978 | Andrews et al. | Aug 2010 | B1 |
7835581 | Mathan et al. | Nov 2010 | B2 |
7946707 | McDonald, II et al. | May 2011 | B1 |
7991195 | Mathan et al. | Aug 2011 | B2 |
7991920 | Back et al. | Aug 2011 | B2 |
7996045 | Bauer et al. | Aug 2011 | B1 |
8059136 | Mathan | Nov 2011 | B2 |
8069466 | Shelton et al. | Nov 2011 | B2 |
8140973 | Sandquist et al. | Mar 2012 | B2 |
8165407 | Khosla et al. | Apr 2012 | B1 |
8209634 | Klassen et al. | Jun 2012 | B2 |
8214309 | Khosla et al. | Jul 2012 | B1 |
8217947 | Roth | Jul 2012 | B2 |
8244475 | Aguilar et al. | Aug 2012 | B2 |
8245142 | Mizrachi et al. | Aug 2012 | B2 |
8249397 | Wood et al. | Aug 2012 | B2 |
8265743 | Aguilar et al. | Sep 2012 | B2 |
8274592 | Watkins et al. | Sep 2012 | B2 |
8285052 | Bhattacharyya et al. | Oct 2012 | B1 |
8292433 | Vertegaal | Oct 2012 | B2 |
8307282 | Saito et al. | Nov 2012 | B2 |
8335751 | Daily et al. | Dec 2012 | B1 |
8363939 | Khosla et al. | Jan 2013 | B1 |
8369652 | Khosla et al. | Feb 2013 | B1 |
8374687 | Mathan et al. | Feb 2013 | B2 |
8418057 | Knight et al. | Apr 2013 | B2 |
8458152 | Fogg et al. | Jun 2013 | B2 |
8472791 | Gargi | Jun 2013 | B2 |
8483816 | Payton et al. | Jul 2013 | B1 |
8595141 | Hao et al. | Nov 2013 | B2 |
8903174 | Maurer et al. | Dec 2014 | B2 |
9087407 | Koivusalo | Jul 2015 | B2 |
9154606 | Tseng et al. | Oct 2015 | B2 |
9299065 | Hymel et al. | Mar 2016 | B2 |
9483109 | Waldman et al. | Nov 2016 | B2 |
9552596 | Waldman et al. | Jan 2017 | B2 |
9632661 | Waldman et al. | Apr 2017 | B2 |
20020133521 | Campbell et al. | Sep 2002 | A1 |
20030038754 | Goldstein et al. | Feb 2003 | A1 |
20030184560 | Pierce | Oct 2003 | A1 |
20040004632 | Knight et al. | Jan 2004 | A1 |
20040024747 | Boguraev et al. | Feb 2004 | A1 |
20040107195 | Trepess | Jun 2004 | A1 |
20040119684 | Back et al. | Jun 2004 | A1 |
20060069562 | Adams et al. | Mar 2006 | A1 |
20070061720 | Kriger | Mar 2007 | A1 |
20080141126 | Johnson et al. | Jun 2008 | A1 |
20090066722 | Kriger et al. | Mar 2009 | A1 |
20090083621 | Kermani | Mar 2009 | A1 |
20090094105 | Gounares et al. | Apr 2009 | A1 |
20100280403 | Erdogmus et al. | Nov 2010 | A1 |
20110010611 | Ross | Jan 2011 | A1 |
20110115819 | Hanson | May 2011 | A1 |
20110117969 | Hanson | May 2011 | A1 |
20120265758 | Han et al. | Oct 2012 | A1 |
20120317498 | Logan et al. | Dec 2012 | A1 |
20130100139 | Schliesser et al. | Apr 2013 | A1 |
20130159850 | Cohn | Jun 2013 | A1 |
20130174088 | Stringer | Jul 2013 | A1 |
20140040344 | Gehring et al. | Feb 2014 | A1 |
20140186010 | Guckenberger | Jul 2014 | A1 |
20140188766 | Waldman | Jul 2014 | A1 |
20140188848 | Waldman | Jul 2014 | A1 |
20140189515 | Waldman | Jul 2014 | A1 |
20140189586 | Waldman | Jul 2014 | A1 |
20140189595 | Waldman | Jul 2014 | A1 |
20150199944 | Maurer et al. | Jul 2015 | A1 |
20160343171 | Waldman et al. | Nov 2016 | A1 |
20170199937 | Waldman et al. | Jul 2017 | A1 |
Number | Date | Country |
---|---|---|
2 619 015 | Aug 2009 | CA |
1461435 | Dec 2003 | CN |
1 162 587 | Dec 2001 | EP |
1 727 052 | Nov 2006 | EP |
2 323 357 | May 2011 | EP |
1 502 508 | Mar 1978 | GB |
S51-090237 | Aug 1976 | JP |
2007-141059 | Jun 2007 | JP |
2002-0011933 | Feb 2002 | KR |
WO 9841937 | Sep 1998 | WO |
WO 0212994 | Feb 2002 | WO |
WO 02037256 | May 2002 | WO |
WO 03019341 | Mar 2003 | WO |
WO 2006100645 | Sep 2006 | WO |
WO 2008051510 | May 2008 | WO |
Entry |
---|
Sicheritz, Karin, “Applying the Rapid Serial Visual Presentation Technique to Small Screens,” Uppsala University, 2000. |
Brysbaert, “Interhemispheric transfer and the processing of foveally presented stimuli,” Behavioural Brain Research, 1994, vol. 64, pp. 151-161. |
Brysbaert et al., “Visual Constraints in Written Word Recognition: Evidence From the Optimal Viewing Position Effect,” Journal of Research in Reading, 2005, vol. 28, Issue 3, pp. 216-226. |
Brysbaert et al., “Word skipping: Implications for theories of eye movement control in reading,” Eye Guidance in Reading and Scene Perception, 1998, Chapter 6, pp. 125-147. |
Clark et al., “Word Ambiguity and the Optimal Viewing Position in Reading,” Preprint submitted to Elsevier Preprint, 1998, pp. 1-50. |
Engbert et al., “SWIFT: A Dynamical Model of Saccade Generation during Reading,” Psychological Review, 2005, vol. 112, No. 4, pp. 777-813. |
Findlay et al., “Saccade target selection in visual search: the effect of information from the previous fixation,” Vision Research, 2001, vol. 41, pp. 87-95. |
Kliegl et al., “Tracking the Mind During Reading: The Influence of Past, Present, and Future Words on Fixation Durations,” J. of Exp. Psyc., 2006, vol. 135, No. 1, pp. 12-35. |
McConkie et al., “Eye Movement Control During Reading: II. Frequency of Refixating a Word,” Center for the Study of Reading, 1989, Technical Report No. 469, 30 pages. |
Nazir et al., “Letter visibility and word recognition: The optimal viewing position in printed words,” Perception & Psychophysics, 1992, vol. 52, No. 3, pp. 315-328. |
O'Regan et al., “Convenient Fixation Location Within Isolated Words of Different Length and Structure,” Journal of Experimental Psychology, 1984, vol. 10, No. 2, pp. 250-257. |
Rayner et al., “Asymmetry of the effective visual field in reading,” Perception & Psychophysics, 1980, vol. 27, No. 6, pp. 537-544. |
Rayner, “Eye Movements in Reading and Information Processing: 20 Years of Research,” Psychological Bulletin, 1998, vol. 124, No. 3, pp. 372-422. |
Schoonbaert et al., “Letter position coding in printed word perception: Effects of repeated and transposed letters,” Language and Cognitive Processes, 2004, vol. 19, No. 3, pp. 333-367. |
Staub et al., “Eye movements and on-line comprehension processes,” The Oxford handbook of Psycholinguistics, 2007, Chapter 19, pp. 327-342. |
Stevens et al., “Letter visibility and the viewing position effect in visual word recognition,” Perception & Psychophysics, 2003, vol. 65, No. 1, pp. 133-151. |
Vitu et al., “Optimal landing position in reading isolated words and continuous text,” Perception & Psychophysics, 1990, vol. 47, No. 6, pp. 583-600. |
Vitu, “The influence of parafoveal preprocessing and linguistic context on the optimal landing position effect,” Perception & Psychophysics, 1991, vol. 50, pp. 58-75. |
Bruijn et al., “RSVP Browser: Web Browsing on Small Screen Devices,” Personal and Ubiquitous Computing, 2002, vol. 6, Issue 4, 9 pages. |
International Search Report and Written Opinion issued in International Application No. PCT/US2013/050081 dated Oct. 31, 2013, 15 pages. |
International Search Report and Written Opinion issued in International Application No. PCT/US2013/077883 dated Apr. 16, 2014, 27 pages. |
Android Push Notifications, http://developer.android.com/guide/topics/ui/notifiers/notifications.html, 17 pages. |
International Search Report and Written Opinion issued in International Application No. PCT/US2014/054867 dated Jan. 2, 2015, 15 pages. |
“How to Display Closed Captions,” posted no later than Jul. 2016, http://library.med.utah.edu/neurologicexam/html/how_to_display_closed_captions.html, 6 pages. |
Extended European Search Report issued in European Patent Application No. 14842003.7 dated Jan. 5, 2017, 9 pages. |
Office Action issued in European Patent Application No. 13 816 590.7 dated Feb. 15, 2017, 5 pages. |
Lo, “Chinese character recognition: studies of complexity effect on recognition efficiency, spatial frequency characteristics, crowding and expertise,” PhD Thesis, 2013, The University of Hong Kong, 181 pages. |
Forster, “Visual perception of rapidly presented word sequences of varying complexity,” Perception & Psychophysics, 1970, vol. 8, Issue 4, pp. 215-221. |
Office Action issued in Japanese Patent Application No. 2015-521810 dated May 16, 2017, 12 pages. |
Number | Date | Country | |
---|---|---|---|
20170228132 A1 | Aug 2017 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 13730163 | Dec 2012 | US |
Child | 13973835 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 13973835 | Aug 2013 | US |
Child | 15495737 | US |