1. Field of the Invention
This invention relates generally to electronic display devices, and more specifically, to enhancing the representation of image data on electronic display devices.
2. Background Discussion
Typically, reading text on an actively lit display appears to increase eye strain in comparison to reading print on an actual paper medium. In addition, some users of electronic reading devices may simply prefer the appearance of an actual paper medium to the look of electronic image data (e.g., text) displayed on an electronic display such as a computer screen, PDA, e-reader, smart phone, etc.
Thus, embodiments of the present invention are directed to enhancing the visual representation of image data on an electronic display.
Accordingly, embodiments of the present invention are directed to a method and apparatus for enhancing the representation of image data that is displayed on an electronic display. Particularly, according to embodiments of the present invention, image data may be displayed on an electronic display in a manner that simulates the visual appearance of an actual paper medium (e.g., the paper utilized in printed novels).
One embodiment of the present invention is directed to an apparatus including a display having a surface that displays image data. A processing device processes and provides image data to the display. A camera device is associated with the display and operatively coupled to the processing device. The camera device dynamically detects a user's head position relative to the surface of the display and determines the incident light surrounding the display. The detected head position and the incident light are then processed by the processing device for rendering the image data on the display to resemble a representation of an actual paper medium.
Another embodiment of the present invention is directed to a method of controlling the appearance on a display having a surface that displays image data. The method includes determining incident light levels surrounding the display. A user's head position is then determined relative to the surface of the display. The incident light levels and the user's head position are processed for rendering image data on the display that resembles a representation of an actual paper medium.
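For illustration, the mapping from measured conditions to paper-like display settings might look like the following Python sketch. The reflectance and illuminance constants and the specific brightness/contrast formulas are assumptions for the sake of the example, not part of the claimed method.

```python
import math

PAPER_REFLECTANCE = 0.85   # assumed diffuse reflectance of typical book paper
COMFORT_LUX = 300.0        # assumed comfortable reading illuminance

def paper_rendering_parameters(ambient_lux: float, head_angle_deg: float) -> dict:
    """Map measured ambient light and head angle to display settings that
    approximate how a printed page would appear under the same conditions."""
    # Paper brightness follows the room light instead of glowing at full power.
    brightness = min(1.0, (ambient_lux / COMFORT_LUX) * PAPER_REFLECTANCE)
    # Like paper, perceived contrast falls off gently at oblique viewing angles.
    contrast = 0.15 + 0.85 * max(0.0, math.cos(math.radians(head_angle_deg)))
    return {
        "background": (0.96, 0.95, 0.90),  # warm off-white, paper-like tint
        "brightness": brightness,
        "contrast": contrast,
    }

# Example: dimly lit room, user viewing the display slightly off-axis.
print(paper_rendering_parameters(ambient_lux=120.0, head_angle_deg=20.0))
```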
Yet another embodiment of the present invention is directed to determining the user's eye position and providing, based on the user's determined eye position, enhanced lighting to a first region of the display where the user is predicted to be observing. Shading is provided, also based on the user's determined eye position, to a second region of the display where the user is predicted not to be observing.
Yet another embodiment of the present invention is directed to calculating a time period corresponding to an interval between the user changing page content associated with the display and advancing to a next page of content, and to predicting the region of the display where the user is observing based on the calculated time period and the user's determined eye position. For example, when a user finishes reading a page of image data (e.g., the text of a book) and manually activates the device (e.g., an e-reader) to display the next page of text, the time interval between such display events may be used to predict how long it takes the user to complete the process of reading a page of displayed text. The time it takes for the user to complete the process of reading a page may be defined as the above-described time period. Further, an average value of such time periods may be used to account for a user's decreased reading speed at the end of a reading session compared to when the user first begins to read. Using this calculated time period, an automatic function may cause the device to change the displayed text to the next page automatically, without the need for the user to manually activate the device (e.g., the e-reader) to display the next page of text. Also, assuming that the user reads the displayed text from the top of the display to the bottom at a substantially constant speed, the device may be able to predict and highlight which region or sentence the user is reading. Alternatively, reasonably accurate systems using infrared sources and infrared cameras are available for detecting where the user is reading.
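A minimal sketch of this timing-based prediction is given below. The class name, the use of a simple session average, and the assumption of a constant top-to-bottom reading speed are illustrative choices rather than requirements of the embodiment.

```python
import time

class ReadingTimePredictor:
    """Tracks intervals between page turns and predicts the reading position."""

    def __init__(self):
        self._page_shown_at = None
        self._intervals = []   # seconds the user spent on each completed page

    def on_page_displayed(self):
        self._page_shown_at = time.monotonic()

    def on_page_turned(self):
        # Called when the user manually advances to the next page.
        if self._page_shown_at is not None:
            self._intervals.append(time.monotonic() - self._page_shown_at)
        self._page_shown_at = time.monotonic()

    @property
    def average_page_time(self) -> float:
        # A session average smooths out slower reading near the end of a session.
        if not self._intervals:
            return float("inf")
        return sum(self._intervals) / len(self._intervals)

    def should_auto_advance(self) -> bool:
        # Auto-advance once the average page-reading time has elapsed.
        return (self._page_shown_at is not None and
                time.monotonic() - self._page_shown_at >= self.average_page_time)

    def predicted_line(self, lines_per_page: int) -> int:
        # Assume a substantially constant top-to-bottom reading speed.
        if self._page_shown_at is None or not self._intervals:
            return 0
        fraction = min((time.monotonic() - self._page_shown_at) / self.average_page_time, 1.0)
        return int(fraction * (lines_per_page - 1))
```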
Yet another embodiment of the present invention is directed to providing a book genre and providing simulated effects based on the provided book genre. The simulated effects may include media data that is reproduced based on the user observing a particular one or more locations on the display that are determined by the predicting of the region of the display where the user is observing.
Yet another embodiment of the present invention is directed to saving the calculated time period with user-login information associated with the user, and accessing the calculated time period upon the user entering the user-login information. The accessed time period and a further eye position determination are utilized for predicting the region of the display where the user is observing.
Yet another embodiment of the present invention is directed to providing a book genre and processing the book genre such that the rendered image data on the display resembles the representation of an actual paper medium corresponding to the provided book genre. The processing may include graphically displaying a binding at a middle location of the representation of an actual paper medium such that content data associated with the image data is enlarged in the proximity of the graphically displayed middle binding.
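One possible way to realize the enlargement near a simulated center binding is sketched below; the maximum enlargement factor is a hypothetical parameter chosen for illustration.

```python
def binding_scale(x: float, page_width: float, max_boost: float = 0.15) -> float:
    """Return the enlargement factor for content at horizontal position x, so
    that text in the proximity of the simulated center binding (at
    page_width / 2) is drawn slightly larger than text near the outer edges."""
    center = page_width / 2.0
    nearness = 1.0 - min(abs(x - center) / center, 1.0)  # 1.0 at the binding, 0.0 at the edges
    return 1.0 + max_boost * nearness

# Example: on an 800-pixel-wide page, content 100 pixels from the binding.
print(binding_scale(x=500.0, page_width=800.0))
```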
Yet another embodiment of the present invention is directed to a non-transitory computer-readable recording medium for storing a computer program for controlling the appearance on a display having a surface that displays image data. The program includes determining incident light levels surrounding the display; determining a user's head position relative to the surface of the display; and then processing the incident light levels and the user's head position for rendering image data on the display that resembles a representation of an actual paper medium.
Yet another embodiment of the present invention is directed to an apparatus comprising a display having a surface that displays image data; a processing device for processing and providing image data to the display; and a camera device associated with the display and communicatively coupled to the processing device. The camera device dynamically detects changes in a user's head position and changes in movement of at least one of the user's eyes in order to provide a dynamic bookmark. The dynamic bookmark may include a highlighted portion of displayed text that is determined based on the processing of the detected changes in the user's head position and the changes in the movement of at least one of the user's eyes.
Other embodiments of the present invention include the methods described above but implemented using apparatus or programmed as computer code to be executed by one or more processors operating in conjunction with one or more electronic storage media.
To the accomplishment of the foregoing and related ends, certain illustrative aspects of the invention are described herein in connection with the following description and the annexed drawings. These aspects are indicative, however, of but a few of the various ways in which the principles of the invention may be employed, and the present invention is intended to include all such aspects and their equivalents. Other advantages, embodiments and novel features of the invention may become apparent from the following description of the invention when considered in conjunction with the drawings. The following description, given by way of example, but not intended to limit the invention solely to the specific embodiments described, may best be understood in conjunction with the accompanying drawings.
It is noted that in this disclosure and particularly in the claims and/or paragraphs, terms such as “comprises,” “comprised,” “comprising,” and the like can have the meaning attributed to them in U.S. patent law; that is, they can mean “includes,” “included,” “including,” “including, but not limited to” and the like, and allow for elements not explicitly recited. Terms such as “consisting essentially of” and “consists essentially of” have the meaning ascribed to them in U.S. patent law; that is, they allow for elements not explicitly recited, but exclude elements that are found in the prior art or that affect a basic or novel characteristic of the invention. These and other embodiments are disclosed in, or are apparent from and encompassed by, the following description. As used in this application, the terms “component” and “system” are intended to refer to a computer-related entity, either hardware, a combination of hardware and software, software, or software in execution. For example, a component may be, but is not limited to being, a process running on a processor, a processor, an object, an executable, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on a server and the server can be a component. One or more components may reside within a process and/or thread of execution, and a component may be localized on one computer and/or distributed between two or more computers.
In the context of embodiments of the present invention, the electronic display 102 may represent any powered (e.g., powered by battery, powered by a power supply adaptor, and/or powered by an alternative energy source such as solar energy) or unpowered (e.g., no internal power source) display medium that is operatively driven by a processing device (e.g., computer, PDA, cell-phone, smart-phone, e-reader, etc.). The electronic display 102 may be, for example, an LCD, a plasma display, or any display unit suitable for displaying text data, data represented by pixels, image data, and/or a combination thereof. The electronic display 102 and image processing unit 104 may be integrated within a single unit such as an e-reader. Alternatively, the electronic display 102 and image processing unit 104 may be formed by separate components such as a computer tower and a computer monitor. The electronic display 102 includes one or more camera image sensors 106 (e.g., a CCD or a CMOS active-pixel sensor), and one or more additional sensor devices 108 (e.g., a microphone or accelerometer).
The image data access device 103 includes an image data storage unit 128 and an image data reader 130. The image data storage unit 128 may include any memory storage device (e.g., Compact Flash Memory device) capable of storing the image data that is to be displayed on the electronic display 102. Image data reader 130 includes the requisite circuitry for accessing or reading the image data from the image data storage unit 128.
Once read, the image data reader 130 sends the image data directly to the image effects generating unit 114. The image data reader 130 also simultaneously sends the image data to the embedded audio visual extraction unit 118 and the genre determination unit 120 for additional processing. The image effects generating unit 114 displays the image data on the electronic display 102 in such a manner that the image data on the display 102 simulates or resembles the representation of an actual paper medium. A representation of an actual paper medium may include, for example, reproducing the appearance of the pages of a paperback novel, a hardback novel, a children's book, etc. The image effects generating unit 114 also displays the image data with additional visual and/or audio effects based on the processing results from the embedded audio visual extraction unit 118 and the genre determination unit 120.
The embedded audio visual extraction unit 118 searches for and extracts audio and/or visual files that are embedded in the image data that is received from the image data access device 103. For example, image data associated with displaying the text of a book may include an embedded visual file that produces a customized graphical appearance associated with the printed version of the book. Alternatively, for example, the embedded visual file may produce a highlighted-link for certain textual words. By opening the highlighted-link, the user is able to obtain additional information corresponding to the textual word represented by the link. For example, if the link associated with the word is “the Cotswolds,” by selecting this link, a screen overlay may appear within the display area 138, which provides additional information regarding the Cotswolds region of England. The embedded visual file may also produce a hyperlink for certain textual words. By opening the hyperlink, the user is able to obtain additional information corresponding to the textual word over the Internet. For example, if the hyperlink associated with the word is “the Cotswolds,” by selecting this link, a web browser may appear on the display, which receives additional downloaded information regarding the Cotswolds region of England. According to another example, image data associated with displaying the text of the book may include an embedded audio file that provides mood music corresponding to the particular passage of the story line. For instance, if the textual passage displayed on the electronic display 102 corresponds to a death, the audio file will include a slow, melancholy playback tune.
The genre determination unit 120 determines the genre of the image data that is read from the image data reader 130. This determination may be ascertained by, for example, analyzing the text associated with the image data, or by accessing metadata that may accompany the image data and provide genre information.
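A simplified sketch of such a genre determination follows; the metadata key and the keyword table are hypothetical placeholders, and a practical unit might instead use a trained text classifier.

```python
# Hypothetical keyword table; a practical unit might use a trained classifier.
GENRE_KEYWORDS = {
    "horror": {"ghost", "haunted", "scream", "grave"},
    "romance": {"love", "heart", "kiss"},
    "mystery": {"detective", "clue", "alibi"},
}

def determine_genre(metadata: dict, text: str) -> str:
    """Determine a book genre from accompanying metadata, falling back to a
    crude keyword analysis of the text itself."""
    if metadata.get("genre"):
        return str(metadata["genre"]).lower()
    words = set(text.lower().split())
    scores = {genre: len(words & keywords) for genre, keywords in GENRE_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unknown"

# Example: no metadata is present, so the text is scanned for genre keywords.
print(determine_genre({}, "The detective followed the only clue to the haunted house."))
```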
The effect plug-in 122 also provides a means for updating or adding additional visual and/or audio effects to the image data that is reproduced by the image effects generating unit 114. For example, one type of plug-in option may visually generate a center binding 136 to be displayed within the display area 138 of the electronic display 102. Another type of plug-in option may, for example, visually generate a three-dimensional effect, whereby the background of any displayed text appears to drop away from the text and into a swirling depth (not shown).
The embedded audio visual extraction unit 118, the genre determination unit 120, and the effect plug-in 122 all provide additional functionality and visual effects to the image data. However, the functionality of one or more of these units 118, 120, 122 may be enabled or disabled at the discretion of the user of the apparatus 100. Even if the functionality of all of units 118, 120, and 122 is disabled, the main function of the apparatus 100, which includes the rendering of image data on an electronic display to resemble an actual paper medium, remains intact via the processing capabilities of the image effects generating unit 114. Such a rendering of image data on an electronic display provides a reduction in eye strain, where the eye strain is generally caused by, among other things, the glare generated by existing electronic display devices.
Once the image data has been processed by the image effects generating unit 114 and optionally by any one or more of the other additional units 118, 120, 122, the audio/visual display unit driver 124 formats the image data for being displayed on the electronic display 102. Additionally, the audio/visual display unit driver 124 may process any audio data that is embedded within or accompanies the image data for the purpose of playing back such audio data via any speakers (not shown) associated with the apparatus 100.
The camera image receiving unit 110 receives image frames from one or both of the camera image sensors 106. The camera image sensors 106 are operative to generate image frames of a user's head relative to the electronic display 102. The camera image sensors 106 are also adapted to provide a measure of the incident light surrounding the electronic display 102. For example, the camera image receiving unit 110 may include a data buffering device that buffers the received image frames. The camera image processing unit 112 retrieves the buffered image frames from the camera image receiving unit 110 for further digital signal processing. For example, the digital signal processing may include determining the user's head within each frame using image recognition techniques and providing a measure of the angular relationship between the user's head and the surface of the electronic display 102.
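For illustration, the angular relationship and the incident light measure might be approximated from a camera frame as sketched below. The field-of-view constant and the use of mean frame luminance as a light proxy are simplifications, and the face detection itself (locating `face_center_x`) is assumed to be handled elsewhere.

```python
CAMERA_HFOV_DEG = 60.0   # assumed horizontal field of view of the camera sensor

def head_angle_from_frame(face_center_x: float, frame_width: int) -> float:
    """Approximate the horizontal angle between the user's head and the normal
    of the display surface, given the detected face center in the frame."""
    # Normalized offset of the face from the frame center, in [-0.5, 0.5].
    offset = (face_center_x - frame_width / 2.0) / frame_width
    return offset * CAMERA_HFOV_DEG

def incident_light_measure(gray_frame) -> float:
    """Use the mean pixel intensity of a grayscale frame as a rough proxy for
    the incident light surrounding the display (0.0 dark .. 1.0 bright)."""
    total = sum(sum(row) for row in gray_frame)
    pixels = len(gray_frame) * len(gray_frame[0])
    return total / pixels / 255.0

# Example: face detected three-quarters of the way across a 640-pixel-wide frame.
print(head_angle_from_frame(face_center_x=480.0, frame_width=640))
```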
The other rendering properties utilized by the image effects generating unit 114 may include the visual properties associated with the actual ink that is used on a particular paper medium, the simulation of the diffuse appearance associated with some paper media, and the simulation of various lighting conditions favoring a person's eye sight when reading a paper copy of a book.
Referring back to the sensor unit 116, sensory signals received from the one or more additional sensor devices 108 may be processed by the sensor unit 116 and used to generate corresponding commands for the image effects generating unit 114.
For example, a signal from an accelerometer (not shown) may be transferred to the image effects generating unit 114 for the purpose of commanding the unit 114 to display the next page of rendered image data on the electronic display 102. According to another example, a signal from a temperature sensor (not shown) may be transferred to the image effects generating unit 114 for the purpose of commanding the unit 114 to temporarily freeze (i.e., pause until reactivated) the rendered image data on the electronic display 102. In this scenario, a predefined change in temperature will likely indicate that the user has moved from their current location momentarily (e.g., stepping off a train, stepping out of the house into the open air, etc.). A microphone (not shown) may facilitate receiving voice commands from the user. For example, the sensor unit 116 may receive voice command signals from the microphone and generate a corresponding command (e.g., load the next page of rendered image data) using voice recognition technology. Also, a light sensor (not shown) may be utilized to detect sudden changes in the surrounding light. By receiving such changes, the sensor unit 116 may generate a compensation command to the image effects generating unit 114. For example, the compensation command may instruct the image effects generating unit 114 to momentarily (e.g., up to approximately 20 seconds) dim the electronic display in response to bright light suddenly surrounding the electronic display 102. Alternatively, for example, the compensation command may instruct the image effects generating unit 114 to momentarily (e.g., up to approximately 20 seconds) brighten the electronic display in response to a sudden drop in light surrounding the electronic display 102.
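The sensor-to-command mapping described in this paragraph could be organized as a simple dispatch function, as sketched below; the threshold constants and event names are illustrative assumptions rather than part of the described apparatus.

```python
TEMP_CHANGE_THRESHOLD_C = 3.0   # assumed temperature change suggesting the user moved
LIGHT_CHANGE_THRESHOLD = 0.4    # assumed fractional change in surrounding light

def sensor_command(event: str, value, previous=None):
    """Translate a raw sensor reading into a command for the image effects
    generating unit; returns None when no action is needed."""
    if event == "accelerometer_flick":
        return "NEXT_PAGE"
    if event == "temperature" and previous is not None \
            and abs(value - previous) >= TEMP_CHANGE_THRESHOLD_C:
        return "FREEZE_PAGE"            # pause the rendered page until reactivated
    if event == "light" and previous is not None:
        delta = value - previous
        if delta >= LIGHT_CHANGE_THRESHOLD:
            return "DIM_DISPLAY"        # bright light suddenly surrounds the display
        if delta <= -LIGHT_CHANGE_THRESHOLD:
            return "BRIGHTEN_DISPLAY"   # sudden drop in surrounding light
    if event == "voice" and value == "next page":
        return "NEXT_PAGE"              # recognized voice command
    return None

# Example: ambient light jumps from 0.2 to 0.9 of full scale.
print(sensor_command("light", 0.9, previous=0.2))
```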
The embedded audio/visual processing unit 202 receives the audio and/or visual files or metadata that are extracted by the embedded audio/visual extraction unit (shown as element 118).
The genre based processing unit 204 generates display artifacts and visual effects based on detected genre information received from the genre determination unit 120.
The icon generation unit 206 provides the option of generating one or more icons within, for example, the border of the display area 138.
The plug-in effect processing unit 208 receives one or more programs, files, and/or data for updating or adding additional visual and/or audio effects to the image data via the effect plug-in 122.
The image recognition data receiving unit 210 receives processed image frames from the camera image processing unit 112.
For example, the image recognition data receiving unit 210 may provide additional image recognition functionality such as the detection of eye movement (e.g., movement of the iris) as the user reads a page of displayed text. For instance, the movement of the iris as a page is read from top to bottom may be used as a reference movement. The actual movement of the iris of the user's eye is then correlated with this reference movement in order to determine the location on the page where the user is reading. The image recognition data receiving unit 210 may then provide the GPU 216 with predicted coordinate data for ascertaining the position (e.g., the line of text) where the user is reading with respect to the electronic display 102.
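A rough sketch of correlating the detected iris position with the top-to-bottom reference movement is shown below; the calibration coordinates are assumed to have been captured beforehand (for example, while the user reads a known page).

```python
def predicted_text_line(iris_y: float, calib_top_y: float, calib_bottom_y: float,
                        lines_per_page: int) -> int:
    """Correlate the current vertical iris position with the reference
    top-to-bottom movement captured during calibration and return the index
    of the line of text the user is most likely reading."""
    span = calib_bottom_y - calib_top_y
    if span <= 0 or lines_per_page <= 1:
        return 0
    fraction = (iris_y - calib_top_y) / span
    fraction = max(0.0, min(1.0, fraction))  # clamp to the displayed page
    return int(round(fraction * (lines_per_page - 1)))

# Example: iris roughly two-thirds of the way down its calibrated range.
print(predicted_text_line(iris_y=0.62, calib_top_y=0.30, calib_bottom_y=0.78,
                          lines_per_page=30))
```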
According to another embodiment of the present invention, the predicted coordinate data may also be used as a dynamic bookmark, whereby if the user suddenly turns or moves away from the display 102, the portion of displayed text that the user was last predicted to be reading may be highlighted so that the user can readily resume reading.
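A dynamic bookmark along these lines might be maintained as in the following sketch, where the look-away signal and the currently predicted line (e.g., from the previous sketch) are assumed inputs.

```python
class DynamicBookmark:
    """Remembers the line being read at the moment the user looks away."""

    def __init__(self):
        self.bookmarked_line = None

    def update(self, user_facing_display: bool, current_line: int):
        if not user_facing_display and self.bookmarked_line is None:
            self.bookmarked_line = current_line   # user turned away: mark the spot
        elif user_facing_display:
            self.bookmarked_line = None           # user resumed reading: clear the mark

    def highlight_target(self):
        return self.bookmarked_line               # line to highlight, or None
```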
The angle orientation and sensor data receiving unit 214 receives processed sensory information from the sensor unit 116.
The GPU 216 includes a graphics shader unit 218 and an image display compensation unit 220. The graphics shader unit 218 provides the necessary instructions (e.g., software) for execution by the GPU 216. For example, the graphics shader unit 218 may include graphics software libraries such as OpenGL and Direct3D. The angle orientation and sensor data receiving unit 214 and the image recognition data receiving unit 210 provide the graphics shader unit 218 with programming and/or data associated with the user's head position, the incident light surrounding the display 102, and the angular orientation of the electronic display 102. The graphics shader unit 218 then utilizes this programming and/or data to render the image data resembling an actual paper medium on the electronic display, while compensating for changes in incident light levels and the user's head position relative to the electronic display.
An optimal angle (θ0) may be defined between the user's head position and the surface of the display 102, representing the viewing angle at which the rendered image data most closely resembles an actual paper medium.
Based on the changes in the traits (e.g., color, z depth, and/or alpha value) of each pixel on the display 102 that result from incident light intensity changes and deviations from the optimal angle (i.e., θ0), the graphics shader unit 218 provides the necessary programming/commands for correcting the changed traits of each pixel accordingly, via the audio/visual display unit driver 124.
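The per-pixel correction could take a form like the sketch below; the correction factors are illustrative, and an actual implementation would express this as shader code (e.g., for OpenGL or Direct3D) executed by the GPU 216 rather than as Python.

```python
import math

OPTIMAL_ANGLE_DEG = 0.0   # theta_0: assumed optimal (head-on) viewing angle

def compensate_pixel(rgb, alpha, ambient: float, view_angle_deg: float):
    """Adjust a pixel's color and alpha value for the current ambient light
    level (0..1) and the deviation of the viewing angle from theta_0."""
    # Paper darkens with the room, so scale luminance toward the ambient level.
    light_gain = 0.3 + 0.7 * ambient
    # Off-axis viewing washes out a backlit panel; weight the correction by the
    # cosine of the deviation to mimic the stable look of ink on paper.
    deviation = abs(view_angle_deg - OPTIMAL_ANGLE_DEG)
    angle_gain = max(0.0, math.cos(math.radians(deviation)))
    r, g, b = (min(1.0, c * light_gain * (0.8 + 0.2 * angle_gain)) for c in rgb)
    a = min(1.0, alpha * (0.9 + 0.1 * (1.0 - angle_gain)))
    return (r, g, b), a

# Example: mid-gray pixel, moderately bright room, 25-degree viewing angle.
print(compensate_pixel((0.5, 0.5, 0.5), 1.0, ambient=0.6, view_angle_deg=25.0))
```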
The GPU 216 also includes the image display compensation unit 220. The image display compensation unit 220 provides real-time compensation for the displayed images based on sudden changes in light intensity surrounding the display 102.
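One way to realize this momentary compensation is sketched below; the twenty-second window matches the approximate duration mentioned above, while the maximum offset and the linear decay are illustrative assumptions.

```python
COMPENSATION_SECONDS = 20.0   # approximate duration of the momentary adjustment
MAX_OFFSET = 0.25             # illustrative cap on the brightness adjustment

def momentary_offset(ambient_delta: float, elapsed_s: float) -> float:
    """Return a temporary brightness offset applied after a sudden ambient
    change: a positive delta (suddenly brighter surroundings) yields momentary
    dimming, a negative delta yields momentary brightening, and the offset
    decays to zero over roughly twenty seconds."""
    if elapsed_s >= COMPENSATION_SECONDS or ambient_delta == 0.0:
        return 0.0
    decay = 1.0 - elapsed_s / COMPENSATION_SECONDS
    direction = -1.0 if ambient_delta > 0 else 1.0
    return direction * min(abs(ambient_delta), MAX_OFFSET) * decay

# Example: five seconds after the surrounding light suddenly increased.
print(momentary_offset(ambient_delta=0.6, elapsed_s=5.0))
```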
At step 304, the user's head position relative to the surface of the electronic display 102 is determined by utilizing, for example, the camera image receiving unit 110, the camera image processing unit 112, and the angle orientation and sensor data receiving unit 214 (i.e., display tilt detection). Also, using the camera image receiving unit 110 and the camera image processing unit 112, the incident light surrounding the electronic display is determined (step 306).
At step 308, the graphics shader unit 218 processes the user's determined head position and the measured incident lighting conditions (i.e., light intensity) surrounding the display 102 to generate visual data that renders the image data on the electronic display to resemble the representation of an actual paper medium. It is then determined whether other visual effects are activated or enabled (step 310). If the other additional visual effects are not activated or enabled (step 310), the processed image data (step 308) resembling the representation of an actual paper medium is displayed on the electronic display 102, as shown in step 314. If, however, the other additional visual effects are activated or enabled (step 310), additional visual data is provided for rendering the image data on the electronic display 102 (step 312). The additional visual data for rendering the image data on the display 102 is described below.
If the image data does not include embedded visual and/or audio data, genre information is extracted by the genre based processing unit 204 from the image data (step 408). The graphics shader unit 218 then generates corresponding graphical effect data (e.g., a gothic display theme for a horror genre) based on the extracted genre information (step 410).
At step 412, additional graphical data may optionally be provided for display with the image data by the plug-in effect processing unit 208. Based on the additional graphical data provided by the plug-in effect processing unit 208, the graphics shader unit 218 generates graphical effects corresponding to the existing plug-in effect provided by unit 208 (e.g., a 3-D background effect).
At step 414, other optionally provided additional graphical data may be added to the displayed image data based on the use of image recognition techniques. For example, the image recognition data receiving unit 210 may identify at least one eye of a user and track the movement of this eye in order to predict a location (e.g., a particular line of displayed text) on the display 102 that the user is observing. Once the predicted location is determined, the graphics shader unit 218 may, for example, generate a highlighted box 502 at the predicted location.
At step 416, further additional graphical data may also be added to the displayed image data in the form of graphical icons and/or highlighted (e.g., bolded) selectable (e.g., via cursor or touch screen) text. The icon generation unit 206 generates selectable icons or highlighted text based on certain words that exist in the text of the image data.
Another embodiment of the present invention is directed to mounting a video camera on a device, such as a PLAYSTATION®, that is adapted to sample ambient lighting and to modify the display characteristics based on the sensed ambient light. Thus, the camera, in addition to sensing a user's head position, is also used to sense ambient light. The location of the reader device may also be tracked, typically utilizing GPS satellite locating techniques.
It is to be understood that the present invention can be implemented in various forms of hardware, software, firmware, special purpose processes, or a combination thereof. In one embodiment, at least parts of the present invention can be implemented in software tangibly embodied on a computer readable program storage device. The application program can be downloaded to, and executed by, any device comprising a suitable architecture.
The particular embodiments disclosed above are illustrative only, as the invention may be modified and practiced in different but equivalent manners apparent to those skilled in the art having the benefit of the teachings herein. Furthermore, no limitations are intended to the details of construction or design herein shown, other than as described in the claims below. It is therefore evident that the particular embodiments disclosed above may be altered or modified and all such variations are considered within the scope and spirit of the invention. Although illustrative embodiments of the invention have been described in detail herein with reference to the accompanying drawings, it is to be understood that the invention is not limited to those precise embodiments, and that various changes and modifications can be effected therein by one skilled in the art without departing from the scope and spirit of the invention as defined by the appended claims.