Augmented reality refers to a technology platform that merges the physical and virtual worlds by augmenting real-world physical objects with virtual objects. For example, a real-world physical newspaper may be out of date the moment it is printed, but an augmented reality system may be used to recognize an article in the newspaper and to provide up-to-date virtual content related to the article. While the newspaper generally represents a static text and image-based communication medium, the virtual content need not be limited to the same medium. Indeed, in some augmented reality scenarios, the newspaper article may be augmented with audio and/or video-based content that provides the user with more meaningful information.
Some augmented reality systems operate on mobile devices, such as smart glasses, smartphones, or tablets. In such systems, the mobile device may display its camera feed, e.g., on a touchscreen display of the device, augmented by virtual objects that are superimposed in the camera feed to provide an augmented reality experience or environment. In the newspaper example above, a user may point the mobile device camera at the article in the newspaper, and the mobile device may show the camera feed (i.e., the current view of the camera, which includes the article) augmented with a video or other virtual content, e.g., in place of a static image in the article. This creates the illusion of additional or different objects than are actually present in reality.
The following detailed description references the drawings.
A “computing device” or “device” may be a desktop computer, laptop (or notebook) computer, workstation, tablet computer, mobile phone, smart phone, smart device, smart glasses, or any other processing device or equipment which may be used to provide an augmented reality experience.
Speed reading techniques may improve a user's reading speed. Some speed reading techniques are implemented on a computing device, and computing devices implementing augmented reality may provide a mechanism by which to implement such techniques. In some examples, wearable augmented reality devices may allow a user to interact with traditional printed media, such as newspapers, books, magazines, pamphlets, flyers, etc., and implement a speed reading technique on the device. However, implementing speed reading techniques via a computing device in an augmented reality experience may interfere with the display of other physical and virtual objects in that experience.
To address these issues, in the examples described herein, display of a speed reading pattern on a computing device in an augmented reality mode may provide an augmented reality experience in which speed reading techniques are implemented without limiting the display of other physical and virtual objects. In such examples, the computing device may display recognized text data as pop-ups or via dedicated portions of the display. The computing device may also indicate to a user that the display of recognized text has ended and that additional images are required to continue use of the speed reading pattern.
Referring now to the drawings, FIG. 1 is a block diagram of an example computing device 100 to display recognized text according to a speed reading pattern.
In examples described herein, a processing resource may include, for example, one processor or multiple processors included in a single computing device (as shown in FIG. 1) or distributed across multiple computing devices.
As used herein, a “machine-readable storage medium” may be any electronic, magnetic, optical, or other physical storage apparatus to contain or store information such as executable instructions, data, and the like. For example, any machine-readable storage medium described herein may be any of Random Access Memory (RAM), volatile memory, non-volatile memory, flash memory, a storage drive (e.g., a hard drive), a solid state drive, any type of storage disc (e.g., a compact disc, a DVD, etc.), and the like, or a combination thereof. Further, any machine-readable storage medium described herein may be non-transitory.
In the example of FIG. 1, computing device 100 includes a processing resource 110 and a machine-readable storage medium 120 encoded with instructions 122, 124, 126, and 128 executable by processing resource 110. In instructions 122, computing device 100 may perform text recognition on captured image data 105 to identify recognized text 107.
As used herein, “text recognition” may refer to a process of identifying text data in an image file and may include translating identified text from one language to another language. In the examples described herein, computing device 100 may perform text recognition via programming instructions executed by processing resource 110 to analyze an image, recognize text data, and translate any foreign-language text within the recognized text data. For example, an optical character recognition (OCR) system may be used by computing device 100 to recognize text, and a separate text translation technique may be applied to the recognized text to translate it. However, the examples are not limited thereto, and computing device 100 may recognize text by various computer vision techniques to identify words and characters in captured image data 105 and may cluster text in the captured images into groups by the media items to which the text is determined to belong. For example, a captured image may include images of more than one text-bearing physical object, such as an image including a street sign and a magazine cover. In such an example, computing device 100 may detect parallelograms in the captured image to identify physical objects which may include text; a parallelogram may correspond to, for example, a street sign or a book. Computing device 100 may use clustering techniques in two-dimensional space to group aggregates of recognized text that may be part of the same parallelogram. In such examples, computing device 100 may also apply various techniques to determine an orientation of the captured text as part of text recognition. In other examples, computing device 100 may use various other techniques, such as Bayesian techniques, to determine a location of text in a captured image and then recognize text in the determined locations.
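For purposes of illustration only, the following is a minimal Python sketch of such a pipeline, assuming OpenCV (cv2) for parallelogram detection and Tesseract (via pytesseract) for OCR; the function names and thresholds are hypothetical and not part of the examples described herein.

```python
import cv2
import pytesseract


def find_text_regions(image):
    """Detect roughly parallelogram-shaped regions (signs, pages) that may bear text."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    regions = []
    for contour in contours:
        approx = cv2.approxPolyDP(contour, 0.02 * cv2.arcLength(contour, True), True)
        if len(approx) == 4 and cv2.contourArea(approx) > 1000:  # four corners, non-trivial size
            regions.append(cv2.boundingRect(approx))  # (x, y, w, h)
    return regions


def recognize_text_by_region(image):
    """OCR each detected region separately, clustering text by the object it belongs to."""
    clusters = []
    for (x, y, w, h) in find_text_regions(image):
        roi = image[y:y + h, x:x + w]
        clusters.append(pytesseract.image_to_string(roi).strip())
    return clusters
```

A translation step could then be applied per cluster, so that text belonging to different physical objects is translated independently.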
In instructions 124, computing device 100 may display recognized text 107 on a display according to a speed reading pattern. As used herein, a “speed reading pattern” refers to a technique implemented on a display device to attempt to improve a user's speed of reading. Examples may include at least one of enlarging certain text on the display, blurring portions of the text on the display, rearranging an order of text on the display, etc. For example, a speed reading pattern may include presenting certain words from the recognized text serially in a defined region on the display, or greying out, partially obscuring, or shading all but certain words on the display sequentially to aid in speed reading those portions of the text. However, the examples are not limited thereto, and any speed reading pattern may be implemented on the display. The display may be any display device coupled to computing device 100 directly or indirectly, through a wired connection (e.g., a local area network (LAN), etc.) or a wireless connection (e.g., a wireless local area network (WLAN), Wi-Fi, Bluetooth, etc.). For example, the display may be a display of an augmented reality device such as smart glasses (e.g., Google Glass®), smartphones, mobile phones, tablets, phablets, head mounted displays, etc.
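As one concrete, hypothetical instance of presenting certain words serially in a defined region, a rapid serial visual presentation (RSVP) loop might be sketched in Python as follows; the fixed-width console region stands in for a display region and the rate is an illustrative assumption:

```python
import time


def rsvp(words, wpm=300):
    """Present one word at a time in a fixed region of the display."""
    delay = 60.0 / wpm  # seconds per word at the requested words-per-minute rate
    for word in words:
        print(f"\r{word:^30}", end="", flush=True)  # fixed-width stand-in for the display region
        time.sleep(delay)
    print()


rsvp("an augmented reality system may provide up-to-date virtual content".split())
```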
In some examples, computing device 100 may display recognized text 107 contemporaneously with the captured image to provide an augmented reality experience to a user. For example, the speed reading pattern may display a digital rendering or copy of recognized text 107 in a pop-up above the captured image, and the pop-up may travel over the portion of recognized text 107 currently being displayed as the user progresses through reading. In another example, recognized text 107 may be displayed in a separate graphical user interface (GUI) at a certain position on the display. In such a manner, captured image data 105 may be augmented to provide a speed reading pattern to a user of computing device 100. A user may control and/or manipulate the display of the augmented captured image via any input device, such as a keyboard, a mouse, a GUI, a motion detection sensor, gaze detection sensors, etc., to interact with the augmented reality environment provided by computing device 100.
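A hypothetical sketch of how such a traveling pop-up might be anchored, assuming per-word bounding boxes obtained via pytesseract (the helper names are illustrative assumptions):

```python
import pytesseract
from pytesseract import Output


def word_boxes(image):
    """Return (word, x, y, w, h) for each recognized word in the captured image."""
    data = pytesseract.image_to_data(image, output_type=Output.DICT)
    return [(data["text"][i], data["left"][i], data["top"][i],
             data["width"][i], data["height"][i])
            for i in range(len(data["text"])) if data["text"][i].strip()]


def popup_anchor(box, popup_height=40):
    """Place the pop-up directly above the word currently being displayed."""
    _, x, y, _, _ = box
    return (x, max(0, y - popup_height))
```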
In instructions 126, computing device 100 may determine when recognized text 107 in captured image data 105 has been displayed via the speed reading pattern. For example, computing device 100 may determine that recognized text 107 has been displayed by analyzing the text displayed via the speed reading pattern or by tracking the user's gaze movement to determine the user's progress in reading recognized text 107. In some examples, recognized text 107 displayed via the speed reading pattern may include links to additional information about recognized text 107, such as video data, image data, and text data. In such a manner, the speed reading pattern may incorporate other augmented reality data and characteristics.
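One hypothetical way such progress tracking might be implemented is sketched below; the class name and threshold are illustrative assumptions, not a prescribed implementation:

```python
class ProgressTracker:
    """Track how much of the recognized text has been shown via the speed reading pattern."""

    def __init__(self, recognized_words):
        self.total = len(recognized_words)
        self.shown = 0

    def mark_shown(self, count=1):
        self.shown = min(self.shown + count, self.total)

    def done(self, threshold=1.0):
        """True once all (threshold=1.0) or substantially all of the words have been shown."""
        return self.total > 0 and self.shown >= threshold * self.total
```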
In some examples, in instructions 128, computing device 100 may provide an indicator to capture additional image data in response to determining that recognized text 107 in captured image data 105 has been displayed. For example, when speed reading a book, computing device 100 may provide a pop-up instructing the user to turn a page of the book in response to determining that all of recognized text 107 in captured image data 105 of the current page has been displayed. In other examples, the indicator may include a virtual animation on the display (such as an arrow symbol), a change in color of the displayed captured image, or an auditory signal, such as a bell, an alarm, a chime, etc. In such examples, a speed reading pattern may be implemented in computing device 100 via an augmented reality experience without distracting from other objects displayed on computing device 100.
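Building on the hypothetical ProgressTracker sketched above, the indicator logic of instructions 128 might reduce to something like the following; the notification text and threshold are illustrative:

```python
def prompt_for_more_images(tracker, notify=print):
    """Once the current capture's text has been displayed, ask the user to re-capture."""
    if tracker.done(threshold=0.95):  # "substantially all" of the recognized text shown
        notify("\aEnd of recognized text - turn the page and capture a new image.")
        return True
    return False
```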
In some examples, instructions 122, 124, 126, and 128 may be part of an installation package that, when installed, may be executed by processing resource 110 to implement the functionalities described herein in relation to instructions 122, 124, 126, and 128. In such examples, storage medium 120 may be a portable medium, such as a CD, DVD, flash drive, or a memory maintained by a computing device from which the installation package can be downloaded and installed. In other examples, instructions 122, 124, 126, and 128 may be part of an application, applications, or component already installed on computing device 100 including processing resource 110. In such examples, the storage medium 120 may include memory such as a hard drive, solid state drive, or the like. In some examples, functionalities described herein in relation to FIG. 1 may be provided in combination with functionalities described herein in relation to any of FIGS. 2-3.
In some examples, the instructions can be part of an installation package that, when installed, can be executed by the processing resource to implement at least engines 212, 214, 216, and 218. In such examples, the machine-readable storage medium may be a portable medium, such as a CD, DVD, or flash drive, or a memory maintained by a computing device from which the installation package can be downloaded and installed. In other examples, the instructions may be part of an application, applications, or component already installed on system 210 including the processing resource. In such examples, the machine-readable storage medium may include memory such as a hard drive, solid state drive, or the like. In other examples, the functionalities of any engines of system 210 may be implemented in the form of electronic circuitry.
In the example of FIG. 2, a first device 200 may include a system 210 comprising engines 212, 214, 216, and 218, and a display 220. Text recognition engine 212 may perform text recognition on captured image data 205 to identify recognized text 207, as discussed above in relation to FIG. 1.
In some examples, presentation engine 214 may display recognized text 207 on display 220 according to a speed reading pattern. In some examples, the speed reading pattern may include one of the speed reading patterns discussed above with respect to FIG. 1. For example, presentation engine 214 may select and sequentially display small portions of recognized text 207 in a GUI on display 220.
Determination engine 216 may determine when recognized text 207 in the captured image has been displayed via the speed reading pattern on display 220. In some examples, determination engine 216 may determine that recognized text 207 has been displayed on display 220 by comparing the selected text displayed by presentation engine 214 with recognized text 207 determined by text recognition engine 212. In such an example, determination engine 216 may track the small portions of text displayed in the GUI associated with presentation engine 214 and may determine when all or substantially all of recognized text 207 identified by text recognition engine 212 has been displayed via the speed reading pattern.
Indicator display engine 218 may display an indicator on display 220 to capture additional image data in response to determining that recognized text 207 in captured image data 205 has been displayed. In some examples, the indicator may be a visual indicator provided to a user on display 220 via a virtual object superimposed on the captured image, such as a pop-up, a symbol (e.g., an arrow), a change in background color of the displayed captured image, etc. In other examples, the indicator may be an auditory signal, such as a bell, a chime, an alarm, etc., to indicate that recognized text 207 has been displayed via presentation engine 214. In such a manner, a user of first device 200 may determine when all or substantially all of recognized text 207 has been displayed.
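As an illustrative, non-authoritative sketch, the division of labor among engines 212, 214, 216, and 218 might be wired together as follows in Python; the callables standing in for the engines are assumptions:

```python
class SpeedReadingSystem:
    """Hypothetical wiring of the four engines described above."""

    def __init__(self, recognize, present, indicate):
        self.recognize = recognize  # engine 212: captured image -> list of words
        self.present = present      # engine 214: display one portion via the pattern
        self.indicate = indicate    # engine 218: signal that more image data is needed

    def run(self, image_data):
        words = self.recognize(image_data)
        shown = 0
        for word in words:
            self.present(word)
            shown += 1              # engine 216: track and compare display progress
        if shown == len(words):     # all recognized text has been displayed
            self.indicate()
```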
At 302 of method 300, computing device 100 may receive captured image data of a text-bearing physical object.
At 304, computing device 100 may perform optical character recognition to recognize a block of text in the captured image data.
At 306, computing device 100 may provide a translation of the recognized block of text. In the example of FIG. 3, the translation may be provided as discussed above in relation to FIG. 1.
At 308, computing device 100 may display the recognized block of text on a display according to a speed reading pattern. In the example of FIG. 3, the speed reading pattern may be any of the speed reading patterns discussed above in relation to FIG. 1.
At 310, computing device 100 may determine when the recognized block of text has been displayed via the speed reading pattern. Computing device 100 may determine the recognized block of text has been displayed via the speed reading pattern as discussed above with reference to FIGS. 1 and 2.
At 312, computing device 100 may provide an indicator to capture additional image data of the physical object in response to determining the recognized block of text has been displayed.
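Taken together, 302 through 312 might be sketched as a single loop; every callable below is an illustrative placeholder rather than an implementation prescribed by method 300:

```python
def method_300(capture, recognize, translate, display, indicate):
    """Hypothetical end-to-end loop over the steps 302-312 described above."""
    while True:
        image = capture()              # 302: receive captured image data
        block = recognize(image)       # 304: OCR a block of text in the image
        block = translate(block)       # 306: translate the recognized block
        for word in block.split():
            display(word)              # 308: display via the speed reading pattern
        # 310: the loop above completes once the block has been fully displayed
        indicate()                     # 312: prompt for additional image data
```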
Although the flowchart of FIG. 3 shows a specific order of performance of certain functionalities, method 300 is not limited to that order. For example, the functionalities shown in succession in the flowchart may be performed in a different order, may be executed concurrently or with partial concurrence, or a combination thereof.