The present application relates to technically inventive, non-routine solutions that are necessarily rooted in computer technology and that produce concrete technical improvements.
As recognized herein, many books and other printed materials were created before the advent of modern technology. As also recognized herein, modern technology such as augmented reality (AR) systems may be employed to provide enhanced user interfaces for people. However, there are currently no adequate ways to use existing AR systems to provide electronic user interfaces for users to more robustly engage with printed materials since the printed materials are often not tailored for such AR systems and were often published before modern computing systems came to market. There are currently no adequate solutions to the foregoing computer-related, technological problem.
Accordingly, in one aspect a headset includes at least one processor, at least one display accessible to the at least one processor, and storage accessible to the at least one processor. The storage includes instructions executable by the at least one processor to identify a non-electronic printed publication held by a user, where the non-electronic printed publication includes pages with text. The instructions are also executable to access electronic content related to the non-electronic printed publication based on the identification and to present an indication regarding the electronic content on the at least one display.
The non-electronic printed publication may be a book, a magazine, or a newspaper.
Additionally, in some examples the headset may include a camera accessible to the at least one processor, and the non-electronic printed publication may be identified based on input from the camera. Thus, in various examples the non-electronic printed publication may be identified based on input from the camera by one or more of: a title of the non-electronic printed publication, a cover or cover page of the non-electronic printed publication, an author of the non-electronic printed publication, one or more images shown in the non-electronic printed publication, and/or a universal product code (UPC) of the non-electronic printed publication.
Also in some examples, the electronic content may include a virtual three-dimensional object. Additionally or alternatively, the electronic content may include website content, audio, at least one image or video, and/or notes created by a first end-user different from a second end-user of the headset.
In some implementations, the indication of the electronic content may include the electronic content itself. Additionally or alternatively, the indication of the electronic content may include a link selectable to present the electronic content using the headset.
Also in some implementations, the indication regarding the electronic content may be presented responsive to identification of a user's line of sight as reaching a particular portion of the printed publication associated with the electronic content.
In another aspect, a method includes identifying, using a headset, a printed publication that includes physical pages with text. The method also includes accessing data related to electronic content that is associated with the printed publication based on the identifying, and then presenting an indication regarding the electronic content on an electronic display of the headset.
In some examples, the electronic content may be associated with the printed publication in a database of crowdsourced data, where the database may be accessed by the headset to present the indication.
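For illustration and not limitation, the following minimal Python sketch shows one possible way such a crowdsourced association database might be modeled and queried by a headset; the names and data values used (e.g., ContentRecord, lookup_content) are hypothetical rather than drawn from any particular implementation.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ContentRecord:
    """One crowdsourced association between a spot in a publication and content."""
    publication_id: str   # e.g., ISBN or UPC of the identified publication
    page: int             # page on which the associated word/phrase appears
    anchor_text: str      # word or phrase the content is tied to
    content_uri: str      # link to the content (web page, audio, 3D object)
    contributor: str      # end-user or company that contributed the association

# Hypothetical in-memory stand-in for the crowdsourced database.
CROWD_DB: List[ContentRecord] = [
    ContentRecord("isbn:978-0-00-000000-0", 12, "hinges",
                  "https://example.com/notes/hinges", "user-42"),
]

def lookup_content(publication_id: str, page: int) -> List[ContentRecord]:
    """Return all associations for the page the user is currently reading."""
    return [r for r in CROWD_DB
            if r.publication_id == publication_id and r.page == page]

for record in lookup_content("isbn:978-0-00-000000-0", page=12):
    print(f"Indicate '{record.anchor_text}' -> {record.content_uri}")
```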
In yet another aspect, at least one computer readable storage medium (CRSM) that is not a transitory signal may include instructions executable by at least one processor to identify, via input from a camera on a headset, printed material. The instructions may also be executable to access electronic content related to the printed material based on the identification and to present an indication regarding the electronic content on an electronic display of the headset.
Additionally, in some examples the instructions may be executable by the at least one processor to present the indication at least in part by highlighting a particular word of text in the printed material.
The details of present principles, both as to their structure and operation, can best be understood in reference to the accompanying drawings, in which like reference numerals refer to like parts, and in which:
Among other things, the present application discloses techniques to present electronic content associated with keywords in printed material. A user's headset may use augmented reality (AR) scanning to scan the printed material's title or cover page, as well as its author, illustrations, etc., as a visual marker. If for some reason the AR headset cannot identify the printed material from those cues, the printed material's UPC or other barcode, if available, may be scanned and used to identify the printed material by its unique reference identifier. That in turn may inform the AR headset of which novel, edition, year, etc. of the printed material the user is reading. For example, some novels may have the same content on different pages in different editions, and so knowing which edition the user is reading will inform the AR headset of what content is on what page.
The headset may then identify keywords on specific pages of the printed material to provide an AR experience as the user reads the printed material itself. To do so, the AR headset may pick up on the page numbers of particular pages of the printed material.
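For illustration and not limitation, a minimal Python sketch of the identification fallback and per-edition keyword mapping described above follows; recognize_cover and decode_barcode are hypothetical stand-ins for whatever computer-vision and barcode-decoding components an implementation might use.

```python
from typing import Optional

def recognize_cover(frame: bytes) -> Optional[str]:
    # Hypothetical stand-in for a vision-based matcher using the title,
    # cover page, author, and illustrations as visual markers.
    return None  # pretend the visual match was inconclusive here

def decode_barcode(frame: bytes) -> Optional[str]:
    # Hypothetical stand-in for a UPC/barcode decoder.
    return "upc:012345678905"

def identify_edition(frame: bytes) -> Optional[str]:
    # Prefer visual cues; fall back to the barcode when they fail.
    return recognize_cover(frame) or decode_barcode(frame)

# Hypothetical per-edition layout data: the same text may fall on different
# pages in different editions, so keywords are keyed by (edition, page number).
PAGE_KEYWORDS = {
    ("upc:012345678905", 37): ["laptop", "hinges"],
}

edition = identify_edition(b"<camera frame>")
print("Edition:", edition)
print("Keywords on page 37:", PAGE_KEYWORDS.get((edition, 37), []))
```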
As an example, the AR headset may detect keywords that are then highlighted in the AR platform to open as floating references. E.g., certain pre-defined words may be highlighted so that users can select them to view a reference page such as an online encyclopedia page.
Another example is that teachers may map their notes to a specific book and release the notes to AR devices of their students. In this way, certain pre-defined words selected by the teacher may be highlighted to assist the students in finding key references that the teacher outlines in his or her class. E.g., a student may read a book and select a link to a text or audio note from the teacher about a specific literary term or reference that the student might otherwise gloss over while reading.
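For illustration and not limitation, the following Python sketch shows one way keyword references and teacher-released notes might be scoped to particular audiences; all names are hypothetical.

```python
from dataclasses import dataclass, field
from typing import Iterable, List, Set

@dataclass
class KeywordNote:
    keyword: str
    reference_url: str                               # e.g., an encyclopedia page
    audience: Set[str] = field(default_factory=set)  # empty set = visible to all

def visible_notes(notes: Iterable[KeywordNote], page_words: Set[str],
                  user_id: str) -> List[KeywordNote]:
    # A note is shown if its keyword is on the page and either the note is
    # public (empty audience) or the user is in the released audience.
    return [n for n in notes
            if n.keyword in page_words
            and (not n.audience or user_id in n.audience)]

notes = [
    KeywordNote("Lenovo", "https://en.wikipedia.org/wiki/Lenovo"),
    KeywordNote("hinges", "https://example.com/class-notes/hinges",
                audience={"student-1", "student-2"}),   # teacher-released
]
words = {"Lenovo", "hinges"}
print([n.keyword for n in visible_notes(notes, words, "student-1")])   # both
print([n.keyword for n in visible_notes(notes, words, "someone-else")])  # public only
```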
As yet another example, the cover or box for a compact disc, digital versatile disc (DVD), or Blu-ray disc may be scanned, and associated content may then be presented via the AR headset upon recognition of the cover/box. For example, if a DVD or Blu-ray cover for a particular movie or film being held by a user is recognized by the headset, then the associated short-length trailer for that movie or film may be automatically presented via the display and speakers of the AR headset (and/or presented at the AR headset based on user command). The foregoing might also apply when, for example, a user wearing an AR headset is looking at a digital DVD or digital music album cover presented on a different display (e.g., a smart phone display) that is being viewed through the transparent display of the AR headset.
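For illustration and not limitation, a minimal sketch of the cover-recognition trigger described above might look as follows, with a hypothetical TRAILERS mapping standing in for whatever content store an implementation uses.

```python
# Hypothetical mapping from a recognized disc cover/box to its trailer.
TRAILERS = {"cover:example-film": "https://example.com/trailers/example-film.mp4"}

def on_cover_recognized(cover_id: str, autoplay: bool = True) -> None:
    trailer = TRAILERS.get(cover_id)
    if trailer is None:
        return  # no associated content for this cover
    if autoplay:
        print(f"Playing {trailer} via the headset display and speakers")
    else:
        print(f"Presenting selectable link to {trailer}")

on_cover_recognized("cover:example-film")
```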
Users may also be provided with options to interact with floating content. For example, users may save their own text or audio notes to a personal notation platform so that their notes are only presented when they read associated printed text, rather than being presented for all users that might read the text while wearing an AR headset (e.g., “save this page for later”). However, users may also provide digital text and audio notes by interacting with an AR notation platform so that the notes may in fact be made available to others. For example, an audio recording from a person may be presented via an AR device for listening by others, with the initial recording of the audio of the person being triggered by the person indicating a predefined phrase such as “take a note” as recognized by that person's AR device. The audio recording may then be associated with the particular word or sentence at which the person was identified as looking when speaking the phrase “take a note”.
Additionally, an AR device operating consistent with present principles may allow users to review their notes, collate their notes, and/or generate a notes outline via a graphical user interface presented by the AR device. For a printed book, for example, users may provide notes to the system while reading the book and then create an outline after they finish reading for a test or exam that might be administered on the book. This may apply to cloud-based content as well as content stored locally at a given end-user device.
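For illustration and not limitation, the following Python sketch models the voice-triggered, gaze-anchored note taking and later collation described above; the callback names and note fields are hypothetical.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class UserNote:
    publication_id: str
    page: int
    anchor_word: str      # word gazed at when "take a note" was spoken
    audio_uri: str
    shared: bool = False  # False: kept on the personal notation platform only

NOTES: List[UserNote] = []

def on_phrase(phrase: str, gaze_word: str, pub_id: str, page: int,
              recording_uri: str, shared: bool = False) -> None:
    # Speech-recognition callback: anchor the new recording to the gazed word.
    if phrase.strip().lower() == "take a note":
        NOTES.append(UserNote(pub_id, page, gaze_word, recording_uri, shared))

def outline(pub_id: str) -> List[UserNote]:
    # Collate the user's notes in page order, e.g., to study for an exam.
    return sorted((n for n in NOTES if n.publication_id == pub_id),
                  key=lambda n: n.page)

on_phrase("take a note", "hinges", "isbn:x", 37, "file:///notes/001.wav")
print([(n.page, n.anchor_word) for n in outline("isbn:x")])
```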
Additionally, in some examples a user may confirm or reject a notation from someone else, e.g., if the notes are open-sourced and editable (e.g., via a graphical user interface presented via an AR device). Further still, a content management company may provide a crowd-sourcing platform allowing users to interact as discussed above to provide feedback, which may then train the system such as via machine learning to determine other electronic content (e.g., notes) to associate with other portions of printed material based on trends determined by the machine learning/artificial intelligence model. To this end, supervised or unsupervised training of one or more deep or recurrent neural networks in the AI model may occur to optimize the neural network(s) for making such inferences. For example, optimization/training may occur using one or more classification algorithms and/or regression algorithms along with inputs of user selections of electronic content to associate with certain keywords of the printed material.
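For illustration and not limitation, the following toy Python sketch uses simple confirmation counts in place of the neural-network training described above, merely to show the shape of learning content-type trends from crowdsourced feedback; a real system might instead train the classification/regression models noted above.

```python
from collections import Counter, defaultdict
from typing import Dict, List, Optional, Tuple

# Crowdsourced feedback triples: (keyword, content type, confirmed?), where
# "confirmed" reflects a user accepting or rejecting a suggested association.
FEEDBACK: List[Tuple[str, str, bool]] = [
    ("hinges", "video", True), ("hinges", "video", True),
    ("hinges", "webpage", False), ("Lenovo", "webpage", True),
]

def learn_trends(feedback: List[Tuple[str, str, bool]]) -> Dict[str, Counter]:
    # Toy count-based stand-in for a trained model: score content types
    # per keyword by net confirmations.
    scores: Dict[str, Counter] = defaultdict(Counter)
    for keyword, content_type, confirmed in feedback:
        scores[keyword][content_type] += 1 if confirmed else -1
    return scores

def suggest(scores: Dict[str, Counter], keyword: str) -> Optional[str]:
    counts = scores.get(keyword)
    return counts.most_common(1)[0][0] if counts else None

trends = learn_trends(FEEDBACK)
print(suggest(trends, "hinges"))  # -> "video"
```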
Further still, another aspect of present principles in terms of note taking is that a user might be wearing an AR headset while highlighting portions of pages of a printed book using a physical, neon yellow highlighter marker. This and other behaviors may be automatically tracked using cameras on the AR headset and, for example, the headset may digitally identify and track the highlighted sentence, chapter, page, etc. so that the AR headset may later (upon user command) indicate the locations of the highlighted portions and even present the text of the highlighted portions themselves on its electronic display. Other example behaviors as referenced in the preceding sentence may include dog-earing/folding the top outer corners of respective pages of the book or making a unique mark (e.g., a check mark on a top outer corner of the page) for the user to easily identify that page later.
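For illustration and not limitation, a minimal Python sketch of tracking physically marked passages for later recall might look as follows; the callback and record names are hypothetical.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class MarkedPassage:
    page: int
    text: str
    mark_type: str  # "highlight", "dog-ear", "check-mark", ...

MARKS: List[MarkedPassage] = []

def on_mark_detected(page: int, text: str, mark_type: str) -> None:
    # Camera callback: a physical mark was recognized on the printed page.
    MARKS.append(MarkedPassage(page, text, mark_type))

def recall_marks() -> None:
    # Upon user command, indicate marked locations and the marked text itself.
    for m in sorted(MARKS, key=lambda m: m.page):
        print(f"p.{m.page} [{m.mark_type}]: {m.text or '(page marked)'}")

on_mark_detected(37, "The hinges allow the lid to open fully.", "highlight")
on_mark_detected(12, "", "dog-ear")
recall_marks()
```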
Furthermore, in some examples not just end users but companies may create their own original content for the AR platform that can be displayed along with the printed material. In this manner, companies may sell additional AR content to provide an enhanced, electronic user interface to use with pre-existing books or other printed materials. The companies may even curate the content and update the content over time as electronic content may change. Additionally, new electronic content may even be enabled for presentation only on specific or special days (e.g., weekends, Thanksgiving, the month of September, etc.).
Thus, when a user is reading printed material and wearing an AR headset, electronic illustrations may automatically appear, appear based on user input, appear when certain key words are looked at, and/or appear separately on another device such as the user's smart phone.
Providing a few other examples consistent with present principles, consider an adult reading a book aloud to children. Each child may be wearing an AR headset, and AR content may be displayed on each child's headset display (e.g., as a throw projection or hologram) and/or displayed on another monitor in the room for the children to view while the adult reads the book to them.
As another example, content companies may display their own electronic illustrations, scenes from a movie, movie clips, or other content based on the contextual information of the printed material.
As yet another example, present principles may be used for marketing. For example, the technology disclosed herein may be used to drive targeted marketing in the form of advertising. E.g., a user may read about a product or destination (or other location) in a printed publication, which could cause their AR headset to produce either links or coupons for the product/destination for the user to select while reading. Thus, for example, a user might be reading about a certain dress that a character is wearing in a novel, and an advertisement for the dress from a certain retailer may pop up on his or her headset along with a special coupon for the user to purchase the dress.
Prior to delving further into the details of the instant techniques, with respect to any computer systems discussed herein, a system may include server and client components, connected over a network such that data may be exchanged between the client and server components. The client components may include one or more computing devices including televisions (e.g., smart TVs, Internet-enabled TVs), computers such as desktops, laptops and tablet computers, so-called convertible devices (e.g., having a tablet configuration and laptop configuration), and other mobile devices including smart phones. These client devices may employ, as non-limiting examples, operating systems from Apple Inc. of Cupertino, Calif., Google Inc. of Mountain View, Calif., or Microsoft Corp. of Redmond, Wash. A Unix® or similar such as Linux® operating system may be used. These operating systems can execute one or more browsers such as a browser made by Microsoft or Google or Mozilla or another browser program that can access web pages and applications hosted by Internet servers over a network such as the Internet, a local intranet, or a virtual private network.
As used herein, instructions refer to computer-implemented steps for processing information in the system. Instructions can be implemented in software, firmware or hardware, or combinations thereof and include any type of programmed step undertaken by components of the system; hence, illustrative components, blocks, modules, circuits, and steps are sometimes set forth in terms of their functionality.
A processor may be any general purpose single- or multi-chip processor that can execute logic by means of various lines such as address lines, data lines, and control lines and registers and shift registers. Moreover, any logical blocks, modules, and circuits described herein can be implemented or performed with a general purpose processor, a digital signal processor (DSP), a field programmable gate array (FPGA) or other programmable logic device such as an application specific integrated circuit (ASIC), discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A processor can also be implemented by a controller or state machine or a combination of computing devices. Thus, the methods herein may be implemented as software instructions executed by a processor, suitably configured application specific integrated circuit (ASIC) or field programmable gate array (FPGA) modules, or any other convenient manner as would be appreciated by those skilled in the art. Where employed, the software instructions may also be embodied in a non-transitory device that is being vended and/or provided that is not a transitory, propagating signal and/or a signal per se (such as a hard disk drive, CD ROM or Flash drive). The software code instructions may also be downloaded over the Internet. Accordingly, it is to be understood that although a software application for undertaking present principles may be vended with a device such as the system 100 described below, such an application may also be downloaded from a server to a device over a network such as the Internet.
Software modules and/or applications described by way of flow charts and/or user interfaces herein can include various sub-routines, procedures, etc. Without limiting the disclosure, logic stated to be executed by a particular module can be redistributed to other software modules and/or combined together in a single module and/or made available in a shareable library.
Logic when implemented in software, can be written in an appropriate language such as but not limited to C# or C++, and can be stored on or transmitted through a computer-readable storage medium (that is not a transitory, propagating signal per se) such as a random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), compact disk read-only memory (CD-ROM) or other optical disk storage such as digital versatile disc (DVD), magnetic disk storage or other magnetic storage devices including removable thumb drives, etc.
In an example, a processor can access information over its input lines from data storage, such as the computer readable storage medium, and/or the processor can access information wirelessly from an Internet server by activating a wireless transceiver to send and receive data. Data typically is converted from analog signals to digital by circuitry between the antenna and the registers of the processor when being received and from digital to analog when being transmitted. The processor then processes the data through its shift registers to output calculated data on output lines, for presentation of the calculated data on the device.
Components included in one embodiment can be used in other embodiments in any appropriate combination. For example, any of the various components described herein and/or depicted in the Figures may be combined, interchanged or excluded from other embodiments.
“A system having at least one of A, B, and C” (likewise “a system having at least one of A, B, or C” and “a system having at least one of A, B, C”) includes systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.
The term “circuit” or “circuitry” may be used in the summary, description, and/or claims. As is well known in the art, the term “circuitry” includes all levels of available integration, e.g., from discrete logic circuits to the highest level of circuit integration such as VLSI, and includes programmable logic components programmed to perform the functions of an embodiment as well as general-purpose or special-purpose processors programmed with instructions to perform those functions.
Now specifically in reference to FIG. 1, an example block diagram of an information handling system and/or computer system 100 is shown.
As shown in FIG. 1, the system 100 may include a so-called chipset 110, i.e., a group of integrated circuits, or chips, that are designed to work together.
In the example of FIG. 1, the chipset 110 includes a core and memory control group 120 and an I/O hub controller 150 that exchange information over one or more buses or interfaces.
The core and memory control group 120 include one or more processors 122 (e.g., single core or multi-core, etc.) and a memory controller hub 126 that exchange information via a front side bus (FSB) 124. As described herein, various components of the core and memory control group 120 may be integrated onto a single processor die, for example, to make a chip that supplants the “northbridge” style architecture.
The memory controller hub 126 interfaces with memory 140. For example, the memory controller hub 126 may provide support for DDR SDRAM memory (e.g., DDR, DDR2, DDR3, etc.). In general, the memory 140 is a type of random-access memory (RAM). It is often referred to as “system memory.”
The memory controller hub 126 can further include a low-voltage differential signaling interface (LVDS) 132. The LVDS 132 may be a so-called LVDS Display Interface (LDI) for support of a display device 192 (e.g., a CRT, a flat panel, a projector, a touch-enabled light emitting diode display or other video display, etc.). A block 138 includes some examples of technologies that may be supported via the LVDS interface 132 (e.g., serial digital video, HDMI/DVI, display port). The memory controller hub 126 also includes one or more PCI-express interfaces (PCI-E) 134, for example, for support of discrete graphics 136. Discrete graphics using a PCI-E interface has become an alternative approach to an accelerated graphics port (AGP). For example, the memory controller hub 126 may include a 16-lane (x16) PCI-E port for an external PCI-E-based graphics card (including, e.g., one or more GPUs). An example system may include AGP or PCI-E for support of graphics.
In examples in which it is used, the I/O hub controller 150 can include a variety of interfaces. The example of FIG. 1 includes a SATA interface 151, one or more PCI-E interfaces 152, and one or more USB interfaces 153, among others.
The interfaces of the I/O hub controller 150 may provide for communication with various devices, networks, etc. For example, where used, the SATA interface 151 provides for reading, writing, or reading and writing information on one or more drives 180 such as HDDs, SSDs or a combination thereof, but in any case the drives 180 are understood to be, e.g., tangible computer readable storage mediums that are not transitory, propagating signals. The I/O hub controller 150 may also include an advanced host controller interface (AHCI) to support one or more drives 180. The PCI-E interface 152 allows for wireless connections 182 to devices, networks, etc. The USB interface 153 provides for input devices 184 such as keyboards (KB), mice and various other devices (e.g., cameras, phones, storage, media players, etc.).
In the example of FIG. 1, the system 100 also includes a serial peripheral interface (SPI) Flash 166 that may store a basic input/output system (BIOS) 168, as described further below.
The system 100, upon power on, may be configured to execute boot code 190 for the BIOS 168, as stored within the SPI Flash 166, and thereafter to process data under the control of one or more operating systems and application software (e.g., stored in system memory 140). An operating system may be stored in any of a variety of locations and accessed, for example, according to instructions of the BIOS 168.
The system 100 may further include an audio receiver/microphone 195 that provides input from the microphone 195 to the processor 122 based on audio that is detected, such as via a user providing audible input to the microphone 195 consistent with present principles. Still further, the system 100 may include a camera 193 that gathers one or more images and provides input related thereto to the processor 122. The camera 193 may be a thermal imaging camera, an infrared (IR) camera, a digital camera such as a webcam, a three-dimensional (3D) camera, and/or a camera otherwise integrated into the system 100 and controllable by the processor 122 to gather pictures/images and/or video.
Additionally, though not shown for simplicity, in some embodiments the system 100 may include a gyroscope that senses and/or measures the orientation of the system 100 and provides input related thereto to the processor 122, as well as an accelerometer that senses acceleration and/or movement of the system 100 and provides input related thereto to the processor 122. Also, the system 100 may include a GPS transceiver that is configured to communicate with at least one satellite to receive/identify geographic position information and provide the geographic position information to the processor 122. However, it is to be understood that another suitable position receiver other than a GPS receiver may be used in accordance with present principles to determine the location of the system 100.
It is to be understood that an example client device or other machine/computer may include fewer or more features than shown on the system 100 of FIG. 1.
Turning now to FIG. 2, example devices are shown communicating over a network such as the Internet consistent with present principles. It is to be understood that each of the devices may include at least some of the features, components, and/or elements of the system 100 described above, and that one of the example devices may be a headset 216 as described further below.
Now describing FIG. 3, it shows an example headset 216 consistent with present principles. The headset 216 may include a transparent display as well as one or more cameras 310, 312 that may be used, e.g., to image the user's environment and the user's eyes.
Still further, note that the headset 216 may include still other components not shown for simplicity, such as a network interface for communicating over a network such as the Internet and a battery for powering components of the headset 216. Additionally, note that while the headset 216 is illustrated as computerized smart glasses, the headset 216 may also be established by another type of augmented reality (AR) headset, or even by a virtual reality (VR) headset that may not have a transparent display but may still be able to present electronic content such as virtual AR objects along with a real-world, real-time camera feed of an environment imaged by one or more of the cameras 310, 312 to provide an AR experience to the user. Also note that electronic contact lenses with their own respective heads-up displays may also be used consistent with present principles.
Now in reference to FIG. 4, suppose a user 414 is wearing an AR headset consistent with present principles while holding a printed magazine 402. Based on input from one or more of the headset's cameras, the AR headset may identify the magazine 402, e.g., from an image of the cover of the magazine 402 as a whole.
Additionally or alternatively, the AR headset may also identify the magazine and/or particular issue via the magazine's title 404, author(s) 406, images 408, cover page text 410, and even the magazine's universal product code (UPC) 412. Quick response codes that might be disposed on the magazine may also be used.
Before moving on to the description of FIG. 5, note that while a magazine is used in this example, present principles apply equally to other printed publications such as books and newspapers.
Now describing FIG. 5, it shows an example page 500 of the magazine 402 as viewed by the user 414 through the transparent display of the AR headset, with various indications of associated electronic content presented on the display as described below.
As an example, the first word of the page (“Lenovo”) has been highlighted via a link 504 represented on the AR headset as integrated with the printed text for the word and appearing at the real-world current location of the printed text. The link 504 may indicate associated electronic content and be selectable by the user 414 to command the AR headset to present the electronic content itself. For example, selection of the link 504 may command the AR headset to access an Internet webpage from an online, crowdsourced encyclopedia website that is associated with the word and then present the webpage on the AR headset's transparent display, either over top of the user's view of the page 500 or off to the side such that it does not obstruct the user's view of the page 500. E.g., the webpage may be presented within the user's field of view but to the right of the page 500, so that it might obstruct the view of one of the hands of the user 414 that are shown in FIG. 5 as holding the magazine, but not the view of the page 500 itself.
Note that the link 504 itself may be selectable based on execution of eye tracking to identify the user 414 as staring at the link for a threshold non-zero period of time, such as three seconds. As another example, the link 504 may be selectable based on voice command.
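For illustration and not limitation, the dwell-based selection described above might be sketched in Python as follows, using the example three-second threshold; the class and method names are hypothetical.

```python
import time
from typing import Optional

DWELL_SECONDS = 3.0  # example threshold from the description above

class DwellSelector:
    """Selects a gazed-at link once gaze has rested on it for the threshold."""

    def __init__(self, threshold: float = DWELL_SECONDS) -> None:
        self.threshold = threshold
        self.target: Optional[str] = None  # link currently under the gaze
        self.since = 0.0                   # when gaze first landed on it

    def update(self, gazed_link: Optional[str],
               now: Optional[float] = None) -> Optional[str]:
        # Feed the link under the user's gaze (or None); returns the link
        # once the dwell threshold is met, else None.
        now = time.monotonic() if now is None else now
        if gazed_link != self.target:
            self.target, self.since = gazed_link, now
            return None
        if gazed_link is not None and now - self.since >= self.threshold:
            self.target = None  # reset so the selection fires only once
            return gazed_link
        return None

selector = DwellSelector()
selector.update("link:Lenovo", now=0.0)
print(selector.update("link:Lenovo", now=3.2))  # -> link:Lenovo
```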
Then as the user 414 progresses in reading the text on the page 500, the AR headset may track the user's line of sight using an eye tracking algorithm and input from a camera on the headset that is imaging the user's eyes. Then when the user 414 is identified as beginning to read the word “laptop” on the second line of the page 500, the word may be highlighted using the AR headset by presentation on its display of a circle shown in FIG. 5 around the real-world location of the word itself, indicating that associated electronic content is available.
Then as the user 414 continues to read the page 500 and his/her line of sight arrives at the word “opens”, a video 510 may either be automatically presented on the AR headset's display or presented responsive to selection of the word “opens” if represented as a link using the AR headset's display. In this example, the video 510 demonstrates movement of a laptop to open it. Also note that in other examples, an audio recording of a person speaking about laptops or an image from the Internet of a laptop may be presented instead of the video 510.
As the user 414 continues to read even farther down the page 500 and his or her line of sight arrives at the word “hinges”, highlighting 512 of the word may be presented and indicate associated electronic content that is available for viewing. Additionally, an electronic representation 514 of cursive notes provided electronically by another user may be presented, either automatically upon the user's line of sight reaching the word “hinges” or upon the user 414 selecting the word if represented as a link using the AR headset's display.
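For illustration and not limitation, the following Python sketch shows one way per-word triggers might fire as eye tracking reports the reader's gaze reaching each word; the trigger table and callback are hypothetical.

```python
# Hypothetical per-page trigger table for page 500 (cf. FIG. 5): when the
# tracked line of sight reaches a listed word, its content is presented.
PAGE_TRIGGERS = {
    "Lenovo": ("link", "https://en.wikipedia.org/wiki/Lenovo"),
    "laptop": ("highlight", "circle around the word"),
    "opens": ("video", "video:laptop-opening"),
    "hinges": ("note", "note:cursive-hinges"),
}
PRESENTED = set()

def on_gaze_word(word: str) -> None:
    # Eye-tracking callback: fires as the reader's gaze reaches each word.
    if word in PAGE_TRIGGERS and word not in PRESENTED:
        PRESENTED.add(word)  # avoid re-presenting on every gaze sample
        kind, content = PAGE_TRIGGERS[word]
        print(f"Present {kind}: {content} anchored at '{word}'")

for w in ["Lenovo", "is", "a", "laptop", "that", "opens", "on", "hinges"]:
    on_gaze_word(w)
```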
As also shown in the drawings, FIG. 6 illustrates a graphical user interface (GUI) 600 that may be presented on the display of the AR headset for the user 414 to associate electronic content of his or her own choosing with a particular word selected from the page 500 (in this example, the word “Lenovo”).
In any case, as shown in FIG. 6, the GUI 600 may include respective boxes 602, 604 that may be selected to indicate respective pieces of electronic content to associate with the selected word.
Additionally, in some examples the GUI 600 may include an input box 608. The box 608 may be selected and then the user may either use a stylus to handwrite notes into the box 608, speak words that may be recognized via voice recognition for insertion into the box 608, and/or type words into the box 608 using a keyboard. The notes provided to box 608 may then be associated with the word “Lenovo” based on selection of the submit selector 610, as may any of the other electronic content indicated via the boxes 602, 604.
However, note that before the user selects the selector 610, the user may choose to select the check box shown for option 612 to associate the electronic content(s) the user has indicated not just with the particular word as selected from a given page of the magazine but to associate the indicated electronic content with the word wherever it appears in the magazine.
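For illustration and not limitation, a minimal Python sketch of the association record that might result from the GUI 600, including the page-wide versus publication-wide scoping of option 612, follows; the names are hypothetical.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Association:
    word: str
    content: str          # note text, audio URI, or other content reference
    page: Optional[int]   # None = apply wherever the word appears

def submit_association(word: str, content: str, page: int,
                       everywhere: bool) -> Association:
    # Models what selecting the submit selector might store: scope the
    # association to one page, or to every occurrence if the box is checked.
    return Association(word, content, None if everywhere else page)

print(submit_association("Lenovo", "my handwritten note", page=37, everywhere=True))
```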
Continuing the detailed description in reference to FIG. 7, it shows example logic that may be executed by a device such as the system 100 and/or an AR headset consistent with present principles.
Beginning at block 700, the device may receive input from one or more cameras on the AR headset. The logic may then move to block 702 where the device may analyze the input using object recognition to, at block 704, identify a printed publication or other printed material being held by a user consistent with present principles. Image recognition and/or optical character recognition may also be used, as may other recognition engines for identifying the printed publication. In any case, as set forth above, the printed publication may be identified, even down to the particular edition, issue, volume, etc. As also set forth above, the printed publication may be identified by its title, an image of its cover page or cover (e.g., book cover), its author(s), images shown on the cover or other portions of the printed publication, and even its universal product code (UPC). The printed publication may also be identified from other information, such as publisher information or copyright text that might be presented on an inside page of the printed publication. A segment of the text of the corpus of the printed publication may even be used to identify it.
After block 704 the logic may then proceed to block 706 where the device may access electronic content associated with all of the printed publication and/or certain portions or words. The electronic content may be accessed via a publicly available website, database, cloud storage area, etc. which may associate the publication or particular portions thereof with various pieces of electronic content and even store the associated electronic content itself. Again note that the electronic content may include a virtual 3D object, audio content (e.g., a teacher speaking to students), a web page, an image or video, and notes created by other end-users or other crowdsourced information.
From block 706 the logic may then proceed to block 708. At block 708 the device may track the user's line of sight using eye tracking as the user reads a page of the printed publication associated with the electronic content that is accessed at block 706. From block 708 the logic may then proceed to block 710 where the device may present one or more indications on the AR headset's display of the electronic content when the user looks at the word or phrase in the printed publication that has been associated with it.
However, note that in other embodiments the device may not wait until the user's line of sight reaches the associated word or phrase. Instead, the device may highlight respective words or phrases by presenting the respective indications for the various words or phrases responsive to the user turning to or starting to read the beginning of the associated page of the printed publication.
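For illustration and not limitation, the logic of blocks 700-710 might be sketched in Python as follows, with HeadsetStub standing in for the device's actual camera, recognition, network, and display components.

```python
class HeadsetStub:
    # Minimal stand-in so the flow below can run; a real device would wire
    # these methods to its cameras, recognizers, network, and display.
    def camera_input(self): return "<frames>"
    def recognize(self, frames): return "magazine:example-issue"
    def fetch_associations(self, pub): return {"hinges": "note:cursive-hinges"}
    def gaze_words(self): return iter(["the", "hinges"])
    def present_indication(self, word, content):
        print(f"Indicate {content} at '{word}'")

def run_logic(headset) -> None:
    frames = headset.camera_input()                        # block 700
    publication = headset.recognize(frames)                # blocks 702-704
    content_map = headset.fetch_associations(publication)  # block 706
    for word in headset.gaze_words():                      # block 708
        if word in content_map:                            # block 710
            headset.present_indication(word, content_map[word])

run_logic(HeadsetStub())
```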
Now in reference to FIG. 8, it shows an example graphical user interface (GUI) 800 that may be presented on the display of an AR headset for configuring one or more settings of the headset to operate consistent with present principles.
As shown in FIG. 8, the GUI 800 may include a first option 802 that may be selectable to set or enable the headset to undertake present principles in multiple future instances, e.g., to identify printed publications and present indications of associated electronic content as described herein.
The GUI 800 may also include an option 804 that may be selectable to configure the AR headset to present electronic content associated with printed material to the side of the user's view of the printed material so as to not obstruct the user's view of the printed material with electronic content. The GUI 800 may further include an option 806 that may be selectable to block electronic content provided by an advertiser, with the understanding that in some examples advertisements and even coupons for certain products may establish electronic content accessed by and presented at the AR headset but that a user might not wish to see such content. Additionally, the GUI 800 may include an option 808 that may be selected to configure the AR headset to only present indications of electronic content and/or the electronic content itself when the user's line of sight reaches an associated word or phrase rather than, e.g., when the user initially turns to a page having the associated word or phrase.
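For illustration and not limitation, the settings controlled by the GUI 800 might be modeled in Python as follows; the field names and their mapping to options 802-808 are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class ArSettings:
    """Toggles corresponding to the example options of the GUI 800."""
    enabled: bool = True            # option 802: undertake present principles
    offset_content: bool = False    # option 804: present content to the side
    block_ads: bool = False         # option 806: block advertiser content
    gaze_gated: bool = False        # option 808: present only on gaze arrival

settings = ArSettings(offset_content=True, block_ads=True)
print(settings)
```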
It may now be appreciated that present principles provide for an improved computer-based user interface that improves the functionality and ease of use of the devices disclosed herein. The disclosed concepts are rooted in computer technology for computers to carry out their functions.
It is to be understood that while present principles have been described with reference to some example embodiments, these are not intended to be limiting, and that various alternative arrangements may be used to implement the subject matter claimed herein. Components included in one embodiment can be used in other embodiments in any appropriate combination. For example, any of the various components described herein and/or depicted in the Figures may be combined, interchanged or excluded from other embodiments.