Augmented reality refers to a technology platform that merges the physical and virtual worlds by augmenting real-world physical objects with virtual objects. For example, a real-world physical newspaper may be out of date the moment it is printed, but an augmented reality system may be used to recognize an article in the newspaper and to provide up-to-date virtual content related to the article. While the newspaper generally represents a static text and image-based communication medium, the virtual content need not be limited to the same medium. Indeed, in some augmented reality scenarios, the newspaper article may be augmented with audio and/or video-based content that provides the user with more meaningful information.
Some augmented reality systems operate on mobile devices, such as smart glasses, smartphones, or tablets. In such systems, the mobile device may display its camera feed, e.g., on a touchscreen display of the device, augmented by virtual objects that are superimposed in the camera feed to provide an augmented reality experience or environment. In the newspaper example above, a user may point the mobile device camera at the article in the newspaper, and the mobile device may show the camera feed (i.e., the current view of the camera, which includes the article) augmented with a video or other virtual content, e.g., in place of a static image in the article. This creates the illusion of objects that are additional to, or different from, those actually present in reality.
The following detailed description references the drawings.
In the following discussion and in the claims, the term “couple” or “couples” is intended to include suitable indirect and/or direct connections. Thus, if a first component is described as being coupled to a second component, that coupling may, for example, be: (1) through a direct electrical or mechanical connection, (2) through an indirect electrical or mechanical connection via other devices and connections, (3) through an optical electrical connection, (4) through a wireless electrical connection, and/or (5) another suitable coupling.
A “computing device” or “device” may be a desktop computer, laptop (or notebook) computer, workstation, tablet computer, mobile phone, smart phone, smart device, smart glasses, or any other processing device or equipment which may be used to provide an augmented reality experience. As used herein, an “augmented reality device” refers to a computing device to provide augmented reality content related to images or sounds of physical objects captured by a camera, a microphone, or other sensors coupled to the computing device. In some examples, the augmented reality content may be displayed on a display coupled to the augmented reality device.
The display of augmented reality content is triggered by or related to the recognition of objects in the field of view of a camera capturing the real world. As the speed and capability of cameras and sensors improve, the amount of information which may be gathered about physical objects may increase. However, processing this increased information to determine whether augmented reality content is available for each physical object increases the processing load on the augmented reality providing device. This processing load is particularly increased when audio or video content is captured in the field of view of the camera. Furthermore, there may be concerns about capturing portions of copyright protected content via the camera to provide augmented reality content.
To address this issue, in the examples described herein, a system to increase the speed of providing augmented reality content related to a captured physical object (e.g., audio or video data) is provided. In such an example, the system may receive extracted features of the captured physical object to determine augmented reality content related to the captured object. In some examples, the system may provide a feature extractor to a device providing the captured physical object (e.g., audio or video data) via a wireless connection between the device and the system. In such examples, the system may improve the speed of providing augmented reality data by reducing the processing load required to determine augmented reality content related to the captured physical object. In some examples, the feature extractor may prevent the capture of copyright protected material from the physical object (e.g., audio and/or video data).
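The exchange described above may be illustrated with a short, purely illustrative sketch. All names in the following Python example (`make_feature_extractor`, `lookup_ar_content`, the catalog entries) are hypothetical and are not part of the examples described herein; the sketch simply assumes the system provides an extractor, the device returns a compact feature code, and the system resolves that code to augmented reality content.

```python
# Purely illustrative sketch of the system/device exchange; all names
# and data are hypothetical, not the claimed implementation.

def make_feature_extractor():
    """Build a function the first device runs against its own content."""
    def extract(content_metadata):
        # Only metadata fields are read; the content itself is never copied.
        return {
            "title": content_metadata.get("title"),
            "timestamp": content_metadata.get("timestamp"),
            "duration": content_metadata.get("duration"),
        }
    return extract

def lookup_ar_content(feature_code):
    """Resolve a feature code to augmented reality content (stubbed)."""
    catalog = {("Lips Are Movin", "3:04"): "advertisement-overlay"}
    return catalog.get((feature_code["title"], feature_code["duration"]))

# System side: generate the extractor and provide it over the connection.
extractor = make_feature_extractor()

# Device side: run the extractor locally and return only the feature code.
feature_code = extractor(
    {"title": "Lips Are Movin", "timestamp": "1:57", "duration": "3:04"})

# System side: resolve the code without ever capturing the content itself.
print(lookup_ar_content(feature_code))  # -> advertisement-overlay
```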
Referring now to the drawings,
In some examples, the instructions can be part of an installation package that, when installed, can be executed by the processing resource to implement at least engines 112, 114, and 116. In such examples, the machine-readable storage medium may be a portable medium, such as a CD, DVD, or flash drive, or a memory maintained by a computing device from which the installation package can be downloaded and installed. In other examples, the instructions may be part of an application, applications, or component already installed on system 110 including the processing resource. In such examples, the machine-readable storage medium may include memory such as a hard drive, solid state drive, or the like. In other examples, the functionalities of any engines of system 110 may be implemented in the form of electronic circuitry.
In the example of
A feature extraction engine 114 may be an engine to generate a feature extractor according to first device 150. The feature extractor may be provided to connection engine 112 to be provided to first device 150 via connection 105 between first device 150 and connection engine 112. In an example, the feature extractor may be specified according to characteristics of first device 150, such as device type, manufacturer, programming language, etc. The feature extractor may be instructions to extract features from the content being provided by first device 150. The extracted feature(s) may be transformed into a code to be provided to connection engine 112 via connection 105.
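One way to read "specified according to characteristics of first device 150" is a registry that selects an extractor implementation matching the reported device profile. The sketch below is an assumption for illustration only; the registry contents and names are hypothetical.

```python
# Hypothetical registry mapping device characteristics (device type,
# programming language) to extractor artifacts; contents are illustrative.
EXTRACTOR_REGISTRY = {
    ("media_player", "python"): "extractor_media_player.py",
    ("smart_tv", "javascript"): "extractor_smart_tv.js",
}

def select_extractor(device_type: str, language: str) -> str:
    """Return the extractor artifact matching the reported device profile."""
    try:
        return EXTRACTOR_REGISTRY[(device_type, language)]
    except KeyError:
        raise ValueError(f"no extractor available for {device_type}/{language}")

print(select_extractor("media_player", "python"))  # extractor_media_player.py
```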
In an example, the instructions to extract feature(s) from the content may include at least one of object recognition, text recognition, and/or audio recognition instructions for the content being provided by first device 150 and/or meta-data associated with the content being provided by first device 150. For example, when the content is audio content, the feature extractor may be instructions to extract at least one of the title, author, artist, producer, distributor, current time stamp, duration, lyrics, closed captioning, etc. of the content being provided and/or meta-data associated with the content being provided. In such an example, when the audio content includes a song, the extracted features may be a code including the title (e.g., “Lips Are Movin”), artist (e.g., “Meghan Trainor”), song time stamp (e.g., “1:57”), and song duration (e.g., “3:04”). The extracted features may be provided to the connection engine 112 by the first device 150 via connection 105. In another example, when the content is video content, the extracted content may be at least one of a title, a network, a director, a producer, a distributor, a time stamp, a duration of the video content, closed captioning, and/or meta-data associated with the video content. For example, when the video content is an episode of a television series, the extracted features may be a title (e.g., “Friends®”), a network (e.g., “NBC®”), a time stamp (e.g., “0:15”), and a duration (e.g., “23:04”) of the video content and/or meta-data associated therewith. In an example, the extracted feature may be converted into a code which does not contain any copyright protected content from the content being provided by first device 150. The copyright protected content may include, for example, the lyrics of a song, the melody of a song, scenes of a television show, dialog from a television show, etc.
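A minimal sketch of such an encoding follows, assuming a hypothetical `encode_features` helper; it shows how a feature code may be built from bibliographic metadata alone, so that copyright protected content (lyrics, melody, scenes, dialog) never enters the code.

```python
# Hypothetical encoding of extracted features into a compact code. Only
# bibliographic metadata is included; lyrics, melody, scenes, and dialog
# (the copyright protected content) are deliberately never read or copied.

def encode_features(meta: dict) -> str:
    fields = ("title", "artist", "timestamp", "duration")
    return "|".join(str(meta.get(field, "")) for field in fields)

code = encode_features({
    "title": "Lips Are Movin",
    "artist": "Meghan Trainor",
    "timestamp": "1:57",
    "duration": "3:04",
    "lyrics": "<intentionally omitted>",  # never included in the code
})
print(code)  # Lips Are Movin|Meghan Trainor|1:57|3:04
```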
In an example, the feature extractor generated by feature extraction engine 114 may periodically extract features of the content being provided by the first device 150. For example, the feature extractor may extract features from the content being provided by the first device 150 every fifteen (15) seconds. In such an example, when the content being provided by the first device 150 is an episode of the series “Friends®,” the periodically extracted features may provide additional information about the episode. For example, if the initial capture of content by camera 140 and/or microphone 145 of system 110 occurred during the opening credits of the episode, it may be difficult to determine the exact episode being displayed. In such an example, periodically capturing extracted features from the episode may provide additional information to determine the episode being displayed. In an example, the feature extractor may include instructions to perform object recognition, text recognition, and audio recognition. In such an example, the feature extractor may recognize objects and/or persons in the video content. In such a manner, additional information about the content being provided by first device 150 may be determined without capturing the content, thereby reducing the strain on memory storage devices and processors of the system 110. Furthermore, the system 110 may be able to provide augmented reality content without capturing copyright protected material in a storage device. Although the periodically extracted features are described as being captured every fifteen (15) seconds, the examples are not limited thereto, and the interval between extractions may be any length of time or may be randomly assigned after each interval has been completed.
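The following sketch illustrates such a periodic extraction loop on the first device, assuming hypothetical `extract`, `get_metadata`, and `send` callables. The fifteen-second figure comes from the example above; the randomized variant reflects the note that the interval may be reassigned after each extraction.

```python
import random
import time

# Hypothetical periodic extraction loop run on the first device; the
# callables passed in are assumptions for illustration only.
def run_extractor_periodically(extract, get_metadata, send,
                               randomize=False, iterations=3):
    for _ in range(iterations):
        send(extract(get_metadata()))  # report current features upstream
        # Fixed 15 s interval, or a new random interval after each run.
        interval = random.uniform(5, 30) if randomize else 15.0
        time.sleep(interval)           # wait until the next extraction
```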
An augmented reality generation engine 116 may generate augmented reality content according to the extracted feature(s) provided by first device 150. The augmented reality content may be related to the content being provided by first device 150. For example, the augmented reality content may be a link to an advertisement for HP® Inc. that uses the song “Lips Are Movin” when the captured content is the song “Lips Are Movin.” In an example, the augmented reality content may be displayed on a display of system 110 when camera 140 and/or microphone 145 of system 110 captures the content being provided by first device 150. In some examples, camera 140 and/or microphone 145 may not be a component of system 110 but rather coupled thereto. As used herein, augmented reality content may be referred to as being “triggered” by captured content when it is related to audio and/or video content, captured by camera 140 and/or microphone 145 of system 110, that is to be provided by first device 150 at a time in the future. For example, augmented reality content may be triggered to be displayed on a display of system 110 thirty-five (35) seconds after the initial capturing of the captured content. In such an example, the triggered augmented reality content may be related to the content being provided by first device 150 thirty-five (35) seconds in the future. In an example, when the content being provided by first device 150 is a television broadcast of the series “Friends®,” the triggered content may be related to a scene being displayed thirty-five (35) seconds after the initial capture of content. For example, the content may be an advertisement for a furniture store selling reclining chairs similar to chairs in the apartment of characters on the show “Friends®” that appear on screen thirty-five (35) seconds after the initial capture of content.
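One plausible way to realize such triggering is a queue keyed to future playback time, as in the hedged sketch below; the `TriggerQueue` class and its data are hypothetical and are shown only to make the timing concrete.

```python
import heapq

# Hypothetical trigger queue: augmented reality content is keyed to a
# future playback time derived from the extracted time stamp (e.g.,
# thirty-five seconds after the initial capture).
class TriggerQueue:
    def __init__(self):
        self._queue = []  # min-heap of (fire_at_seconds, content)

    def schedule(self, fire_at_seconds, content):
        heapq.heappush(self._queue, (fire_at_seconds, content))

    def due(self, now_seconds):
        """Pop and return every item whose trigger time has passed."""
        fired = []
        while self._queue and self._queue[0][0] <= now_seconds:
            fired.append(heapq.heappop(self._queue)[1])
        return fired

queue = TriggerQueue()
# Content captured at playback second 117 (1:57), triggered 35 s later.
queue.schedule(117 + 35, "recliner-advertisement")
print(queue.due(152))  # -> ['recliner-advertisement']
```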
In examples described herein, a processing resource may include, for example, one processor or multiple processors included in a single computing device (as shown in
As used herein, a “machine-readable storage medium” may be any electronic, magnetic, optical, or other physical storage apparatus to contain or store information such as executable instructions, data, and the like. For example, any machine-readable storage medium described herein may be any of Random Access Memory (RAM), volatile memory, non-volatile memory, flash memory, a storage drive (e.g., a hard drive), a solid state drive, any type of storage disc (e.g., a compact disc, a DVD, etc.), and the like, or a combination thereof. Further, any machine-readable storage medium described herein may be non-transitory.
In the example of
In instructions 224, computing device 200 may generate a feature extractor according to the first device. The feature extractor may be a feature extractor as described above with respect to
In instructions 226, computing device 200 may provide the feature extractor to the first device via the first connection. In the example of
In instructions 228, computing device 200 may receive an extracted feature of the content being provided by the first device via the first connection. In the example of
In instructions 230, computing device 200 may generate augmented reality content to be shown on a display of computing device 200 while a camera of computing device 200 is capturing a screen. In the example,
In instructions 232, computing device 200 may display the generated augmented reality content on a display of computing device 200 according to the extracted feature of the video content. For example, the generated augmented reality content may be triggered by the extracted feature of the video content to be overlaid on a display of the computing device 200 at a specific time. In such an example, the generated augmented reality content may be generated by computing device 200 according to the extracted feature ahead of the specified time. In other examples, the augmented reality content may be displayed as a three-dimensional object in the field of view of a user of computing device 200 without being overlaid on the display of the video content according to the extracted feature. In yet other examples, the augmented reality content may be provided as links within the display of the video content captured by the camera of computing device 200.
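The three presentation modes described above may be summarized in a small dispatch sketch; the `present` function and mode strings below are hypothetical and serve only to contrast the overlay, three-dimensional, and link presentations.

```python
# Hypothetical dispatch over the three presentation modes described above:
# overlaid on the captured video, placed as a three-dimensional object in
# the user's field of view, or provided as an in-feed link.
def present(ar_content: str, mode: str) -> str:
    if mode == "overlay":
        return f"overlay '{ar_content}' on the displayed video content"
    if mode == "3d":
        return f"place '{ar_content}' as a 3-D object in the user's view"
    if mode == "link":
        return f"show '{ar_content}' as a link within the displayed video"
    raise ValueError(f"unknown presentation mode: {mode}")

print(present("recliner-advertisement", "overlay"))
```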
In some examples, instructions 222, 224, 226, 228, 230, and 232 may be part of an installation package that, when installed, may be executed by processing resource 210 to implement the functionalities described herein in relation to instructions 222, 224, 226, 228, 230, and 232. In such examples, storage medium 220 may be a portable medium, such as a CD, DVD, flash drive, or a memory maintained by a computing device from which the installation package can be downloaded and installed. In other examples, instructions 222, 224, 226, 228, 230, and 232 may be part of an application, applications, or component already installed on computing device 200 including processing resource 210. In such examples, the storage medium 220 may include memory such as a hard drive, solid state drive, or the like. In some examples, functionalities described herein in relation to
At 302 of method 300, a media player (e.g., first device 150) may connect to an augmented reality device (e.g., system 110) via a wireless connection. The wireless connection may be at least one of a Bluetooth® connection, a Wi-Fi® connection, an Insteon® connection, an Infrared Data Association® (IrDA) connection, a Wireless USB connection, a Z-Wave® connection, a ZigBee® connection, a cellular network connection, a Global System for Mobile Communications (GSM) connection, a Personal Communications Service (PCS) connection, a Digital Advanced Mobile Phone Service connection, a general packet radio service (GPRS) network connection, and a body area network (BAN) connection. In some examples, the wireless connection between the media player and the augmented reality device may be established when the media player and the augmented reality device are in physical proximity (e.g., physical proximity 100) with each other. In such an example, the physical proximity may be any distance up to which the wireless connection between the media player and the augmented reality device may be established.
At 304, the media player (e.g., first device 150) may receive, in the media player, a feature extractor from the augmented reality device (e.g., system 110) via the wireless connection. In the example of
At 306, the media player (e.g., first device 150) may extract a feature from a content being provided by the media player (e.g., first device 150) according to the feature extractor.
At 308, the media player (e.g., first device 150) may provide the extracted feature to the augmented reality device (e.g., system 110) via the wireless connection.
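The media-player side of method 300 may be sketched end to end as follows; `StubRadio` and `MediaPlayer` are hypothetical names standing in for the wireless connection and the first device, and the block merely walks through 302 to 308 under those assumptions.

```python
# Hypothetical media-player side of method 300: connect (302), receive the
# extractor (304), extract a feature (306), and provide it back (308).
class StubRadio:
    """Stands in for the wireless connection (Bluetooth®, Wi-Fi®, etc.)."""
    def connect(self):
        print("connected to augmented reality device")

    def receive(self):
        # The received extractor reads metadata only.
        return lambda meta: {"title": meta["title"],
                             "duration": meta["duration"]}

    def send(self, feature):
        print("provided extracted feature:", feature)

class MediaPlayer:
    def __init__(self, radio, metadata):
        self.radio = radio        # the wireless link to the AR device
        self.metadata = metadata  # metadata of the content being played

    def run(self):
        self.radio.connect()                # 302
        extractor = self.radio.receive()    # 304
        feature = extractor(self.metadata)  # 306
        self.radio.send(feature)            # 308

MediaPlayer(StubRadio(),
            {"title": "Lips Are Movin", "duration": "3:04"}).run()
```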
Although the flowchart of