Embodiments pertain to Internet technologies. Some embodiments relate to presenting, on a device, alternate visual presentations of real-world objects.
Currently, a mobile device user who encounters text written in a foreign language may have access to a handful of apps that use the mobile device's camera to capture an image of the text, perform optical character recognition on the text, and then translate the text into the user's language. However, the optical character recognition is computationally expensive, and the automatic translation is usually not error-checked. Furthermore, every device that wants a particular translation will repeat the same work as every previous device that performed the same translation. Finally, the translations are limited to the contents of the original text.
The following description and the drawings illustrate specific embodiments to enable those skilled in the art to practice them. Other embodiments may incorporate structural, logical, electrical, process, and other changes. Portions and features of some embodiments may be included in, or substituted for, those of other embodiments. Embodiments set forth in the claims encompass all available equivalents of those claims.
In some embodiments, alternate visual presentations may be static, such as a picture. In other embodiments, alternate visual presentations may be dynamic, such as a video. An example embodiment of an alternate visual presentation may be alternate text of street sign text in another language. Another example embodiment of an alternate visual presentation may be an avatar of a person. Other alternate visual presentations may also or alternatively be included in various embodiments.
In some embodiments, a wireless device 104 may be proximate to objects 102 that have alternate visual presentations. The wireless device 104 may broadcast a message indicating which objects 102 in the area have alternate visual presentations. Client device 110 may discover objects 102 having alternate visual presentations by receiving the broadcast messages from wireless device 104. Such a broadcast message may also, or alternatively, include data about those objects 102. Alternatively, the wireless device 104 may simply respond to discovery requests from client device 110 regarding which objects 102 in the area have alternate visual presentations. Such a response typically includes the data about the objects 102. In some embodiments, the wireless device 104 may communicate with client device 110 using Wi-Fi®, LTE®, WiMax®, or another suitable wireless technology depending on the requirements and environmental factors of the particular embodiment that may affect wireless data communication.
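The broadcast discovery described above may be sketched as follows. The JSON message layout, field names, and function name here are illustrative assumptions only; the disclosure does not define a wire format for the broadcast from wireless device 104.

```python
import json

def parse_discovery_broadcast(payload: bytes) -> list[dict]:
    """Parse a hypothetical JSON discovery broadcast into object records.

    Each record identifies an object 102 that has alternate visual
    presentations, along with any data the broadcast carries about it.
    The message schema is an assumption for illustration.
    """
    message = json.loads(payload.decode("utf-8"))
    return [
        {"object_id": entry["id"], "data": entry.get("data", {})}
        for entry in message.get("objects", [])
    ]

# Example broadcast such as wireless device 104 might send:
broadcast = json.dumps({
    "objects": [
        {"id": "sign-17", "data": {"kind": "street-sign"}},
        {"id": "statue-3"},
    ]
}).encode("utf-8")

discovered = parse_discovery_broadcast(broadcast)
```

A client device 110 receiving such a message would then know which objects in the area support alternate visual presentations without issuing its own discovery requests.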
In some embodiments, a radio-frequency identification (RFID) tag 106 may be proximate to one or more objects 102 having alternate visual presentations. In one such embodiment, client device 110 may discover one or more objects 102 by broadcasting an encoded radio signal to interrogate RFID tags 106 in the area. An RFID tag 106 may respond to the encoded radio signal by transmitting an encoded message containing data about and identifying the object(s) 102, with which the RFID tag 106 is associated. Client device 110 may then receive the encoded message response from RFID tag 106, decode the message, and thereby discover the one or more objects 102 associated with RFID tag 106 and having alternate visual presentations.
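Decoding the RFID tag's response might look like the following sketch. The real air-interface encoding is tag- and protocol-specific; the pipe-delimited framing used here is purely an illustrative assumption.

```python
def decode_rfid_response(encoded: bytes) -> dict:
    """Decode a hypothetical RFID response of the form
    b"<tag_id>|<object_id>,<object_id>,...".

    Returns the tag identifier and the identifiers of the objects 102
    with which the tag is associated. The framing is an assumption,
    not an actual RFID air-interface encoding.
    """
    tag_id, _, object_field = encoded.decode("ascii").partition("|")
    return {
        "tag_id": tag_id,
        "object_ids": [o for o in object_field.split(",") if o],
    }

# Example response from an interrogated tag 106:
response = decode_rfid_response(b"tag-106|obj-102a,obj-102b")
```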
In some embodiments, a visual tag 108, such as a two-dimensional barcode, may be affixed to, proximate to, or otherwise associated with an object 102 that has alternate visual presentations. Client device 110 may decode the visual tag 108 to obtain a Uniform Resource Identifier (URI). Client device 110 may then submit, via a network interface to a network 114 such as the Internet, a request to the address of an alternate visual presentation server 116 for the object 102. In some embodiments, the address of the alternate visual presentation server 116 is at least partially, if not wholly, determined by the URI. The alternate visual presentation server 116 may then reply to client device 110 with a response containing data about object 102, the response transmitted via a network 114 such as the Internet.
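Deriving the server address from the decoded URI can be sketched with the standard library alone. The example URI and path layout are assumptions; the disclosure only requires that the server address be at least partially determined by the URI.

```python
from urllib.parse import urlparse

def server_address_from_tag_uri(uri: str) -> tuple[str, str]:
    """Given the URI decoded from a visual tag 108, return the alternate
    visual presentation server's network address and the request path
    identifying the object 102. The URI layout is illustrative."""
    parsed = urlparse(uri)
    return parsed.netloc, parsed.path

# Hypothetical URI decoded from a two-dimensional barcode:
host, path = server_address_from_tag_uri("https://avp.example.com/objects/102")
```

The client device 110 would then issue its request for the object's data to `host`, with `path` identifying the particular object 102.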
Examples, as described herein, may include, or may operate on, logic or a number of components, modules, or mechanisms. Modules are tangible entities capable of performing specified operations and may be configured or arranged in a certain manner. In an example, circuits may be arranged (e.g., internally or with respect to external entities such as other circuits) in a specified manner as a module. In an example, the whole or part of one or more computer systems (e.g., a standalone, client or server computer system) or one or more hardware processors may be configured by firmware or software (e.g., instructions, an application portion, or an application) as a module that operates to perform specified operations. In an example, the software may reside (1) on a non-transitory machine-readable medium or (2) in a transmission signal. In an example, the software, when executed by the underlying hardware of the module, causes the hardware to perform the specified operations.
Accordingly, the term “module” is understood to encompass a tangible entity, be that an entity that is physically constructed, specifically configured (e.g., hardwired), or temporarily (e.g., transitorily) configured (e.g., programmed) to operate in a specified manner or to perform part or all of any operation described herein. Considering examples in which modules are temporarily configured, each of the modules need not be instantiated at any one moment in time. For example, where the modules comprise a general-purpose hardware processor configured using software, the general-purpose hardware processor may be configured as respective different modules at different times. Software may accordingly configure a hardware processor, for example, to constitute a particular module at one instance of time and to constitute a different module at a different instance of time.
Machine (e.g., computer system) 200 may include a hardware processor 202 (e.g., a central processing unit (CPU), a graphics processing unit (GPU), a hardware processor core, or any combination thereof), a main memory 204, and a static memory 206, some or all of which may communicate with each other via a bus 208. The machine 200 may further include a display unit 210, an alphanumeric input device 212 (e.g., a keyboard), and a user interface (UI) navigation device 211 (e.g., a mouse). In an example, the display unit 210, input device 212, and UI navigation device 211 may be a touch screen display. The machine 200 may additionally include a storage device (e.g., drive unit) 216, a signal generation device 218 (e.g., a speaker), a network interface device 220, and one or more sensors 221, such as a global positioning system (GPS) sensor, compass, accelerometer, or other sensor. The machine 200 may include an output controller 228, such as a serial (e.g., universal serial bus (USB)), parallel, or other wired or wireless (e.g., infrared (IR), Wi-Fi®) connection to communicate.
The storage device 216 may include a machine-readable medium 222 on which is stored one or more sets of data structures or instructions 224 (e.g., software) embodying or utilized by any one or more of the techniques or functions described herein. The instructions 224 may also reside, completely or at least partially, within the main memory 204, within static memory 206, or within the hardware processor 202 during execution thereof by the machine 200. In an example, one or any combination of the hardware processor 202, the main memory 204, the static memory 206, or the storage device 216 may constitute machine-readable media.
While the machine-readable medium 222 is illustrated as a single medium, the term “machine-readable medium” may include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that are configured to store the one or more instructions 224.
The term “machine-readable medium” may include one or more of virtually any tangible medium that is capable of storing, encoding, or carrying instructions for execution by the machine 200 and that cause the machine 200 to perform any one or more of the techniques of the present disclosure, or that is capable of storing, encoding or carrying data structures used by or associated with such instructions. Machine-readable medium examples may include, but are not limited to, solid-state memories, and optical and magnetic media. Specific examples of machine-readable media may include: non-volatile memory, such as semiconductor memory devices (e.g., Electrically Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM)) and flash memory devices; magnetic disks, such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks.
The instructions 224 may further be transmitted or received over a communications network 226 using a transmission medium via the network interface device 220 utilizing any one of a number of transfer protocols (e.g., frame relay, internet protocol (IP), transmission control protocol (TCP), user datagram protocol (UDP), hypertext transfer protocol (HTTP), etc.). Example communication networks may include a local area network (LAN), a wide area network (WAN), a packet data network (e.g., the Internet), mobile telephone networks (e.g., cellular networks), Plain Old Telephone (POTS) networks, and wireless data networks (e.g., Institute of Electrical and Electronics Engineers (IEEE) 802.11 family of standards known as Wi-Fi®, IEEE 802.16 family of standards known as WiMax®), peer-to-peer (P2P) networks, among others. In an example, the network interface device 220 may include one or more physical jacks (e.g., Ethernet, coaxial, or phone jacks) or one or more antennas to connect to the communications network 226. In an example, the network interface device 220 may include a plurality of antennas to communicate wirelessly using at least one of single-input multiple-output (SIMO), multiple-input multiple-output (MIMO), or multiple-input single-output (MISO) techniques. The term “transmission medium” shall be taken to include any intangible medium that is capable of storing, encoding or carrying instructions for execution by the machine 200, and includes digital or analog communications signals or other intangible medium to facilitate communication of such software.
In some embodiments, when discovery 306 of objects 102 having alternate visual presentations is complete, the client device 302 may request 308 a list of attributes supported by an object 102 having alternate visual presentations from the alternate visual presentation proxy 304 for the object 102. In some embodiments, the request 308 may contain security credentials. The client device 302 and the alternate visual presentation proxy 304 may use the security credentials to encrypt and decrypt communications between one another. The alternate visual presentation proxy 304 may use the security credentials in determining the list of attributes to send to the client device 302. The alternate visual presentation proxy 304 may then respond by sending the determined list of attributes supported by object 102 to the client device 302. The determined list of attributes supported by the object may be a full set of attributes, a default set of attributes, a set of attributes determined based on the supplied security credentials, or another set of attributes based on the particular embodiment. The client device 302 then receives 310 the attribute list.
Possible attributes received by the client device may include the data identifying a physical location of the object 102, a size of the object 102, plain text associated with the object 102, an image of the object 102, a three dimensional model of the object 102, additional two or three dimensional alternate visual presentations of the object 102, and other attributes depending on the particular embodiment. In some embodiments, language attributes may also be associated with the object 102 and be received by the client device 302. Such language attributes may include data specifying a human language into which the text of object 102 has been translated. In some embodiments, attributes of the object 102 may exist that describe the contents of an alternate visual presentation of the object 102.
Client device 302 may store preferences 312 regarding which alternate visual presentation attributes are of interest to user 112. In some embodiments, upon receiving 310 the attribute list for the object 102 from the alternate visual presentation proxy 304, the client device 302 may use the combination of the attribute list received and the stored preferences 312 to determine automatically 314 the attributes from the received attribute list that are of interest to the user 112. In other embodiments, the user 112 may manually determine 314 and specify, through interaction with the client device 302, the attributes from the received attribute list that are of interest to the user 112.
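The automatic determination 314 described above amounts to intersecting the received attribute list with the stored preferences 312, as in this sketch (attribute names are illustrative):

```python
def attributes_of_interest(received: set[str], preferences: set[str]) -> set[str]:
    """Automatically determine the attributes of interest to user 112 as
    the overlap between the attribute list received for object 102 and
    the preferences 312 stored on the client device 302."""
    return received & preferences

# The object supports text, an image, and a 3-D model; the user's stored
# preferences cover text and audio, so only text is of interest:
chosen = attributes_of_interest(
    {"plain_text", "image", "model_3d"},
    {"plain_text", "audio"},
)
```

In the manual embodiment, the same received list would instead be presented to user 112 for selection through the client device's interface.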
Upon determining 314 the attributes of interest, the client device 302 may request 316, from the alternate visual presentation proxy 304, data for alternate visual presentations associated with the attributes of interest. The alternate visual presentation proxy 304 may then respond by sending data to the client device 302 for the alternate visual presentations associated with the attributes of interest. Upon receiving 318 the data for alternate visual presentations associated with the attributes of interest, the client device 302 generates and displays 320 the alternate visual presentations on the client device 110 based on the received data. In some embodiments, displaying 320 the alternate visual presentations may include rendering at least one image on a view of the environment. The rendered image in some embodiments may include a moving image, which may include an audio track. In other embodiments, the displaying 320 may include playing an audio file associated with the object 102.
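The display step 320 dispatches on the kind of presentation data received. The following sketch returns a label describing the action taken; in an actual client device these branches would drive a renderer or an audio player, and the presentation-type keys are assumptions.

```python
def present(presentation: dict) -> str:
    """Dispatch a received alternate presentation to an output path.

    The "type" key and its values are illustrative; the disclosure
    only distinguishes static images, moving images (optionally with
    an audio track), and audio files.
    """
    kind = presentation.get("type")
    if kind == "image":
        return "rendered image on view of environment"
    if kind == "video":
        return "rendered moving image (with audio track if present)"
    if kind == "audio":
        return "played audio file"
    return "unsupported presentation type"

outcome = present({"type": "audio", "uri": "obj-102.ogg"})
```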
An additional example embodiment of alternate visual presentations may include a mobile computing device that discovers, via communication with a second computing device, a set of data objects associated with objects present in an environment, each data object in the set supporting at least one alternate presentation of a respective object present in the environment. The mobile computing device may then choose a data object from the set of discovered data objects and a set of alternate presentations associated with the data object. The mobile computing device may then retrieve, via a network interface device, data representing the set of alternate presentations associated with the data object. The mobile computing device may then present the chosen alternate presentations in association with a respective object present in the environment.
Another additional example embodiment of alternate visual presentations may include having the second computing device proximate to a set of objects supporting alternate presentations in the environment, and the discovering may comprise receiving a message by the mobile computing device from the second computing device, the message including data associated with the data objects supporting alternate presentations.
Another example embodiment of alternate visual presentations may include a radio-frequency identification tag as the second computing device.
Another example embodiment of alternate visual presentations may include having the second computing device further configured to periodically broadcast the message.
Another example embodiment of alternate visual presentations may include having the mobile computing device configured to discover by decoding a tag proximate to an object to obtain a Uniform Resource Identifier encoded in the tag. The mobile computing device may then submit a request via a network interface device, the request based on the obtained Uniform Resource Identifier. The mobile computing device may then receive, via the network interface device and in response to the request, data related to each data object of the set of data objects, each data object in the set supporting at least one alternate presentation of a respective object present in the environment.
An additional example embodiment of alternate visual presentations may include having the mobile computing device configured to choose at least one alternate presentation included in the chosen data object by requesting, from the second computing device, a set of supported attributes of the chosen object, each attribute associated with at least one alternate presentation of the chosen object. The mobile computing device may then receive, from the second computing device, the set of supported attributes of the chosen object. The mobile computing device may then determine, based on the set of supported attributes of the chosen object received from the second computing device, a set of alternate presentations of the chosen object that are of interest to the mobile computing device. The mobile computing device may then request, from the second computing device, the set of alternate presentations of the chosen object that are of interest to the mobile computing device. The mobile computing device may then receive the set of alternate presentations of interest from the second computing device.
Another example embodiment of alternate visual presentations may include the mobile computing device sending security credentials as part of its request to the second computing device.
Another example embodiment of alternate visual presentations may include having the mobile computing device configured to present by displaying alternate visual presentations on a display of the mobile computing device.
An additional example embodiment of alternate visual presentations may include displaying alternate visual presentations on augmenting/mediating reality glasses.
Another example embodiment of alternate visual presentations may include having the alternate visual presentation proxy inform a mobile computing device of objects having alternate visual presentations in an environment by transmitting a message from the alternate visual presentation proxy to the mobile computing device. The message may identify a set of objects in an environment, each object in the set supporting at least one alternate presentation.
Another example embodiment of alternate visual presentations may include having the alternate visual presentation proxy proximate to each object identified in the message.
Another example embodiment of alternate visual presentations may include having the message periodically broadcast from the alternate visual presentation proxy.
Another example embodiment of alternate visual presentations may include the alternate visual presentation proxy receiving, from the mobile computing device, a request for a set of supported attributes of the chosen object, each attribute associated with at least one alternate presentation of the chosen object, the request including security credentials. The alternate visual presentation proxy may then send to the mobile computing device a subset of the set of supported attributes of the chosen object, the subset based on the security credentials sent by the mobile computing device. The alternate visual presentation proxy may then receive from the mobile computing device a request for a set of alternate presentations of the chosen object that are of interest to the mobile computing device, the request based on the subset of the set of supported attributes of the chosen object sent to the mobile computing device. The alternate visual presentation proxy may then send the set of alternate presentations of interest to the mobile computing device.
The above detailed description includes references to the accompanying drawings, which form a part of the detailed description. The drawings show, by way of illustration, specific embodiments in which the invention may be practiced. These embodiments are also referred to herein as “examples.” Such examples may include elements in addition to those shown or described. However, the present inventors also contemplate examples in which only those elements shown or described are provided. Moreover, the present inventors also contemplate examples using any combination or permutation of those elements shown or described (or one or more aspects thereof), either with respect to a particular example (or one or more aspects thereof), or with respect to other examples (or one or more aspects thereof) shown or described herein.
In this document, the terms “a” or “an” are used, as is common in patent documents, to include one or more than one, independent of any other instances or usages of “at least one” or “one or more.” In this document, the term “or” is used to refer to a nonexclusive or, such that “A or B” includes “A but not B,” “B but not A,” and “A and B,” unless otherwise indicated. In the appended claims, the terms “including” and “in which” are used as the plain-English equivalents of the respective terms “comprising” and “wherein.” Also, in the following claims, the terms “including” and “comprising” are open-ended; that is, a system, device, article, or process that includes elements in addition to those listed after such a term in a claim is still deemed to fall within the scope of that claim. Moreover, in the following claims, the terms “first,” “second,” and “third,” etc. are used merely as labels, and are not intended to impose numerical requirements on their objects.
The above description is intended to be illustrative, and not restrictive. For example, the above-described examples (or one or more aspects thereof) may be used in combination with each other. Other embodiments may be used, such as by one of ordinary skill in the art upon reviewing the above description.
The Abstract is provided to comply with 37 C.F.R. Section 1.72(b) requiring an abstract that will allow the reader to ascertain the nature and gist of the technical disclosure. It is submitted with the understanding that it will not be used to limit or interpret the scope or meaning of the claims. Also, in the above Detailed Description, various features may be grouped together to streamline the disclosure. This should not be interpreted as intending that an unclaimed disclosed feature is essential to any claim. Rather, inventive subject matter may lie in less than all features of a particular disclosed embodiment. Thus, the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separate embodiment. The scope of the invention should be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled.
Filing Document | Filing Date | Country | Kind | 371(c) Date
---|---|---|---|---
PCT/US11/67636 | 12/28/2011 | WO | 00 | 11/15/2013