Electronic devices such as smartphones with high-quality cameras are nearly ubiquitous, and the Internet of Things market continues to grow rapidly. However, the interactions between a smartphone and the world around it (e.g., electronic devices, objects that are not inherently electronic, and people) are still limited and rarely take advantage of the high-quality camera that comes standard in almost every smartphone. Neighbor Awareness Networking (NAN) has improved the ability to share information, particularly between electronic devices, by using peer-to-peer Wi-Fi, but the interactions using NAN are typically limited to file sharing, and after the file sharing is complete, the connection ends. Accordingly, any interaction between a person's smartphone and various other devices and/or people after the devices are connected by NAN is not part of the current NAN communication.
The “background” description provided herein is for the purpose of generally presenting the context of the disclosure. Work of the presently named inventors, to the extent it is described in this background section, as well as aspects of the description which may not otherwise qualify as prior art at the time of filing, are neither expressly nor impliedly admitted as prior art against the present invention.
According to aspects of the disclosed subject matter, an electronic device is configured to enhance a Neighbor Awareness Networking connection between the electronic device and an aware object using augmented reality. The electronic device includes a display and an imaging device. Additionally, the electronic device includes circuitry configured to receive information, including a recognition image, from each of one or more aware objects within a predetermined distance of the electronic device, recognize an aware object in a field of view of the imaging device based on the recognition image, establish a wireless connection with the recognized aware object, display an augmented reality composite image corresponding to the recognized aware object, and receive input at the display corresponding to interaction with the recognized aware object.
The foregoing paragraphs have been provided by way of general introduction, and are not intended to limit the scope of the following claims. The described embodiments, together with further advantages, will be best understood by reference to the following detailed description taken in conjunction with the accompanying drawings.
A more complete appreciation of the disclosure and many of the attendant advantages thereof will be readily obtained as the same becomes better understood by reference to the following detailed description when considered in connection with the accompanying drawings, wherein:
The description set forth below in connection with the appended drawings is intended as a description of various embodiments of the disclosed subject matter and is not necessarily intended to represent the only embodiment(s). In certain instances, the description includes specific details for the purpose of providing an understanding of the disclosed subject matter. However, it will be apparent to those skilled in the art that embodiments may be practiced without these specific details. In some instances, well-known structures and components may be shown in block diagram form in order to avoid obscuring the concepts of the disclosed subject matter.
Reference throughout the specification to “one embodiment” or “an embodiment” means that a particular feature, structure, characteristic, operation, or function described in connection with an embodiment is included in at least one embodiment of the disclosed subject matter. Thus, any appearance of the phrases “in one embodiment” or “in an embodiment” in the specification is not necessarily referring to the same embodiment. Further, the particular features, structures, characteristics, operations, or functions may be combined in any suitable manner in one or more embodiments. Further, it is intended that embodiments of the disclosed subject matter can and do cover modifications and variations of the described embodiments.
It must be noted that, as used in the specification and the appended claims, the singular forms “a,” “an,” and “the” include plural referents unless the context clearly dictates otherwise. That is, unless clearly specified otherwise, as used herein the words “a” and “an” and the like carry the meaning of “one or more.” Additionally, it is to be understood that terms such as “left,” “right,” “top,” “bottom,” “front,” “rear,” “side,” “height,” “length,” “width,” “upper,” “lower,” “interior,” “exterior,” “inner,” “outer,” and the like that may be used herein, merely describe points of reference and do not necessarily limit embodiments of the disclosed subject matter to any particular orientation or configuration. Furthermore, terms such as “first,” “second,” “third,” etc., merely identify one of a number of portions, components, points of reference, operations and/or functions as described herein, and likewise do not necessarily limit embodiments of the disclosed subject matter to any particular configuration or orientation.
Referring now to the drawings, wherein like reference numerals designate identical or corresponding parts throughout the several views:
The device 100 can include processing circuitry 105, a database 110, a camera 115, and a display 120. The device 100 can be a smartphone, a computer, a laptop, a tablet, a PDA, an augmented reality headset (e.g., helmet, visor, glasses, etc.), a virtual reality headset, a server, a drone, a partially or fully autonomous vehicle, and the like. Further, the aforementioned components can be electrically connected or in electrical or electronic communication with each other as diagrammatically represented by
Generally speaking, in one or more aspects of the disclosed subject matter, the device 100 including the processing circuitry 105, the database 110, the camera 115, and the display 120 can be implemented as various apparatuses that use augmented reality to enhance Neighbor Awareness Networking (NAN) (e.g., Wi-Fi Aware™, AirDrop, etc.). For example, the device 100 can recognize various objects and/or people using the camera 115. After the device 100 recognizes an object and/or person in a field of view of the camera 115, the device 100 can use augmented reality to enhance a user's view and/or interaction with the recognized object and/or person. In other words, the augmented reality aspect can include one or more computer-generated images and/or animations superimposed on a user's view of the real world, thus creating a composite image. Additionally, the user can interact with one or more of the augmented reality images and/or animations.
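By way of a non-limiting illustration, the following sketch shows one way the compositing described above could be performed. Python and OpenCV are used purely for illustration and are not required by the disclosed subject matter; the button label, placement, and blending weight are assumptions of the sketch.

```python
# Minimal sketch of an augmented reality composite image: a
# computer-generated element (here, a labeled button) is alpha-blended
# onto a camera frame representing the user's view of the real world.
import cv2
import numpy as np

def composite_frame(frame: np.ndarray, label: str, top_left=(40, 40),
                    size=(220, 60), alpha=0.6) -> np.ndarray:
    """Superimpose a semi-transparent labeled button on `frame`."""
    overlay = frame.copy()
    x, y = top_left
    w, h = size
    cv2.rectangle(overlay, (x, y), (x + w, y + h), (255, 140, 0), -1)
    cv2.putText(overlay, label, (x + 12, y + h - 20),
                cv2.FONT_HERSHEY_SIMPLEX, 0.8, (255, 255, 255), 2)
    # Blend the generated imagery with the real-world view.
    return cv2.addWeighted(overlay, alpha, frame, 1 - alpha, 0)

cap = cv2.VideoCapture(0)  # stand-in for the camera 115
ok, frame = cap.read()
if ok:
    cv2.imshow("display 120", composite_frame(frame, "Settings"))
    cv2.waitKey(0)
cap.release()
```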
It should be appreciated that the implementations of the enhanced NAN can use a software application as a platform for the enhanced view and/or interaction with the display 120 of the device 100. For example, in one aspect, the device 100 can be a smartphone and the software application can be a smartphone application. In other words, the software application can be programmed to recognize objects and/or people in a field of view of the camera 115 via image processing, display the augmented reality composite image, receive interactions (e.g., touch input at the display 120), and perform any other processing further described herein.
More specifically, the device 100 can be configured to connect with other devices via Neighbor Awareness Networking (NAN). Wi-Fi Aware™ and AirDrop are implementations of Neighbor Awareness Networking, and for the purposes of this description, NAN, Wi-Fi Aware™, and AirDrop can be used interchangeably. NAN can extend Wi-Fi capability with quick discovery, connection, and data exchange with other Wi-Fi devices—without the need for a traditional network infrastructure, internet connection, or GPS signal. Accordingly, NAN can provide rich here-and-now experiences by establishing independent, peer-to-peer Wi-Fi connections based on a user's immediate location and preferences. In one or more aspects of the disclosed subject matter, the device 100 can take advantage of NAN by receiving information from another device (e.g., an aware object), recognizing the aware object in a field of view of the camera 115, and wirelessly connecting with the aware object. After the device 100 connects with the aware object, the device 100 can use augmented reality to enhance the user's view of and/or interaction with the real world. The aware object can correspond to another electronic device configured to connect with the device 100 in one or more of a passive, an active, and a social use case. The aware object can be (or can be associated with) any other electronic device with NAN capability. Examples of aware objects are further described herein.
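NAN itself is exposed through platform-specific interfaces (for example, the android.net.wifi.aware package on Android), so no single code listing is authoritative. As a platform-neutral stand-in, the hedged sketch below imitates the publish/subscribe discovery pattern over UDP broadcast; the port number and message format are assumptions of the sketch, not part of the disclosure.

```python
# Illustrative stand-in for NAN-style publish/subscribe discovery.
# Real NAN is provided by platform APIs; UDP broadcast merely imitates
# the pattern of announcing and discovering nearby services.
import json
import socket

DISCOVERY_PORT = 50505  # hypothetical port chosen for this sketch

def publish_service(name: str) -> None:
    """Aware-object side: announce presence to nearby subscribers."""
    msg = json.dumps({"service": name}).encode()
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(msg, ("255.255.255.255", DISCOVERY_PORT))

def subscribe_once(timeout: float = 5.0):
    """Device-100 side: listen for one nearby service announcement."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        s.bind(("", DISCOVERY_PORT))
        s.settimeout(timeout)
        try:
            data, addr = s.recvfrom(4096)
            return {"peer": addr[0], **json.loads(data)}
        except socket.timeout:
            return None
```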
The processing circuitry 105 can carry out instructions to perform or cause performance of various functions, operations, steps, or processes of the device 100. In other words, the processor/processing circuitry 105 can be configured to receive output from and transmit instructions to the one or more other components in the device 100 to operate the device 100 to use augmented reality in neighbor awareness networking.
The database 110 can represent one or more databases connected to the camera 115 and the display 120 by the processing circuitry 105. The database 110 can correspond to a memory of the device 100. Alternatively, or additionally, the database 110 can be an internal and/or external database (e.g., communicably coupled to the device 100 via a network). The database 110 can store information from one or more aware objects as further described herein.
The camera 115 can represent one or more cameras connected to the database 110 and the display 120 by the processing circuitry 105. The camera 115 can correspond to an imaging device configured to capture images and/or videos. Additionally, the camera 115 can be configured to work with augmented reality such that a composite image is displayed on the display 120 so that one or more computer-generated images and/or animations can be superimposed on a user's view of the real world (e.g., the view the user sees on the display 120 via the camera 115).
The display 120 can represent one or more displays connected to the database 110 and the camera 115 via the processing circuitry 105. The display 120 can be configured to detect interaction (e.g., touch input, input from peripheral devices, etc.) with the one or more augmented reality images and/or animations, and the processing circuitry 105 can update the display based on the detected interaction. Alternatively, or additionally, the device 100 can be configured to receive voice commands to interact with the augmented reality features of the device 100.
Additionally, the search phase can begin when the device 100 is within a predetermined distance of the one or more aware objects 205, 210, and 215. For example, although other methods of identifying a distance between two electronic devices can be contemplated, the predetermined distance can be established using a geofence so that when the device 100 enters the geofenced area, the one or more aware objects in the geofenced area can transmit information to the device 100. Further, it should be appreciated that having three aware objects 205, 210, and 215 is exemplary and any number of aware objects can be used.
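As one possible realization of the predetermined-distance test, the sketch below treats the geofence as a great-circle radius around the aware object's advertised coordinates; the coordinates and the 50-meter radius are assumptions of the sketch.

```python
# Sketch of a geofence test: the device counts as "nearby" when the
# great-circle distance to the aware object is within a preset radius.
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    r = 6_371_000  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def inside_geofence(device_pos, object_pos, radius_m=50.0) -> bool:
    return haversine_m(*device_pos, *object_pos) <= radius_m

# Example: a device roughly 30 m inside an aware object's 50 m geofence.
print(inside_geofence((40.77940, -73.96320), (40.77967, -73.96320)))  # True
```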
Additionally, in an example where multiple aware objects (or people) are in the field of view 235 and recognized by comparing the aware objects and/or people in the field of view 235 with the corresponding recognition images stored in the database 110, the device 100 can be configured to receive a selection of one of the one or more recognized objects and/or people on the display 120. For example, the display 120 can be configured to receive touch input corresponding to a selection of one of the one or more aware objects and/or people displayed on the display 120 with which to establish a wireless connection for the communication phase. In other words, if all aware objects 205, 210, and 215 were in the field of view 235 of the camera 115 and recognized, a user can select the aware object 210 by touching the aware object 210 on the display 120, and in response to receiving the touch input, the device 100 can establish communication (e.g., NAN) with the aware object 210.
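One way to realize the selection just described is to hit-test the touch point against the on-screen bounding box of each recognized object; the identifiers and box coordinates below are illustrative assumptions.

```python
# Sketch of selecting one of several recognized aware objects by touch:
# the touch point is tested against each object's display bounding box.
from dataclasses import dataclass

@dataclass
class RecognizedObject:
    object_id: str
    bbox: tuple  # (x, y, width, height) in display coordinates

def select_by_touch(touch, recognized):
    """Return the recognized object whose box contains the touch point."""
    tx, ty = touch
    for obj in recognized:
        x, y, w, h = obj.bbox
        if x <= tx <= x + w and y <= ty <= y + h:
            return obj  # this object proceeds to the communication phase
    return None

objs = [RecognizedObject("aware-205", (10, 10, 100, 80)),
        RecognizedObject("aware-210", (150, 40, 120, 90))]
print(select_by_touch((200, 100), objs).object_id)  # -> aware-210
```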
The example illustrated in
The passive aware object 330 can also have a corresponding geofence 315 establishing a geofenced area surrounding the aware object 330. Having a geofenced area for the aware object 330 can prevent unnecessary communication between the aware object and other electronic devices that are not within a predetermined distance from the aware object 330. Accordingly, when the devices 320a, 320b, and 320c cross the geofence 315, thereby entering the geofenced area for the aware object 330, the aware object 330 can transmit information 335 to the devices 320a, 320b, and 320c via a network 325 (e.g., Wi-Fi, Bluetooth, cellular, etc.). The information 335 can include a recognition image and details including additional information about the aware object 330 (e.g., a docent video), for example. After the aware object 330 transmits the information to the devices 320a, 320b, and 320c, the devices 320a, 320b, and 320c can enter the recognition phase 305.
The social aware object 530 can be a person, for example. While the person here is referred to as the social aware object 530, the person herself is not an electronic device. Instead, the person can be associated with an electronic device, and the combination of the person and the associated electronic device can be referred to as the aware object (in this case, the social aware object): the person is the object recognized by the device 100 during the recognition phase, while the associated electronic device handles the electronic communication between the aware object and the device 100. In one example, the electronic device associated with the person can be another device 100 (e.g., the person's smartphone or tablet). Alternatively, or additionally, the associated electronic device can be any electronic device capable of transmitting, receiving, and storing information via a NAN (e.g., network 245 corresponding to peer-to-peer Wi-Fi).
Additionally, a share button can be displayed in augmented reality so that the device 520b and the social aware object 530 can exchange data including files, images, videos, and the like (e.g., another instance of communication 540). Further, in response to the connection being established, the device 520b can display an augmented reality animation that was received as part of the information 535. In other words, a user of the device 520b can interact and/or communicate with the social aware object 530, and thus the person that is part of the social aware object 530, via augmented reality displayed on the display 120 of the device 520b.
An advantage of establishing the communication between the device 100 and the active aware object 610 can include enhancing an interaction with the active aware object 610. The interaction can be enhanced in several ways. For example, in an embodiment where the device 100 is a smartphone, using the device 100 to control an active aware object such as a printer can make it easier to identify manufacturing information, change the printer settings, and control the printer's functionality from the display of the smartphone. Additionally, the interaction can be further enhanced by adding augmented reality, as the augmented reality buttons and/or animations can provide a better, more intuitive user experience. Many devices like printers include manufacturing and settings information, but that information is often hidden behind layers of other information displayed on the small screen of the printer, and can take several interactions (e.g., touch interactions, mouse clicks, etc.) to find. By contrast, the buttons displayed via augmented reality can include an information button and a settings button that navigate a user directly to the information they need. Additionally, printers like the active aware object 610 often include printing, faxing, copying, and scanning functionality. Another advantage of controlling the functionality of the active aware object 610 can include more intuitive selection of the desired functionality, even without any familiarity with the particular active aware object 610 being interacted with. For example, different brands of printers often have different menus and different methods for navigating and using the functionality of the device, whereas a user who is familiar only with the device 100 can operate any brand of printer, effectively unifying interaction across devices from different manufacturers. Accordingly, connecting the device 100 and the active aware object 610 via NAN can provide an enhanced user experience, and the user experience can be further enhanced with augmented reality.
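The unified-interaction idea can be pictured as the aware object advertising its available actions in its information payload and the device 100 rendering one augmented reality button per action, so that pressing a button sends the corresponding command back over the NAN connection. The action names and the transport stub in the sketch below are assumptions, not part of the disclosure.

```python
# Sketch of brand-agnostic control: the aware object advertises actions
# ("print", "scan", ...) and the device maps one AR button to each,
# independent of any manufacturer-specific menu structure.
from typing import Callable

def build_ar_buttons(advertised_actions: list,
                     send_command: Callable[[str], None]) -> dict:
    """Return a button-label -> handler map for the AR layer to render."""
    return {action.title(): (lambda a=action: send_command(a))
            for action in advertised_actions}

# Hypothetical payload from a printer-type active aware object.
actions = ["print", "scan", "copy", "fax"]
buttons = build_ar_buttons(actions, send_command=print)  # stub transport
buttons["Scan"]()  # the user taps the AR "Scan" button -> sends "scan"
```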
It should be appreciated that the discussion of the advantages with respect to the active aware object as a printer is exemplary, and the same advantages can readily be applied to any other electronic device that falls into the category of an active aware object. Further, similar advantages apply to passive aware objects and social aware objects in enhancing the user experience.
In S705, the device 100 can determine if any aware objects are nearby. It should be appreciated that aware objects can refer to passive, active, and social aware objects. For example, it can be determined that aware objects are nearby when the device 100 enters a geofenced area; similarly, the aware objects can determine that the device 100 is within a predetermined distance when the device 100 enters the geofenced area. In other words, it can be determined that aware objects are nearby when at least one aware object is within a predetermined distance from the device 100. In response to a determination that no aware objects are nearby, the process can continue checking for nearby aware objects. However, in response to a determination that there are aware objects nearby, the device 100 can receive information from the one or more aware objects that are within the predetermined distance of the device 100 in S710.
In S710, the device 100 can receive information from the one or more aware objects. For example, when the device 100 enters a geofenced area associated with one or more aware objects, each of the one or more aware objects in the geofenced area can transmit its own information to the device 100. The information can vary depending on the type of aware object (e.g., passive, active, social) as has been described herein, but can at least include a recognition image used to recognize the aware object. The device 100 can store the information received from the aware objects in memory (e.g., the database 110).
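The payload and its storage can be pictured as follows; the field names are assumptions of the sketch, and the database 110 is modeled as a simple in-memory map.

```python
# Sketch of the information received in S710 and its storage in the
# database 110, modeled here as an in-memory map keyed by object id.
from dataclasses import dataclass, field

@dataclass
class AwareObjectInfo:
    object_id: str
    kind: str                 # "passive", "active", or "social"
    recognition_image: bytes  # encoded image later used in S715
    details: dict = field(default_factory=dict)  # e.g., a docent video URL

database_110 = {}

def on_information_received(info: AwareObjectInfo) -> None:
    """S710: persist each nearby aware object's payload for recognition."""
    database_110[info.object_id] = info
```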
In S715, the device 100 can recognize an aware object in a field of view of the camera (e.g., the camera 115) of the device 100. For example, the image of the field of view of the camera 115 can be simultaneously displayed on the display 120 of the device 100. As has been described herein, when an aware object is in the field of view, the device 100 (via the processing circuitry 105) can recognize the aware object based on a match between the aware object in the field of view and the recognition image received in the information in S710.
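The disclosure does not fix a particular image-processing technique for the match; as one plausible realization, the sketch below compares ORB features between the camera frame and a stored recognition image and declares a match when enough features agree. The distance cutoff and match count are assumed thresholds.

```python
# Sketch of S715 via ORB feature matching, one of many possible
# recognition techniques; both thresholds below are assumptions.
import cv2

orb = cv2.ORB_create()
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def recognize(frame_gray, recognition_gray, min_matches=25) -> bool:
    """True when the stored recognition image appears in the frame."""
    _, frame_desc = orb.detectAndCompute(frame_gray, None)
    _, rec_desc = orb.detectAndCompute(recognition_gray, None)
    if frame_desc is None or rec_desc is None:
        return False
    matches = matcher.match(rec_desc, frame_desc)
    good = [m for m in matches if m.distance < 40]  # assumed cutoff
    return len(good) >= min_matches
```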
In S720, the device 100 can establish a wireless connection with the recognized aware object. For example, the device 100 can establish communication with an aware object that was recognized in S715. The communication can be based on NAN (e.g., via network 245 based on peer-to-peer Wi-Fi) so that the device 100 and the recognized aware object can interact as has been described herein.
In S725, in response to establishing the communication between the device 100 and the aware object in S720, the device 100 can display an augmented reality composite image corresponding to the recognized aware object. For example, the augmented reality can be based at least in part on the information received in S710. In one embodiment, the augmented reality displayed can include one or more buttons for further interaction with the aware object, an augmented reality animation, an identification bubble, and the like.
In S730, the device 100 can receive input at the display (e.g., the display 120) corresponding to interaction with the recognized aware object. For example, while other similar interactions can be contemplated in other embodiments, the interaction can correspond to touch input on a smartphone. The input received at the display can allow various interactions with the aware object that significantly improve the user experience. For example, a user can press one of the augmented reality buttons to view options to control the aware object via the device 100; if the aware object is a printer, the user can select a print functionality via one of the augmented reality buttons displayed on the device 100 based on the NAN connection between the device 100 and the aware object.
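Putting the steps together, the skeleton below walks one pass of S705 through S730. Every helper it calls (camera, database, NAN session, AR display) is a hypothetical stand-in for the sketches above or for platform services the disclosure leaves open, so the listing is a structural sketch rather than a definitive implementation.

```python
# Structural skeleton of one pass through S705-S730; all `device.*`
# helpers are hypothetical stand-ins for platform services.
def enhanced_nan_pass(device) -> None:
    nearby = device.find_nearby_aware_objects()         # S705 (geofence)
    if not nearby:
        return
    for info in device.receive_information(nearby):     # S710
        device.database.store(info)
    frame = device.camera.capture()
    recognized = device.recognize_in_frame(frame)       # S715 (matching)
    if recognized is None:
        return
    session = device.connect_nan(recognized)            # S720 (NAN link)
    device.display.show_composite(frame, recognized)    # S725 (AR image)
    touch = device.display.wait_for_touch()             # S730 (input)
    session.send(device.action_for_touch(touch, recognized))
```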
Accordingly, the user experience can be improved by the augmented reality features because they can be more intuitive, make the information and features of the aware device more accessible, and the like.
In the above description of
Next, a hardware description of an electronic device (e.g., the device 100) according to exemplary embodiments is described with reference to
Further, the claimed advancements may be provided as a utility application, background daemon, or component of an operating system, or combination thereof, executing in conjunction with CPU 800 and an operating system such as Microsoft Windows, UNIX, Solaris, LINUX, Apple MAC-OS and other systems known to those skilled in the art.
The hardware elements used to achieve the device 100 may be realized by various circuitry elements. Further, each of the functions of the above described embodiments may be implemented by circuitry, which includes one or more processing circuits. A processing circuit includes a particularly programmed processor, for example, processor (CPU) 800, as shown in
In the illustrated hardware configuration, the device 100 includes a CPU 800 that performs the processes described above.
Alternatively, or additionally, the CPU 800 may be implemented on an FPGA, ASIC, PLD or using discrete logic circuits, as one of ordinary skill in the art would recognize. Further, CPU 800 may be implemented as multiple processors cooperatively working in parallel to perform the instructions of the inventive processes described above.
The device 100 in the illustrated configuration further includes a network controller 806 for interfacing with a network, as well as a storage medium disk 804 in which the process data and instructions described above may be stored.
The device 100 further includes a display controller 808, such as a graphics card or graphics adaptor for interfacing with display 810, such as a monitor. A general purpose I/O interface 812 interfaces with a keyboard and/or mouse 814 as well as a touch screen panel 816 on or separate from display 810. The general purpose I/O interface 812 also connects to a variety of peripherals 818, including printers and scanners.
A sound controller 820 is also provided in the device 100 to interface with speakers/microphone 822, thereby providing sounds and/or music.
The general purpose storage controller 824 connects the storage medium disk 804 with communication bus 826, which may be an ISA, EISA, VESA, PCI, or similar bus, for interconnecting all of the components of the device 100. A description of the general features and functionality of the display 810, keyboard and/or mouse 814, as well as the display controller 808, storage controller 824, network controller 806, sound controller 820, and general purpose I/O interface 812 is omitted herein for brevity as these features are known.
The exemplary circuit elements described in the context of the present disclosure may be replaced with other elements and structured differently than the examples provided herein. Moreover, circuitry configured to perform features described herein may be implemented in multiple circuit units (e.g., chips), or the features may be combined in circuitry on a single chipset.
The functions and features described herein may also be executed by various distributed components of a system. For example, one or more processors may execute these system functions, wherein the processors are distributed across multiple components communicating in a network. The distributed components may include one or more client and server machines, which may share processing, in addition to various human interface and communication devices (e.g., display monitors, smart phones, tablets, personal digital assistants (PDAs)). The network may be a private network, such as a LAN or WAN, or may be a public network, such as the Internet. Input to the system may be received via direct user input or received remotely, either in real time or as a batch process. Additionally, some implementations may be performed on modules or hardware not identical to those described. Accordingly, other implementations are within the scope that may be claimed.
Having now described embodiments of the disclosed subject matter, it should be apparent to those skilled in the art that the foregoing is merely illustrative and not limiting, having been presented by way of example only. Thus, although particular configurations have been discussed herein, other configurations can also be employed. Numerous modifications and other embodiments (e.g., combinations, rearrangements, etc.) are enabled by the present disclosure and are within the scope of one of ordinary skill in the art and are contemplated as falling within the scope of the disclosed subject matter and any equivalents thereto. Features of the disclosed embodiments can be combined, rearranged, omitted, etc., within the scope of the invention to produce additional embodiments. Furthermore, certain features may sometimes be used to advantage without a corresponding use of other features. Accordingly, Applicant(s) intend(s) to embrace all such alternatives, modifications, equivalents, and variations that are within the spirit and scope of the disclosed subject matter.