SYSTEM AND METHOD FOR PROVIDING A HUMAN READABLE REPRESENTATION OF AN EVENT AND A HUMAN READABLE ACTION IN RESPONSE TO THAT EVENT

Information

  • Patent Application
  • Publication Number
    20150256385
  • Date Filed
    February 24, 2015
  • Date Published
    September 10, 2015
Abstract
Methods and systems for mapping events to actions among heterogeneous devices are disclosed. An exemplary method may include obtaining, at a computing device, at least one human-readable-event-descriptor from each of a plurality of event-emitting devices and obtaining at least one human-readable-action-descriptor from each of a plurality of action-effectuating devices. The human-readable-event-descriptors and the human-readable-action-descriptors are displayed on a display of the computing device, and user inputs are detected at the computing device that associate each of at least one of the human-readable-event-descriptors with at least one of the human-readable-action-descriptors to create a selected association between the human-readable-event-descriptors and the human-readable-action-descriptors. The selected association between the human-readable-event-descriptors and the human-readable-action-descriptors is stored in an event-action-association datastore on the computing device to enable one or more actions to be carried out when an event occurs.
Description
BACKGROUND

1. Field


The present disclosure relates generally to intercommunication between distributed communication devices, and more specifically to improving human interaction with communication devices.


2. Background


The Internet is a global system of interconnected computers and computer networks that use a standard Internet protocol suite (e.g., the Transmission Control Protocol (TCP) and Internet Protocol (IP)) to communicate with each other. The Internet of Things (IoT) is based on the idea that everyday objects, not just computers and computer networks, can be readable, recognizable, locatable, addressable, and controllable via an IoT communications network (e.g., an ad-hoc system or the Internet).


A number of market trends are driving development of IoT devices. For example, increasing energy costs are driving governments' strategic investments in smart grids and support for future consumption, such as for electric vehicles and public charging stations. Increasing health care costs and aging populations are driving development for remote/connected health care and fitness services. A technological revolution in the home is driving development for new “smart” services, including consolidation by service providers marketing ‘N’ play (e.g., data, voice, video, security, energy management, etc.) and expanding home networks. Buildings are getting smarter and more convenient as a means to reduce operational costs for enterprise facilities.


There are a number of key applications for the IoT. For example, in the area of smart grids and energy management, utility companies can optimize delivery of energy to homes and businesses while customers can better manage energy usage. In the area of home and building automation, smart homes and buildings can have centralized control over virtually any device or system in the home or office, from appliances to plug-in electric vehicle (PEV) security systems. In the field of asset tracking, enterprises, hospitals, factories, and other large organizations can accurately track the locations of high-value equipment, patients, vehicles, and so on. In the area of health and wellness, doctors can remotely monitor patients' health while people can track the progress of fitness routines.


Accordingly, in the near future, increasing development in IoT technologies will lead to numerous IoT devices surrounding a user at home, in vehicles, at work, and many other locations. As more and more devices become network-aware, problems that relate to configuring devices will therefore become more acute.


In particular, existing mechanisms to configure devices to access wireless networks tend to suffer from various drawbacks and limitations, including a complex user experience. For example, creating automated machine-to-machine (M2M) systems requires a detailed semantic definition or specification agreed to a priori by all actors. In order for a sensor to turn on a light without human intervention, for instance, a detailed control specification for the light would be required. More particularly, that specification would need to be agreed upon and implemented by all manufacturers of lights, and the sensor would need to implement a framework based on that standard to control the lights. These types of standards are very complex and take a long time to develop because they require support from a multitude of actors. In very complex Internet of Everything (IoE) systems (e.g., home automation), the challenge of getting all actors to agree will likely take years to overcome.


SUMMARY

The following presents a simplified summary relating to one or more aspects and/or embodiments disclosed herein. As such, the following summary should not be considered an extensive overview relating to all contemplated aspects and/or embodiments, nor should the following summary be regarded as identifying key or critical elements relating to all contemplated aspects and/or embodiments or as delineating the scope associated with any particular aspect and/or embodiment. Accordingly, the sole purpose of the following summary is to present certain concepts relating to one or more aspects and/or embodiments of the mechanisms disclosed herein in a simplified form to precede the detailed description presented below.


According to several aspects, the difficulty of enabling automated interactions between devices in M2M systems is addressed by enabling a user to program these interactions without requiring pre-defined semantics. More specifically, discoverable, human-readable descriptors, referred to herein as event descriptors, are added to event signals that propagate between devices of the network. The associated events are occurrences of notable actions in the system that are emitted from nodes in the network, and the device OEM and/or end user may determine which events to emit and what the human-readable descriptor for each event should be.
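
By way of illustration only, one possible form of such an event signal is sketched below. The JSON-style encoding and field names are assumptions made for this example rather than a format prescribed by the disclosure.

```python
import json

# Hypothetical event signal emitted by a motion sensor. The disclosure does not
# prescribe an encoding; JSON is used here purely for illustration.
event_signal = {
    "emitter_id": "sensor-12",                          # hypothetical device identifier
    "event_id": "motion_detected",                      # machine-level event name
    "event_descriptor": "Motion detected in hallway",   # discoverable, human-readable descriptor
}

# The human-readable descriptor travels with the signal, so a user (rather than
# a pre-agreed machine-to-machine standard) can decide what it should trigger.
print(json.dumps(event_signal, indent=2))
```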


According to one exemplary aspect, discoverable peer-to-peer (P2P) services may be used to allow mapping of events to actions on a computing device. More specifically, at least one human-readable-event-descriptor from each of a plurality of event-emitting devices may be received to obtain a plurality of human-readable-event-descriptors. Similarly, at least one human-readable-action-descriptor from each of a plurality of action-effectuating devices may be received to obtain a plurality of human-readable-action-descriptors. The human-readable-event-descriptors and the human-readable-action-descriptors are displayed on the computing device and user inputs are detected that associate each of at least one of the human-readable-event-descriptors with at least one of the human-readable-action-descriptors to create a selected association between the human-readable-event-descriptors and the human-readable-action-descriptors. The selected association between the human-readable-event-descriptors and the human-readable-action-descriptors is then stored.
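
Restated in outline form, the exemplary method may proceed as in the following self-contained sketch. The device objects, the pairing callback standing in for the user interface, and the list used as a datastore are hypothetical stand-ins; the disclosure leaves the peer-to-peer platform, display, and storage details to the implementation.

```python
# Minimal sketch of the exemplary mapping method, using hypothetical stand-ins.

class Device:
    def __init__(self, name, event_descriptors=(), action_descriptors=()):
        self.name = name
        self.event_descriptors = list(event_descriptors)    # human-readable event descriptors
        self.action_descriptors = list(action_descriptors)  # human-readable action descriptors

def map_events_to_actions(event_emitters, action_effectuators, choose_pairs, datastore):
    # Obtain at least one human-readable event descriptor from each event-emitting device.
    events = [e for dev in event_emitters for e in dev.event_descriptors]
    # Obtain at least one human-readable action descriptor from each action-effectuating device.
    actions = [a for dev in action_effectuators for a in dev.action_descriptors]
    # "Display" the descriptors and detect user inputs that associate them
    # (choose_pairs stands in for the user interface).
    associations = choose_pairs(events, actions)
    # Store the selected associations in the event-action-association datastore.
    datastore.extend(associations)
    return associations

# Illustrative usage:
doorbell = Device("doorbell", event_descriptors=["Doorbell pressed"])
lamp = Device("porch lamp", action_descriptors=["Flash porch light"])
datastore = []  # stand-in for the event-action-association datastore
map_events_to_actions([doorbell], [lamp],
                      lambda ev, ac: [(ev[0], ac[0])],  # a user picking one pairing
                      datastore)
print(datastore)  # [('Doorbell pressed', 'Flash porch light')]
```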


According to another aspect, an apparatus for mapping events to actions on a computing device is disclosed. The apparatus may include a wireless transceiver to communicate with a wireless network, a display, and a peer-to-peer platform. In addition, the apparatus includes an event-picker application that is configured to obtain, via the peer-to-peer platform, at least one human-readable-event-descriptor from each of a plurality of event-emitting devices to obtain a plurality of human-readable-event-descriptors. The event-picker application is also configured to obtain, via the peer-to-peer platform, at least one human-readable-action-descriptor from each of a plurality of action-effectuating devices to obtain a plurality of human-readable-action-descriptors and display the human-readable-event-descriptors and the human-readable-action-descriptors on the display of the computing device. User inputs that associate each of at least one of the human-readable-event-descriptors with at least one of the human-readable-action-descriptors are detected to create a selected association between the human-readable-event-descriptors and the human-readable-action-descriptors. The selected association between the human-readable-event-descriptors and the human-readable-action-descriptors is then stored.
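
A runtime counterpart of the stored association can be sketched as follows: when an event signal arrives, its human-readable event descriptor is matched against the datastore and the associated action is requested from the corresponding action-effectuating device. The datastore format and the trigger_action callback are assumptions carried over from the previous sketch.

```python
# Illustrative dispatch of stored event-action associations (format assumed).

def on_event_received(event_descriptor, datastore, trigger_action):
    for stored_event, stored_action in datastore:
        if stored_event == event_descriptor:
            trigger_action(stored_action)  # ask the action-effectuating device to act

# Example: with the datastore from the previous sketch,
# on_event_received("Doorbell pressed", datastore, print)
# would print "Flash porch light".
```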


Other objects and advantages associated with the aspects and embodiments disclosed herein will be apparent to those skilled in the art based on the accompanying drawings and detailed description.





BRIEF DESCRIPTION OF THE DRAWINGS

A more complete appreciation of aspects of the disclosure and many of the attendant advantages thereof will be readily obtained as the same becomes better understood by reference to the following detailed description when considered in connection with the accompanying drawings which are presented solely for illustration and not limitation of the disclosure, and in which:



FIG. 1A illustrates a high-level system architecture of a wireless communications system in accordance with an aspect of the disclosure.



FIG. 1B illustrates a high-level system architecture of a wireless communications system in accordance with another aspect of the disclosure.



FIG. 1C illustrates a high-level system architecture of a wireless communications system in accordance with an aspect of the disclosure.



FIG. 1D illustrates a high-level system architecture of a wireless communications system in accordance with an aspect of the disclosure.



FIG. 1E illustrates a high-level system architecture of a wireless communications system in accordance with an aspect of the disclosure.



FIG. 2A illustrates an exemplary Internet of Things (IoT) device in accordance with aspects of the disclosure, while FIG. 2B illustrates an exemplary passive IoT device in accordance with aspects of the disclosure.



FIG. 3 illustrates a communication device that includes logic configured to perform functionality in accordance with an aspect of the disclosure.



FIG. 4 illustrates an exemplary server according to various aspects of the disclosure.



FIG. 5 illustrates a wireless communication network that may support discoverable peer-to-peer (P2P) services, in accordance with one aspect of the disclosure.



FIG. 6 illustrates an exemplary environment in which discoverable P2P services may be used to establish a proximity-based distributed bus over which various devices may communicate, in accordance with one aspect of the disclosure.



FIG. 7 illustrates an exemplary message sequence in which discoverable P2P services may be used to establish a proximity-based distributed bus over which various devices may communicate, in accordance with one aspect of the disclosure.



FIG. 8 illustrates a system in which discoverable event descriptors and action descriptors may be used to enable automated interactions between devices to be programmed without requiring pre-defined semantics.



FIG. 9 depicts an example of different types of devices in a system in which discoverable event descriptors and action descriptors may be used to enable automated interactions between devices to be programmed.



FIG. 10 illustrates a method in which discoverable event descriptors and action descriptors may be used to enable automated interactions between devices to be programmed.



FIG. 11 illustrates a user interface that may be utilized in connection with associating human-readable-event-descriptors with at least one of the human-readable-action-descriptors.



FIG. 12 is a block diagram that may correspond to a device that uses discoverable event descriptors and action descriptors to communicate over a proximity-based distributed bus, in accordance with one aspect of the disclosure.





DETAILED DESCRIPTION

Various aspects are disclosed in the following description and related drawings to show specific examples relating to exemplary embodiments. Alternate embodiments will be apparent to those skilled in the pertinent art upon reading this disclosure, and may be constructed and practiced without departing from the scope or spirit of the disclosure. Additionally, well-known elements will not be described in detail or may be omitted so as to not obscure the relevant details of the aspects and embodiments disclosed herein.


The word “exemplary” is used herein to mean “serving as an example, instance, or illustration.” Any embodiment described herein as “exemplary” is not necessarily to be construed as preferred or advantageous over other embodiments. Likewise, the term “embodiments” does not require that all embodiments include the discussed feature, advantage or mode of operation.


The terminology used herein describes particular embodiments only and should not be construed to limit any embodiments disclosed herein. As used herein, the singular forms “a,” “an,” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises,” “comprising,” “includes,” and/or “including,” when used herein, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.


Further, many aspects are described in terms of sequences of actions to be performed by, for example, elements of a computing device. It will be recognized that various actions described herein can be performed by specific circuits (e.g., an application specific integrated circuit (ASIC)), by program instructions being executed by one or more processors, or by a combination of both. Additionally, these sequences of actions described herein can be considered to be embodied entirely within any form of computer readable storage medium having stored therein a corresponding set of computer instructions that upon execution would cause an associated processor to perform the functionality described herein. Thus, the various aspects of the disclosure may be embodied in a number of different forms, all of which have been contemplated to be within the scope of the claimed subject matter. In addition, for each of the aspects described herein, the corresponding form of any such aspects may be described herein as, for example, “logic configured to” perform the described action.


As used herein, the term “Internet of Things device” (or “IoT device”) may refer to any object (e.g., an appliance, a sensor, etc.) that has an addressable interface (e.g., an Internet protocol (IP) address, a Bluetooth identifier (ID), a near-field communication (NFC) ID, etc.) and can transmit information to one or more other devices over a wired or wireless connection. An IoT device may have a passive communication interface, such as a quick response (QR) code, a radio-frequency identification (RFID) tag, an NFC tag, or the like, or an active communication interface, such as a modem, a transceiver, a transmitter-receiver, or the like. An IoT device can have a particular set of attributes (e.g., a device state or status, such as whether the IoT device is on or off, open or closed, idle or active, available for task execution or busy, and so on, a cooling or heating function, an environmental monitoring or recording function, a light-emitting function, a sound-emitting function, etc.) that can be embedded in and/or controlled/monitored by a central processing unit (CPU), microprocessor, ASIC, or the like, and configured for connection to an IoT network such as a local ad-hoc network or the Internet. For example, IoT devices may include, but are not limited to, refrigerators, toasters, ovens, microwaves, freezers, dishwashers, dishes, hand tools, clothes washers, clothes dryers, furnaces, air conditioners, thermostats, televisions, light fixtures, vacuum cleaners, sprinklers, electricity meters, gas meters, etc., so long as the devices are equipped with an addressable communications interface for communicating with the IoT network. IoT devices may also include cell phones, desktop computers, laptop computers, tablet computers, personal digital assistants (PDAs), etc. Accordingly, the IoT network may be comprised of a combination of “legacy” Internet-accessible devices (e.g., laptop or desktop computers, cell phones, etc.) in addition to devices that do not typically have Internet-connectivity (e.g., dishwashers, etc.).



FIG. 1A illustrates a high-level system architecture of a wireless communications system 100A in accordance with an aspect of the disclosure. The wireless communications system 100A contains a plurality of IoT devices, which include a television 110, an outdoor air conditioning unit 112, a thermostat 114, a refrigerator 116, and a washer and dryer 118.


Referring to FIG. 1A, IoT devices 110-118 are configured to communicate with an access network (e.g., an access point 125) over a physical communications interface or layer, shown in FIG. 1A as air interface 108 and a direct wired connection 109. The air interface 108 can comply with a wireless Internet protocol (IP), such as IEEE 802.11. Although FIG. 1A illustrates IoT devices 110-116 communicating over the air interface 108 and IoT device 118 communicating over the direct wired connection 109, each IoT device may communicate over a wired or wireless connection, or both.


The Internet 175 includes a number of routing agents and processing agents (not shown in FIG. 1A for the sake of convenience). The Internet 175 is a global system of interconnected computers and computer networks that uses a standard Internet protocol suite (e.g., the Transmission Control Protocol (TCP) and IP) to communicate among disparate devices/networks. TCP/IP provides end-to-end connectivity specifying how data should be formatted, addressed, transmitted, routed and received at the destination.


In FIG. 1A, a computer 120, such as a desktop or personal computer (PC), is shown as connecting to the Internet 175 directly (e.g., over an Ethernet connection or Wi-Fi or 802.11-based network). The computer 120 may have a wired connection to the Internet 175, such as a direct connection to a modem or router, which, in an example, can correspond to the access point 125 itself (e.g., for a Wi-Fi router with both wired and wireless connectivity). Alternatively, rather than being connected to the access point 125 and the Internet 175 over a wired connection, the computer 120 may be connected to the access point 125 over air interface 108 or another wireless interface, and access the Internet 175 over the air interface 108. Although illustrated as a desktop computer, computer 120 may be a laptop computer, a tablet computer, a PDA, a smart phone, or the like. The computer 120 may be an IoT device and/or contain functionality to manage an IoT network/group, such as the network/group of IoT devices 110-118.


The access point 125 may be connected to the Internet 175 via, for example, an optical communication system, such as FiOS, a cable modem, a digital subscriber line (DSL) modem, or the like. The access point 125 may communicate with IoT devices 110-120 and the Internet 175 using the standard Internet protocols (e.g., TCP/IP).


Referring to FIG. 1A, an IoT server 170 is shown as connected to the Internet 175. The IoT server 170 can be implemented as a plurality of structurally separate servers, or alternately may correspond to a single server. In an aspect, the IoT server 170 is optional (as indicated by the dotted line), and the group of IoT devices 110-120 may be a peer-to-peer (P2P) network. In such a case, the IoT devices 110-120 can communicate with each other directly over the air interface 108 and/or the direct wired connection 109. Alternatively, or additionally, some or all of IoT devices 110-120 may be configured with a communication interface independent of air interface 108 and direct wired connection 109. For example, if the air interface 108 corresponds to a Wi-Fi interface, one or more of the IoT devices 110-120 may have Bluetooth or NFC interfaces for communicating directly with each other or other Bluetooth or NFC-enabled devices.


In a peer-to-peer network, service discovery schemes can multicast the presence of nodes, their capabilities, and group membership. The peer-to-peer devices can establish associations and subsequent interactions based on this information.
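
For illustration, a minimal UDP multicast announcement of this kind might look as follows. The multicast group address, port, and message fields are hypothetical and do not correspond to any particular discovery protocol; real P2P frameworks define their own announcement formats.

```python
import json
import socket

GROUP, PORT = "239.255.0.1", 5007   # hypothetical multicast group and port

# Hypothetical announcement of a node's presence, capabilities, and group membership.
announcement = json.dumps({
    "node": "thermostat-114",
    "capabilities": ["temperature", "setpoint"],
    "group": "home-iot",
}).encode("utf-8")

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)  # keep it on the local link
sock.sendto(announcement, (GROUP, PORT))
sock.close()
```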


In accordance with an aspect of the disclosure, FIG. 1B illustrates a high-level architecture of another wireless communications system 100B that contains a plurality of IoT devices. In general, the wireless communications system 100B shown in FIG. 1B may include various components that are the same and/or substantially similar to the wireless communications system 100A shown in FIG. 1A, which was described in greater detail above (e.g., various IoT devices, including a television 110, outdoor air conditioning unit 112, thermostat 114, refrigerator 116, and washer and dryer 118, that are configured to communicate with an access point 125 over an air interface 108 and/or a direct wired connection 109, a computer 120 that directly connects to the Internet 175 and/or connects to the Internet 175 through access point 125, and an IoT server 170 accessible via the Internet 175, etc.). As such, for brevity and ease of description, various details relating to certain components in the wireless communications system 100B shown in FIG. 1B may be omitted herein to the extent that the same or similar details have already been provided above in relation to the wireless communications system 100A illustrated in FIG. 1A.


Referring to FIG. 1B, the wireless communications system 100B may include a supervisor device 130, which may alternatively be referred to as an IoT manager 130 or IoT manager device 130. As such, where the following description uses the term “supervisor device” 130, those skilled in the art will appreciate that any references to an IoT manager, group owner, or similar terminology may refer to the supervisor device 130 or another physical or logical component that provides the same or substantially similar functionality.


In one embodiment, the supervisor device 130 may generally observe, monitor, control, or otherwise manage the various other components in the wireless communications system 100B. For example, the supervisor device 130 can communicate with an access network (e.g., access point 125) over air interface 108 and/or a direct wired connection 109 to monitor or manage attributes, activities, or other states associated with the various IoT devices 110-120 in the wireless communications system 100B. The supervisor device 130 may have a wired or wireless connection to the Internet 175 and optionally to the IoT server 170 (shown as a dotted line). The supervisor device 130 may obtain information from the Internet 175 and/or the IoT server 170 that can be used to further monitor or manage attributes, activities, or other states associated with the various IoT devices 110-120. The supervisor device 130 may be a standalone device or one of IoT devices 110-120, such as computer 120. The supervisor device 130 may be a physical device or a software application running on a physical device. The supervisor device 130 may include a user interface that can output information relating to the monitored attributes, activities, or other states associated with the IoT devices 110-120 and receive input information to control or otherwise manage the attributes, activities, or other states associated therewith. Accordingly, the supervisor device 130 may generally include various components and support various wired and wireless communication interfaces to observe, monitor, control, or otherwise manage the various components in the wireless communications system 100B.


The wireless communications system 100B shown in FIG. 1B may include one or more passive IoT devices 105 (in contrast to the active IoT devices 110-120) that can be coupled to or otherwise made part of the wireless communications system 100B. In general, the passive IoT devices 105 may include barcoded devices, Bluetooth devices, radio frequency (RF) devices, RFID tagged devices, infrared (IR) devices, NFC tagged devices, or any other suitable device that can provide its identifier and attributes to another device when queried over a short range interface. Active IoT devices may detect, store, communicate, act on, and/or the like, changes in attributes of passive IoT devices.


For example, passive IoT devices 105 may include a coffee cup and a container of orange juice that each have an RFID tag or barcode. A cabinet IoT device and the refrigerator IoT device 116 may each have an appropriate scanner or reader that can read the RFID tag or barcode to detect when the coffee cup and/or the container of orange juice passive IoT devices 105 have been added or removed. In response to the cabinet IoT device detecting the removal of the coffee cup passive IoT device 105 and the refrigerator IoT device 116 detecting the removal of the container of orange juice passive IoT device, the supervisor device 130 may receive one or more signals that relate to the activities detected at the cabinet IoT device and the refrigerator IoT device 116. The supervisor device 130 may then infer that a user is drinking orange juice from the coffee cup and/or likes to drink orange juice from a coffee cup.


Although the foregoing describes the passive IoT devices 105 as having some form of RFID tag or barcode communication interface, the passive IoT devices 105 may include one or more devices or other physical objects that do not have such communication capabilities. For example, certain IoT devices may have appropriate scanner or reader mechanisms that can detect shapes, sizes, colors, and/or other observable features associated with the passive IoT devices 105 to identify the passive IoT devices 105. In this manner, any suitable physical object may communicate its identity and attributes and become part of the wireless communication system 100B and be observed, monitored, controlled, or otherwise managed with the supervisor device 130. Further, passive IoT devices 105 may be coupled to or otherwise made part of the wireless communications system 100A in FIG. 1A and observed, monitored, controlled, or otherwise managed in a substantially similar manner.


In accordance with another aspect of the disclosure, FIG. 1C illustrates a high-level architecture of another wireless communications system 100C that contains a plurality of IoT devices. In general, the wireless communications system 100C shown in FIG. 1C may include various components that are the same and/or substantially similar to the wireless communications systems 100A and 100B shown in FIGS. 1A and 1B, respectively, which were described in greater detail above. As such, for brevity and ease of description, various details relating to certain components in the wireless communications system 100C shown in FIG. 1C may be omitted herein to the extent that the same or similar details have already been provided above in relation to the wireless communications systems 100A and 100B illustrated in FIGS. 1A and 1B, respectively.


The communications system 100C shown in FIG. 1C illustrates exemplary peer-to-peer communications between the IoT devices 110-118 and the supervisor device 130. As shown in FIG. 1C, the supervisor device 130 communicates with each of the IoT devices 110-118 over an IoT supervisor interface. Further, IoT devices 110 and 114, IoT devices 112, 114, and 116, and IoT devices 116 and 118, communicate directly with each other.


The IoT devices 110-118 make up an IoT group 160. An IoT device group 160 is a group of locally connected IoT devices, such as the IoT devices connected to a user's home network. Although not shown, multiple IoT device groups may be connected to and/or communicate with each other via an IoT SuperAgent 140 connected to the Internet 175. At a high level, the supervisor device 130 manages intra-group communications, while the IoT SuperAgent 140 can manage inter-group communications. Although shown as separate devices, the supervisor device 130 and the IoT SuperAgent 140 may be, or reside on, the same device (e.g., a standalone device or an IoT device, such as computer 120 in FIG. 1A). Alternatively, the IoT SuperAgent 140 may correspond to or include the functionality of the access point 125. As yet another alternative, the IoT SuperAgent 140 may correspond to or include the functionality of an IoT server, such as IoT server 170. The IoT SuperAgent 140 may encapsulate gateway functionality 145.


Each IoT device 110-118 can treat the supervisor device 130 as a peer and transmit attribute/schema updates to the supervisor device 130. When an IoT device needs to communicate with another IoT device, it can request the pointer to that IoT device from the supervisor device 130 and then communicate with the target IoT device as a peer. The IoT devices 110-118 communicate with each other over a peer-to-peer communication network using a common messaging protocol (CMP). As long as two IoT devices are CMP-enabled and connected over a common communication transport, they can communicate with each other. In the protocol stack, the CMP layer 154 is below the application layer 152 and above the transport layer 156 and the physical layer 158.
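
The pointer-request flow described above can be sketched as follows. The registry dictionary and the send routine are stand-ins assumed for this example; the common messaging protocol (CMP) transport details are not specified here.

```python
# Illustrative peer lookup: a device asks the supervisor for a "pointer" (address)
# to a target device, then communicates with the target directly as a peer.

supervisor_registry = {"thermostat": "192.168.1.20", "refrigerator": "192.168.1.30"}  # hypothetical

def request_pointer(supervisor, target_name):
    # Ask the supervisor device for the target's address.
    return supervisor.get(target_name)

def send_as_peer(address, message):
    # Stand-in for a CMP message carried over the common communication transport.
    print(f"CMP message to {address}: {message}")

peer_address = request_pointer(supervisor_registry, "refrigerator")
if peer_address is not None:
    send_as_peer(peer_address, {"attribute_update": {"door": "open"}})
```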


In accordance with another aspect of the disclosure, FIG. 1D illustrates a high-level architecture of another wireless communications system 100D that contains a plurality of IoT devices. In general, the wireless communications system 100D shown in FIG. 1D may include various components that are the same and/or substantially similar to the wireless communications systems 100A-C shown in FIGS. 1A-C, respectively, which were described in greater detail above. As such, for brevity and ease of description, various details relating to certain components in the wireless communications system 100D shown in FIG. 1D may be omitted herein to the extent that the same or similar details have already been provided above in relation to the wireless communications systems 100A-C illustrated in FIGS. 1A-C, respectively.


The Internet 175 is a “resource” that can be regulated using the concept of the IoT. However, the Internet 175 is just one example of a resource that is regulated, and any resource could be regulated using the concept of the IoT. Other resources that can be regulated include, but are not limited to, electricity, gas, storage, security, and the like. An IoT device may be connected to the resource and thereby regulate it, or the resource could be regulated over the Internet 175. FIG. 1D illustrates several resources 180, such as natural gas, gasoline, hot water, and electricity, wherein the resources 180 can be regulated in addition to and/or over the Internet 175.


IoT devices can communicate with each other to regulate their use of a resource 180. For example, IoT devices such as a toaster, a computer, and a hairdryer may communicate with each other over a Bluetooth communication interface to regulate their use of electricity (the resource 180). As another example, IoT devices such as a desktop computer, a telephone, and a tablet computer may communicate over a Wi-Fi communication interface to regulate their access to the Internet 175 (the resource 180). As yet another example, IoT devices such as a stove, a clothes dryer, and a water heater may communicate over a Wi-Fi communication interface to regulate their use of gas. Alternatively, or additionally, each IoT device may be connected to an IoT server, such as IoT server 170, which has logic to regulate their use of the resource 180 based on information received from the IoT devices.
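
As a toy illustration of such regulation, the sketch below grants or defers device requests against a shared budget. The budget value, device names, and simple greedy policy are assumptions made for this example; the disclosure does not prescribe a particular regulation algorithm.

```python
BUDGET_WATTS = 2000  # hypothetical limit on the shared resource (electricity)

# Hypothetical requested draws reported by cooperating IoT devices.
requests = [("toaster", 900), ("hairdryer", 1500), ("computer", 300)]

granted, deferred, used = [], [], 0
for device, watts in requests:
    if used + watts <= BUDGET_WATTS:
        granted.append(device)   # request fits within the remaining budget
        used += watts
    else:
        deferred.append(device)  # device waits or renegotiates later

print("granted:", granted)    # ['toaster', 'computer']
print("deferred:", deferred)  # ['hairdryer']
```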


In accordance with another aspect of the disclosure, FIG. 1E illustrates a high-level architecture of another wireless communications system 100E that contains a plurality of IoT devices. In general, the wireless communications system 100E shown in FIG. 1E may include various components that are the same and/or substantially similar to the wireless communications systems 100A-D shown in FIGS. 1A-D, respectively, which were described in greater detail above. As such, for brevity and ease of description, various details relating to certain components in the wireless communications system 100E shown in FIG. 1E may be omitted herein to the extent that the same or similar details have already been provided above in relation to the wireless communications systems 100A-D illustrated in FIGS. 1A-D, respectively.


The communications system 100E includes two IoT device groups 160A and 160B. Multiple IoT device groups may be connected to and/or communicate with each other via an IoT SuperAgent connected to the Internet 175. At a high level, an IoT SuperAgent may manage inter-group communications among IoT device groups. For example, in FIG. 1E, the IoT device group 160A includes IoT devices 116A, 122A, and 124A and an IoT SuperAgent 140A, while IoT device group 160B includes IoT devices 116B, 122B, and 124B and an IoT SuperAgent 140B. As such, the IoT SuperAgents 140A and 140B may connect to the Internet 175 and communicate with each other over the Internet 175 and/or communicate with each other directly to facilitate communication between the IoT device groups 160A and 160B. Furthermore, although FIG. 1E illustrates two IoT device groups 160A and 160B communicating with each other via IoT SuperAgents 140A and 140B, those skilled in the art will appreciate that any number of IoT device groups may suitably communicate with each other using IoT SuperAgents.



FIG. 2A illustrates a high-level example of an IoT device 200A in accordance with aspects of the disclosure. While external appearances and/or internal components can differ significantly among IoT devices, most IoT devices will have some sort of user interface, which may comprise a display and a means for user input. IoT devices without a user interface can be communicated with remotely over a wired or wireless network, such as air interface 108 in FIGS. 1A-B.


As shown in FIG. 2A, in an example configuration for the IoT device 200A, an external casing of IoT device 200A may be configured with a display 226, a power button 222, and two control buttons 224A and 224B, among other components, as is known in the art. The display 226 may be a touchscreen display, in which case the control buttons 224A and 224B may not be necessary. While not shown explicitly as part of IoT device 200A, the IoT device 200A may include one or more external antennas and/or one or more integrated antennas that are built into the external casing, including but not limited to Wi-Fi antennas, cellular antennas, satellite position system (SPS) antennas (e.g., global positioning system (GPS) antennas), and so on.


While internal components of IoT devices, such as IoT device 200A, can be embodied with different hardware configurations, a basic high-level configuration for internal hardware components is shown as platform 202 in FIG. 2A. The platform 202 can receive and execute software applications, data and/or commands transmitted over a network interface, such as air interface 108 in FIGS. 1A-B and/or a wired interface. The platform 202 can also independently execute locally stored applications. The platform 202 can include one or more transceivers 206 configured for wired and/or wireless communication (e.g., a Wi-Fi transceiver, a Bluetooth transceiver, a cellular transceiver, a satellite transceiver, a GPS or SPS receiver, etc.) operably coupled to one or more processors 208, such as a microcontroller, microprocessor, application specific integrated circuit, digital signal processor (DSP), programmable logic circuit, or other data processing device, which will be generally referred to as processor 208. The processor 208 can execute application programming instructions within a memory 212 of the IoT device. The memory 212 can include one or more of read-only memory (ROM), random-access memory (RAM), electrically erasable programmable ROM (EEPROM), flash cards, or any memory common to computer platforms. One or more input/output (I/O) interfaces 214 can be configured to allow the processor 208 to communicate with and control various I/O devices such as the display 226, power button 222, control buttons 224A and 224B as illustrated, and any other devices, such as sensors, actuators, relays, valves, switches, and the like associated with the IoT device 200A.


Accordingly, an aspect of the disclosure can include an IoT device (e.g., IoT device 200A) including the ability to perform the functions described herein. As will be appreciated by those skilled in the art, the various logic elements can be embodied in discrete elements, software modules executed on a processor (e.g., processor 208) or any combination of software and hardware to achieve the functionality disclosed herein. For example, transceiver 206, processor 208, memory 212, and I/O interface 214 may all be used cooperatively to load, store and execute the various functions disclosed herein and thus the logic to perform these functions may be distributed over various elements. Alternatively, the functionality could be incorporated into one discrete component. Therefore, the features of the IoT device 200A in FIG. 2A are to be considered merely illustrative and the disclosure is not limited to the illustrated features or arrangement.



FIG. 2B illustrates a high-level example of a passive IoT device 200B in accordance with aspects of the disclosure. In general, the passive IoT device 200B shown in FIG. 2B may include various components that are the same and/or substantially similar to the IoT device 200A shown in FIG. 2A, which was described in greater detail above. As such, for brevity and ease of description, various details relating to certain components in the passive IoT device 200B shown in FIG. 2B may be omitted herein to the extent that the same or similar details have already been provided above in relation to the IoT device 200A illustrated in FIG. 2A.


The passive IoT device 200B shown in FIG. 2B may generally differ from the IoT device 200A shown in FIG. 2A in that the passive IoT device 200B may not have a processor, internal memory, or certain other components. Instead, in one embodiment, the passive IoT device 200B may only include an I/O interface 214 or other suitable mechanism that allows the passive IoT device 200B to be observed, monitored, controlled, managed, or otherwise known within a controlled IoT network. For example, in one embodiment, the I/O interface 214 associated with the passive IoT device 200B may include a barcode, Bluetooth interface, radio frequency (RF) interface, RFID tag, IR interface, NFC interface, or any other suitable I/O interface that can provide an identifier and attributes associated with the passive IoT device 200B to another device when queried over a short range interface (e.g., an active IoT device, such as IoT device 200A, that can detect, store, communicate, act on, or otherwise process information relating to the attributes associated with the passive IoT device 200B).


Although the foregoing describes the passive IoT device 200B as having some form of RF, barcode, or other I/O interface 214, the passive IoT device 200B may comprise a device or other physical object that does not have such an I/O interface 214. For example, certain IoT devices may have appropriate scanner or reader mechanisms that can detect shapes, sizes, colors, and/or other observable features associated with the passive IoT device 200B to identify the passive IoT device 200B. In this manner, any suitable physical object may communicate its identity and attributes and be observed, monitored, controlled, or otherwise managed within a controlled IoT network.



FIG. 3 illustrates a communication device 300 that includes logic configured to perform functionality. The communication device 300 can correspond to any of the above-noted communication devices, including but not limited to IoT devices 110-120, IoT device 200A, any components coupled to the Internet 175 (e.g., the IoT server 170), and so on. Thus, communication device 300 can correspond to any electronic device that is configured to communicate with (or facilitate communication with) one or more other entities over the wireless communications systems 100A-B of FIGS. 1A-B.


Referring to FIG. 3, the communication device 300 includes logic configured to receive and/or transmit information 305. In an example, if the communication device 300 corresponds to a wireless communications device (e.g., IoT device 200A and/or passive IoT device 200B), the logic configured to receive and/or transmit information 305 can include a wireless communications interface (e.g., Bluetooth, Wi-Fi, Wi-Fi Direct, Long-Term Evolution (LTE) Direct, etc.) such as a wireless transceiver and associated hardware (e.g., an RF antenna, a MODEM, a modulator and/or demodulator, etc.). In another example, the logic configured to receive and/or transmit information 305 can correspond to a wired communications interface (e.g., a serial connection, a USB or Firewire connection, an Ethernet connection through which the Internet 175 can be accessed, etc.). Thus, if the communication device 300 corresponds to some type of network-based server (e.g., the IoT server 170), the logic configured to receive and/or transmit information 305 can correspond to an Ethernet card, in an example, that connects the network-based server to other communication entities via an Ethernet protocol. In a further example, the logic configured to receive and/or transmit information 305 can include sensory or measurement hardware by which the communication device 300 can monitor its local environment (e.g., an accelerometer, a temperature sensor, a light sensor, an antenna for monitoring local RF signals, etc.). The logic configured to receive and/or transmit information 305 can also include software that, when executed, permits the associated hardware of the logic configured to receive and/or transmit information 305 to perform its reception and/or transmission function(s). However, the logic configured to receive and/or transmit information 305 does not correspond to software alone, and the logic configured to receive and/or transmit information 305 relies at least in part upon hardware to achieve its functionality.


Referring to FIG. 3, the communication device 300 further includes logic configured to process information 310. In an example, the logic configured to process information 310 can include at least a processor. Example implementations of the type of processing that can be performed by the logic configured to process information 310 includes but is not limited to performing determinations, establishing connections, making selections between different information options, performing evaluations related to data, interacting with sensors coupled to the communication device 300 to perform measurement operations, converting information from one format to another (e.g., between different protocols such as .wmv to .avi, etc.), and so on. For example, the processor included in the logic configured to process information 310 can correspond to a general purpose processor, a DSP, an ASIC, a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices (e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration). The logic configured to process information 310 can also include software that, when executed, permits the associated hardware of the logic configured to process information 310 to perform its processing function(s). However, the logic configured to process information 310 does not correspond to software alone, and the logic configured to process information 310 relies at least in part upon hardware to achieve its functionality.


Referring to FIG. 3, the communication device 300 further includes logic configured to store information 315. In an example, the logic configured to store information 315 can include at least a non-transitory memory and associated hardware (e.g., a memory controller, etc.). For example, the non-transitory memory included in the logic configured to store information 315 can correspond to RAM, flash memory, ROM, erasable programmable ROM (EPROM), EEPROM, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. The logic configured to store information 315 can also include software that, when executed, permits the associated hardware of the logic configured to store information 315 to perform its storage function(s). However, the logic configured to store information 315 does not correspond to software alone, and the logic configured to store information 315 relies at least in part upon hardware to achieve its functionality.


Referring to FIG. 3, the communication device 300 further optionally includes logic configured to present information 320. In an example, the logic configured to present information 320 can include at least an output device and associated hardware. For example, the output device can include a video output device (e.g., a display screen, a port that can carry video information such as USB, HDMI, etc.), an audio output device (e.g., speakers, a port that can carry audio information such as a microphone jack, USB, HDMI, etc.), a vibration device and/or any other device by which information can be formatted for output or actually outputted by a user or operator of the communication device 300. For example, if the communication device 300 corresponds to the IoT device 200A as shown in FIG. 2A and/or the passive IoT device 200B as shown in FIG. 2B, the logic configured to present information 320 can include the display 226. In a further example, the logic configured to present information 320 can be omitted for certain communication devices, such as network communication devices that do not have a local user (e.g., network switches or routers, remote servers, etc.). The logic configured to present information 320 can also include software that, when executed, permits the associated hardware of the logic configured to present information 320 to perform its presentation function(s). However, the logic configured to present information 320 does not correspond to software alone, and the logic configured to present information 320 relies at least in part upon hardware to achieve its functionality.


Referring to FIG. 3, the communication device 300 further optionally includes logic configured to receive local user input 325. In an example, the logic configured to receive local user input 325 can include at least a user input device and associated hardware. For example, the user input device can include buttons, a touchscreen display, a keyboard, a camera, an audio input device (e.g., a microphone or a port that can carry audio information such as a microphone jack, etc.), and/or any other device by which information can be received from a user or operator of the communication device 300. For example, if the communication device 300 corresponds to the IoT device 200A as shown in FIG. 2A and/or the passive IoT device 200B as shown in FIG. 2B, the logic configured to receive local user input 325 can include the buttons 222, 224A, and 224B, the display 226 (if a touchscreen), etc. In a further example, the logic configured to receive local user input 325 can be omitted for certain communication devices, such as network communication devices that do not have a local user (e.g., network switches or routers, remote servers, etc.). The logic configured to receive local user input 325 can also include software that, when executed, permits the associated hardware of the logic configured to receive local user input 325 to perform its input reception function(s). However, the logic configured to receive local user input 325 does not correspond to software alone, and the logic configured to receive local user input 325 relies at least in part upon hardware to achieve its functionality.


Referring to FIG. 3, while the configured logics of 305 through 325 are shown as separate or distinct blocks in FIG. 3, it will be appreciated that the hardware and/or software by which the respective configured logic performs its functionality can overlap in part. For example, any software used to facilitate the functionality of the configured logics of 305 through 325 can be stored in the non-transitory memory associated with the logic configured to store information 315, such that the configured logics of 305 through 325 each performs their functionality (i.e., in this case, software execution) based in part upon the operation of software stored by the logic configured to store information 315. Likewise, hardware that is directly associated with one of the configured logics can be borrowed or used by other configured logics from time to time. For example, the processor of the logic configured to process information 310 can format data into an appropriate format before being transmitted by the logic configured to receive and/or transmit information 305, such that the logic configured to receive and/or transmit information 305 performs its functionality (i.e., in this case, transmission of data) based in part upon the operation of hardware (i.e., the processor) associated with the logic configured to process information 310.


Generally, unless stated otherwise explicitly, the phrase “logic configured to” as used throughout this disclosure is intended to invoke an aspect that is at least partially implemented with hardware, and is not intended to map to software-only implementations that are independent of hardware. Also, it will be appreciated that the configured logic or “logic configured to” in the various blocks are not limited to specific logic gates or elements, but generally refer to the ability to perform the functionality described herein (either via hardware or a combination of hardware and software). Thus, the configured logics or “logic configured to” as illustrated in the various blocks are not necessarily implemented as logic gates or logic elements despite sharing the word “logic.” Other interactions or cooperation between the logic in the various blocks will become clear to one of ordinary skill in the art from a review of the aspects described below in more detail.


The various embodiments may be implemented on any of a variety of commercially available server devices, such as server 400 illustrated in FIG. 4. In an example, the server 400 may correspond to one example configuration of the IoT server 170 described above. In FIG. 4, the server 400 includes a processor 401 coupled to volatile memory 402 and a large capacity nonvolatile memory, such as a disk drive 403. The server 400 may also include a floppy disc drive, compact disc (CD) or DVD disc drive 406 coupled to the processor 401. The server 400 may also include network access ports 404 coupled to the processor 401 for establishing data connections with a network 407, such as a local area network coupled to other broadcast system computers and servers or to the Internet. In context with FIG. 3, it will be appreciated that the server 400 of FIG. 4 illustrates one example implementation of the communication device 300, whereby the logic configured to transmit and/or receive information 305 corresponds to the network access ports 404 used by the server 400 to communicate with the network 407, the logic configured to process information 310 corresponds to the processor 401, and the logic configured to store information 315 corresponds to any combination of the volatile memory 402, the disk drive 403 and/or the disc drive 406. The optional logic configured to present information 320 and the optional logic configured to receive local user input 325 are not shown explicitly in FIG. 4 and may or may not be included therein. Thus, FIG. 4 helps to demonstrate that the communication device 300 may be implemented as a server, in addition to an IoT device implementation as in FIG. 2A.


In general, user equipment (UE) such as telephones, tablet computers, laptop and desktop computers, certain vehicles, etc., can be configured to connect with each other either locally (e.g., Bluetooth, local Wi-Fi, etc.) or remotely (e.g., via cellular networks, through the Internet, etc.). Furthermore, certain UEs may also support proximity-based peer-to-peer (P2P) communication using certain wireless networking technologies (e.g., Wi-Fi, Bluetooth, Wi-Fi Direct, etc.) that enable devices to make a one-to-one connection or simultaneously connect to a group that includes several devices in order to directly communicate with one another. To that end, FIG. 5 illustrates an exemplary wireless communication network or WAN 500 that may support discoverable P2P services. For example, in one embodiment, the wireless communication network 500 may comprise an LTE network or another suitable WAN that includes various base stations 510 and other network entities. For simplicity, only three base stations 510a, 510b and 510c, one network controller 530, and one Dynamic Host Configuration Protocol (DHCP) server 540 are shown in FIG. 5. A base station 510 may be an entity that communicates with devices 520 and may also be referred to as a Node B, an evolved Node B (eNB), an access point, etc. Each base station 510 may provide communication coverage for a particular geographic area and may support communication for the devices 520 located within the coverage area. To improve network capacity, the overall coverage area of a base station 510 may be partitioned into multiple (e.g., three) smaller areas, wherein each smaller area may be served by a respective base station 510. In 3GPP, the term “cell” can refer to a coverage area of a base station 510 and/or a base station subsystem 510 serving this coverage area, depending on the context in which the term is used. In 3GPP2, the term “sector” or “cell-sector” can refer to a coverage area of a base station 510 and/or a base station subsystem 510 serving this coverage area. For clarity, the 3GPP concept of “cell” may be used in the description herein.


A base station 510 may provide communication coverage for a macro cell, a pico cell, a femto cell, and/or other cell types. A macro cell may cover a relatively large geographic area (e.g., several kilometers in radius) and may allow unrestricted access by devices 520 with service subscription. A pico cell may cover a relatively small geographic area and may allow unrestricted access by devices 520 with service subscription. A femto cell may cover a relatively small geographic area (e.g., a home) and may allow restricted access by devices 520 having association with the femto cell (e.g., devices 520 in a Closed Subscriber Group (CSG)). In the example shown in FIG. 5, wireless network 500 includes macro base stations 510a, 510b and 510c for macro cells. Wireless network 500 may also include pico base stations 510 for pico cells and/or home base stations 510 for femto cells (not shown in FIG. 5).


Network controller 530 may couple to a set of base stations 510 and may provide coordination and control for these base stations 510. Network controller 530 may be a single network entity or a collection of network entities that can communicate with the base stations via a backhaul. The base stations may also communicate with one another, e.g., directly or indirectly via wireless or wireline backhaul. DHCP server 540 may support P2P communication, as described below. DHCP server 540 may be part of wireless network 500, external to wireless network 500, run via Internet Connection Sharing (ICS), or any suitable combination thereof. DHCP server 540 may be a separate entity (e.g., as shown in FIG. 5) or may be part of a base station 510, network controller 530, or some other entity. In any case, DHCP server 540 may be reachable by devices 520 desiring to communicate peer-to-peer.


Devices 520 may be dispersed throughout wireless network 500, and each device 520 may be stationary or mobile. A device 520 may also be referred to as a node, user equipment (UE), a station, a mobile station, a terminal, an access terminal, a subscriber unit, etc. A device 520 may be a cellular phone, a personal digital assistant (PDA), a wireless modem, a wireless communication device, a handheld device, a laptop computer, a cordless phone, a wireless local loop (WLL) station, a smart phone, a netbook, a smartbook, a tablet, etc. A device 520 may communicate with base stations 510 in the wireless network 500 and may further communicate peer-to-peer with other devices 520. For example, as shown in FIG. 5, devices 520a and 520b may communicate peer-to-peer, devices 520c and 520d may communicate peer-to-peer, devices 520e and 520f may communicate peer-to-peer, and devices 520g, 520h, and 520i may communicate peer-to-peer, while remaining devices 520 may communicate with base stations 510. As further shown in FIG. 5, devices 520a, 520d, 520f, and 520h may also communicate with base stations 510, e.g., when not engaged in P2P communication or possibly concurrent with P2P communication.


In the description herein, WAN communication may refer to communication between a device 520 and a base station 510 in wireless network 500, e.g., for a call with a remote entity such as another device 520. A WAN device is a device 520 that is interested or engaged in WAN communication. P2P communication refers to direct communication between two or more devices 520, without going through any base station 510. A P2P device is a device 520 that is interested or engaged in P2P communication, e.g., a device 520 that has traffic data for another device 520 within proximity of the P2P device. Two devices may be considered to be within proximity of one another, for example, if each device 520 can detect the other device 520. In general, a device 520 may communicate with another device 520 either directly for P2P communication or via at least one base station 510 for WAN communication.


In one embodiment, direct communication between P2P devices 520 may be organized into P2P groups. More particularly, a P2P group generally refers to a group of two or more devices 520 interested or engaged in P2P communication and a P2P link refers to a communication link for a P2P group. Furthermore, in one embodiment, a P2P group may include one device 520 designated a P2P group owner (or a P2P server) and one or more devices 520 designated P2P clients that are served by the P2P group owner. The P2P group owner may perform certain management functions such as exchanging signaling with a WAN, coordinating data transmission between the P2P group owner and P2P clients, etc. For example, as shown in FIG. 5, a first P2P group includes devices 520a and 520b under the coverage of base station 510a, a second P2P group includes devices 520c and 520d under the coverage of base station 510b, a third P2P group includes devices 520e and 520f under the coverage of different base stations 510b and 510c, and a fourth P2P group includes devices 520g, 520h and 520i under the coverage of base station 510c. Devices 520a, 520d, 520f, and 520h may be P2P group owners for their respective P2P groups and devices 520b, 520c, 520e, 520g, and 520i may be P2P clients in their respective P2P groups. The other devices 520 in FIG. 5 may be engaged in WAN communication.


In one embodiment, P2P communication may occur only within a P2P group and may further occur only between the P2P group owner and the P2P clients associated therewith. For example, if two P2P clients within the same P2P group (e.g., devices 520g and 520i) desire to exchange information, one of the P2P clients may send the information to the P2P group owner (e.g., device 520h) and the P2P group owner may then relay transmissions to the other P2P client. In one embodiment, a particular device 520 may belong to multiple P2P groups and may behave as either a P2P group owner or a P2P client in each P2P group. Furthermore, in one embodiment, a particular P2P client may belong to only one P2P group or belong to multiple P2P groups and communicate with P2P devices 520 in any of the multiple P2P groups at any particular moment. In general, communication may be facilitated via transmissions on the downlink and uplink. For WAN communication, the downlink (or forward link) refers to the communication link from base stations 510 to devices 520, and the uplink (or reverse link) refers to the communication link from devices 520 to base stations 510. For P2P communication, the P2P downlink refers to the communication link from P2P group owners to P2P clients and the P2P uplink refers to the communication link from P2P clients to P2P group owners. In certain embodiments, rather than using WAN technologies to communicate P2P, two or more devices may form smaller P2P groups and communicate P2P on a wireless local area network (WLAN) using technologies such as Wi-Fi, Bluetooth, or Wi-Fi Direct. For example, P2P communication using Wi-Fi, Bluetooth, Wi-Fi Direct, or other WLAN technologies may enable P2P communication between two or more mobile phones, game consoles, laptop computers, or other suitable communication entities.


According to one aspect of the disclosure, FIG. 6 illustrates an exemplary environment 600 in which discoverable P2P services may be used to establish a proximity-based distributed bus over which various devices 610, 630, 640 may communicate. For example, in one embodiment, communications between applications and the like on a single platform may be facilitated using an interprocess communication protocol (IPC) framework over the distributed bus 625. The distributed bus 625 may comprise a software bus used to enable application-to-application communications in a networked computing environment, where applications register with the distributed bus 625 to offer services to other applications and other applications query the distributed bus 625 for information about registered applications. Such a protocol may provide asynchronous notifications and remote procedure calls (RPCs) in which signal messages (e.g., notifications) may be point-to-point or broadcast, method call messages (e.g., RPCs) may be synchronous or asynchronous, and the distributed bus 625 (e.g., a "daemon" bus process) may handle message routing between the various devices 610, 630, 640.
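
By way of illustration only, the following minimal Python sketch models the register/query/route behavior of such a software bus. The class and method names (SoftwareBus, register_service, call_method, emit_signal) are assumptions made for this sketch and are not the API of the AllJoyn™ framework or of any other IPC framework.

    import time

    class SoftwareBus:
        """Toy in-process stand-in for a distributed software bus: it routes
        method calls (RPCs) and broadcast signals between registered applications."""

        def __init__(self):
            self._services = {}      # registered service name -> handler object
            self._subscribers = []   # callbacks that receive broadcast signals

        def register_service(self, name, handler):
            # An application registers with the bus to offer a service.
            self._services[name] = handler

        def query(self):
            # Other applications query the bus for information about registered services.
            return list(self._services)

        def call_method(self, service, method, *args):
            # Synchronous method call routed by the bus to the named service.
            return getattr(self._services[service], method)(*args)

        def subscribe(self, callback):
            self._subscribers.append(callback)

        def emit_signal(self, payload):
            # Asynchronous broadcast notification delivered to every subscriber.
            for callback in self._subscribers:
                callback(payload)

    class ClockService:
        def now(self):
            return time.time()

    bus = SoftwareBus()
    bus.register_service("org.example.clock", ClockService())
    bus.subscribe(lambda payload: print("signal:", payload))
    print(bus.query())                                  # ['org.example.clock']
    print(bus.call_method("org.example.clock", "now"))  # current timestamp
    bus.emit_signal({"notification": "started"})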


In one embodiment, the distributed bus 625 may be supported by a variety of transport protocols (e.g., Bluetooth, TCP/IP, Wi-Fi, CDMA, GPRS, UMTS, etc.). For example, according to one aspect, a first device 610 may include a distributed bus node 612 and one or more local endpoints 614, wherein the distributed bus node 612 may facilitate communications between local endpoints 614 associated with the first device 610 and local endpoints 634 and 644 associated with a second device 630 and a third device 640 through the distributed bus 625 (e.g., via distributed bus nodes 632 and 642 on the second device 630 and the third device 640). As will be described in further detail below with reference to FIG. 7, the distributed bus 625 may support symmetric multi-device network topologies and may provide robust operation in the presence of device drop-outs. As such, the virtual distributed bus 625, which may generally be independent from any underlying transport protocol (e.g., Bluetooth, TCP/IP, Wi-Fi, etc.), may allow various security options, from unsecured (e.g., open) to secured (e.g., authenticated and encrypted), wherein the security options can be used while facilitating spontaneous connections among the first device 610, the second device 630, and the third device 640 without intervention when the various devices 610, 630, 640 come into range or proximity to each other.


According to one aspect of the disclosure, FIG. 7 illustrates an exemplary message sequence 700 in which discoverable P2P services may be used to establish a proximity-based distributed bus over which a first device ("Device A") 710 and a second device ("Device B") 730 may communicate. Generally, Device A 710 may request to communicate with Device B 730, wherein Device A 710 may include a local endpoint 714 (e.g., a local application, service, etc.), which may make a request to communicate, in addition to a bus node 712 that may assist in facilitating such communications. Further, Device B 730 may include a local endpoint 734 with which the local endpoint 714 may be attempting to communicate, in addition to a bus node 732 that may assist in facilitating communications between the local endpoint 714 on the Device A 710 and the local endpoint 734 on Device B 730.


In one embodiment, the bus nodes 712 and 732 may perform a suitable discovery mechanism at message sequence step 754. For example, mechanisms for discovering connections supported by Bluetooth, TCP/IP, UNIX, or the like may be used. At message sequence step 756, the local endpoint 714 on Device A 710 may request to connect to an entity, service, endpoint, etc., available through bus node 712. In one embodiment, the request may include a request-and-response process between local endpoint 714 and bus node 712. At message sequence step 758, a distributed message bus may be formed to connect bus node 712 to bus node 732 and thereby establish a P2P connection between Device A 710 and Device B 730. In one embodiment, communications to form the distributed bus between the bus nodes 712 and 732 may be facilitated using a suitable proximity-based P2P protocol (e.g., the AllJoyn™ software framework designed to enable interoperability among connected products and software applications from different manufacturers to dynamically create proximal networks and facilitate proximal P2P communication). Alternatively, in one embodiment, a server (not shown) may facilitate the connection between the bus nodes 712 and 732. Furthermore, in one embodiment, a suitable authentication mechanism may be used prior to forming the connection between bus nodes 712 and 732 (e.g., SASL authentication in which a client may send an authentication command to initiate an authentication conversation). Still further, during message sequence step 758, bus nodes 712 and 732 may exchange information about other available endpoints (e.g., local endpoints 644 on Device C 640 in FIG. 6). In such embodiments, each local endpoint that a bus node maintains may be advertised to other bus nodes, wherein the advertisement may include unique endpoint names, transport types, connection parameters, or other suitable information.
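
The following Python sketch loosely mirrors message sequence steps 754 through 758 (discovery, a connect request, optional authentication, and an exchange of endpoint advertisements). The function name, dictionary fields, and authentication hook are illustrative assumptions made for this sketch, not the AllJoyn™ API.

    def establish_p2p_bus(node_a, node_b, authenticate=None):
        """Rough model of message sequence steps 754-758 between two bus nodes."""
        # Step 754: discovery over an available transport (Bluetooth, TCP/IP, etc.).
        if node_b["address"] not in node_a["discovered"]:
            node_a["discovered"].append(node_b["address"])

        # Step 756: a local endpoint asks its bus node to connect (request/response).
        request = {"from": node_a["name"], "to": node_b["name"]}
        response = {"accepted": True, "request": request}

        # Optional authentication (e.g., a SASL-style exchange) before joining.
        if authenticate is not None and not authenticate(node_a, node_b):
            return None

        # Step 758: exchange advertisements describing each side's local endpoints
        # (unique endpoint names, transport types, connection parameters).
        node_a["remote_endpoints"] = list(node_b["endpoints"])
        node_b["remote_endpoints"] = list(node_a["endpoints"])
        return response

    device_a = {"name": "A", "address": "10.0.0.2", "discovered": [],
                "endpoints": [{"name": "org.example.app", "transport": "tcp"}]}
    device_b = {"name": "B", "address": "10.0.0.3", "discovered": [],
                "endpoints": [{"name": "org.example.svc", "transport": "tcp"}]}
    print(establish_p2p_bus(device_a, device_b))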


In one embodiment, at message sequence step 760, bus node 712 and bus node 732 may use obtained information associated with the local endpoints 734 and 714, respectively, to create virtual endpoints that may represent the real obtained endpoints available through various bus nodes. In one embodiment, message routing on the bus node 712 may use real and virtual endpoints to deliver messages. Further, there may be one local virtual endpoint for every endpoint that exists on remote devices (e.g., Device A 710). Still further, such virtual endpoints may multiplex and/or de-multiplex messages sent over the distributed bus (e.g., a connection between bus node 712 and bus node 732). In one aspect, virtual endpoints may receive messages from the local bus node 712 or 732, just like real endpoints, and may forward messages over the distributed bus. As such, the virtual endpoints may forward messages to the local bus nodes 712 and 732 from the endpoint multiplexed distributed bus connection. Furthermore, in one embodiment, virtual endpoints that correspond to virtual endpoints on a remote device may be reconnected at any time to accommodate desired topologies of specific transport types. In such an aspect, UNIX-based virtual endpoints may be considered local and as such may not be considered candidates for reconnection. Further, TCP-based virtual endpoints may be optimized for one-hop routing (e.g., each bus node 712 and 732 may be directly connected to each other). Still further, Bluetooth-based virtual endpoints may be optimized for a single pico-net (e.g., one master and n slaves) in which the Bluetooth-based master may be the same bus node as a local master node.


At message sequence step 762, the bus node 712 and the bus node 732 may exchange bus state information to merge bus instances and enable communication over the distributed bus. For example, in one embodiment, the bus state information may include a well-known-to-unique endpoint name mapping, matching rules, routing group, or other suitable information. In one embodiment, the state information may be communicated between the bus node 712 and the bus node 732 instances using an interface with local endpoints 714 and 734 communicating using a distributed-bus-based local name. In another aspect, bus node 712 and bus node 732 may each maintain a local bus controller responsible for providing feedback to the distributed bus, wherein the bus controller may translate global methods, arguments, signals, and other information into the standards associated with the distributed bus. At message sequence step 764, the bus node 712 and the bus node 732 may communicate (e.g., broadcast) signals to inform the respective local endpoints 714 and 734 about any changes introduced during bus node connections, such as described above. In one embodiment, new and/or removed global and/or translated names may be indicated with name owner changed signals. Furthermore, global names that may be lost locally (e.g., due to name collisions) may be indicated with name lost signals. Still further, global names that are transferred due to name collisions may be indicated with name owner changed signals, and unique names that disappear if and/or when the bus node 712 and the bus node 732 become disconnected may be indicated with name owner changed signals.


As used above, well-known names may be used to uniquely describe local endpoints 714 and 734. In one embodiment, when communications occur between Device A 710 and Device B 730, different well-known name types may be used. For example, a device local name may exist only on the bus node 712 associated with Device A 710 to which the bus node 712 directly attaches. In another example, a global name may exist on all known bus nodes 712 and 732, where only one owner of the name may exist on all bus segments. In other words, when the bus node 712 and bus node 732 are joined and any collisions occur, one of the owners may lose the global name. In still another example, a translated name may be used when a client is connected to other bus nodes associated with a virtual bus. In such an aspect, the translated name may include the Globally Unique Identifier of the distributed bus prepended to the well-known name (e.g., a local endpoint 714 with well-known name "org.foo" connected to the distributed bus with Globally Unique Identifier "1234" may be seen as "G1234.org.foo").
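
As a concrete illustration of the translated-name convention in the preceding example, a helper that builds "G1234.org.foo" from the bus identifier "1234" and the well-known name "org.foo" might look like the following Python sketch; the function name is a hypothetical chosen for this illustration.

    def translate_name(bus_guid, well_known_name):
        # Qualify the well-known name with the Globally Unique Identifier of the
        # distributed bus so the endpoint can be addressed across bus segments.
        return "G{}.{}".format(bus_guid, well_known_name)

    assert translate_name("1234", "org.foo") == "G1234.org.foo"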


At message sequence step 766, the bus node 712 and the bus node 732 may communicate (e.g., broadcast) signals to inform other bus nodes of changes to endpoint bus topologies. Thereafter, traffic from local endpoint 714 may move through virtual endpoints to reach intended local endpoint 734 on Device B 730. Further, in operation, communications between local endpoint 714 and local endpoint 734 may use routing groups. In one aspect, routing groups may enable endpoints to receive signals, method calls, or other suitable information from a subset of endpoints. As such, a routing name may be determined by an application connected to a bus node 712 or 732. For example, a P2P application may use a unique, well-known routing group name built into the application. Further, bus nodes 712 and 732 may support registering and/or de-registering of local endpoints 714 and 734 with routing groups. In one embodiment, routing groups may have no persistence beyond a current bus instance. In another aspect, applications may register for their preferred routing groups each time they connect to the distributed bus. Still further, groups may be open (e.g., any endpoint can join) or closed (e.g., only the creator of the group can modify the group). Yet further, a bus node 712 or 732 may send signals to notify other remote bus nodes of additions, removals, or other changes to routing group endpoints. In such embodiments, the bus node 712 or 732 may send a routing group change signal to other group members whenever a member is added and/or removed from the group. Further, the bus node 712 or 732 may send a routing group change signal to endpoints that disconnect from the distributed bus without first removing themselves from the routing group.
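
For illustration, the routing-group behavior described above (open versus closed membership, and change signals sent when members are added or removed) could be sketched in Python as follows; the class and method names are assumptions for this sketch only.

    class RoutingGroup:
        """Sketch of a routing group: a named subset of endpoints that receive
        one another's signals and method calls."""

        def __init__(self, name, creator, open_group=True):
            self.name = name
            self.creator = creator
            self.open_group = open_group   # open: anyone may join; closed: creator only
            self.members = set()

        def register(self, endpoint, requested_by):
            self._check(requested_by)
            self.members.add(endpoint)
            self._notify("added", endpoint)

        def deregister(self, endpoint, requested_by):
            self._check(requested_by)
            self.members.discard(endpoint)
            self._notify("removed", endpoint)

        def _check(self, requested_by):
            if not self.open_group and requested_by != self.creator:
                raise PermissionError("closed group: only the creator may modify it")

        def _notify(self, change, endpoint):
            # Stand-in for the routing group change signal sent to current members.
            for member in self.members:
                print("to {}: {} {} in {}".format(member, endpoint, change, self.name))

    group = RoutingGroup("org.example.media", creator="endpoint-1")
    group.register("endpoint-1", requested_by="endpoint-1")
    group.register("endpoint-2", requested_by="endpoint-2")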


According to one aspect of the disclosure, FIG. 8 is a diagram depicting a system in which discoverable human-readable-event-descriptors and human-readable-action-descriptors may be used to enable automated interactions between devices in machine-to-machine (M2M) systems by enabling a user to program these interactions without requiring pre-defined semantics. As shown in FIG. 8, the system includes an event-emitting device 802, an action-effectuating device 804, and a control device 806 that are connected via a distributed bus 808. As shown, the event-emitting device 802 includes an event service 810 coupled to event metadata 812, and the event service 810 is shown sending an event signal 813 that is received by the control device 806. As depicted, the control device 806 includes a user interface 814 (e.g., that includes a touchscreen display), an event picker application 816, and an event-action-association datastore 818. The control device 806 is also shown sending an action method call 820 to the action-effectuating device 804. The action-effectuating device 804 in this embodiment includes an action service 822 and action metadata 824.


Although a single device may include the functionality of the event-emitting device 802, the action-effectuating device 804, and the control device 806, the depiction of the system in FIG. 8 is intended merely to facilitate a disclosure of the types of functions that communication devices may include—it is not intended to convey the variety of different types of devices that may include these functions. For example, a single device may simultaneously operate as the event-emitting device 802 and the control device 806; a single device may operate as the event-emitting device 802 and the action-effectuating device 804; a single device may operate as the action-effectuating device 804 and the control device 806; and a single device may operate as the event-emitting device 802, the control device 806, and the action-effectuating device 804.


As discussed above, prior approaches to creating automated machine-to-machine (M2M) systems required a detailed semantic definition or specification agreed to a priori by all actors. For example, in order for a carbon monoxide sensor to turn on a fan without human intervention, a detailed control specification for the fan would be required. More particularly, that specification would need to be agreed upon and implemented by all manufacturers of fans, and the sensor would need to implement a framework based on that standard to control the fans. These types of standards are very complex and take a long time to develop because they require support from a multitude of actors. In very complex Internet of Everything (IoE) systems (e.g., home automation), getting all actors to agree will likely take years.


According to several aspects, the difficulty with enabling automated interactions between devices in M2M systems is addressed by the system depicted in FIG. 8 by enabling a user to program these interactions without requiring pre-defined semantics. More specifically, as depicted in FIG. 8, discoverable, human readable descriptors, referred to herein as human-readable-event-descriptors, are included with the event metadata 812 that is stored in the event-emitting device 802, and in response to a particular detected event (e.g., detected via a sensor), a particular human-readable-event-descriptor is added to the event signal 813 that propagates between devices of the network. In many instances, detected events are notable occurrences happening in an environment of the system. Some examples of events that may be detected (e.g., by corresponding sensors) are a temperature exceeding or falling below a threshold, movement of a person, a light turning on, a laundry cycle completing, a door opening, coffee being ready to consume, etc. Event signals are emitted from event-emitting devices operating as nodes in the network, and an OEM of the event-emitting device 802 and/or a user may determine what events prompt the emission of event signals, and the human-readable-event-descriptor that is emitted for each event.


In general, event-emitting devices such as the event-emitting device 802, emit asynchronous signals that notify other nodes (e.g., the action-effectuating device 804 and the control device 806) when something of significance occurs in the network. The event-emitting device 802 simply lets the “world” know something happened, but it has no knowledge of which other nodes might be interested in the event or if/how they might take action.


What constitutes a significant occurrence and warrants the event signal 813 being sent may be left up to the device manufacturer to determine. For example, a smart light manufacturer may decide that the light emits an event signal every time it turns on. The manufacturer of a security camera with a motion detector might have the camera emit an event signal every time the camera is activated.


As discussed above, the event signal 813 contains a discoverable human-readable-event-descriptor. A smart light event, for example, might contain the human-readable-event-descriptor “Light Turned on” and a camera event may contain the human-readable-event-descriptor “Security Camera Activated.”
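
A minimal Python sketch of this arrangement follows, reusing the two descriptor strings from the examples above. The metadata layout and signal fields are assumptions for illustration; the emitter simply broadcasts the signal without knowing who, if anyone, will act on it.

    # Hypothetical layout for event metadata (cf. event metadata 812): the OEM maps
    # internal event keys to discoverable human-readable-event-descriptors.
    EVENT_METADATA = {
        "light_on": "Light Turned on",
        "camera_motion": "Security Camera Activated",
    }

    def emit_event_signal(event_key, send):
        """Build an event signal (cf. event signal 813) carrying the
        human-readable-event-descriptor and hand it to a transport callback."""
        signal = {"descriptor": EVENT_METADATA[event_key], "event_key": event_key}
        send(signal)

    # Here print() stands in for broadcasting the signal on the network.
    emit_event_signal("light_on", send=print)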


The benefits of utilizing event signals (as described herein) may be fully realized in connection with a corresponding action framework (that the action-effectuating device 804 is part of) and the event picker application 816 that allows humans to program actions that should be taken when an event occurs. As used herein, the term “action” refers to action method calls on an object or asynchronous signals in response to the event signal 813.


Another aspect includes adding discoverable, human-readable-action-descriptors to associated actions. As depicted in FIG. 8, the human-readable-action-descriptors may be stored in action metadata 824 of the action-effectuating device 804. As discussed further herein, human-readable-action-descriptors are added to the method call 820 on an object or asynchronous signal in response to an event. By making these events and actions discoverable, and by adding human-readable descriptors, it will be possible for humans to program (e.g., utilizing the event picker application 816) an action to be executed on a device B (e.g., the action-effectuating device 804) when an event is emitted from device A (e.g., the event-emitting device 802). There is no semantic definition required and no prior agreement between device manufacturers.


As discussed further herein, the event picker application 816 may discover all event-emitting devices (e.g., the event-emitting device 802) on the network that emit event signals and display the human-readable-event-descriptors in the user interface (UI) 814 (e.g., a graphical display in connection with a touch screen). The event picker application 816 may also discover all available actions in the network and display the human-readable-action-descriptors in the UI 814. As a consequence, the user is able to very simply map events to actions, for example, by creating a rule that dictates when event type X occurs, take action Y. Once programmed, that rule may be persisted in the form of event-action association data in the event-action association datastore 818, which may be accessed in response to receiving a human-readable-event-descriptor in an event signal. Although the event-action association data is depicted in the control device 806, in many instances the event-action association data is sent to one or more other devices (e.g., a router, personal computer, or other devices that remain in close proximity with event-emitting and action-effectuating devices).
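
The rule datastore itself can be very simple. The following Python sketch shows one hypothetical shape for the event-action-association data; the class name, its methods, and the example descriptors in the final lines are assumptions, not a prescribed format.

    class EventActionDatastore:
        """Sketch of the event-action-association datastore 818: persists rules of
        the form 'when event type X occurs, take action Y'."""

        def __init__(self):
            self._rules = {}   # event descriptor -> list of action descriptors

        def add_rule(self, event_descriptor, action_descriptor):
            self._rules.setdefault(event_descriptor, []).append(action_descriptor)

        def actions_for(self, event_descriptor):
            return self._rules.get(event_descriptor, [])

    store = EventActionDatastore()
    # Hypothetical rule created through the event picker UI.
    store.add_rule("Security Camera Activated", "Turn Porch Light On")
    print(store.actions_for("Security Camera Activated"))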


Referring next to FIG. 9, it is a diagram that depicts a union of distributed, heterogeneous devices in a system in which discoverable human-readable-event-descriptors and human-readable-action-descriptors may be used to enable automated interactions between the heterogeneous devices to be programmed. Here, "heterogeneous" devices include passive and active devices, devices of different manufacturing and vending origin, and devices that serve any purpose. The "union" of the heterogeneous devices refers generally to the interaction of any or all of the devices in a distributed manner using the peer-to-peer platform. While referring to FIG. 9, simultaneous reference is made to FIG. 10, which depicts a method in which human-readable-event-descriptors and human-readable-action-descriptors may be used to enable automated interactions between the heterogeneous devices.


As shown, FIG. 9 depicts a plurality of heterogeneous devices that include embedded event-emitting devices 902, embedded action-effectuating devices 904, an access point 905, a control device 906, and a sensing-actuating device 907. All of the depicted heterogeneous devices are connected directly or indirectly via a peer-to-peer network (e.g., via the AllJoyn™ software framework mentioned above). In the system depicted in FIG. 9, the control device 906 is utilized by a user to create rules that are carried out in response to detectable events occurring within the environment of the system. More specifically, the control device 906 includes an event discovery component 932 that operates to discover the human-readable-event-descriptors that the event-emitting devices in the system advertise, and the control device 906 includes an action discovery component 934 that operates to discover the human-readable-action-descriptors that the action-effectuating devices in the system advertise. As discussed above, the event picker application 816 enables a user of the control device 906 to map the discovered human-readable-event-descriptors to one or more of the human-readable-action-descriptors to create the rules that govern what actions are effectuated by an action execution component 936 in response to events occurring within the system.


The embedded event-emitting devices 902 and embedded action-effectuating devices 904 are communication devices that are embedded in other devices such as, for example, light switches, thermostats, air conditioners, vent dampers, smoke detectors, motion detectors, humidity detectors, microphones, speakers, and earphones, among others. Although not required, the event-emitting devices 902 may include sensors such as audio transducers, accelerometers, temperature sensors, humidity sensors, pressure sensors, etc. Alternatively, instead of a sensor detecting an event, the event-emitting devices 902 may receive an indication of an event from another source. For example, a switch changing state from off to on may provide a signal indicative of the state change. The action-effectuating devices 904 may include, for example, actuators such as motors, switches, linear motors, audio transducers (e.g., speakers), etc.


The access point 905 may be, for example, a router capable of operating a peer-to-peer platform 930, and in many instances the access point 905 includes memory to store association data (e.g., rules) associating particular events with particular actions in a human-readable format. The control device 906 may be a device (e.g., a smartphone, netbook, Ultrabook, laptop, desktop computer, etc.) that includes a display (not shown) and hardware, or hardware in connection with software, to provide the peer-to-peer platform and the event picker application 816. The sensing-actuating device 907 may be both an event-emitting device and an action-effectuating device, and it may be realized by a variety of devices that include both sensors and actuators. For example, an air conditioning unit may include both an event-emitting device associated with a temperature sensor and an action-effectuating device associated with a compressor and fan.


As depicted in FIG. 10, the event discovery component 932 of the control device 906 may first discover the event-emitting devices that are connected to the peer-to-peer network (Block 1002), and then present a listing of the event-emitting devices to the user (Block 1004). As a part of this discovery process, an event service (e.g., the event service 810) on each of the event-emitting devices introspects the corresponding event-emitting device to obtain human-readable-event-descriptors stored as event metadata (e.g., the event metadata 812) in a memory of the event-emitting devices, and the event discovery component 932 discovers the human-readable-event-descriptors when they are advertised by the event service. The event picker application 816 may then display a listing of the human-readable-event-descriptors for the user on a display of the control device 906 (Block 1006).
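
The discovery steps of Blocks 1002 through 1006 could be sketched in Python as follows. The introspect callback stands in for the event service's introspection interface; the device names and metadata shown are hypothetical.

    def discover_event_descriptors(devices, introspect):
        """Enumerate event-emitting devices, introspect each one for its event
        metadata, and collect the human-readable-event-descriptors to display."""
        listing = {}
        for device in devices:
            metadata = introspect(device)              # e.g., the event metadata 812
            listing[device] = list(metadata.values())  # descriptors for the UI listing
        return listing

    # Hypothetical introspection results for two devices on the peer-to-peer network.
    fake_metadata = {
        "crib-motion-1": {"motion": "BabyRolledOver"},
        "smart-light-2": {"on": "Light Turned on"},
    }
    print(discover_event_descriptors(fake_metadata.keys(), fake_metadata.get))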


For example, a company (“Company A”) may produce a specialized crib motion detector that includes an event service operating in connection with the peer-to-peer network. Company A may provide a human-readable-event-descriptor named BabyRolledOver stored in the device's event metadata (e.g., the event metadata 812) that is emitted in connection with an event signal (e.g., the event signal 813) every time motion in a baby's crib is detected. When the user installs the motion detector in baby's room and onboards the motion detector, the user may optionally provide “friendly names” for a location and for the baby's name such as: “Zoe's Room” and “Zoe.” These friendly names may be added as metadata that can be “discovered” during introspection of motion detector service interfaces.


As shown, action-effectuating devices are also discovered by the action discovery component 934 (Block 1008) and listed for the user (Block 1010), and an action service (e.g., action service 822) on each of the action-effectuating devices introspects the corresponding action-effectuating device to enable human-readable-action-descriptors to be discovered by the action discovery component 934 and displayed on the control device 906 (Block 1012). As an example, a company (“Company B”) may produce a specialized wireless-controlled lamp that includes an event service and peer-to-peer interface. Company B may provide a human-readable-action-descriptor named “BlinkThreeTimes” that is associated with an action that causes the lamp to blink red three times when invoked (e.g., using a method call). The user may install the lamp in the master bedroom, onboard the lamp to the peer-to-peer network, and provide friendly names for the location and the lamp such as: “Master Bedroom” and “Zoe Needs Attention.” These friendly names may be added to the action metadata that can be “discovered” during introspection of the lamp service interface.
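
Pulling the two manufacturer examples together, the discoverable metadata (including the optional friendly names supplied at onboarding) might be represented as follows. The field names in this Python sketch are assumptions; only the descriptor and friendly-name strings come from the examples above.

    crib_motion_detector = {
        "event_descriptors": ["BabyRolledOver"],
        "friendly_names": {"location": "Zoe's Room", "subject": "Zoe"},
    }

    wireless_lamp = {
        "action_descriptors": ["BlinkThreeTimes"],
        "friendly_names": {"location": "Master Bedroom", "label": "Zoe Needs Attention"},
    }

    # A control device may present the raw descriptors, the friendly names, or both
    # when listing available events and actions for the user.
    for descriptor in crib_motion_detector["event_descriptors"]:
        print(descriptor, "-", crib_motion_detector["friendly_names"]["location"])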


In an embodiment, such as the example shown in FIG. 11, the human-readable-event-descriptors may be displayed simultaneously with the human-readable-action-descriptors. A user may simply use a touch screen of the control device (or utilize a pointing device such as a mouse or other simple entry means) to associate the human-readable-event-descriptors with the human-readable-action-descriptors. The user inputs are detectable using constructs well known to one of skill in the art, enabling the user inputs to be converted to persistent rules that create an association between the human-readable-event-descriptors and the human-readable-action-descriptors that is stored in the event-action association datastore 818.


Continuing the examples above, the user may map the BabyRolledOver human-readable-event-descriptor to the BlinkThreeTimes human-readable-action-descriptor, and in response, a rule may be created that associates the detection of the baby's movement with the action that causes the lamp to blink three times. Although the rule may be created and stored on the control device 906, it may also be provided to other devices. For example, the event-action association rule may be provided to the access point 905 so that the access point 905 may initiate a method call to an action service (e.g., the action service 822) in response to receiving an associated event signal.
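
At runtime, whichever device holds the rule (the control device 906 or, as noted above, the access point 905) only needs to look up the received descriptor and issue the associated method call. A self-contained Python sketch, with the rule stored as a plain dictionary and print() standing in for the action method call, might look like this.

    # Persisted event-action rule from the example above.
    rules = {"BabyRolledOver": ["BlinkThreeTimes"]}

    def handle_event_signal(signal, rules, send_method_call):
        """Look up the human-readable-event-descriptor carried by the event signal
        and invoke every action associated with it."""
        for action_descriptor in rules.get(signal["descriptor"], []):
            send_method_call(action_descriptor)

    handle_event_signal({"descriptor": "BabyRolledOver"}, rules,
                        send_method_call=lambda action: print("invoke:", action))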


According to an aspect of the disclosure, FIG. 12 illustrates an exemplary communications device 1200 that may correspond to one or more devices that may use discoverable P2P services to communicate over a distributed bus, as described in further detail above (e.g., the event-emitting device 802, the action-effectuating device 804, the control device 806, etc.). In particular, as shown in FIG. 12, communications device 1200 may comprise a receiver 1202 that may receive a signal from, for instance, a receive antenna (not shown), perform typical actions on the received signal (e.g., filtering, amplifying, downconverting, etc.), and digitize the conditioned signal to obtain samples. The receiver 1202 can comprise a demodulator 1204 that can demodulate received symbols and provide them to a processor 1206 for channel estimation. The processor 1206 can be a processor dedicated to analyzing information received by the receiver 1202 and/or generating information for transmission by a transmitter 1220, a processor that controls one or more components of communications device 1200, and/or a processor that both analyzes information received by receiver 1202, generates information for transmission by transmitter 1220, and controls one or more components of communications device 1200.


Communications device 1200 can additionally comprise a memory 1208 that is operatively coupled to processor 1206 and that can store data to be transmitted, received data, information related to available channels, data associated with analyzed signal and/or interference strength, information related to an assigned channel, power, rate, or the like, and any other suitable information for estimating a channel and communicating via the channel. In one aspect, the memory 1208 is a non-transitory medium that includes processor-executable instructions such as local endpoint applications 1210, which may seek to communicate with endpoint applications, services, etc., on communications device 1200 and/or other communications devices 1200 associated through distributed bus module 1230. For example, the memory 1208 may include processor-executable instructions that effectuate aspects of the event picker application 816, the event discovery component 932, the action discovery component 934, and the action execution component 936. The memory may also include processor-executable instructions to carry out the event and action services described herein. Thus, many embodiments may be realized, at least in part, by hardware in connection with software. The memory 1208 can additionally store protocols and/or algorithms associated with estimating and/or utilizing a channel (e.g., performance based, capacity based, etc.).


It will be appreciated that the datastores described herein can be either volatile memory or nonvolatile memory, or can include both volatile and nonvolatile memory. By way of illustration, and not limitation, nonvolatile memory can include read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable PROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM), which acts as external cache memory. By way of illustration and not limitation, RAM is available in many forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), and direct Rambus RAM (DRRAM). Memory 1208 of the subject systems and methods may comprise, without being limited to, these and any other suitable types of memory.


Communications device 1200 can further include distributed bus module 1230 to facilitate establishing connections with other devices, such as another communications device 1200. Distributed bus module 1230 may further comprise bus node module 1232 to assist distributed bus module 1230 in managing communications between multiple devices. In one aspect, a bus node module 1232 may further include object naming module 1234 to assist bus node module 1232 in communicating with endpoint applications 1210 associated with other devices. Still further, distributed bus module 1230 may include endpoint module 1236 to assist local endpoints in communicating with other local endpoints and/or endpoints accessible on other devices through an established distributed bus. In another aspect, distributed bus module 1230 may facilitate inter-device and/or intra-device communications over multiple available transports (e.g., Bluetooth, UNIX domain-sockets, TCP/IP, Wi-Fi, etc.).


Additionally, in one embodiment, communications device 1200 may include a user interface 1240, which may include one or more input mechanisms 1242 for generating inputs into communications device 1200, and one or more output mechanisms 1244 for generating information for consumption by the user of the communications device 1200. For example, input mechanism 1242 may include a mechanism such as a key or keyboard, a mouse, a touch-screen display, a microphone, etc. Further, for example, output mechanism 1244 may include a display, an audio speaker, a haptic feedback mechanism, a Personal Area Network (PAN) transceiver, etc. In the illustrated aspects, the output mechanism 1244 may include an audio speaker operable to render media content in an audio form, a display operable to render media content in an image or video format and/or timed metadata in a textual or visual form, or other suitable output mechanisms. However, in one embodiment, a headless communications device 1200 may not include certain input mechanisms 1242 and/or output mechanisms 1244 because headless devices generally refer to computer systems or devices that have been configured to operate without a monitor, keyboard, and/or mouse.


Additional details that relate to the aspects and embodiments disclosed herein are described and illustrated in the Appendices attached hereto, the contents of which are expressly incorporated herein by reference in their entirety as part of this disclosure.


Those skilled in the art will appreciate that information and signals may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof.


Further, those skilled in the art will appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the aspects disclosed herein may be implemented as electronic hardware or hardware in connection with computer software. Blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or hardware in connection with software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted to depart from the scope of the present disclosure.


Although FIG. 12 depicts an embodiment that utilizes a processor in connection with memory and non-transitory processor executable instructions, the various illustrative logical blocks, modules, and circuits described in connection with the aspects disclosed herein may be implemented or performed with a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices (e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration).


The methods, sequences and/or algorithms described in connection with the aspects disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in RAM, flash memory, ROM, EPROM, EEPROM, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary non-transitory storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. The ASIC may reside in an IoT device. In the alternative, the processor and the storage medium may reside as discrete components in a user terminal.


While the foregoing disclosure shows illustrative aspects of the disclosure, it should be noted that various changes and modifications could be made herein without departing from the scope of the disclosure as defined by the appended claims. The functions, steps and/or actions of the method claims in accordance with the aspects of the disclosure described herein need not be performed in any particular order. Furthermore, although elements of the disclosure may be described or claimed in the singular, the plural is contemplated unless limitation to the singular is explicitly stated.

Claims
  • 1. A method for mapping events to actions on a computing device, the method comprising: obtaining, at the computing device, at least one human-readable-event-descriptor from each of a plurality of event-emitting devices to obtain a plurality of human-readable-event-descriptors; obtaining, at the computing device, at least one human-readable-action-descriptor from each of a plurality of action-effectuating devices to obtain a plurality of human-readable-action-descriptors; displaying the human-readable-event-descriptors and the human-readable-action-descriptors on a display of the computing device; detecting user inputs at the computing device that associate each of at least one of the human-readable-event-descriptors with at least one of the human-readable-action-descriptors to create a selected association between the human-readable-event-descriptors and the human-readable-action-descriptors; and storing the selected association between the human-readable-event-descriptors and the human-readable-action-descriptors in an event-action-association datastore on the computing device to enable one or more actions to be carried out when an event associated with the one or more actions occurs.
  • 2. The method of claim 1, including: discovering the event-emitting devices; presenting a list of the event-emitting devices on the display of the computing device; discovering the action-effectuating devices; and presenting a list of the action-effectuating devices on the display of the computing device.
  • 3. The method of claim 1, including: simultaneously displaying the human-readable-event-descriptors and the human-readable-action-descriptors on the display of the computing device; detecting user inputs to a touch screen display that indicate a user is touching the touch screen display; displaying a line connecting a particular human-readable-event-descriptor to a particular human-readable-action-descriptor to provide the user with a graphical display depicting the association between the particular human-readable-event-descriptor and the particular human-readable-action-descriptor.
  • 4. The method of claim 1, including: receiving an event signal from one of the event-emitting devices, the event signal indicating an event has occurred, and the event signal includes a human-readable-event-descriptor for the event; accessing, in response to receiving the human-readable-event-descriptor in the event signal, the event-action-association datastore to identify an action associated with the event; and sending an action method call to one or more action-effectuating devices to prompt the one or more action-effectuating devices to carry out the action associated with the event.
  • 5. The method of claim 1, including: coupling the computing device via a peer-to-peer network to the event-emitting devices and the action-effectuating devices.
  • 6. A system for interacting with heterogeneous devices in a communication network, the system comprising: an event-emitting device including: event metadata stored in nonvolatile memory, the event metadata including one or more human-readable-event-descriptors for each of one or more events that the event-emitting device is capable of detecting; an event service configured to detect an event and initiate an event signal that includes a particular human-readable-event-descriptor associated with the event; and a transmitter to transmit the particular human-readable-event-descriptor in connection with an event signal; an action-effectuating device including: action metadata stored in non-volatile memory, the action metadata including one or more human-readable-action-descriptors for each of one or more actions the action-effectuating device is capable of executing; a receiver to receive action method calls; and an action service configured to initiate the execution of an action in response to an action method call; a control device including: a transceiver to receive the event signal and to transmit the action method call; an event service discovery component to discover the one or more human-readable-event-descriptors; an action discovery component to discover the one or more human-readable-action-descriptors; an event picker component configured to prompt a user to generate event-action association data by associating the one or more human-readable-event-descriptors with selected ones of the human-readable-action-descriptors; and an action execution component to initiate the action method call by accessing the event-action association data to identify a particular action corresponding to the human-readable-event-descriptor sent with the event signal.
  • 7. The system of claim 6 including a plurality of event-emitting devices and a plurality of action-effectuating devices.
  • 8. The system of claim 7, wherein at least a portion of the event-emitting devices are embedded event emitters and at least a portion of the action-effectuating devices are embedded action-effectuating devices.
  • 9. The system of claim 7, wherein at least a portion of the event-emitting devices include a sensor to sense an occurrence of an event, wherein the sensors are selected from the group of sensors including audio transducers, accelerometers, temperature sensors, humidity sensors, pressure sensors.
  • 10. The system of claim 7, wherein at least a portion of the event-emitting devices include an actuator to effectuate an action, wherein the actuator is selected from the group consisting of motors, switches, linear-motors, audio-transducers.
  • 11. A non-transitory, tangible processor readable storage medium, encoded with processor readable instructions to map events to actions on a computing device, the method comprising: obtaining, at the computing device, at least one human-readable-event-descriptor from each of a plurality of event-emitting devices to obtain a plurality of human-readable-event-descriptors; obtaining, at the computing device, at least one human-readable-action-descriptor from each of a plurality of action-effectuating devices to obtain a plurality of human-readable-action-descriptors; displaying the human-readable-event-descriptors and the human-readable-action-descriptors on a display of the computing device; detecting user inputs at the computing device that associate each of at least one of the human-readable-event-descriptors with at least one of the human-readable-action-descriptors to create a selected association between the human-readable-event-descriptors and the human-readable-action-descriptors; and storing the selected association between the human-readable-event-descriptors and the human-readable-action-descriptors in an event-action-association datastore on the computing device to enable one or more actions to be carried out when an event associated with the one or more actions occurs.
  • 12. The non-transitory, tangible processor readable storage medium of claim 11, the method including: discovering the event-emitting devices; presenting a list of the event-emitting devices on the display of the computing device; discovering the action-effectuating devices; and presenting a list of the action-effectuating devices on the display of the computing device.
  • 13. The non-transitory, tangible processor readable storage medium of claim 11, the method including: simultaneously displaying the human-readable-event-descriptors and the human-readable-action-descriptors on the display of the computing device; detecting user inputs to a touch screen display that indicate a user is touching the touch screen display; displaying a line connecting a particular human-readable-event-descriptor to a particular human-readable-action-descriptor to provide the user with a graphical display depicting the association between the particular human-readable-event-descriptor and the particular human-readable-action-descriptor.
  • 14. The non-transitory, tangible processor readable storage medium of claim 11, the method including: receiving an event signal from one of the event-emitting devices, the event signal indicating an event has occurred, and the event signal includes a human-readable-event-descriptor for the event; accessing, in response to receiving the human-readable-event-descriptor in the event signal, the event-action-association datastore to identify an action associated with the event; and sending an action method call to one or more action-effectuating devices to prompt the one or more action-effectuating devices to carry out the action associated with the event.
  • 15. The non-transitory, tangible processor readable storage medium of claim 11, the method including: coupling the computing device via a peer-to-peer network to the event-emitting devices and the action-effectuating devices.
CLAIM OF PRIORITY UNDER 35 U.S.C. §119

The present Application for Patent claims priority to Provisional Application No. 61/948,010 entitled “System and Method for Providing a Human Readable Representation of an Event and a Human Readable Action in Response to that Event” filed Mar. 4, 2014, and assigned to the assignee hereof and hereby expressly incorporated by reference herein.

Provisional Applications (1)
Number Date Country
61948010 Mar 2014 US