This application relates generally to device configuration. More specifically, this application relates to a system of universal networked device communication and configuration.
The drawings, when considered in connection with the following description, are presented for the purpose of facilitating an understanding of the subject matter sought to be protected.
While the present disclosure is described with reference to several illustrative embodiments described herein, it should be clear that the present disclosure should not be limited to such embodiments. Therefore, the description of the embodiments provided herein is illustrative of the present disclosure and should not limit the scope of the disclosure as claimed. In addition, while the following description references particular devices, such as a limited embedded device and a master device, it will be appreciated that the disclosure may be used with other types of high-level systems such as computers, cloud computing, applications, and the like.
Briefly described, a device and a method are disclosed including a computer network coupling multiple ordinary network devices and master devices that communicate via a device-independent middle software layer. In some embodiments, the master devices are programmable by a user via a Universal Programming Application (UPA), or app, that is installed on the devices. Ordinary devices include sensors, home or industrial accessories, smart TVs, smart refrigerators, smart ovens, smart coffee makers, industrial or non-industrial machines or components, software applications, and generally any other device that is capable of connecting to a network to exchange data. A device may include events and services. Events are distinct changes in the status or configuration of a device and may include receipt or change of data, alarms, powering down or up, and the like, that may be multicast on the network or be reported to another device or a master device, on command or periodically. Services are functions a device may perform, such as measurement of a parameter, performance of a calculation, reporting of data, or other functions that may be requested by a master device or other network devices. In some embodiments, the set of events and services defined for a device may be available via a catalog service. A master device may listen for various events on the network and take appropriate action according to its own stored program. Events are outputs only from devices, while services may return a value to the issuing device. Events and services may be local to the master device or remote, offered by other devices across the network. A user may query a list of devices via a master device through the UPA interface.
With the ubiquity of users' internet access there is an ever increasing demand for expanded services, functionality, online storage, sharing capabilities, and the like. In addition to web-based services offered to human users, the Internet of Things (IOT) has been gaining in commercial popularity to reduce cost and increase functionality in an efficient, automated manner. Many simple devices and systems, such as refrigerators, television sets, premises alarm systems, thermostats, garage door openers, video cameras, lights, sprinkler systems, heating and cooling systems, various factory machinery, and the like may be controlled remotely and automatically by using IOT or other network-based data communications. Each of the many devices connected to a computer network, such as the Internet, has to be configured, programmed, set up, or otherwise be prepared to communicate with other devices, report events, and perform services. Configuring such devices and updating the configuration as needed is a challenge, especially given the myriad manufacturers, technologies, models, functionalities, and other variables that affect how these devices are configured or programmed. A uniform method of configuring or programming network-connected devices may be a significant advantage.
One embodiment of a computing device usable as one of client computing devices 112-118 is described in more detail below with respect to
Client devices 112-118 typically range widely in terms of capabilities and features. For example, a cell phone may have a numeric keypad and a few lines of monochrome LCD display on which only text may be displayed. In another example, a web-enabled client device may have a touch sensitive screen, a stylus, and several lines of color LCD display on which both text and graphics may be displayed.
A web-enabled client device may include a browser application that is configured to receive and to send web pages, web-based messages, or the like. The browser application may be configured to receive and display graphics, text, multimedia, or the like, employing virtually any web-based language, including wireless application protocol (WAP) messages, or the like. In one embodiment, the browser application may be enabled to employ one or more of Handheld Device Markup Language (HDML), Wireless Markup Language (WML), WMLScript, JavaScript, Standard Generalized Markup Language (SGML), HyperText Markup Language (HTML), eXtensible Markup Language (XML), or the like, to display and send information.
Client computing devices 112-118 also may include at least one other client application that is configured to receive content from another computing device, including, without limit, server computing devices 102-104. The client application may include a capability to provide and receive textual content, multimedia information, or the like. The client application may further provide information that identifies itself, including a type, capability, name, or the like. In one embodiment, client devices 112-118 may uniquely identify themselves through any of a variety of mechanisms, including a phone number, Mobile Identification Number (MIN), an electronic serial number (ESN), mobile device identifier, network address, such as IP (Internet Protocol) address, Media Access Control (MAC) layer identifier, or other identifier. The identifier may be provided in a message, or the like, sent to another computing device.
Client computing devices 112-118 may also be configured to communicate a message, such as through email, Short Message Service (SMS), Multimedia Message Service (MMS), instant messaging (IM), internet relay chat (IRC), Mardam-Bey's IRC (mIRC), Jabber, or the like, to another computing device. However, the present disclosure is not limited to these message protocols, and virtually any other message protocol may be employed.
Client devices 112-118 may further be configured to include a client application that enables the user to log into a user account that may be managed by another computing device. Such user account, for example, may be configured to enable the user to receive emails, send/receive IM messages, SMS messages, access selected web pages, download scripts, applications, or a variety of other content, or perform a variety of other actions over a network. However, managing of messages or otherwise accessing and/or downloading content, may also be performed without logging into the user account. Thus, a user of client devices 112-118 may employ any of a variety of client applications to access content, read web pages, receive/send messages, or the like. In one embodiment, for example, the user may employ a browser or other client application to access a web page hosted by a Web server implemented as server computing device 102. In one embodiment, messages received by client computing devices 112-118 may be saved in non-volatile memory, such as flash and/or PCM, across communication sessions and/or between power cycles of client computing devices 112-118.
Wireless network 110 may be configured to couple client devices 114-118 to network 106. Wireless network 110 may include any of a variety of wireless sub-networks that may further overlay stand-alone ad-hoc networks, and the like, to provide an infrastructure-oriented connection for client devices 114-118. Such sub-networks may include mesh networks, Wireless LAN (WLAN) networks, cellular networks, and the like. Wireless network 110 may further include an autonomous system of terminals, gateways, routers, and the like connected by wireless radio links, and the like. These connectors may be configured to move freely and randomly and organize themselves arbitrarily, such that the topology of wireless network 110 may change rapidly.
Wireless network 110 may further employ a plurality of access technologies including 2nd generation (2G) and 3rd generation (3G) radio access for cellular systems, WLAN, Wireless Router (WR) mesh, and the like. Access technologies such as 2G, 3G, and future access networks may enable wide area coverage for mobile devices, such as client devices 114-118, with various degrees of mobility. For example, wireless network 110 may enable a radio connection through a radio network access such as Global System for Mobile communication (GSM), General Packet Radio Services (GPRS), Enhanced Data GSM Environment (EDGE), WEDGE, Bluetooth, Bluetooth Low Energy (LE), High Speed Downlink Packet Access (HSDPA), Universal Mobile Telecommunications System (UMTS), Wi-Fi, Zigbee, Wideband Code Division Multiple Access (WCDMA), and the like. In essence, wireless network 110 may include virtually any wireless communication mechanism by which information may travel between client devices 114-118 and another computing device, network, and the like.
Network 106 is configured to couple one or more servers depicted in
In various embodiments, the arrangement of system 100 includes components that may be used in and constitute various networked architectures. Such architectures may include peer-to-peer, client-server, two-tier, three-tier, or other multi-tier (n-tier) architectures, MVC (Model-View-Controller), and MVP (Model-View-Presenter) architectures among others. Each of these are briefly described below.
Peer-to-peer architecture entails the use of protocols, such as P2PP (Peer To Peer Protocol), for collaborative, often symmetrical, and independent communication and data transfer between peer client computers without the use of a central server or related protocols.
A client-server architecture includes one or more servers and a number of clients that connect to and communicate with the servers via certain predetermined protocols. For example, a client computer connecting to a web server via a browser and related protocols, such as HTTP, may be an example of a client-server architecture. The client-server architecture may also be viewed as a 2-tier architecture.
Two-tier, three-tier, and generally, n-tier architectures are those which separate and isolate distinct functions from each other by the use of well-defined hardware and/or software boundaries. An example of the two-tier architecture is the client-server architecture, as already mentioned. In a 2-tier architecture, the presentation layer (or tier), which provides the user interface, is separated from the data layer (or tier), which provides data contents. Business logic, which processes the data, may be distributed between the two tiers.
A three-tier architecture goes one step further than the 2-tier architecture in that it also provides a logic tier between the presentation tier and the data tier to handle application data processing and logic. Business applications often fall within and are implemented in this tier.
MVC (Model-View-Controller) is a conceptually many-to-many architecture where the model, the view, and the controller entities may communicate directly with each other. This is in contrast with the 3-tier architecture in which only adjacent layers may communicate directly.
MVP (Model-View-Presenter) is a modification of the MVC model, in which the presenter entity is analogous to the middle layer of the 3-tier architecture and includes the applications and logic.
Communication links within LANs typically include twisted wire pair or coaxial cable, while communication links between networks may utilize analog telephone lines, full or fractional dedicated digital lines including T1, T2, T3, and T4, Integrated Services Digital Networks (ISDNs), Digital Subscriber Lines (DSLs), wireless links including satellite links, or other communications links known to those skilled in the art. Furthermore, remote computers and other related electronic devices could be remotely connected to either LANs or WANs via a modem and temporary telephone link. Network 106 may include any communication method by which information may travel between computing devices. Additionally, communication media typically may enable transmission of computer-readable instructions, data structures, program modules, or other types of content, virtually without limit. By way of example, communication media includes wired media such as twisted pair, coaxial cable, fiber optics, wave guides, and other wired media and wireless media such as acoustic, RF, infrared, and other wireless media.
Illustrative Computing Device Configuration
With continued reference to
Optical storage device 202 may include optical drives for using optical media, such as CD (Compact Disc), DVD (Digital Video Disc), and the like. Optical storage device 202 may provide an inexpensive way of storing information for archival and/or distribution purposes.
Central Processing Unit (CPU) 204 may be the main processor for software program execution in computing device 200. CPU 204 may represent one or more processing units that obtain software instructions from memory module 206 and execute such instructions to carry out computations and/or transfer data between various sources and destinations of data, such as hard disk 232, I/O processor 220, display interface 214, input devices 218, non-volatile memory 224, and the like.
Memory module 206 may include RAM (Random Access Memory), ROM (Read Only Memory), and other storage means, mapped to one addressable memory space. Memory module 206 illustrates one of many types of computer storage media for storage of information such as computer readable instructions, data structures, program modules or other data. Memory module 206 may store a basic input/output system (BIOS) for controlling low-level operation of computing device 200. Memory module 206 may also store OS 208 for controlling the general operation of computing device 200. It will be appreciated that OS 208 may include a general-purpose operating system such as a version of UNIX, or LINUX™, or a specialized client-side and/or mobile communication operating system such as Windows Mobile™, Android®, or the Symbian® operating system. OS 208 may, in turn, include or interface with a Java virtual machine (JVM) module that enables control of hardware components and/or operating system operations via Java application programs.
Memory module 206 may further include one or more distinct areas (by address space and/or other means), which can be utilized by computing device 200 to store, among other things, applications and/or other data. For example, one area of memory module 206 may be set aside and employed to store information that describes various capabilities of computing device 200, a device identifier, and the like. Such identification information may then be provided to another device based on any of a variety of events, including being sent as part of a header during a communication, sent upon request, or the like. One common software application is a browser program that is generally used to send/receive information to/from a web server. In one embodiment, the browser application is enabled to employ Handheld Device Markup Language (HDML), Wireless Markup Language (WML), WMLScript, JavaScript, Standard Generalized Markup Language (SGML), HyperText Markup Language (HTML), eXtensible Markup Language (XML), and the like, to display and send a message. However, any of a variety of other web based languages may also be employed. In one embodiment, using the browser application, a user may view an article or other content on a web page with one or more highlighted portions as target objects.
Display interface 214 may be coupled with a display unit (not shown), such as liquid crystal display (LCD), gas plasma, light emitting diode (LED), or any other type of display unit that may be used with computing device 200. Display units coupled with display interface 214 may also include a touch sensitive screen arranged to receive input from an object such as a stylus or a digit from a human hand. Display interface 214 may further include an interface for other visual status indicators, such as Light Emitting Diodes (LEDs), light arrays, and the like. Display interface 214 may include both hardware and software components. For example, display interface 214 may include a graphic accelerator for rendering graphic-intensive outputs on the display unit. In one embodiment, display interface 214 may include software and/or firmware components that work in conjunction with CPU 204 to render graphic output on the display unit.
Audio interface 216 is arranged to produce and receive audio signals such as the sound of a human voice. For example, audio interface 216 may be coupled to a speaker and microphone (not shown) to enable communication with a human operator, such as spoken commands, and/or generate an audio acknowledgement for some action.
Input devices 218 may include a variety of device types arranged to receive input from a user, such as a keyboard, a keypad, a mouse, a touchpad, a touch-screen (described with respect to display interface 214), a multi-touch screen, a microphone for spoken command input (described with respect to audio interface 216), and the like.
I/O processor 220 is generally employed to handle transactions and communications with peripheral devices such as mass storage, network, input devices, display, and the like, which couple computing device 200 with the external world. In small, low power computing devices, such as some mobile devices, functions of the I/O processor 220 may be integrated with CPU 204 to reduce hardware cost and complexity. In one embodiment, I/O processor 220 may be the primary software interface with all other devices and/or hardware interfaces, such as optical storage 202, hard disk 232, interfaces 226-228, display interface 214, audio interface 216, and input devices 218.
An electrical bus 222 internal to computing device 200 may be used to couple various other hardware components, such as CPU 204, memory module 206, I/O processor 220, and the like, to each other for transferring data, instructions, status, and other similar information.
Non-volatile memory 224 may include memory built into computing device 200, or portable storage media, such as USB drives that may include PCM arrays, flash memory including NOR and NAND flash, pluggable hard drives, and the like. In one embodiment, a portable storage medium may behave similarly to a disk drive. In another embodiment, a portable storage medium may present an interface different than a disk drive, for example, a read-only interface used for loading/supplying data and/or software.
Various other interfaces 226-228 may include other electrical and/or optical interfaces for connecting to various hardware peripheral devices and networks, such as IEEE 1394 also known as FireWire, Universal Serial Bus (USB), Small Computer System Interface (SCSI), parallel printer interface, Universal Synchronous Asynchronous Receiver Transmitter (USART), Video Graphics Array (VGA), Super VGA (SVGA), and the like.
Network Interface Card (NIC) 230 may include circuitry for coupling computing device 200 to one or more networks, and is generally constructed for use with one or more communication protocols and technologies including, but not limited to, Global System for Mobile communication (GSM), code division multiple access (CDMA), time division multiple access (TDMA), user datagram protocol (UDP), transmission control protocol/Internet protocol (TCP/IP), SMS, general packet radio service (GPRS), WAP, ultra wide band (UWB), IEEE 802.16 Worldwide Interoperability for Microwave Access (WiMax), SIP/RTP, Bluetooth, Wi-Fi, Zigbee, UMTS, HSDPA, WCDMA, WEDGE, or any of a variety of other wired and/or wireless communication protocols.
Hard disk 232 is generally used as a mass storage device for computing device 200. In one embodiment, hard disk 232 may be a ferromagnetic stack of one or more disks forming a disk drive embedded in or coupled to computing device 200. In another embodiment, hard drive 232 may be implemented as a solid-state device configured to behave as a disk drive, such as a flash-based hard drive. In yet another embodiment, hard drive 232 may be a remote storage accessible over network interface 230 or another interface 226, but acting as a local hard drive. Those skilled in the art will appreciate that other technologies and configurations may be used to present a hard drive interface and functionality to computing device 200 without departing from the spirit of the present disclosure.
Power supply 234 provides power to computing device 200. A rechargeable or non-rechargeable battery may be used to provide power. The power may also be provided by an external power source, such as an AC adapter or a powered docking cradle that supplements and/or recharges a battery.
Transceiver 236 generally represents transmitter/receiver circuits for wired and/or wireless transmission and receipt of electronic data. Transceiver 236 may be a stand-alone module or be integrated with other modules, such as NIC 230. Transceiver 236 may be coupled with one or more antennas for wireless transmission of information.
Antenna 238 is generally used for wireless transmission of information, for example, in conjunction with transceiver 236, NIC 230, and/or GPS 242. Antenna 238 may represent one or more different antennas that may be coupled with different devices and tuned to different carrier frequencies configured to communicate using corresponding protocols and/or networks. Antenna 238 may be of various types, such as omni-directional, dipole, slot, helical, and the like.
Haptic interface 240 is configured to provide tactile feedback to a user of computing device 200. For example, the haptic interface may be employed to vibrate computing device 200, or an input device coupled to computing device 200, such as a game controller, in a particular way when an event occurs, such as hitting an object with a car in a video game.
Global Positioning System (GPS) unit 242 can determine the physical coordinates of computing device 200 on the surface of the Earth, typically output as latitude and longitude values. GPS unit 242 can also employ other geo-positioning mechanisms, including, but not limited to, triangulation, assisted GPS (AGPS), E-OTD, CI, SAI, ETA, BSS, or the like, to further determine the physical location of computing device 200 on the surface of the Earth. It is understood that under different conditions, GPS unit 242 can determine a physical location within millimeters for computing device 200. In other cases, the determined physical location may be less precise, such as within a meter or significantly greater distances. In one embodiment, however, a mobile device represented by computing device 200 may, through other components, provide other information that may be employed to determine a physical location of the device, including, for example, a MAC (Media Access Control) address.
In various embodiments, all network-capable devices are coupled to the network 326 via a wired or wireless connection. Each device may include various pre-defined events and services that may be used by other devices. For example, the pressure sensor 312 on gas tank 310 may include an event that defines a low-pressure or low-gas condition indicating that the gas tank may need to be refilled. Another example is an event that defines a low-light threshold for the light sensor. Each device may further offer some services that are predefined for use by other devices. For example, lights 324 may offer a service of turning ON or OFF. In this example, if the light sensor indicates, via an event, that the ambient light is too low, then lights 324 may be turned ON to illuminate the space around them. As another example, the smart TV 316 may offer recording services that may be activated remotely to record a favorite show.
As further described below with respect to
In various embodiments, the computer network 402 may be the internet, a local network, a wide area network, or a wired or wireless network, or a combination of both, based on any operable protocol, such as publicly used protocols like TCP/IP or proprietary protocols. Those skilled in the art will appreciate that a computer network may be complex and include several subnets, gateways, various protocols, and the like. Any computer network, private or public, that can deliver data and be used for data communications may be used with the ordinary and master devices.
In various embodiments, the ordinary devices may be subordinate devices designed to handle specific and limited tasks with minimal configuration. The ordinary devices may include refrigerators, TV sets, pressure sensors, moisture sensors, light sensors, lights and lighting systems, valves, door and window locks, thermostats, water level sensors, alarm and security systems, video and camera systems, sound systems, electrical power switches, appliances like coffee makers and dishwashers, basic and dedicated communication systems like routers and transceivers, various actuators for performing simple mechanical tasks like releasing a lock or closing shutters, industrial equipment and machines that can turn ON or OFF and perform various appropriate tasks such as material mixers and conveyor belts, and the like.
Each such ordinary device may include one or more events and services. An event is a change of some predefined parameter or state of a system. For example, the dimming of light below a preset threshold may be considered or be defined as an event. The opening or closing of a door may also be considered an event. Events are generally output from a device; a device does not receive an event. The reporting of an event may be automatic and immediate, as soon as the event is detected or occurs; it may be periodic; it may follow a predefined schedule, such as every hour or every week; it may be on command or query from a master device; or it may be based on any other criteria. Services may also be provided by ordinary devices.
Services are predefined tasks or information that are generally performed/produced and/or transmitted by the device on request or command. In some embodiments, services may be launched automatically in response to an event or values returned from other services. For example, a command may be issued by a master device to a light controller device to perform a service that turns ON a light and returns a confirmation. If the confirmation value indicates that the light did not turn ON, then the master device may issue an alarm to a user to investigate and/or change the light bulb. Services may or may not return one or more values, like a mathematical function. The returned values may be parameters that provide some information to the device that invoked the service. For example, one form of value may be an acknowledgement or confirmation of an action that the service performed. Another form of returned value may be a set of values that indicate the status or state of a system, such as the state of each light that was turned ON or OFF because of the service performed to control a series of lights. Those skilled in the art will appreciate that the returned values may be any quantity that serves to provide some information to the user of the service.
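By way of a non-limiting illustration only, the following Python sketch shows how an ordinary device might expose events and services of the kind described above. The names (OrdinaryDevice, report_event, TurnOn, and the like) are hypothetical and are not part of any particular embodiment.

    from dataclasses import dataclass
    from typing import Any, Callable, Dict

    @dataclass
    class Service:
        name: str                      # for example, "TurnOn"
        handler: Callable[..., Any]    # the function that performs the task

        def invoke(self, **kwargs) -> Any:
            # A service may return one or more values, such as a confirmation.
            return self.handler(**kwargs)

    class OrdinaryDevice:
        def __init__(self, name: str):
            self.name = name
            self.services: Dict[str, Service] = {}

        def report_event(self, event_name: str, **data) -> None:
            # An event is output only; printing stands in for multicasting it
            # on the network or reporting it to a master device.
            print(f"{self.name} EVENT {event_name}: {data}")

    # A light controller offering a TurnOn service that returns a confirmation.
    lights = OrdinaryDevice("lights_324")
    lights.services["TurnOn"] = Service("TurnOn", handler=lambda: {"confirmed": True})
    print(lights.services["TurnOn"].invoke())       # -> {'confirmed': True}
    lights.report_event("LowLight", ambient_level=12)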
In various embodiments, each ordinary device and master device may be uniquely identified by a GUID (Globally Unique ID). Those skilled in the art will appreciate that GUIDs may be generated by software based on unique events and quantities, such as a combination of a timestamp and other quantities like a serial number or MAC (Media Access Control) number assigned to the device at manufacture time. Because a date-time stamp, including seconds and smaller units, never repeats, it is guaranteed to be unique, particularly in combination with other identifiers. Some GUIDs may also be generated based on random number generators.
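As a non-limiting illustration, the Python sketch below generates GUIDs in the two ways just described: one from a timestamp combined with the host's MAC address, and one from a random number generator.

    import uuid

    # Time-based GUID: derived from the current timestamp and the host's MAC
    # address, so successive calls do not repeat.
    time_based_guid = uuid.uuid1()

    # Random GUID: derived from a random number generator.
    random_guid = uuid.uuid4()

    print(time_based_guid)   # for example, 2c5ea4c0-4067-11e9-8bad-9b1deb4d3b7d
    print(random_guid)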
In addition to the devices, in some embodiments, each event and service may also be associated with a GUID of its own. In some embodiments, the GUIDs associated with the events and services may be based on the GUID of the device to which the event or the service belongs. In other embodiments, the event and service GUIDs may be assigned independently from the device GUIDs. Some services and/or events may be shared by several devices. For example, a command to power up or shut down issued to a device may be universal, as it may be applicable to many or all devices that may receive such commands. Some events and services may also be specific to a particular device that performs a specific and unique function, such as a defrost command to a refrigerator, which is not applicable to a toaster or a coffeemaker.
In addition to the GUID, as IOT or network devices, the ordinary devices and master devices may also be assigned a network identifier to allow unambiguous communication on a subnet. Such network identifiers may be IP addresses or other customary network identifiers that may be used to address a recipient for data transmission across the network.
In various embodiments, a master device 406a-m may be programmed and/or configured via the UPA (Universal Programming Application). The software structure of the UPA may be layered to preclude the need for software device drivers while allowing communication with the many devices involved. This structure is further described below with respect to
In various embodiments, in operation, one or more master devices may be accessed via their network IDs and programmed by a user via the UPA interface to manage other ordinary devices or even other master devices. The user may log in, via a user computer or mobile device, to a master device over the network via the UPA user interface to program the master device and also to configure other ordinary devices on the network via the master device. In some embodiments, the UPA may be a browser-based interface that uses known protocols and standards in web browsing such as HTTP, HTML, HDML, SOAP, XML, and the like. In such an interface the user may be presented with hotlinks, buttons, menus, data input fields for entering programming scripts, high-level menus for selecting and combining statements and expressions for defining rules to be executed by the programmed device, and the like, to configure the master device and/or the ordinary devices. The UPA interface and general programming methods are further discussed below in more detail with respect to
In various embodiments, the master device may be programmed by the user to control/manage other master or ordinary devices like a central controller. In other embodiments, several master devices may control the system in parallel and/or in collaboration with each other. For example, to control facilities in a building, a user computer may be connected to a master device in the building and download a program to the master device, which defines what tasks the master device performs under defined conditions and how to control and/or configure other master or ordinary devices. In this example, the master device may be programmed to monitor temperature and doors and windows in a building. The master device, based on its program, may configure an ordinary device, such as a thermometer, to report when a temperature threshold is exceeded, and configure a proximity detector to report when a window is open. Based on these data, the master device may then command an air conditioner to turn itself ON and cool the space.
In various embodiments, the ordinary device may include a configuration or programming memory and a simple processor for executing basic commands (not shown in this figure), which may be predefined. Alternatively, the ordinary device may include hardware circuits to implement the command interface and communication protocols. In still another embodiment, the ordinary device may be designed to have both hardwired circuits and firmware/software to implement its functionality for processing commands and communicating data.
In various embodiments, the events and services may have other parameters for further defining the events and services, as described in more detail below with respect to
In various embodiments, the events are output methods in that they transmit data outwards from the device associated with the event to other devices, such as master devices, that may consume or use the event information. In some embodiments, the events are also input channels in that some parameters may be supplied initially to configure or define what the event is or the conditions under which the event is reported by the device, as further described below with respect to
In various embodiments, the events and services may be cataloged to be discoverable by other devices that may use such events and services. Various network protocols may be used to discover the events and parameters published by the devices as they join the network, such as the use of Discovery packets. When a device joins a network or subnet, it may broadcast its events and services, which may be captured and stored by a master device listening on the subnet. Later, other devices may query these device capabilities by broadcasting a request or specifically querying a particular master device, which then responds by broadcasting the information or returning a specific response to a requester. In these embodiments, a network catalog, maintained by a database device, a master device, or other devices on the network, provides catalog services to discover device characteristics, events, and services. For example, a user may login to a network and query all devices that are UPA-compatible. The network catalog service may then return a list of all the available devices. The user may then select a subset of these devices to interact with or query further regarding their capabilities, events, services, and their respective parameters. Such events, services, and parameters are further described below with respect to
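The following sketch illustrates, under the assumption of a simple JSON payload and a UDP broadcast on a hypothetical port, how a device joining a subnet might announce its events and services and how a listening master device might record them in a catalog; the message format and port number are illustrative only and do not correspond to any particular protocol.

    import json
    import socket

    DISCOVERY_PORT = 50555   # hypothetical port used only for this sketch

    def announce(device_guid, events, services):
        # Broadcast a discovery-style packet describing this device's capabilities.
        payload = json.dumps({"guid": device_guid,
                              "events": events,
                              "services": services}).encode()
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(payload, ("255.255.255.255", DISCOVERY_PORT))
        sock.close()

    def capture_one_announcement(catalog):
        # Master device: wait for one announcement and store it in the catalog,
        # from which later queries about device capabilities can be answered.
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(("", DISCOVERY_PORT))
        data, _ = sock.recvfrom(65535)
        info = json.loads(data)
        catalog[info["guid"]] = info
        sock.close()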
In various embodiments, the event parameters may provide additional information about the identification and selection of one event among many. For example, the event may be parametrized with an index number as Event(1), Event(2), and so on, to indicate which of a series of events is targeted. An event is a single signal, in the form of an identifiable datum or quantity (for example, having an encoded value to distinguish it from other quantities or events), that indicates to a requester, master device, or user that something has occurred. The parameters of the event, if any, themselves being other pieces of data in a predefined format and/or data type (for example, text, integer, real number, an enumeration, and the like), are not values returned by the event, but rather are used to define and/or identify what the event is and the applicable conditions for the triggering or expression of the event. The events may have more than one parameter to provide other information, such as time, sequence, device state or configuration, and the like. For example, many devices may include a Power On Self-Test (POST) for testing the health of the system during a power up period. So, an event such as “Event(3, POST)” may use two parameters to indicate a test result of the third test for the device during POST. In this example, the first parameter (3) may indicate which of several tests the event is associated with, and the second parameter (POST) may indicate the conditions under which the event is applicable. The event parameters are generally provided and used by the requester or user of the events to identify an event and the conditions under which that event should be reported, in addition to which one of several event handlers is to be used to take appropriate actions in response to the detection of the event.
In various embodiments, the service input parameters 546a-546k may provide additional information about what information or data the service may have to acquire in response to being invoked by a user or a master device, and possibly the identification of a particular service in a series of similar services, in a manner similar to the event identification described above. For example, the service may be parametrized with an index number as Temperature-Service(1), Temperature-Service(2), and so on, to indicate which of a series of services is being called for measuring temperature at different points in a system or device. The input parameters play a similar role to function parameters when calling a function in a programming language such as the C computer language. The service input parameters, if any, are pieces of data in a predefined format and/or data type (for example, text, integer, real number, an enumeration, and the like). The services may have more than one input parameter to provide other information, such as time, sequence, device state or configuration, and the like. The input parameters are supplied by the requester (for example, a user or a master device) at the time of calling or invoking the service. For example, many devices may include a system test for testing the health of the system. So, a service call such as “RunTest(2, SEND)” may use two parameters to indicate a request to run a service to test a data transmission function. In this example, the first parameter (2) may indicate which of several transmission units the service must test, and the second parameter (SEND) may indicate the particular test to run for sending data out. The input parameters are generally used by the requester or user of the service, possibly to identify a service and mostly to specify the particular function to perform or the data needed to perform a particular function by the service.
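Continuing the Event(3, POST) and RunTest(2, SEND) examples above, the brief Python sketch below shows how the parameters identify and qualify an event or a service call; the function names and dispatch details are hypothetical.

    def report_event(index, phase):
        # Event(3, POST): the first parameter selects the third test, the second
        # indicates the event applies during the Power On Self-Test phase. The
        # event carries no return value; its parameters only identify and qualify it.
        print(f"Event({index}, {phase}) reported")

    def run_test(unit, mode):
        # RunTest(2, SEND): the first parameter selects which transmission unit to
        # test, the second selects the particular test (sending data out). Unlike
        # an event, a service may return a value to the requester.
        return {"unit": unit, "mode": mode, "passed": True}

    report_event(3, "POST")
    result = run_test(2, "SEND")
    print(result["passed"])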
In various embodiments, the master device may be similar to or simpler than the computing device shown in
In various embodiments and with continued reference to
In various embodiments, each event may correspond with usually one, but sometimes more than one, event handler. Generally, when an event is triggered, signaled, or otherwise occurs, the receiver of the event notification or signal, such as the master device or an event management software module responsible for receiving notification of the occurrence of the event, launches (or causes to be launched) a software routine, usually known as an event handler, to handle, respond to, or process the event. For example, if a Power-ON event is detected in an ordinary device, the event management software module on a master device may detect (or be notified of) the occurrence of the event and launch the appropriate event handler pre-associated with that particular event type to start a test, to record the event and timestamp it, or to take any other appropriate and predefined action.
In various embodiments, the event handler for a particular event may be various predetermined actions or sequences of actions associated with the particular event. The event handler may launch a workflow (a predefined routine, usually having multiple actions or other routines, that is launched when the event occurs), a triggered action (a specific single action), a script (a non-compiled system command program), or execute a compiled program. The content and form (that is, workflow, script, etc.) of the actions that the event handler may take depend on the design and implementation of the device or system. In some embodiments, the event handler may be loaded from a file in response to the corresponding event and be changeable by the user. In other embodiments, the event handler may be in the form of embedded software/firmware placed in the device at manufacture time.
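A minimal sketch, with hypothetical names, of how an event management module on a master device might associate event types with handlers and dispatch an incoming notification to the appropriate handler.

    import datetime
    from typing import Callable, Dict, List

    handlers: Dict[str, List[Callable[[dict], None]]] = {}

    def register_handler(event_type: str, handler: Callable[[dict], None]) -> None:
        # An event usually has one handler, but more than one may be registered.
        handlers.setdefault(event_type, []).append(handler)

    def on_event(event_type: str, data: dict) -> None:
        # Called when notification of an event arrives from an ordinary device.
        for handler in handlers.get(event_type, []):
            handler(data)

    def log_power_on(data: dict) -> None:
        # Example handler: record the event and timestamp it.
        print("Power-ON from", data["device"], "at", datetime.datetime.now())

    register_handler("Power-ON", log_power_on)
    on_event("Power-ON", {"device": "sensor_312"})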
In various embodiments, the UPA login interface appears on the computer of the user, via which the master device is programmed or ordinary devices are listed and queried. In some embodiments, the UPA may include several different or similar modules deployed on various devices, including the user's computer, each master device, and each ordinary device. The architecture and operation of the UPA are further described with respect to
In various embodiments, each of the different types of devices participating in the arrangement of
In various embodiments, at the next lower level of the software structure of the UPA, a logic layer 862 is interfaced with layer 860 and may be used to perform the programming, rule composition, configuration, and other substantive operations. The user may write, enter, download, or select from an existing list of options the rules that are later downloaded to the master and/or ordinary devices for configuring or programming the devices. Using the functionality of this layer, the user may also enter or load a computer program from an external storage source, whether compiled or script, for later downloading of the executable form of the program, for example, after compiling or translating to a machine language, to the devices to control their operations. The programs and rules may include event handlers, service routines, logic to handle the event handlers and services, programs that specify how to configure other devices, how to send reports back to the user computer, programs for controlling other master devices by a central or supervisory master device, and any other functions that the master device or ordinary devices may be programmed or configured to perform.
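As a non-limiting illustration, a rule composed in the logic layer might be represented as a simple data structure before being downloaded to a device, as in the sketch below; the field names and dot-notation values are hypothetical.

    # Hypothetical rule composed in logic layer 862 and later downloaded to a device.
    rule = {
        "on_event":  "Thermometer.TemperatureExceeded",   # event that triggers the rule
        "condition": "WindowSensor.State == 'closed'",    # optional guard condition
        "action":    "AirConditioner.TurnOn",             # service to invoke if the condition holds
        "fallback":  "Notify.User",                       # default action if it does not
    }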
In various embodiments, at the next lower level of the software structure of the UPA, a data communication layer 864 is interfaced with layer 862 and may be used to perform the data communications, including sending and receiving data packets, establishing a communication link, negotiating protocols and data rates, and other communication-related functions needed for data transmission. This layer is generally the interface between the UPA module on any device on which it is installed and other UPA modules installed on other devices. For example, the user may download a rule or a program to a master device. The data representing the rule are transmitted by the UPA communication layer 864 on the user computer to the UPA communication layer on the master device, described below.
In various embodiments, the master device UPA module may include a layered software structure 870 having a communication layer 872 and an internal operations layer 874. The communication layer is similar to communication layer 864 for the same purposes, at least in part. It is used to communicate with layer 864 of the user UPA module and layer 882 of the ordinary device UPA module. The internal operations layer 874 is used to receive and load configuration data, programs, and rules sent from the user computer that control the operation of the master device. This layer may store such rules/programs in the memory 868 for execution by the CPU/controller of master device 866. It may also handle other peripheral support functions, such as UPA software and rules updates, event handlers, service operations, configuring other ordinary devices or master devices, replicating its programs to other master devices, providing reports to the user computer, and any other functions the master device may have to perform.
In various embodiments, the ordinary device UPA module may include a layered software structure 880 having a communication layer 882 and an internal operations layer 884. The communication layer is similar to communication layer 864 for the same purposes, at least in part. It is used to communicate with layer 864 of the user UPA module and layer 872 of the master UPA module. The internal operations layer 884 may be used to receive and load configuration data and, for some devices, programs and rules sent from the master device that control the operation of the ordinary device. This layer may store such rules/programs in the memory 878 for execution by the CPU/controller of ordinary device 876, for some more advanced ordinary devices that have a processor or controller. It may also handle other peripheral support functions, such as accepting commands to send event notifications or perform services and return results, and any other functions the ordinary device may have to perform.
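A minimal sketch, assuming hypothetical class and method names, of the two-layer module structure described above for an ordinary device: a communication layer that exchanges data with other UPA modules and an internal operations layer that loads and stores the configuration it receives.

    class CommunicationLayer:
        # Corresponds to layer 882: exchanges data with UPA modules on other devices.
        def send(self, destination, payload):
            print(f"send to {destination}: {payload}")    # stand-in for a network send

        def receive(self):
            # Stand-in for a configuration message received from a master device.
            return {"type": "configuration", "rules": ["report LowLight every hour"]}

    class InternalOperationsLayer:
        # Corresponds to layer 884: loads configuration/rules for later execution.
        def __init__(self):
            self.rules = []

        def load_configuration(self, message):
            self.rules.extend(message.get("rules", []))   # stored, e.g., in memory 878

    communication = CommunicationLayer()
    operations = InternalOperationsLayer()
    operations.load_configuration(communication.receive())
    print(operations.rules)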
Those skilled in the art will appreciate that the layers may be arranged differently or include more or fewer layers that divide up various functions without departing from the spirit of the present disclosure.
In various embodiments, the device configuration system and the UPA software and modules 858, 868, and 878 may be implemented by a hardware and/or software system using one or more software components executing on the illustrative computing device of
Those skilled in the art will appreciate that to communicate with a device coupled locally or remotely to a computer, such as a printer, a keyboard, a scanner, a network adapter, a mouse, industrial equipment, and other peripheral devices, a software driver installed on the computer is often needed. A driver is a software module that is a part of the operating system of the computer and is designed as a mid-layer module between the operating system of a computer and the device. Some software drivers may have multiple internal layers of their own. A common characteristic of a driver is that structurally it is placed between the operating system and the hardware and includes the knowledge of the internal functions of the device for which it is designed. The driver also knows, in relevant parts, how to communicate with the operating system through function calls and data structures. As such, drivers are usually supplied by the device manufacturer, or third party vendors, and are installed on the computer by the user. So, for example, a printer driver is provided by the manufacturer of the printer and knows how to communicate with the computer and the printer device. When an application, such as a word processor, sends a print request to the printer, the driver takes the command and the data to be printed and formats it for transmission to the printer in a form that the printer can process. The driver also translates the local computer commands, such as print double-sided, to the printer in printer command format so the printer can understand and carry out the command from the computer. As such, each device that is connected to the computer needs to have its own device driver installed on the computer to work.
In contrast to the above, the UPA modules installed on the computer play the role of a universal driver if the devices have corresponding UPA modules, as described above. Hence, in the device configuration system described herein, the need for a special driver for each device is precluded and replaced by the functions provided by the UPA modules, which can interact and communicate with each other.
In various embodiments, in operation, the user may go to the UPA start page to start the selection of ordinary and/or master devices to interact with and to choose the programming method of the selected devices. Once at the Start page, the user may select one or more devices from a device list 906 previously discovered by a master device, the user's computer, or a third party service called by the user UPA module or master device UPA module. In some embodiments, finding new devices may be accomplished by selecting the Scan Devices button 904, which may cause the use of a Discovery Request network packet to discover new devices on a subnet, among other similar practices. Those skilled in the art will recognize that a Discover packet may be sent onto the network or local subnet by devices powering up or coming online. A DHCP (Dynamic Host Configuration Protocol) server may capture the Discover packet and respond to it. Other similar protocols may be used to discover new devices on the network. The user may then use the Add and Remove buttons to create or change an interactive devices list 908 that defines which ordinary devices the user wishes to interact with. Once the interactive device list is completed, the user may then choose a method of programming, such as Trigger Action 916, Workflow 918, Script 920, or others. The user may cancel this step by using the Cancel button or may move to the next UPA configuration screen.
In various embodiments, the user may select a desired master device from the list 1004 and then select the Next button 1008 to continue with configuring the selected master device. The Cancel button 1008 may be used to cancel this step. Other screens described below may be used subsequently to continue the programming and configuration of the selected master devices.
In various embodiments, the Event Handler dialog box 1104 is used to define new event handlers or change existing ones. This dialog box is used to define the handler and the actions it takes, not the event itself. The event may be defined in another dialog box, as further described below with respect to
In various embodiments, the Workflow Management dialog box 1204 is used to assign a workflow to an event or to change existing assignments. This dialog box is used to define the assignment of existing workflows to particular events, such that when the particular event occurs, the assigned workflow is executed. Those skilled in the art will appreciate that a workflow is a sequence of related actions executed in a particular predefined sequence, sometimes based on predefined conditions, to perform a particular task or carry out a predefined process. For example, a workflow in a device may consist of running a series of related tests to test the reliability of a communication channel. Such a workflow may include creating a test packet, sending it to a predefined receiver, and receiving an acknowledgement. The event may be defined by the manufacturer of the device based on the physical capabilities and features of the device. For example, if the device is a light sensor, then the events it may have or support may be limited to detecting one or more light level thresholds. In some embodiments, an event associated with a selected device, such as “Device3.Event5” (specified in this example with the dot notation, which indicates a member of a set, here shown as Event5 belonging to the set Device3), is used in an assignment. The assignment specifies that if this event occurs, then some workflow is launched in response by the associated event handler. For example, if an event indicating that a temperature threshold has been exceeded is detected in a device, then the workflow launched may be to validate the temperature reading and turn on a cooling device. Once the assignments of workflows to various events are completed, they may be loaded onto the selected device using the Load button 1216. The Cancel button may be used to cancel this step of the configuration, for example, to go back and select an alternate device or to delay configuration for later.
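The following sketch shows, with hypothetical names, how such a dot-notation assignment might be recorded and acted upon: the workflow assigned to Device3.Event5 is launched when that event occurs.

    # Hypothetical assignment of a workflow to an event, using the dot notation.
    workflow_assignments = {
        "Device3.Event5": "ValidateTemperatureAndCool",   # event -> workflow name
    }

    def run_workflow(name):
        # A workflow is a predefined sequence of related actions.
        if name == "ValidateTemperatureAndCool":
            print("re-reading temperature...")
            print("turning on cooling device...")

    def handle_event(qualified_event):
        workflow = workflow_assignments.get(qualified_event)
        if workflow is not None:
            run_workflow(workflow)

    handle_event("Device3.Event5")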
In various embodiments, a new event handler (not an event) may be defined in this window by selecting the Add Event button 1306 to specify which event triggers the activation or execution of the new event handler. The Add Condition button is used to define one or more predetermined conditions that, if true when the specified event occurs, cause a particular action to be carried out by the new event handler. If such conditions are not true at the time the event occurs, then an alternative or default action may be performed. For example, the new event handler may be associated with an event that indicates dimming lighting conditions outside. This event handler may check the time of day and calendar as a condition for this event, and if the time is past a certain threshold or point in the day, like 6:00 PM, the event handler may lock the doors and turn on the lights. If the time is not past the certain point, then it may only turn on the lights but leave the doors unlocked as a default action. The Action buttons 1310 and 1312 may be used to specify what action the event handler may take when the event occurs and the conditions hold or do not hold, respectively. The actions may be specified by picking routines or options from a drop-down list, or otherwise specified by associating a hardware action (such as activating a circuit or connecting two points via an electrical relay, and the like) or a software action (executing a subroutine for performing some task) with the event handler.
In various embodiments, events and respective parameters can be selected by a user from an event selection user interface. The event parameters are similar to function arguments in a mathematical function, in which the parameters provide specific information for specification, identification, or use of the event. For example, an event “Event1(Parameter1, Parameter2)” may be “Light-Level-Fault(Light-Threshold1, Light-Threshold2)”. In this example, the Light-Level-Fault event is triggered if a measured light level is outside the range bounded by the two parameters, which specify two light level thresholds. These parameters allow the event to determine when to be triggered. Different values of these parameters can trigger this event differently. Using the dot notation, an event parameter may be specified as “Device-X.Event-Y.Parameter-Z” for device X, event Y within device X, and parameter Z within event Y.
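A brief sketch of the Light-Level-Fault example, with hypothetical names, showing how the two threshold parameters determine when the event is triggered.

    def light_level_fault(measured, light_threshold1, light_threshold2):
        # The event triggers only if the measured level falls outside the range
        # bounded by the two threshold parameters.
        low, high = sorted((light_threshold1, light_threshold2))
        if not (low <= measured <= high):
            print(f"Light-Level-Fault({light_threshold1}, {light_threshold2}) triggered")

    light_level_fault(measured=5, light_threshold1=20, light_threshold2=80)    # triggers
    light_level_fault(measured=50, light_threshold1=20, light_threshold2=80)   # does not trigger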
In various embodiments, a new event handler (not an event) may be defined in this window corresponding with the event identifier 1506 to specify which event triggers the activation or execution of the new event handler. The Add Condition button is used to define one or more predetermined conditions that, if true when the specified event occurs, cause a particular action to be carried out by the new event handler. If such conditions are not true at the time the event occurs, then an alternative or default action may be performed. For example, the new event handler may be associated with an event that indicates the temperature rising above a predefined or preset threshold in a space such as a room. This event handler may check a condition based on the state of a window, being open or closed, and if the window is closed, the event handler may turn ON the air conditioner to cool down the space. If the window is open, then as a default action, the event handler may only blink a light, beep, issue a message on a screen, send a message to a cellphone, or give some other indication that the window needs to be closed before the air conditioner is turned ON. The Action buttons 1510 and 1512 may be used to specify what action the event handler may take when the event occurs and the conditions hold or do not hold, respectively. The actions may be specified by picking routines or options from a drop-down list, or otherwise specified by associating a hardware action (such as activating a circuit or connecting two points via an electrical relay, and the like) or a software action (executing a subroutine for performing some task) with the event handler.
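A short sketch, with hypothetical names, of the conditional event handler just described: when the high-temperature event occurs, the condition on the window state selects between the primary action and the default action.

    def on_temperature_exceeded(window_state):
        # Condition evaluated when the high-temperature event occurs.
        if window_state == "closed":
            print("turning ON the air conditioner")         # action when the condition holds
        else:
            print("notify user: close the window first")    # default action otherwise

    on_temperature_exceeded("closed")
    on_temperature_exceeded("open")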
In various embodiments, many of the events, actions, event handlers, and other device-related control mechanisms may involve conditional expressions that may be defined in various stages of the configuration to specify how each device should behave under certain conditions, as further described herein. A condition may be expressed using a logical expression or a relational expression. A logical expression may be a combinational logic expression in which various logical, Boolean, or binary variables (conventionally shown in capital letters like A, B, X, Y, etc., which take on only one of two values such as {True, False} or {1, 0}) are combined with logical connectives or operators such as AND, OR, NOT, NAND, XOR, and the like. A logical expression is used to express certain conditions that may exist, based on which some action may be taken. For example, if A is a Boolean or binary or logical variable representing whether a door is open, and B is a variable representing that a temperature threshold is exceeded, then the expression “B AND NOT A” means “if the temperature is hotter than a set limit and the door is not open”. The user may select the button Add Logical Expression 1610 to start the process of adding one or more logical expressions to define actionable conditions, as further described below with respect to
A relational expression defines a relative and/or comparative relationship between two quantities; its connectives or operators are conventionally represented by the following symbols: = (equal), /= (not equal), > (greater than), < (less than), >= (greater than or equal), <= (less than or equal). For example, if A is a measured temperature and B is a preset threshold, then the expression “if A>B” means “if the measured temperature is greater than the preset threshold”. Such an expression, alone or in combination with one or more other logical and/or relational expressions, may be used to define a condition that, when satisfied, causes some predefined action to be taken.
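As a hedged illustration of both kinds of conditions, the short sketch below (illustrative variable names only) evaluates a relational expression comparing a measured temperature with a preset threshold and combines the result with a Boolean door-state variable into the logical condition described above.

```python
# Illustrative only: evaluating a relational condition and a logical condition.
door_is_open = False          # Boolean variable A: door state
measured_temp = 78.5          # measured temperature
temp_threshold = 72.0         # preset threshold

# Relational expression: "measured temperature is greater than the preset threshold"
temp_exceeded = measured_temp > temp_threshold   # variable B

# Logical expression "B AND NOT A": temperature exceeds the limit and door is not open
condition = temp_exceeded and not door_is_open

if condition:
    print("Condition satisfied: take the predefined action")
```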
In various embodiments, the user may use the list or set of logical operators 1706 to select a logical operator and apply it to an expression being defined. Those skilled in the art will appreciate that the user interface windows described herein may be used in conjunction with other user interface windows or individually depending on the overall design of the user interface. For example, if the user is defining a new event handler (for example,
In various embodiments, the user may use the list or set of relational operators 1808 to select a relational operator and apply it to an expression being defined. The relational operators include: = (equal), /= (not equal), > (greater than), < (less than), >= (greater than or equal), <= (less than or equal). These operators are generally binary and have two sides or variables for comparison (for example, a Left Value and a Right Value). The relational comparison can be performed if the types of these variables are the same or compatible. If the types are not compatible or comparable, then the comparison cannot be performed. For example, if one variable has a type of “time” and the other has a type of “temperature”, then they cannot be compared, because a statement such as “this time is greater than this temperature” has no meaning. Those skilled in the art will appreciate that the user interface windows described herein may be used in conjunction with other user interface windows or individually depending on the overall design of the user interface. For example, if the user is defining a new event handler (for example,
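By way of illustration only, the sketch below (the compare function and type labels are hypothetical) shows how a relational comparison might be permitted only when the two operands have the same or compatible types.

```python
# Illustrative sketch: relational comparison guarded by a type-compatibility check.
import operator

RELATIONAL_OPS = {
    "=":  operator.eq,
    "/=": operator.ne,
    ">":  operator.gt,
    "<":  operator.lt,
    ">=": operator.ge,
    "<=": operator.le,
}

def compare(left_value, left_type, op, right_value, right_type):
    """Apply a relational operator only if the operand types are compatible."""
    if left_type != right_type:
        raise TypeError(f"cannot compare {left_type} with {right_type}")
    return RELATIONAL_OPS[op](left_value, right_value)

print(compare(78.5, "temperature", ">", 72.0, "temperature"))   # True
# compare(78.5, "temperature", ">", 1030, "time")  -> TypeError: incompatible types
```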
In various embodiments, the Left Value 1804 and Right Value 1806 of
In various embodiments, when the parameter base 2008 is selected, the source of the parameter may also be specified as either events 2012 or services 2014, since both may have parameters with pre-designated types. If the event parameters 2012 are selected, then the dropdown list 2016 of event parameters 2018 may be used to select a type to be used for the value in the relational expression.
In various embodiments, when the parameter base 2008 is selected, the source of the parameter may also be specified as either events 2012 or services 2014, since both may have parameters with pre-designated types. If the service base is selected, then a device is selected by the user from the device list 2106, and a particular service 2110 is further selected. The selected service 2110 may be associated with a number of service parameters 2112, each having a predetermined type as defined by the manufacturer of the device. Specific values may be set using the corresponding value setting buttons 2114-2120, as described with respect to
In various embodiments, the user may use the list or set of relational operators 2208 to select a relational operator and apply it to an expression being defined. As noted before, the relational operators include: = (equal), /= (not equal), > (greater than), < (less than), >= (greater than or equal), <= (less than or equal). The two variables or quantities subject to the relational operators may be defined as an event parameter 2206, a service parameter, or another parameter, to be compared with the other relational parameter 2210, which may be a constant, or another event or service parameter. For example, a light value may be a parameter of an event, which may be compared to a percentage of a total brightness to take some action, such as turning OFF the light or increasing or decreasing its brightness.
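As one hedged example of such a comparison, the sketch below (with assumed constants and a hypothetical handler name) compares a light-value event parameter against a percentage of a total brightness and selects an action accordingly.

```python
# Illustrative sketch: comparing an event parameter to a percentage of total brightness.
TOTAL_BRIGHTNESS = 1000          # device's maximum light output (assumed units)
THRESHOLD_PERCENT = 0.20         # act when the reported level drops below 20%

def on_light_level_event(light_value: float) -> None:
    """Compare the event's light-value parameter and pick an action."""
    if light_value < THRESHOLD_PERCENT * TOTAL_BRIGHTNESS:
        print("Increasing brightness")
    else:
        print("Turning light OFF")

on_light_level_event(150)        # below 20% of 1000 -> prints: Increasing brightness
```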
Those skilled in the art will appreciate that the user interface windows described herein may be used in conjunction with other user interface windows or individually depending on the overall design of the user interface. For example, if the user is defining a new event handler (for example,
In various embodiments, an action to be taken by an event handler may be in the form of a service/function call by a requester, such as a master device. To specify an action for use in an event handler, in the form of a service offered by a device, first a device is selected by the user from the device list 2406, and then a particular service 2410 is further selected. The selected service 2410 may be associated with a number of service parameters 2412, each having a predetermined type as defined by the manufacturer of the device. Specific values may be set using the corresponding value setting buttons 2114-2120 to serve as the value in the relational expression or to be used for another purpose such as testing. Some services may also return a value to the calling entity. The parameter returned may be selected by the user as one or more parameters from the Parameter list. For example, if a device measures the temperature in a space, then calling a temperature reporting service may return the measured temperature to the calling entity, such as a service in a master device or a software module on the user's UPA.
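By way of a non-limiting illustration, the following sketch (the Device and Service classes and their method names are assumptions for illustration, not the disclosed middle layer) shows an action expressed as a service call that returns a measured value to the requester.

```python
# Illustrative sketch: a requester (e.g. a master device) calls a service on a
# device and receives the returned value. All names here are hypothetical.
from dataclasses import dataclass
from typing import Any, Callable, Dict

@dataclass
class Service:
    name: str
    handler: Callable[..., Any]      # function performing the service on the device

class Device:
    def __init__(self, name: str):
        self.name = name
        self.services: Dict[str, Service] = {}

    def register(self, service: Service) -> None:
        self.services[service.name] = service

    def call(self, service_name: str, **parameters) -> Any:
        """Invoke a service and return its value to the calling entity."""
        return self.services[service_name].handler(**parameters)

sensor = Device("Room-Sensor-1")
sensor.register(Service("Report-Temperature", lambda unit="C": 23.4 if unit == "C" else 74.1))

# The requester calls the temperature reporting service and receives the measurement.
temperature = sensor.call("Report-Temperature", unit="C")
print(f"Measured temperature: {temperature} C")   # Measured temperature: 23.4 C
```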
It will be understood that each step of the processes described above, and combinations of steps, may be implemented by computer program instructions. These program instructions may be provided to a processor to produce a machine, such that the instructions, which execute on the processor, enable implementing the actions specified. The computer program instructions may be executed by a processor to cause a series of operational steps to be performed by the processor to produce a computer-implemented process, such that the instructions, which execute on the processor, provide steps for implementing the actions. The computer program instructions may also cause at least some of the operational steps to be performed in parallel. Moreover, some of the steps may also be performed across more than one processor, such as might arise in a multi-processor computer system. In addition, one or more steps or combinations of steps described may also be performed concurrently with other steps or combinations of steps, or even in a different sequence than described, without departing from the scope or spirit of the disclosure.
Accordingly, steps of processes or methods described support combinations of techniques for performing the specified actions, combinations of steps for performing the specified actions, and program instructions for performing the specified actions. It will also be understood that each step, and combinations of steps described, can be implemented by special-purpose hardware-based systems that perform the specified actions or steps, or by combinations of special-purpose hardware and computer instructions.
It will be further understood that, unless explicitly stated or specified, the steps described in a process are not ordered and may not necessarily be performed or occur in the order described or depicted. For example, a step A in a process described prior to a step B in the same process may actually be performed after step B. In other words, a collection of steps in a process for achieving an end-result may occur in any order unless otherwise stated.
Changes can be made to the claimed invention in light of the above Detailed Description. While the above description details certain embodiments of the invention and describes the best mode contemplated, no matter how detailed the above appears in text, the claimed invention can be practiced in many ways. The system may vary considerably in its implementation details, while still being encompassed by the claimed invention disclosed herein.
Particular terminology used when describing certain features or aspects of the disclosure should not be taken to imply that the terminology is being redefined herein to be restricted to any specific characteristics, features, or aspects of the disclosure with which that terminology is associated. In general, the terms used in the following claims should not be construed to limit the claimed invention to the specific embodiments disclosed in the specification, unless the above Detailed Description section explicitly defines such terms. Accordingly, the actual scope of the claimed invention encompasses not only the disclosed embodiments, but also all equivalent ways of practicing or implementing the claimed invention.
It will be understood by those within the art that, in general, terms used herein, and especially in the appended claims (e.g., bodies of the appended claims) are generally intended as “open” terms (e.g., the term “including” should be interpreted as “including but not limited to,” the term “having” should be interpreted as “having at least,” the term “includes” should be interpreted as “includes but is not limited to,” etc.). It will be further understood by those within the art that if a specific number of an introduced claim recitation is intended, such an intent will be explicitly recited in the claim, and in the absence of such recitation no such intent is present. For example, as an aid to understanding, the following appended claims may contain usage of the introductory phrases “at least one” and “one or more” to introduce claim recitations. However, the use of such phrases should not be construed to imply that the introduction of a claim recitation by the indefinite articles “a” or “an” limits any particular claim containing such introduced claim recitation to inventions containing only one such recitation, even when the same claim includes the introductory phrases “one or more” or “at least one” and indefinite articles such as “a” or “an” (e.g., “a” and/or “an” should typically be interpreted to mean “at least one” or “one or more”); the same holds true for the use of definite articles used to introduce claim recitations. In addition, even if a specific number of an introduced claim recitation is explicitly recited, those skilled in the art will recognize that such recitation should typically be interpreted to mean at least the recited number (e.g., the bare recitation of “two recitations,” without other modifiers, typically means at least two recitations, or two or more recitations). Furthermore, in those instances where a convention analogous to “at least one of A, B, and C, etc.” is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., “a system having at least one of A, B, and C” would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.). In those instances where a convention analogous to “at least one of A, B, or C, etc.” is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., “a system having at least one of A, B, or C” would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.). It will be further understood by those within the art that virtually any disjunctive word and/or phrase presenting two or more alternative terms, whether in the description, claims, or drawings, should be understood to contemplate the possibilities of including one of the terms, either of the terms, or both terms. For example, the phrase “A or B” will be understood to include the possibilities of “A” or “B” or “A and B.” It is further understood that any phrase of the form “A/B” shall mean any one of “A”, “B”, “A or B”, or “A and B”. This construct includes the phrase “and/or” itself.
The above specification, examples, and data provide a complete description of the manufacture and use of the claimed invention. Since many embodiments of the claimed invention can be made without departing from the spirit and scope of the disclosure, the invention resides in the claims hereinafter appended. It is further understood that this disclosure is not limited to the disclosed embodiments, but is intended to cover various arrangements included within the spirit and scope of the broadest interpretation so as to encompass all such modifications and equivalent arrangements.