This relates generally to the use of sensor networks.
A sensor network is a collection of sensors that may be distributed throughout a facility in order to determine information about activities going on within that facility. Examples of sensor network applications include long-term in-home health care, in-home care for the elderly, home or corporate security, activity monitoring, and industrial engineering to improve plant efficiency.
In many cases, the sensor network is installed by a technician who is experienced in installing such networks. However, in many applications, including in-home applications, the need for a technician to install and maintain the network greatly increases the cost. Thus, it is desirable to provide a sensor network that may be self-installed by a user or by a user's family member or caretaker.
[Figure descriptions: (a) an object entry user interface for one embodiment; (b) an activity entry user interface for one embodiment.]
In some embodiments, user self-installation of a sensor network can be facilitated by asking the user to specify the activities monitored by each sensor. To facilitate this practice, the user may be provided with an electronic device that allows the user to associate sensors with objects or states, displays user-selectable activity options, and/or allows the user to enter their own options. Using this elicited information, the device may automatically build a model, monitor the sensor data it receives over time, and identify what activities are being undertaken.
As a simple example, the user may indicate that a shake sensor was placed on a refrigerator door and that the activities related to the refrigerator door might be getting a drink, preparing a meal, filling the refrigerator with groceries, getting ice, or determining whether additional groceries may be needed. Thus, when the refrigerator door sensor fires, the system has a variety of options to consider when identifying why the user opened the refrigerator. However, using a sensor network, the system can obtain additional information from which it may be able to probabilistically identify the actual activity. For example, if, within a certain time, the user opened another drawer that holds silverware and still another cabinet that holds plates, the probability may be higher that the user is preparing a meal.
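The probabilistic weighting described above can be sketched in a few lines. The following Python fragment is a minimal illustration only; the object names, activity labels, and simple vote-counting scheme are assumptions made for this example, not a prescribed implementation:

```python
from collections import defaultdict

# User-elicited associations: object -> activities it may indicate.
# These names are illustrative assumptions, not values from the system.
ASSOCIATIONS = {
    "refrigerator door": {"get drink", "prepare meal", "store groceries"},
    "silverware drawer": {"prepare meal", "set table"},
    "plate cabinet":     {"prepare meal", "set table"},
}

def score_activities(fired_objects):
    """Count how many recently fired objects support each activity."""
    votes = defaultdict(int)
    for obj in fired_objects:
        for activity in ASSOCIATIONS.get(obj, ()):
            votes[activity] += 1
    total = sum(votes.values()) or 1
    # Normalize the votes into a rough probability estimate.
    return {activity: count / total for activity, count in votes.items()}

# Events observed within one time window:
print(score_activities(["refrigerator door", "silverware drawer", "plate cabinet"]))
```

Running the fragment shows "prepare meal" receiving the most support, mirroring the refrigerator, silverware, and plate example above.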
Feedback may be obtained to determine whether or not this determination is correct. Based on the feedback received and/or on automated machine learning algorithms, the machine may improve its internal model of sensors, objects, states, and activities. A state relates to an object and defines its current condition (e.g. on, off, open, closed, operating, not operating, etc.).
Thus, in the initial configuration sequence 47, for each sensor, the user causes the selected physical sensor to interact with the system, as indicated in block 46. The system then detects the sensor 32 at 52. This may be done by reading an RFID tag on the sensor using the RFID reader 41 so that the sensor 32 is identified. Other identification methods may include, but are not limited to, using infrared wireless communication, pushing buttons on the sensor 32 and the user interface 40 simultaneously, pushing a button on the user interface 40 while shaking the sensor 32, using a bar code reader on the user interface 40 to read a 1D or 2D code on the sensor 32, or entering a sensor identifier number via the computer 34 or the user interface 40. For example, the sensor may have a bar code that identifies both the type of sensor (e.g., motion, touch, proximity, etc.) and its identifier.
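The detection step at 52 can be illustrated with a small sketch, assuming a hypothetical registry and an illustrative "<kind>:<id>" encoding for the identifier read from an RFID tag, bar code, or keyboard entry; real tag and bar code formats would differ:

```python
from dataclasses import dataclass

@dataclass
class Sensor:
    sensor_id: str   # identifier read from RFID tag, bar code, or keyboard
    kind: str        # e.g. "motion", "touch", "proximity"

class SensorRegistry:
    """Hypothetical registry of detected sensors, keyed by identifier."""

    def __init__(self):
        self._sensors = {}

    def detect(self, raw_code: str) -> Sensor:
        # Assumes an illustrative "<kind>:<id>" encoding for the example;
        # an actual tag or bar code format would be device-specific.
        kind, _, sensor_id = raw_code.partition(":")
        sensor = Sensor(sensor_id=sensor_id, kind=kind)
        self._sensors[sensor_id] = sensor
        return sensor

registry = SensorRegistry()
print(registry.detect("touch:0042"))   # Sensor(sensor_id='0042', kind='touch')
```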
Then, an object selection system may be implemented in block 54. In block 48, the user may select or identify what object the sensor is attached to, using the user interface 40.
The user interface 40 may provide a list of objects within the home to select from, for example, by selecting the corresponding picture on a touch screen. As another example, the user can select the first letter of the object at A to get a display, in window B, of objects starting with that letter.
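A minimal sketch of the first-letter selection at A and the resulting display in window B might look as follows; the list of household objects is an assumption made for illustration:

```python
# Illustrative catalog of objects the interface could offer.
HOME_OBJECTS = ["cabinet", "chair", "refrigerator door", "faucet",
                "silverware drawer", "plate cabinet", "couch"]

def objects_starting_with(letter: str):
    """Return the objects shown in window B when a letter is chosen at A."""
    return sorted(o for o in HOME_OBJECTS if o.lower().startswith(letter.lower()))

print(objects_starting_with("c"))  # ['cabinet', 'chair', 'couch']
```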
The user may also select the activities the sensor is intended to be associated with in block 50. The activity selection system 58 is used for this purpose. Each object may be associated with multiple activities in block 60. In one embodiment, the user selects from a displayed list of activity options or enters a new activity of the user's own.
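The object-to-activity association of blocks 50 and 60 reduces to a one-to-many mapping. The following sketch assumes a simple in-memory structure with illustrative names:

```python
from collections import defaultdict

# One object may indicate many activities, so map each object to a set.
object_activities = defaultdict(set)

def associate(obj: str, activity: str):
    """Record that the given object may indicate the given activity.
    The activity may be a displayed option or one the user typed in."""
    object_activities[obj].add(activity)

associate("refrigerator door", "get drink")
associate("refrigerator door", "prepare meal")   # one object, many activities
print(object_activities["refrigerator door"])
```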
In block 62, a model generation system generates a model 64 of the relationships between activities and objects, as provided by the user, and as learned by the system thereafter.
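One hypothetical way the model generation of block 62 could represent these relationships is as per-object activity weights, initialized uniformly from the user's associations and refined later by learning; the uniform prior below is an assumption made for illustration:

```python
def generate_model(object_activities):
    """Turn the user-elicited associations into per-activity weights."""
    model = {}
    for obj, activities in object_activities.items():
        weight = 1.0 / len(activities)          # uniform prior over activities
        model[obj] = {activity: weight for activity in activities}
    return model

model = generate_model({
    "refrigerator door": {"get drink", "prepare meal"},
    "silverware drawer": {"prepare meal", "set table"},
})
print(model["refrigerator door"])   # each activity weighted 0.5
```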
During execution 44, each sensor sends data 70 to the observation manager 68 in the computer 34 via the transceiver 38, in one embodiment. The observation manager 68 collects sensor information and any other feedback, such as camera or user interface feedback, as inputs. Based on this information and the model 64, the execution engine 66 determines what activity was being done, as indicated in block 74. This determination may then be used in a model learning module 89 to improve the model 64 based on experience.
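The interplay of the observation manager 68, the model 64, and the execution engine 66 might be sketched as follows; the class names, the five-minute time window, and the additive scoring are illustrative assumptions rather than the actual implementation:

```python
import time
from collections import defaultdict

class ObservationManager:
    """Collects timestamped sensor events (a stand-in for manager 68)."""

    def __init__(self, window_seconds=300):
        self.window = window_seconds
        self.events = []                    # (timestamp, object) pairs

    def record(self, obj, timestamp=None):
        self.events.append((timestamp or time.time(), obj))

    def recent_objects(self, now=None):
        now = now or time.time()
        return [o for t, o in self.events if now - t <= self.window]

class ExecutionEngine:
    """Asks the model which activity best explains recent events."""

    def __init__(self, model):
        self.model = model                  # object -> {activity: weight}

    def infer(self, objects):
        scores = defaultdict(float)
        for obj in objects:
            for activity, weight in self.model.get(obj, {}).items():
                scores[activity] += weight
        return max(scores, key=scores.get) if scores else None

om = ObservationManager()
om.record("refrigerator door")
om.record("silverware drawer")
engine = ExecutionEngine({
    "refrigerator door": {"get drink": 0.5, "prepare meal": 0.5},
    "silverware drawer": {"prepare meal": 0.6, "set table": 0.4},
})
print(engine.infer(om.recent_objects()))    # "prepare meal"
```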
Model optimization using machine learning techniques may be implemented in software, hardware, or firmware.
For example, the activity of operating the faucet (detected by proximity sensor 18), followed by the activity of opening the refrigerator door (as sensed by touch sensor 28), followed by the activity of pulling a dish out of the cabinet (detected by sensor 24), all within a certain window of time, could indicate the activity of food preparation, rather than the task of preparing a grocery shopping list. At periodic intervals, camera information or user inquiries may be used to refine the model of how the sensors, objects, states, and activities relate. For example, the user can be asked, via the user interface, to indicate what task the user just did. Thus, the computer can reinforce over time that, given a certain sensor data set within a given time window, a certain activity is more probable. In this way, the system can identify what activities the user is doing, in many cases without the need for technician installation.
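The reinforcement described above could, for example, take the form of nudging and renormalizing the object-to-activity weights whenever the user confirms an activity; the learning rate and update rule below are illustrative assumptions, not a specified algorithm:

```python
def reinforce(model, objects, confirmed_activity, rate=0.1):
    """Strengthen object->activity weights confirmed by user feedback."""
    for obj in objects:
        weights = model.setdefault(obj, {})
        # Nudge the confirmed activity upward (rate is an assumed constant).
        weights[confirmed_activity] = weights.get(confirmed_activity, 0.0) + rate
        # Renormalize so each object's weights again sum to 1.
        total = sum(weights.values())
        for activity in weights:
            weights[activity] /= total

model = {"refrigerator door": {"get drink": 0.5, "prepare meal": 0.5}}
reinforce(model, ["refrigerator door"], "prepare meal")
print(model["refrigerator door"])  # 'prepare meal' weight now exceeds 0.5
```

Repeated confirmations shift probability mass toward the activities that actually follow a given sensor pattern, which is one simple way the model 64 could improve with experience.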
References throughout this specification to “one embodiment” or “an embodiment” mean that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one implementation encompassed within the present invention. Thus, appearances of the phrase “one embodiment” or “in an embodiment” are not necessarily referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be instituted in suitable forms other than the particular embodiment illustrated, and all such forms may be encompassed within the claims of the present application.
While the present invention has been described with respect to a limited number of embodiments, those skilled in the art will appreciate numerous modifications and variations therefrom. It is intended that the appended claims cover all such modifications and variations as fall within the true spirit and scope of this present invention.