This invention relates to a distributed wireless sensing system.
Various networked sensors are known in the art, allowing remote collection of data for industrial and consumer applications. Prior art products, including Ninja Blocks™ and Twine™, include various sensors and a web app to monitor their outputs. However, neither product allows local reconfiguration of the sensing mode of the device.
Most smart phones or tablets include multiple sensors and may seem to be an ‘all-round’ solution for many applications. However, they may not be cost effective when multiple devices must be deployed at various locations. Dedicated low cost sensor devices deployed in large sensor networks may be cheaper, but may be complicated to set up and deploy. These sensor devices are also often a permanent installation that is inaccessible and not spatially reconfigurable.
Embodiments may seek to bring together the advantages of portability, accessibility and/or re-configurability into a single system.
In general terms in a first aspect the invention proposes a portable wireless sensor that is remotely and locally reconfigurable. This may have the advantage that controlling the device is very simple and user friendly.
In a second aspect the invention proposes an app or user interface that allows configuration of portable wireless sensors. This may have the advantage that the sensors, receivers and/or base station can interactively implement scenarios.
In a third aspect the invention proposes a wearable receiver dongle. This may have the advantage that the dongle can provide a simple and easy alert to specific sensors that have determined an alert condition.
A system may include distributed wireless sensor nodes that have tangible input user interfaces, a central data collection/management unit (base station) and wireless receiver devices that are able to receive notification on the status of sensor nodes. Sound (acquired through a microphone) may be used as an input to the sensor nodes.
Embodiments may have one or more of the advantages described herein.
In a first and second specific expression of the invention there is provided a distributed wireless sensing device according to claim 1 or 14. In a third specific expression of the invention there is provided a distributed wireless sensing system according to claim 7. Embodiments may be implemented according to any of claims 2 to 6 and 8 to 13.
One or more example embodiments of the invention will now be described, with reference to the following figures, in which:
(a) is a screenshot of setting up a connection to the base station;
(b) is a screenshot of a notification from a new sensor node;
(c) is a screenshot of configuring a new sensor node by assigning it a name and a mode. In Alert mode, a user is alerted with a sound, vibration and a message. In Ambient mode, a user only receives a notification message;
(d) is a screenshot of displaying a history of activities and events, and allows reconfiguring of devices;
(e) is a screenshot of setting up a sensor node as an object finder. The user records a voice tag to associate it with the sensor node;
(f) is a screenshot of a user giving a voice command to activate sound on the sensor node;
(a) to (c) are perspective views of a physical design of a sensor node;
(d) to (f) are schematic drawings of a user physically interacting with the sensor node: (d) turning, (e) pressing, (f) shaking;
Humans have evolved to use sound (apart from vision) as one of the primary media for communication. In addition, humans perceive the world around them through their five senses (sight, sound, smell, touch and taste). Among these five senses, sound is perhaps the most natural ‘active’ form of two-way communication, since humans hear and produce sound naturally. Likewise, for natural ‘passive’ one-way communication to humans, the senses of sight and sound are perhaps the most efficient in terms of range. Embodiments may seek to enable users to extend their natural senses of sight and sound. Users may be able to ‘hear further’, ‘speak further’ and ‘communicate’ naturally with objects and the environment through sound and light enabled input/output devices distributed within their home or work environment, or even in remote locations.
According to a first embodiment shown in
i. Sensor Node
As shown in
Sensor nodes provide both auditory and visual output. A diaphragm or MEMS speaker 210 is used to provide auditory output. Visual output can be based on one or more tri-colour LED lights 212 or a more complex display such as an OLED or E-Ink display.
Each sensor node has a wireless radio frequency (RF) module 214 which can be based on 2.4 GHz or sub-GHz frequency. The wireless module is used for exchanging messages with other sensor nodes and with the networking base station. A near field communication (NFC) reader 216 is available on the sensor node for contactless communication to establish pairing with receiver devices.
A 32-bit ARM microprocessor 218 is used for interfacing the input, output and wireless modules. The microprocessor should meet the following minimum requirements: processor speed of 48 MHz, 8 KB of RAM and 32 KB of FLASH, support for SPI, I2C, UART, ADC and GPIO. A rechargeable battery is used to power each sensor node.
For example an ARM Cortex-M4 microprocessor may be used. A MEMS microphone (ADMP401) with analog output is used for measuring audio signals, which are fed into an amplifier circuit. Variable gain control on the amplifier is achieved through a digital potentiometer (MCP4131) acting as a feedback resistor. A mechanical push button switch is used for detecting user press input. A low profile shaft-less rotary encoder is used for detecting the turning gesture from users. A 3-axis accelerometer with analog outputs (ADXL335) is used to detect a shake input from a user. An RGB LED and an 8 Ohm speaker are connected to the output ports of the microcontroller. Wireless connectivity is achieved using a proprietary 2.4 GHz transceiver (nRF24L01+). A contactless communication controller (PN532) is used for data exchange with receiver devices.
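The variable gain control just described can be illustrated with a minimal C sketch that writes the MCP4131 wiper register over SPI, changing the feedback resistance and hence the amplifier gain. This is a sketch only: spi_select() and spi_transfer() are assumed HAL hooks rather than the actual firmware API, although the two-byte write format follows the MCP4131 datasheet.

#include <stdint.h>

/* Assumed HAL hooks (illustrative, not the actual firmware API). */
extern void    spi_select(int asserted);   /* drive the chip-select line */
extern uint8_t spi_transfer(uint8_t out);  /* shift one byte out, return byte in */

#define MCP4131_WIPER_MAX 128u             /* 129 wiper positions: 0..128 */

/* Set the microphone amplifier gain by moving the digital
 * potentiometer wiper acting as the feedback resistor. */
void mic_gain_set(uint8_t wiper)
{
    if (wiper > MCP4131_WIPER_MAX)
        wiper = MCP4131_WIPER_MAX;
    spi_select(1);
    (void)spi_transfer(0x00);   /* command byte: volatile wiper 0, write */
    (void)spi_transfer(wiper);  /* data byte: new wiper position */
    spi_select(0);
}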
ii. Wireless Base Station
For example an ARM Cortex-M4 microprocessor may be used, connected to an nRF24L01+ 2.4 GHz transceiver (with power amplifier and external antenna for improved range) and a Bluetooth module 306. For compatibility with iOS devices and Android devices that support Bluetooth Low Energy (BLE), a BLE112 Bluetooth 4.0 module is used. For compatibility with devices supporting Bluetooth 2.1 and below, a RN-42 Bluetooth module is used.
iii. Receiver Device
The receiver device can be based on a computing (mobile) device with the associated software applications or a receiver dongle. The receiver device can receive notification messages from the wireless base station. In certain hardware configurations, the receiver device can receive notification messages directly from sensor nodes. The function of the receiver device is to inform a user of any sensor trigger events through visual, haptic and/or audio feedback.
The computing (mobile) device is used for communication with the wireless base station. This could include any form of (portable/wearable) computing device that supports the software and hardware requirements. The basic hardware requirements for the device include Bluetooth, Wi-Fi, a display screen, user input capability (capacitive touch or physical buttons) and audio output capability (speaker). Software requirements vary with the operating system on the device. For example a mobile phone running Android 4.0 and above, or iOS version 6.0 (with Bluetooth 4.0 support) and above, may be used.
i. Sensor Node
The firmware running on each sensor node is interrupt based. In order to reduce power consumption and increase the operating time of the sensor node, the device is put to sleep most of the time unless an interrupt occurs. There are three interrupt events that can occur: a user input (from the push button 206, rotary encoder 208 and/or accelerometer 204), a microphone 202 input (exceeding a predefined threshold) and a wireless module 214 event (when there is data available to be read).
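A minimal C sketch of this interrupt-driven structure follows. The handler and sleep-entry names are assumptions standing in for the node's actual drivers; the point is the pattern of sleeping until one of the three interrupt sources fires.

#include <stdbool.h>

/* Flags set by the three interrupt sources described above. */
static volatile bool user_input_pending;  /* push button, rotary encoder or accelerometer */
static volatile bool mic_pending;         /* microphone level exceeded the threshold */
static volatile bool radio_pending;       /* wireless module has data to be read */

void user_input_isr(void) { user_input_pending = true; }
void microphone_isr(void) { mic_pending = true; }
void radio_isr(void)      { radio_pending = true; }

/* Assumed hooks (names are illustrative, not the actual firmware API). */
extern void enter_low_power_sleep(void);  /* e.g. WFI on a Cortex-M part */
extern void handle_user_input(void);
extern void handle_sound_event(void);
extern void handle_radio_message(void);

int main(void)
{
    for (;;) {
        enter_low_power_sleep();  /* stay asleep until an interrupt fires */
        if (user_input_pending) { user_input_pending = false; handle_user_input(); }
        if (mic_pending)        { mic_pending = false;        handle_sound_event(); }
        if (radio_pending)      { radio_pending = false;      handle_radio_message(); }
    }
}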
When the device is first switched on or reset, it defaults to configure mode, which is activated by pressing the push button switch 206. As shown in
In the Input Mode, the device serves a sound monitoring function using the on-board microphone 202. A user can further adjust the sensitivity of the microphone by turning 506 the rotary encoder 208, whereupon the LED 212 changes its brightness accordingly. Whenever the microphone 202 receives a sound exceeding the defined threshold 508, the LED 212 on the sensor node will start blinking 510 and a message containing the ID of the sensor node will be sent to the base station.
In Output Mode, the device becomes a receiver to support an output triggering function. The sensor node waits for an incoming command received through the RF module 214. Upon receiving this command 512, it activates 514 the speaker 210 to produce a beeping tone and also blinks the LED light 212. A user can turn off the alarm and light by pressing 516 the push button switch 206 on the sensor node device. The user can also shake 518 the device to reset to configure mode.
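The Input and Output Mode behaviour of the two preceding paragraphs can be sketched as a simple state machine in C. All helper functions below are assumed stand-ins for the node's drivers; only the mode transitions themselves follow the description above.

#include <stdint.h>
#include <stdbool.h>

typedef enum { MODE_CONFIGURE, MODE_INPUT, MODE_OUTPUT } node_mode_t;
static node_mode_t mode = MODE_CONFIGURE;  /* default after power-on or reset */

/* Assumed driver stand-ins. */
extern uint16_t mic_level(void);           /* amplified microphone reading */
extern uint16_t mic_threshold;             /* adjusted by turning the rotary encoder */
extern bool rf_trigger_command_received(void);
extern void rf_send_trigger(uint8_t node_id);
extern void led_blink(void);
extern void speaker_beep(bool on);
extern bool button_pressed(void);
extern bool shake_detected(void);

void node_step(uint8_t my_id)
{
    switch (mode) {
    case MODE_INPUT:                       /* sound monitoring */
        if (mic_level() > mic_threshold) {
            led_blink();                   /* local visual feedback */
            rf_send_trigger(my_id);        /* report the node ID to the base station */
        }
        break;
    case MODE_OUTPUT:                      /* remotely triggered alarm */
        if (rf_trigger_command_received()) {
            speaker_beep(true);            /* beeping tone */
            led_blink();
        }
        if (button_pressed())
            speaker_beep(false);           /* pressing silences the alarm */
        if (shake_detected())
            mode = MODE_CONFIGURE;         /* shaking resets to configure mode */
        break;
    case MODE_CONFIGURE:
    default:
        break;                             /* mode selection handled separately */
    }
}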
ii. Wireless Base Station
For a pairing notification 602, the program updates the database 612 with the sensor node device ID and the destination ID of the receiver. For a trigger notification 604, the program sets a trigger status flag 614 in the database indicating that a particular sensor node has been triggered. For a query command 606, the program retrieves 616 the status flags from the database based on the ID of the receiver device and sends a reply 618 to the receiver to indicate if the sensor node has been triggered or if it is running low on battery power. For a low battery notification 608, the program updates a battery status flag 620 in the database indicating that the sensor node is running low on battery power. For a trigger sound command 610, the program receives a command 622 to query the ID of the sensor node tagged to the receiver and then issues a command 624 to the sensor node to trigger sound on it.
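This dispatch logic can be summarized in a short C sketch. The message codes below reuse the reference numerals purely as illustrative values, and the database helpers are assumptions; only the five message categories and their effects come from the description above.

#include <stdint.h>
#include <stdbool.h>

enum msg_type {               /* illustrative codes reusing the reference numerals */
    MSG_PAIRING       = 602,
    MSG_TRIGGER       = 604,
    MSG_QUERY         = 606,
    MSG_LOW_BATTERY   = 608,
    MSG_TRIGGER_SOUND = 610
};

struct msg { uint16_t type; uint8_t node_id; uint8_t receiver_id; };

/* Assumed database and radio helpers. */
extern void    db_store_pairing(uint8_t node_id, uint8_t receiver_id);
extern void    db_set_trigger_flag(uint8_t node_id, bool on);
extern void    db_set_battery_flag(uint8_t node_id, bool low);
extern bool    db_trigger_flag_for(uint8_t receiver_id);
extern bool    db_battery_flag_for(uint8_t receiver_id);
extern uint8_t db_node_for_receiver(uint8_t receiver_id);
extern void    rf_reply_status(uint8_t receiver_id, bool triggered, bool low_batt);
extern void    rf_command_sound(uint8_t node_id);

void base_station_dispatch(const struct msg *m)
{
    switch (m->type) {
    case MSG_PAIRING:     db_store_pairing(m->node_id, m->receiver_id); break;
    case MSG_TRIGGER:     db_set_trigger_flag(m->node_id, true);        break;
    case MSG_LOW_BATTERY: db_set_battery_flag(m->node_id, true);        break;
    case MSG_QUERY:       /* reply with the flags recorded for this receiver */
        rf_reply_status(m->receiver_id,
                        db_trigger_flag_for(m->receiver_id),
                        db_battery_flag_for(m->receiver_id));
        break;
    case MSG_TRIGGER_SOUND: /* look up the paired node, then sound it */
        rf_command_sound(db_node_for_receiver(m->receiver_id));
        break;
    }
}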
iii. Receiver Dongle
For the sensor node and receiver device, the software is standalone, written and compiled specifically for the microprocessor type. For example, the firmware on the sensor node and receiver device is developed in C on a 32-bit ARM microprocessor, but it can be generalized to work on any microcontroller/microprocessor that meets the specified hardware requirements.
The software application is shown in
The software application on the mobile computational device can be written for various platforms including (but not limited to): Android OS, iOS, MeeGo, Symbian and Windows Mobile.
(a) shows the Android application background service: setting up a connection to the base station.
i. Sensor Node
The design of the sensor node may include two disks mounted so that they can be rotated relative to each other to trigger a rotary encoder as shown in
The flat surface on the top and bottom side of the casing has a semi-transparent diffuser that allows colour light from an RGB LED to be seen as shown in
With a symmetric design on a sensor node, each side can serve a different function. When using the sensor node to monitor a sound event on a specific object, the sensor node is attached to the object with its bottom surface facing down, such that the microphone faces toward the surface of the object. This maximizes sound reception from the object rather than from the surroundings. On the other hand, when using the sensor node to monitor for sound events around an area, the bottom side of the sensor node faces outward so that it can readily receive sound from the surrounding area.
There is only one LED inside the sensor node, but it produces a light that is visible on both sides. A user will see the whole sensor node lighting up regardless of which side it is attached to. Also, there is only one push button switch, which can be triggered from either side of the sensor node.
The LED will light up and blink when it detects sound regardless of whether it is monitoring an object or a space. It will also start blinking when a user remotely triggers sound on the sensor node.
The visual feedback may be in the form of a single LED or any form of visual display such as an OLED display screen or an E-Ink display.
The (directional) microphone may be oriented such that its main receiving lobe is toward the front face of the sensor node.
ii. Receiver Dongle
As seen in
Setting-Up and Interacting with Sensor Nodes
In the current embodiment of the system, a user physically interacts with the sensor node through three means of action: turning the face of the device, pressing the face of the device and shaking the device. Each of these user actions has been programmed for specific functions. We define the following user inputs and possible actions:
A user can establish pairing between sensor nodes and receiver devices by bringing them within close proximity (4 cm or less). The various pairing configurations include:
A pairing sequence involves the user bringing the receiver device close to the sensor node, whereupon the LED on the sensor node blinks for 5 seconds to indicate that a pairing request has been initiated and completed. The sensor node reads the ID of the receiver device using the on-board contactless communication controller and forwards this information to the wireless networking base station, to be subsequently stored in its database. The pairing can relate either to a mobile phone with the software app or to a receiver dongle.
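A minimal C sketch of this pairing sequence follows; the helper names are assumptions standing in for the PN532 and radio drivers.

#include <stdint.h>
#include <stdbool.h>

/* Assumed driver stand-ins. */
extern bool nfc_read_receiver_id(uint8_t *receiver_id); /* contactless read, ~4 cm range */
extern void led_blink_ms(uint32_t duration_ms);
extern void rf_send_pairing(uint8_t node_id, uint8_t receiver_id);

bool try_pairing(uint8_t my_node_id)
{
    uint8_t receiver_id;
    if (!nfc_read_receiver_id(&receiver_id))
        return false;                         /* no receiver device in range */
    led_blink_ms(5000);                       /* indicate the pairing request */
    rf_send_pairing(my_node_id, receiver_id); /* base station stores the pair */
    return true;
}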
i. Remote Monitoring of Everyday Object(s): Object Specific or Location Specific
ii. Remote Event Triggering on an Object
iii. Autonomous Response to Events without External Intervention from a User (e.g. Sensor Input Triggers a Predefined Output)
iv. Collaboration Between Sensor Nodes
a. Collective Input Monitoring/Capturing
(a) shows the ability to perform sound source localization using time of arrival with a plurality of sensor nodes around the sound source.
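One standard way to obtain the time-of-arrival differences is to cross-correlate the microphone buffers of two nodes. The C sketch below is an illustration of that general technique under that assumption, not the patented method itself.

#include <stddef.h>
#include <float.h>

/* Returns the lag (in samples, possibly negative) at which buffer b
 * best aligns with buffer a; multiplying by the sample period gives
 * the time difference of arrival between the two nodes. */
int tdoa_lag(const float *a, const float *b, size_t n, int max_lag)
{
    int best_lag = 0;
    float best = -FLT_MAX;
    for (int lag = -max_lag; lag <= max_lag; ++lag) {
        float sum = 0.0f;
        for (size_t i = 0; i < n; ++i) {
            int j = (int)i + lag;          /* sample of b compared against a[i] */
            if (j >= 0 && j < (int)n)
                sum += a[i] * b[j];        /* accumulate the correlation at this lag */
        }
        if (sum > best) { best = sum; best_lag = lag; }
    }
    return best_lag;
}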
b. Collective Output
(b) shows that, with sound as output, multiple sensor nodes can be placed at strategic locations to create multi-channel sound effects. Alternatively, with light as an output, sensor nodes can be treated as individual pixels and collectively used to generate a display with higher dimensionality.
c. Input from Sensor Node(s) Triggering Response on Other Sensor Node(s) or Vice Versa.
(c) shows sensor nodes reacting and communicating with each other. One example would be sensors relaying messages. Recognition algorithms may be included to detect/classify specific sounds and generate different outputs accordingly.
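Message relaying of this kind can be sketched in C as a hop-limited rebroadcast; the field names and the hop-limit scheme below are illustrative assumptions.

#include <stdint.h>

struct relay_msg { uint8_t dest_id; uint8_t hops_left; uint8_t payload; };

/* Assumed radio and handler stand-ins. */
extern void rf_broadcast(const struct relay_msg *m);
extern void handle_payload(uint8_t payload);

void on_relay_message(uint8_t my_id, struct relay_msg m)
{
    if (m.dest_id == my_id) {
        handle_payload(m.payload);  /* message has reached its target node */
    } else if (m.hops_left > 0) {
        m.hops_left--;              /* hop limit prevents endless flooding */
        rf_broadcast(&m);           /* pass the message along */
    }
}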
v. Sensor Nodes Transforming an Everyday Object into a Sound Based Input Device for Interacting with Personal Digital Devices
The in-built sensing capabilities of each sensor node enable it to be used as an alternate input device that remaps its input measurements to inputs for other devices. For example,
vi. Industrial Applications
While example embodiments of the invention have been described in detail, many variations are possible within the scope of the invention as claimed as will be clear to a skilled reader.
Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/SG2013/000545 | 12/20/2013 | WO | 00
Number | Date | Country
---|---|---
61750578 | Jan 2013 | US