Embodiments of the present invention relate to sensor networks.
A typical sensor network is composed of spatially distributed autonomous sensor nodes that each measure physical and/or environmental conditions, such as temperature, sound, vibration, pressure, motion, or pollutants, and relay the measurement information to a central processing or data storage node. Sensor networks are used to monitor conditions in a wide variety of industrial and environmental settings and have traditionally been implemented using either electrical wires or wireless transmission for relaying the measurement results. With wired sensor networks, each wire electronically connects one or more sensor nodes to the central processing node. Each wired sensor node includes, in addition to sensors and a microcontroller, an energy source such as a battery. With wireless sensor networks, each sensor node can communicate with the central processing node using a separate radio frequency. Each wireless sensor node includes, in addition to sensors, a radio transceiver or other wireless communication devices, a microcontroller, and an energy source.
A grid of sensor nodes typically has to be deployed with accurate three-dimensional coordinate locations, such as longitude, latitude, and elevation, for each sensor node.
Various embodiments of the present invention are directed to systems and methods for deploying sensor nodes of a sensor network. System embodiments include an optical sensor used to accurately determine distances from each sensor node to a known reference location. Before a grid of sensor nodes is deployed, the optical sensor is used to identify a reference location. The optical sensor is then used to image the terrain as the grid of sensor nodes is deployed. For each sensor node, the optical sensor tracks the movement of a grid layer (e.g., person or vehicle) by capturing overlapping terrain images until the grid layer reaches a location at which a sensor node is to be deployed. The sensor node location with respect to the reference location is determined and programmed into the sensor node.
Images captured by the photo sensor array are transmitted to the processor 206 for image processing. The elevation sensor 208 detects conditions that can be used to determine the elevation of the optical sensor as terrain images are captured. The optical sensor 200 also includes a computer-readable medium 212, which can be any suitable medium that participates in providing instructions to the processor 206 for execution. For example, the computer-readable medium 212 can be non-volatile media, such as an optical or a magnetic disk, or volatile media, such as memory. Once the coordinate location of a sensor node has been determined relative to a reference location as described below, the computer-readable medium 212 can also take the form of a transmission medium, such as light or radio frequency waves, used to transmit the coordinate location to the sensor node. The computer-readable medium 212 can also store other software applications, including global positioning system applications for identifying the reference location.
The computer-readable medium 212 may also store an operating system, network applications, and an image processing application. The operating system can be multi-user, multiprocessing, multitasking, multithreading, real-time, and the like. The operating system can also perform basic tasks such as recognizing input from input devices, such as a keyboard, a keypad, or a mouse; sending output to the display 216; keeping track of files and directories on the medium 212; controlling peripheral devices, such as disk drives and printers; and managing traffic on the one or more buses 218. The network applications include components for establishing and maintaining network connections, such as software for implementing communication protocols including TCP/IP, HTTP, Ethernet, USB, and FireWire. In certain embodiments, some or all of the processes performed by the applications can be integrated into the operating system. In certain embodiments, the processes can be at least partially implemented in digital electronic circuitry, or in computer hardware, firmware, software, or in any combination thereof.
System embodiments of the present invention are not limited to all of the computation components being implemented in a single optical sensor.
The optical sensors 200 and 220 are operated by pointing the lens 214 at the ground in order to capture images of the terrain as a grid layer moves to a location at which a sensor node is to be deployed. For example, the grid layer can be a vehicle, and the optical sensor 200 can be attached to the vehicle bumper or suspended by a boom attached to the vehicle with the lens 214 pointed at the ground. As the vehicle operator drives the vehicle toward a desired location to deploy a sensor node, the optical sensor 200 captures overlapping images of the terrain. Alternatively, the optical sensor 200 can be implemented as a hand-held device and the grid layer can be a person. The person holds the optical sensor 200 to capture overlapping images of the terrain as the person walks or hikes toward a desired sensor node location.
The coordinate locations of the next five sensor nodes are determined in series by following the paths 242-246. Along each path a series of overlapping terrain images are captured and used to determine the coordinate location of each node. For example, once the coordinate location of sensor node 1 is determined and programmed into sensor node 1, the optical sensor is moved along the path 242 to the deployment location of sensor node 2. While the optical sensor is being moved from sensor node 1 to sensor node 2, overlapping terrain images are captured. The overlapping terrain images are used to determine the coordinate location 248 of sensor node 2, and the coordinate location is programmed into sensor node 2. The coordinate locations of sensor nodes 3-6 are determined in a like manner by following paths 243-246.
At the beginning of deploying a grid of sensor nodes, a reference location 308 is identified and a terrain image I0 of the reference location is captured using the optical sensor. The longitude and latitude coordinates of the reference location 308, identified as x0 and y0, can be obtained using a global positioning system (“GPS”). The GPS coordinates (x0,y0) of the reference location are entered into the optical sensor 200 or computing device 218, and the point within the terrain image corresponding to the reference location (x0,y0) is identified. In certain embodiments, the display 216 can be a touch screen and the operator can identify the reference location (x0,y0) in the image by touching the point on the display that corresponds to the reference location (x0,y0), or, in other embodiments, the operator can move a mouse cursor to the pixels associated with the reference location (x0,y0) and click the mouse button to identify the reference location in the terrain image.
However, GPS systems identify only the longitude and latitude coordinates and cannot determine the elevation z. The elevation sensor 208 can include a system for measuring the air pressure P. Air pressure typically varies smoothly from the Earth's surface to the mesosphere. Although air pressure changes with weather conditions, barometric formulas that relate air pressure to elevation z have been determined, and average air pressure versus elevation has been tabulated for many places around the Earth. Barometric formulas or tabulated elevations and pressures can be used to determine the elevation z as follows.
In certain embodiments, based on the measured air pressure P, the elevation z can be calculated using a barometric formula. For example, given the air pressure P, an approximate elevation z at any location between mean sea level and 11,000 meters (36,089 feet) can be determined using

z = (T0/L)[1 − (P/P0)^(RL/gM)]

where P0 is the standard sea-level pressure (101,325 Pa), T0 is the standard sea-level temperature (288.15 K), L is the temperature lapse rate (0.0065 K/m), R is the universal gas constant (8.31447 J/(mol·K)), g is the gravitational acceleration (9.80665 m/s²), and M is the molar mass of dry air (0.0289644 kg/mol).
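A minimal sketch of this barometric calculation, using the conventional standard-atmosphere constants (the constants and function name are illustrative assumptions, not part of the description above):

```python
import math

# Standard-atmosphere constants (assumed values; formula valid below ~11,000 m)
P0 = 101325.0     # standard sea-level pressure, Pa
T0 = 288.15       # standard sea-level temperature, K
LAPSE = 0.0065    # temperature lapse rate, K/m
R = 8.31447       # universal gas constant, J/(mol*K)
G = 9.80665       # gravitational acceleration, m/s^2
M = 0.0289644     # molar mass of dry air, kg/mol

def elevation_from_pressure(pressure_pa: float) -> float:
    """Approximate elevation z in meters from a measured air pressure P."""
    exponent = R * LAPSE / (G * M)                       # ~0.1903
    return (T0 / LAPSE) * (1.0 - (pressure_pa / P0) ** exponent)
```

At standard sea-level pressure the formula returns an elevation of zero; lower measured pressures yield correspondingly higher elevations.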
In other embodiments, the elevation z can be calculated by interpolating a polynomial approximation of the elevation as a function of air pressure where the data points used to construct a polynomial approximation are based on tabulated pressure and elevation data. An example portion of a look-up table that can be used to interpolate an approximate elevation is given in Table I:
TABLE I
Elevation	Air pressure
0 ft	101.33 kPa
500 ft	99.51 kPa
1000 ft	97.72 kPa
1500 ft	95.95 kPa
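As a sketch, linear interpolation over such a pressure-versus-elevation table might look like the following (the table values here are standard-atmosphere figures assumed for illustration, and the function name is hypothetical):

```python
# Example lookup table: (elevation in ft, air pressure in kPa),
# sorted by increasing elevation. Values are standard-atmosphere
# approximations assumed for illustration.
PRESSURE_TABLE = [
    (0,    101.33),
    (500,   99.51),
    (1000,  97.72),
    (1500,  95.95),
]

def elevation_from_table(pressure_kpa: float) -> float:
    """Linearly interpolate elevation (ft) from a measured pressure (kPa)."""
    # Pressure decreases as elevation increases, so each bracket
    # satisfies p_hi <= pressure <= p_lo.
    for (z_lo, p_lo), (z_hi, p_hi) in zip(PRESSURE_TABLE, PRESSURE_TABLE[1:]):
        if p_hi <= pressure_kpa <= p_lo:
            frac = (p_lo - pressure_kpa) / (p_lo - p_hi)
            return z_lo + frac * (z_hi - z_lo)
    raise ValueError("pressure outside table range")
```

A measured pressure halfway between two tabulated pressures yields the midpoint of the corresponding elevations.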
Note that in order to obtain an accurate ground elevation, the elevation z is the elevation of the optical sensor when the terrain image is captured minus the height of the optical sensor above the ground.
Returning to
Once the reference location coordinates have been determined, the grid layer can begin moving to a desired location at which a sensor node is to be placed. The optical sensor 200 begins capturing overlapping images of the landscape terrain as the grid layer moves.
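The embodiments above do not prescribe a particular algorithm for tracking the overlapping terrain images. One common way to estimate the translation between two overlapping images is phase correlation; the sketch below assumes grayscale images stored as NumPy arrays and is illustrative only:

```python
import numpy as np

def pixel_shift(img_a: np.ndarray, img_b: np.ndarray) -> tuple:
    """Estimate the (dy, dx) translation of img_a relative to img_b
    using phase correlation (one possible tracking method)."""
    f_a = np.fft.fft2(img_a)
    f_b = np.fft.fft2(img_b)
    cross = f_a * np.conj(f_b)
    cross /= np.abs(cross) + 1e-12            # normalize to unit magnitude
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Peaks in the upper half of each axis correspond to negative shifts.
    if dy > img_a.shape[0] // 2:
        dy -= img_a.shape[0]
    if dx > img_a.shape[1] // 2:
        dx -= img_a.shape[1]
    return dy, dx
```

Converting the pixel shift between successive images to a ground displacement would additionally require the camera height and lens parameters, which this sketch omits.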
\vec{v}_i = (x_i, y_i, z_i) - (x_{i-1}, y_{i-1}, z_{i-1})

where (x_0, y_0, z_0) = (0, 0, 0).
The coordinate location of the next sensor node to be deployed is determined by tracking the terrain images as described above with reference to
The resultant vector \vec{v}_{result} = \sum_i \vec{v}_i represents the direction and distance of the sensor node from the origin. The coordinates of the resultant vector associated with the sensor node are programmed into the sensor node.
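A minimal sketch of forming the resultant vector from the per-segment displacement vectors (the function name and example values are hypothetical):

```python
def resultant_vector(displacements):
    """Sum displacement vectors (dx, dy, dz), measured between successive
    terrain images, to obtain the sensor node coordinates relative to the
    reference location at the origin."""
    x = sum(d[0] for d in displacements)
    y = sum(d[1] for d in displacements)
    z = sum(d[2] for d in displacements)
    return (x, y, z)


# Example: three displacement segments traveled from the reference location.
segments = [(1.0, 2.0, 0.5), (3.0, -1.0, 0.5), (0.0, 0.0, -1.0)]
node_location = resultant_vector(segments)   # (4.0, 1.0, 0.0)
```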
The elevation at each point of the terrain images Ii can be determined by taking an air pressure measurement and calculating the elevation using a barometric formula, or by evaluating a polynomial approximation of the elevation as a function of air pressure, where the polynomial is constructed from tabulated pressure and elevation data. In other embodiments, the elevation sensor 208 can be configured to detect changes in the orientation of the optical sensor as the optical sensor is moved. These changes can then be used to approximate the elevation.
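As an illustrative sketch only, one assumed model for the orientation-based approach accumulates elevation change as dz = distance × sin(pitch) over each traveled segment (this model and the function name are assumptions, not taken from the description above):

```python
import math

def elevation_change(segments):
    """Approximate total elevation change from a list of
    (distance_m, pitch_rad) segments, assuming dz = distance * sin(pitch)."""
    return sum(dist * math.sin(pitch) for dist, pitch in segments)


# Example: 100 m traveled up a 30-degree slope gains about 50 m of elevation.
gain = elevation_change([(100.0, math.pi / 6)])
```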
The foregoing description, for purposes of explanation, used specific nomenclature to provide a thorough understanding of the invention. However, it will be apparent to one skilled in the art that the specific details are not required in order to practice the invention. The foregoing descriptions of specific embodiments of the present invention are presented for purposes of illustration and description. They are not intended to be exhaustive or to limit the invention to the precise forms disclosed. Obviously, many modifications and variations are possible in view of the above teachings. The embodiments are shown and described in order to best explain the principles of the invention and its practical applications, to thereby enable others skilled in the art to best utilize the invention and various embodiments with various modifications as are suited to the particular use contemplated. It is intended that the scope of the invention be defined by the following claims and their equivalents: