The invention relates generally to systems and methods of tracking packages and other assets.
The shipping of packages, including, but not limited to, letters, parcels, containers, and boxes of any shape and size, is big business, one that grows annually because of online shopping. Every day, people and businesses from diverse locations throughout the world ship millions of packages. Efficient and precise delivery of such packages to their correct destinations entails complex logistics.
Most package shippers currently use barcodes on packages to track movement of the packages through their delivery systems. Each barcode stores information about its package; such information may include the dimensions of the package, its weight, and its destination. When shipping personnel pick up a package, they scan the barcode to sort the package appropriately. The delivery system uses this scanned information to track movement of the package.
For example, upon arriving at the city of final destination, a package rolls off a truck or plane on a roller belt. Personnel scan the package, and the system recognizes that the package is at the city of final destination. The system assigns the package to an appropriate delivery truck with an objective of having delivery drivers operating at maximum efficiency. An employee loads the delivery truck, scanning the package while loading it onto the truck. The scanning operates to identify the package as “out for delivery”. The driver of the delivery truck also scans the package upon delivery to notify the package-delivery system that the package has reached its final destination.
Such a package-delivery system provides discrete data points for tracking packages, but it has its weaknesses: there can be instances where the position or even the existence of the package is unknown. For example, a package loader may scan a package for loading on delivery truck A, but the package loader may place the package erroneously on delivery truck B. In the previously described package-delivery system, there is no way to prevent or quickly discover this error.
Further, package-delivery systems can be inefficient. Instructions often direct the person who is loading a delivery truck to load it for optimized delivery. This person is usually not the delivery person. Thus, his or her perception of an efficient loading strategy may differ greatly from that of the person unloading the vehicle. Further, different loaders may pack a vehicle differently. Additionally, the loader may toss packages into the truck or misplace them. Packages may also shift during transit. Time spent by drivers searching for packages in a truck is wasted cost and an inefficiency that financially impacts shippers.
Industry has made attempts to track packages efficiently. One such attempt places RFID (Radio Frequency Identification) chips on the packages. Such a solution requires additional systems and hardware. For instance, this solution requires the placement of an RFID tag on every package and the use of readers by package loaders or the placement of readers throughout the facility to track packages.
All examples and features mentioned below can be combined in any technically possible way.
In one aspect, a package tracking system comprises a package room for holding packages intended for delivery to one or more package recipients, an optical sensing device positioned to capture one or more images of each package brought to the package room, one or more light sources, and a computing system including a processor, memory, and executable code stored on the memory. The processor executes the executable code to detect a presence and location of a given package held in the package room based on the one or more images captured by the optical sensing device and on package identification information relating to the given package. The processor further executes the executable code to cause the one or more light sources to shine on or near the given package in the package room to provide light-based guidance specific to the given package.
The processor may execute the executable code to cause the one or more light sources to shine on or near the given package in the package room to show a person retrieving the given package where the given package resides by shining light on or near the given package in the package room. The one or more light sources may comprise a strip of lights disposed at a front region of package shelving, wherein the processor executes the executable code to cause a given light in the strip of lights to illuminate to show where the given package resides, the given light in the strip of lights being located near the given package on the package shelving.
The processor may execute the executable code to cause the one or more light sources to focus light onto or illuminate a location where a package being dropped off is to be placed in the package room, to superimpose a light-based image on or near the given package in the package room, wherein the superimposed light-based image is a text message, or both. The text message may indicate a public service notice, a traffic notice, a weather condition notice, or any combination thereof.
The processor may execute the executable code to cause the one or more light sources to produce an outline around a region of the package room, wherein the outline corresponds to a field of view of the optical sensing device.
The one or more light sources may be coupled directly to the optical sensing device and the outline produced by the one or more light sources that corresponds to the field of view of the optical sensing device is predetermined by the coupling.
The one or more light sources may be independent of the optical sensing device and calibrated to the optical sensing device to establish the field of view of the optical sensing device.
In another aspect, provided is a method of providing light-based guidance in a package tracking system. The method comprises holding in a package room packages intended for delivery to one or more package recipients, capturing one or more images of each package brought to the package room, detecting a presence and location of a given package being held in the package room based on the one or more images captured of the given package and on package identification information relating to the given package, and shining light on or near the given package in the package room to provide light-based guidance specific to the given package.
The shining of a light on or near the given package in the package room may operate to show a person retrieving the given package where the given package resides in the package room.
The method may further comprise illuminating a location where a package that is being dropped off is to be placed in the package room, or superimposing a light-based image on or near the given package in the package room. The superimposed light-based image may be a text message. The text message may indicate a public service notice, a traffic notice, a weather condition notice, or a combination thereof.
The method may further comprise displaying an outline around a region of the package room, wherein the outline corresponds to a field of view of an optical sensing device disposed in the package room.
The above and further advantages of this invention may be better understood by referring to the following description in conjunction with the accompanying drawings, in which like numerals indicate like structural elements and features in various figures. The drawings are not necessarily to scale, emphasis instead being placed upon illustrating the principles of the invention.
Package tracking systems described herein actively and continuously track packages. Advantageously, such systems may not require major alterations in personnel behavior and can be implemented with low hardware cost. In general, these systems employ cameras, depth sensors, or other optical sensors (herein referred to generally as cameras) to track packages, objects, assets, or items (herein referred to generally as packages). The cameras are placed in or adjacent to the holding area for the packages, for example, the cargo bay of a delivery vehicle or a package room. One or more cameras can also be situated near a package conveyor or roller belt, to track the movement of packages optically before the packages are placed into a holding area. A package barcode is scanned in conjunction with the package being moved into the holding area. As used herein, a barcode is any readable or scannable medium, examples of which include, but are not limited to, electronic, magnetic, optical, electromagnetic, infrared, or semiconductor media, or any suitable combination thereof. Package identification information about the package is determined from scanning the package barcode. Such package identification information typically includes dimensions, weight, contents, or other information that may be utilized to detect and track the package.
An image processor analyzes the video stream from the cameras associated with the holding area to detect the presence of the package(s) contained within. When a package is identified, the image processor determines if the package corresponds to the package data derived from the package barcode. If the package barcode data and package image data match with a high degree of confidence, the system marks the package as existing within the camera area of coverage (e.g., within the delivery vehicle). Any user that thereafter views a stream of the camera view or a static image of the packages inside the holding area may receive an overlay that identifies the packages contained therein and their precise location.
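By way of a non-limiting illustration, the following Python sketch shows one way such an identification overlay could be drawn on a camera frame with the OpenCV library; the list of registered packages, its bounding-box format, and the label text are assumptions made for the example rather than features of the described system.

    # Sketch: draw an identification overlay on a live camera frame.
    # Assumes each registered package has a known bounding box in image
    # coordinates and a label derived from its scanned barcode data.
    import cv2

    def draw_package_overlay(frame, registered_packages):
        # registered_packages: list of dicts with keys "label" and "bbox",
        # where "bbox" is (x, y, width, height) in pixels (hypothetical format).
        annotated = frame.copy()
        for pkg in registered_packages:
            x, y, w, h = pkg["bbox"]
            cv2.rectangle(annotated, (x, y), (x + w, y + h), (0, 255, 0), 2)
            cv2.putText(annotated, pkg["label"], (x, y - 8),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
        return annotated

    # Example usage with a single frame from a camera stream:
    # cap = cv2.VideoCapture(0)
    # ok, frame = cap.read()
    # view = draw_package_overlay(frame, [{"label": "Jones 4512", "bbox": (120, 80, 90, 60)}])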
A package tracking system can also employ one or more guidance mechanisms (e.g., audible, visual) to guide placement of a package into a holding area or to bring attention to the present location of a package (e.g., for purposes of removal).
Shipper systems typically identify and track packages 116 using barcodes. A barcode is placed on a package 116 when the shipper takes possession of the package. The barcode includes package identification information about the package, including the package dimensions, identification number, delivery address, shipping route and other data. The term barcode is to be broadly understood herein to include images or markings on a package that contain information or data (coded or otherwise) pertaining to the package. The barcode on the package is initially scanned into the system 100 with a scanner 124.
In general, the scanner 124 may be optical, magnetic, or electromagnetic means, depending on the type of barcode on the package. The scanner 124 may be a conventional barcode scanner or a smart phone or tablet-like device. The form factor of the scanner 124 is not limiting. Example embodiments of the scanner 124 and techniques for wirelessly tracking the scanner 124 are described in U.S. patent application Ser. No. 14/568,468, filed Dec. 12, 2014, titled “Tracking System with Mobile Reader,” the entirety of which is incorporated by reference herein.
The system 100 includes an optical system. In this embodiment, the optical system includes four optical sensors represented by cameras 118-1, 118-2, 118-3, and 118-4 (generally, camera 118). Each camera 118 has a field of view 120 covering a portion of the area within which the packages 116 lie (to simplify the illustration, only one field of view is shown). An appropriate number of cameras 118 can be mounted inside the tracking area 112 in such a way to provide a complete field of view, or at least a functionally sufficient field of view, of the area 112, and, in some cases, of an area outside the area 112 (e.g., a conveyor belt moving the packages prior to loading). Before the system 100 begins to operate, each camera position is fixed to ensure the camera(s) cover the tracking area 112. The exact position and number of cameras 118 is within the discretion of the system designer.
The camera 118 may be a simple image or video capture camera in the visual range, an infrared light detection sensor, depth sensor, or other optical sensing approach. In general, this camera enables real-time package tracking when the package is within the camera's area of coverage. The area of coverage is preferably the shelves 114 and tracking area 112. In some instances, the field of view can extend beyond the tracking area 112, to ensure that the packages scanned outside the tracking area 112 correspond to those packages placed inside the tracking area 112.
In addition, each camera 118 is in communication with a processor 122 (CPU 122), for example, a DSP (digital signal processor) or a general processor of greater or lesser capability than a DSP. In one embodiment, the CPU 122 is a Raspberry Pi. Although shown as a single CPU within the tracking area 112, the processor 122 can be a processing system comprised of one or more processors inside the tracking area, outside of the tracking area, or a combination thereof. Communication between the cameras 118 and the CPU 122 is by way of a wired or wireless path or a combination thereof. The protocol for communicating images, the compression of image data (if desired), and the required image quality are left to the discretion of the designer.
In one embodiment, the cameras 118 are video cameras running in parallel, and the cameras simultaneously provide images to the CPU 122, which performs an image processing solution. For this approach, the images are merged into a pre-determined map or layout of the tracking area 112 and used like a panorama. (Alternatively, or additionally, the CPU 122 can merge the images into a mosaic, as described in more detail below). The camera images are synchronized to fit the map and operate as one camera with a panorama view. In this embodiment, two (or more) cameras capture two different perspectives and the CPU 122 flattens the images by removing perspective distortion in each of them and merges the resulting image into the pre-determined map.
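As a rough, non-limiting sketch of this flattening step, the following Python code uses OpenCV to remove perspective distortion from one camera view by warping it onto a planar map of the tracking area; the four point correspondences are assumed to come from installation-time calibration, and the names are illustrative only.

    # Sketch: warp one camera view onto a planar map of the tracking area,
    # removing perspective distortion so multiple views can be merged.
    import cv2
    import numpy as np

    def flatten_to_map(camera_frame, image_corners, map_corners, map_size):
        # image_corners: four pixel points in the camera image (e.g., shelf corners)
        # map_corners:   the same four points expressed in map coordinates
        # map_size:      (width, height) of the map image in pixels
        H = cv2.getPerspectiveTransform(np.float32(image_corners),
                                        np.float32(map_corners))
        return cv2.warpPerspective(camera_frame, H, map_size)

    # Two flattened views could then be merged into one map, for example by
    # taking the brighter pixel where the views overlap:
    # merged = np.maximum(flattened_view_1, flattened_view_2)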
An image stitching process usually first performs image alignment using algorithms that can discover the relationships among images with varying degrees of overlap. These algorithms are suited for applications such as video stabilization, summarization, and the creation of panoramic mosaics, which can be used in the images taken from the cameras 118 (i.e., optical sensors) in the described system.
After alignment is complete, image-stitching algorithms take the estimates produced by such alignment algorithms and blend the images in a seamless manner, while taking care of potential problems, such as blurring or ghosting caused by parallax and scene movement, as well as varying image exposures within the environment in which the cameras are placed. Example image stitching processes are described in “Image Alignment and Stitching: A Tutorial,” by Richard Szeliski, Dec. 10, 2006, Technical Report, MSR-TR-2004-92, Microsoft Research; “Automatic Panoramic Image Stitching using Invariant Features,” by M. Brown and D. Lowe, International Journal of Computer Vision, 74(1), pages 59-73, 2007; and “Performance Evaluation of Color Correction Approaches for Automatic Multiview Image and Video Stitching,” by Wei Xu and Jane Mulligan, in Intl. Conf. on Computer Vision and Pattern Recognition (CVPR10), San Francisco, Calif., 2010, the teachings of which are incorporated by reference herein in their entireties.
In an alternative embodiment, a mosaic approach may be utilized to integrate the camera images. In this embodiment, one camera 118 is used for a certain area, a second (or third or fourth) camera 118 is used for another area, and a handoff is used during the tracking, with the images from the cameras 118 being run in parallel on the CPU 122. In a mosaic, as in a panorama approach, image data from the multiple cameras (or from other sensors) are merged into the map of the tracking area 112 (e.g., truck, container, plane, etc.), with each viewpoint designated for the area that is seen by the camera 118. It will be recognized that in both embodiments, a handoff is made when objects move from one viewpoint to another or are seen by one camera and not the others. These handoffs may be made using the images running in parallel on the cameras 118, with the package placement and movement determined by the CPU 122 using whichever camera has the best view of the package 116.
In an alternative embodiment, if the system 100 is using depth sensors, the image stitching operation can be omitted and each camera stream is processed independently for change, object detection, and recognition. Then, the resulting “areas of interest” are converted to individual point clouds (described further below).
In one embodiment, the image processing is performed by the CPU 122. Alternatively, if bandwidth is not a significant concern, the image data can be transferred to a central server for processing.
The image processing CPU 122 creates the aforementioned map of the tracking area 112 under surveillance. Locating the shelves 114 assists the image processing CPU 122 in identifying the edge locations of packages 116. Further, a priori calculation of the distance of each camera 118 from the shelves 114 assists in properly calculating package dimensions. In one embodiment, only a single reference dimension is needed, and the dimensions of a tracked asset 116 can be determined at any position in space relative to the known dimension. In the case of image or video cameras alone, a dimension reference must be related to a position in the tracking area 112 (i.e., because the length and depth of the shelves are known, the dimensions of a package placed on these shelves can be determined in relation to these shelves). In this embodiment, pixel counts or vector distances of the contours of these pixels can represent the package 116 and help determine relevant package dimension data.
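To make the single-reference-dimension calculation concrete, the short sketch below (with assumed example numbers) estimates a package dimension from pixel measurements using a known shelf length as the reference:

    # Sketch: derive a package dimension from a single known reference dimension.
    # A shelf of known physical length appears in the image; its pixel length
    # gives a scale that converts package pixel measurements to real units.
    SHELF_LENGTH_CM = 120.0       # known from the tracking-area map (assumed)
    shelf_length_px = 960.0       # measured pixel length of that shelf edge
    package_width_px = 184.0      # measured pixel width of a package contour

    scale_cm_per_px = SHELF_LENGTH_CM / shelf_length_px    # 0.125 cm per pixel
    package_width_cm = package_width_px * scale_cm_per_px  # 23.0 cm
    print(f"Estimated package width: {package_width_cm:.1f} cm")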
The scanners 124 are in communication with the central server 204, either continuously or through data dumps, to transfer the package identification information and the scan location when a barcode on a package is scanned. Typically, the location of the scanner 124 is generic (e.g., “Atlanta”).
Each delivery vehicle 202 includes a tracking area 112, containing packages 116, and a processor 122. Each delivery vehicle 202 may have a GPS system.
The image processing CPU 122 detects (step 310) the presence of the package 116-1, as described in more detail below.
As stated previously, the image processing CPU 122 includes wireless communication (commonly Bluetooth, Wi-Fi, or other communication methods and protocols suitable for the size of the area of coverage of the camera). The image processing CPU 122 continuously receives (step 314) real-time views captured by the cameras 118 in the delivery vehicle 202-1. Because the location of the matched package is stored in the memory of the image processing CPU 122, the real-time image data from the camera 118 is streamed to a handheld, fixed, or mounted view screen to show the live view of the package overlaid with augmented reality markings identifying the package. The image processing CPU 122 continuously monitors and tracks (step 314) within the vehicle 202-1 until motion of an object is detected (step 316). In response to the detection of motion, the process 300 returns to detecting packages at step 310.
Implications of such real-time tracking can be appreciated by the following illustration. A driver entering the delivery vehicle 202-1 may not and need not have any personal knowledge of what packages were loaded where in the vehicle. Instead, the driver carries a view screen (often in the form of a handheld tablet, smartphone, or scanner) that displays a stream of one of the cameras 118 in the cargo bay of the vehicle 202-1. The image appearing on the view screen includes marks identifying various packages. A mark may be a box around the live view of the package with text stating the package name, identifier, intended addressee, or most efficient package identifier. Upon arriving at a stop for an intended package addressee, for example Mr. Jones, the driver can walk to the back of the delivery vehicle. The system 200 may automatically display the package(s) intended for delivery to Mr. Jones using highlighting or demarcating for easy location. Alternatively, the driver can search the image data on the view screen for markings labeled “Jones”, and such packages are then demarcated on the view screen for easy location. In addition, the system 200 may employ light-based guidance to show the driver the location of the package.
In some embodiments, multiple live streams of the cargo in a vehicle are available, with each camera (e.g., camera 118-1) providing its own view.
At step 506, an absolute difference is determined across the two images to detect the presence of new objects. To quicken the processing, threshold detection (step 508) may be utilized to detect regions of interest. In addition, in those regions of interest data may be filtered (step 510) to limit the amount of data processed. After filtering, threshold detection (step 512) may be utilized on the filtered data.
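A minimal Python version of this difference-and-threshold stage, assuming OpenCV and color frames, might look like the following; the blur kernel, threshold, and minimum-area values are illustrative placeholders, not values taken from the described system.

    # Sketch of steps 506-512: absolute difference between two frames,
    # smoothing to limit noise, and thresholding to isolate regions of change.
    import cv2

    def detect_change_regions(frame_prev, frame_curr,
                              blur_ksize=(21, 21), thresh=25, min_area=500):
        gray_prev = cv2.cvtColor(frame_prev, cv2.COLOR_BGR2GRAY)
        gray_curr = cv2.cvtColor(frame_curr, cv2.COLOR_BGR2GRAY)

        diff = cv2.absdiff(gray_prev, gray_curr)          # absolute difference (step 506)
        diff = cv2.GaussianBlur(diff, blur_ksize, 0)      # filtering (step 510)
        _, mask = cv2.threshold(diff, thresh, 255,
                                cv2.THRESH_BINARY)        # threshold detection (steps 508/512)

        # Keep only regions large enough to plausibly be a package or a person.
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        return [cv2.boundingRect(c) for c in contours
                if cv2.contourArea(c) >= min_area]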
At step 514, if no changes between the grayscale images are found, this indicates a high probability of no new package being located; the system 100 does not identify or mark a package. For instance, the loader may not have moved or loaded a package, or a new package cannot be located. The system 100 then acquires (step 502) the next temporal two frames (N and N+1). Sampling frequency may be continuous or at regular intervals according to designer preference, available processing power, and bandwidth.
If a change in the images (N and N−1) is detected at step 514, further analysis occurs. For example, the change detected by the system 100 may be the detection of the presence of the loader in the image. Alternatively, if changes in the images are indicative of a package moving, the image processing CPU 122 also continues to work on the current image data (frame N and N−1).
Those of ordinary skill in the art will recognize that a variety of images may be compared to determine loading or movement of a package. For example, an N ‘current frame’ and N-X ‘previous frame’ may be tested for motion, where X is greater than 1, and if motion occurs then the N-X frame (before motion occurred) may be saved as a background frame for later processing in comparison to a more recent image frame (i.e., a new N ‘current frame’). After motion is stopped, the background frame and a new N current frame are used for package location and identification.
Whenever a new package is located, the package is to be identified. In one embodiment, the image processing CPU 122 uses edge detection to determine (step 516) the dimensions of the package. Objects that are not compatible with being a package are filtered at this point. For example, if an object size is less than the smallest possible package, the object is ignored. The system 100 can also filter other objects of a size, dimension, or location that do not correspond to a package (e.g., the loader or a clipboard or tablet carried by the loader).
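One possible realization of this edge-based dimensioning and size filtering, again only as an illustrative Python sketch using OpenCV, is shown below; the Canny thresholds and minimum package area are assumed placeholders.

    # Sketch of step 516: detect edges within a region of change, take the
    # largest contour as the candidate package, and reject objects too small
    # to be a package.
    import cv2

    MIN_PACKAGE_AREA_PX = 2000   # assumed smallest plausible package footprint

    def measure_candidate_package(gray_roi):
        # Returns (width_px, height_px) of the candidate object, or None.
        edges = cv2.Canny(gray_roi, 50, 150)
        contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            return None
        largest = max(contours, key=cv2.contourArea)
        x, y, w, h = cv2.boundingRect(largest)
        if w * h < MIN_PACKAGE_AREA_PX:
            return None               # too small to be a package; ignore it
        return (w, h)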
Various metrics may be utilized in addition to or in conjunction with those described above to aid in identifying a package. For example, any object placed on a shelf (mapped as described above) may be weighted logically so as to be presumed to be the last scanned package. The package size, color (if cameras are color), contours, or other distinguishing characteristics may be compared to any data captured by the barcode scanner. As previously described, when a package barcode is scanned, the system 100 expects that the next package detected will match the scanned package. Reliance on this assumption is accurate provided loaders handle packages sequentially, that is, a barcode of a package is scanned and then that package is sorted and moved appropriately. This a priori knowledge facilitates package identification.
At step 518, the package dimensions are used to match the package to the scanned barcode data, as described previously.
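A simple tolerance-based form of this matching step could look like the following sketch; the relative tolerance and the data layout are assumptions made for illustration only.

    # Sketch of step 518: compare the measured dimensions of a detected object
    # with the dimensions recorded when the package barcode was scanned.
    def matches_scanned_package(measured_cm, scanned_cm, tolerance=0.15):
        # True if measured (w, h) agrees with scanned (w, h) within a relative
        # tolerance, trying both orientations of the package.
        def close(a, b):
            return abs(a - b) <= tolerance * b

        (mw, mh), (sw, sh) = measured_cm, scanned_cm
        return (close(mw, sw) and close(mh, sh)) or \
               (close(mw, sh) and close(mh, sw))

    # Example: a 23 x 31 cm measurement matched against a scanned 30 x 23 cm label.
    print(matches_scanned_package((23.0, 31.0), (30.0, 23.0)))   # True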
In addition to view screens, other package location identification methods can be used to improve the locating process. For example, as a vehicle arrives at the destination address for the delivery of a certain package, a light projector (LED, laser, or other) can be used to shine focused light, or a particular color of light, on the location of the package within the cargo area to show the delivery person exactly where the “matched” package is in the vehicle. The focused light can be altered to change colors, blink, flash, or shine a pattern to signal additional information to the delivery person, for example, priorities of delivery and warnings of weight, or to signify that the package of interest is behind or under another package. Information is directly overlaid on the package that is to be picked up, without needing any other screen or sound interface that might consume time to read or hear and consequently prolong the delivery process.
The above discussion assumes that a package that is scanned is relatively quickly matched to a package placed in the delivery vehicle. However, there may be instances where no match occurs or where a delay in matching occurs. This may occur if the package is loaded on the wrong truck, the driver scans one package but loads a different package, the driver tosses a package into the truck but not within video coverage (e.g., the package is occluded from view) or the driver's body occludes video coverage of a package.
In such situations, an embodiment of the system 100 requires a deliverable (i.e., a particular outcome) after a package is scanned. For example, if no package is detected that matches the scanned package, the system 100 may disallow further packages from being scanned, mark the package as scanned but unidentified, issue a warning to the loader, notify a central server of an unidentified package, or any combination thereof. The system designer may choose how rigidly to require package identification and processing (i.e., no further scanning until the package is appropriately tracked, or just marking the package as scanned but with an unconfirmed loading status).
In some situations, a package may be loaded without having been scanned. This may be a loader error, where the loader places the package on the wrong truck, or may be intentional as in the case of theft. In these situations, the image processing CPU 122 still recognizes the existence of a loaded package, but there will be no “match” of the loaded package to a scanned package. Such a package may be “marked” in image streams as “unidentified”, instead of with data identifying the package, and the system may issue a “warning” to the loader (visual/auditory or other) that an unidentified package is in the vehicle. The warnings may allow the loader (or driver) to correct the issue by scanning the package, placing the package in the camera view and producing an appropriately matched package. Alternatively, the system 100 may be constructed to disallow further scanning of packages if such errors occur, may issue warnings, may send the errors to the central server, or any combination thereof. In one example of an unidentified package being loaded into a delivery vehicle, the driver upon first entering the delivery vehicle may receive a notice that 300 packages have been loaded in the vehicle, but that one of the packages is “unidentified”. The driver's tablet can show the location of the unidentified package, and remedial action may be suggested to, or required from, the driver. Alternatively, a distinct light (i.e., red light) may be directed onto the location where the unidentified package rests.
Detection of a package may be delayed or inhibited by occlusion of the field of view (such as by the loader's body or another package). Using threshold detection of the loader's position inside the vehicle cargo area and the vehicle cargo-area map already stored by the CPU 122, the system 100 can compare the known map of the vehicle cargo space before the loader enters with a package against the new map of the vehicle cargo space after the loader places the package in the cargo area, to determine the location of the package. Thus, even if the loader's body temporarily occludes optical tracking as the package is placed inside the cargo area, the package can be located, identified, and matched by comparing image frames after the loader leaves the cargo area to frames before the loader entered the cargo area.
In one embodiment, the system 100 performs the process 500 to track packages continuously after they have been scanned, loaded, and “matched”. The process 500 enables tracking of matched packages within an area of coverage after a package has been identified (“marked”). Specifically, after a package is loaded and marked in one place, the image processing CPU 122 can regularly (or continuously) perform the same (or similar) threshold detection to search for a “change” at the location of interest. This accounts for object movement during transport.
In this scenario, the system 100 has identified packages within the area of coverage and no new packages have been scanned. This may represent when the driver is driving the vehicle to a destination. If the image processing CPU 122 detects a change at or near a package location, a tracking subroutine is invoked. The detection of a change may comprise an image absolute difference comparison between frames as previously described with respect to detailed image processing. The processor 122 analyzes the location of the package within the image at which the change occurred and determines if the package at that location still matches the data for the package captured off the barcode. If the match is identical, the system 100 may continue to label the package as corresponding to the package scanned and placed at that location.
If, however, no package is detected at the location or if the package dimensions do not match the expected package dimensions with a high level of confidence, the image processor 122 searches for an “unidentified” package that matches the moved package dimensions. When the matching package is located, its overlay marking on the cargo system is updated to display the new package location.
The above ability to identify movement of previously located packages is particularly valuable in delivery vehicles. Drivers often shift packages forward in the vehicle during the delivery day to make packages accessible. By monitoring known package locations and tracking the movement of a package to a new location, the system 100 maintains a real time map of package locations.
In another embodiment, the system 100 can be configured to reduce potential human loading errors that occur from a breakdown of a sequential loading pattern of scanning a package then loading that package immediately into truck. This reduction may be achieved by, for example, providing additional scanners over the delivery vehicle loading doors to scan bar codes automatically as packages are placed into the vehicle. Such a system can guarantee that the packages scanned are the packages loaded into the truck. After a package is scanned, it is also viewed by the optical sensors in the vehicle; that direct and almost simultaneous registration improves package identification.
In another embodiment, the system 100 can alternatively provide continuous, real time tracking, albeit with more complicated image processing. In such a system, for example, a person (loader, driver, etc.) may be identified and the system may detect objects located in the vicinity of the hands of the person to determine if the object matches the package expected to be loaded. Further, an algorithm for identifying a package or its unique identifier (size, color, etc.) may be tailored to specific environments or hardware. The tradeoff of such a full real-time tracking system is increased system complexity.
In another embodiment of the system 100, an augmented reality (“AR”) real time video view may be presented to the loader/driver. For AR video in real time, a single perspective of the vehicle cargo map is shown, with the designated packages needing to be taken highlighted or lit. The user may view one perspective of the vehicle from the front or from the back (depending on the direction from which the user is removing the packages), one perspective of the left side of the vehicle, and one perspective of the right side of the vehicle, each perspective associated with a camera. The image processing CPU 122 may determine where the driver/delivery person is and provide a perspective on the tablet based on the driver's position in relation to the package being delivered. As previously described, identifying the user's position within the area of coverage is analogous to identifying a package.
Additional package delivery data may be gathered using the present system. For example, the system 100 may track package movement in real time. Tracking package movement, especially its velocity, can help prevent mistreatment of packages, such as packages being thrown, dropped, or placed in positions that are not secure and risk having the packages fall. By tracking package movement in real time and determining its velocity, impacts from rough handling can be monitored and reported to improve the quality of the loading and unloading procedures and to prevent damage to the packages. In this embodiment, velocity may be determined by dividing the distance a package moves between frames by the time elapsed between those frames, which is given by the number of frames divided by the frame rate.
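As a worked illustration with assumed example values, speed can be estimated from a package's displacement between frames and the camera frame rate:

    # Sketch: estimate package speed from its displacement between two frames.
    # The pixel-to-metre scale and frame rate are assumed example values.
    import math

    FRAME_RATE_HZ = 30.0          # camera frame rate (assumed)
    METRES_PER_PIXEL = 0.004      # from the tracking-area calibration (assumed)

    def package_speed(pos_prev_px, pos_curr_px, frames_elapsed=1):
        # Speed in metres per second between two observed positions.
        dx = (pos_curr_px[0] - pos_prev_px[0]) * METRES_PER_PIXEL
        dy = (pos_curr_px[1] - pos_prev_px[1]) * METRES_PER_PIXEL
        dt = frames_elapsed / FRAME_RATE_HZ      # elapsed time in seconds
        return math.hypot(dx, dy) / dt

    # A package that moves 150 pixels over 15 frames (0.5 s) travels 0.6 m,
    # giving a speed of 1.2 m/s; a much higher value could indicate a throw.
    print(package_speed((100, 200), (250, 200), frames_elapsed=15))   # 1.2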
At step 606, an initial calibration is performed if a calibration has not been previously performed. A function of this initial calibration, which is performed over multiple image frames, is to determine background information both for 2D optical images and depth sensing. Any motion (e.g., people) is extracted or ignored (step 608) during background extraction until stable background optical (RGB) and depth information can be stored (step 610). Calibration may optionally include creation of a foreground or front-ground region. This front region limits the data set for analysis to a region near the shelves where objects of interest (e.g., packages) are to be located. Calibration may be performed on start-up or at intervals, and may be initiated by the user or by the system, for example, if errors are detected.
After calibration is complete, the resulting spatial filter masks are used to extract the “area of interest.” In one embodiment, this area of interest corresponds to the area between the background and the foreground, so everything that is not the wall and the shelves (the background) and not the person in front of the shelves (the foreground) is ignored. Ignoring the background and foreground focuses processing on data within the depth threshold of the area of interest being monitored. Alternatively, the “area of interest” can include a different part of the scene, for example, the foreground, in order to see where the person is in later recognition steps, and it can be expanded or contracted as system requirements dictate. In general, the area of interest applies to any cut-out of a scene that is to be the focus within which to perform object tracking.
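The depth-band filtering described above might be sketched as follows, assuming a depth sensor that returns a per-pixel depth image in millimetres and that background and foreground depths were stored during calibration; the names and margin value are illustrative assumptions.

    # Sketch: build an "area of interest" mask that keeps only pixels whose
    # depth lies between the calibrated foreground plane (in front of the
    # shelves) and the calibrated background (wall and shelf backs).
    import numpy as np

    def area_of_interest_mask(depth_mm, background_depth_mm,
                              foreground_depth_mm, margin_mm=50):
        # Boolean mask of pixels inside the monitored depth band.
        nearer_than_background = depth_mm < (background_depth_mm - margin_mm)
        farther_than_foreground = depth_mm > (foreground_depth_mm + margin_mm)
        valid = depth_mm > 0                   # zero often means "no reading"
        return nearer_than_background & farther_than_foreground & valid

    # Usage: apply the mask before change detection so that a person standing
    # in front of the shelves and the static shelving itself are ignored.
    # roi_depth = np.where(mask, depth_mm, 0)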
Multiple image frames (e.g., N−1 and N) are obtained (step 612) and compared (step 614), similar to the comparison performed in process 500.
In one embodiment, the process 600 compares two frames of image information for change, ignoring the background/foreground masks; any actual change in the image triggers further analysis. However, it is less processing and power intensive to detect only changes in the “area of interest” between the background and foreground (if foreground masking is utilized). When the background is stable, at step 622 absolute background subtraction is performed (likewise for the foreground). This step allows the resulting 3D information to be processed faster for determining areas of interest in which one or more new packages may be present. Absolute image subtraction may be performed using OpenCV library modules in one embodiment, though other alternative techniques may also be used.
With the background information (and foreground if applicable) subtracted, the process 600 checks (step 624) for changes in depth of any objects in the field of view of the camera(s) and the measurement field of the depth sensor(s). If no changes are found and no package has been scanned (step 626), this indicates that no package has been detected and the next images are processed (step 602). However, if a package was scanned (step 626), but no package was detected, the process 600 can use (step 628) historical optical and depth information (or information from an adjacent wireless tracking system) to register that the last scanned package has not been located, indicate the last known location of the package, and inform the user of the ambiguity.
When the area of interest is determined, a “point cloud” is generated (step 632) using the optical sensor's extrinsic and intrinsic parameters, through algorithms for “2D to 3D” data representation conversion performed on the RGB and/or depth images obtained and processed through OpenNI and OpenCV. In one embodiment, the Point Cloud Library may be used. The object shape and location information generated from the Point Cloud Library is used to identify and track a package in three dimensions using edge detection, color detection, object recognition, and/or other algorithms for determining an object within the scene. If the object information is in the shape of a human, for example, then the process 600 continues processing further image data and does not track the human (unless the system 100 tracks user motion). However, if the size, shape, or other appearance information indicates that the object is a package, the object is recorded as such. The process 600 resolves (step 634) the identity of a plurality of scanned packages based on this information by comparing expected package size, shape, and/or appearance attributes (as established by information associated with scanning a package) with measured information. The use of both optical and depth sensing information allows the system to calculate package size based on the 3D data generated from the camera images and depth sensor data. The identity, location, and other information (e.g., time of placement and motion) may be stored at a central server (e.g., the server 204).
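The “2D to 3D” conversion can be illustrated with standard pinhole-camera back-projection; the Python sketch below assumes a depth image in millimetres and camera intrinsics (fx, fy, cx, cy) obtained from calibration, and uses NumPy rather than the Point Cloud Library mentioned above.

    # Sketch: convert a depth image into a 3D point cloud using the pinhole
    # model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy, Z = depth.
    import numpy as np

    def depth_to_point_cloud(depth_mm, fx, fy, cx, cy):
        # Returns an (N, 3) array of points in metres for all valid pixels.
        h, w = depth_mm.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        z = depth_mm.astype(np.float32) / 1000.0     # millimetres -> metres
        x = (u - cx) * z / fx
        y = (v - cy) * z / fy
        points = np.stack((x, y, z), axis=-1).reshape(-1, 3)
        return points[points[:, 2] > 0]              # drop pixels with no depth

    # The bounding box of a segmented cluster of these points gives the
    # package dimensions in real units for matching against scanned data.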
When an object is detected and matches a scanned package in size and appearance, the object is registered. A variety of reasons exist for a detected object not to match a scanned package. For example, the object may be partially occluded or a different object may have been substituted. In some instances, further analysis on subsequent image frames is performed to resolve the object size and appearance. In such instances, further image processing occurs until the object is identified or marked unidentified (step 636).
The aforementioned description of the process 600 is with respect to a positive change in an image scene: specifically, a new object is located. A “negative change” can also be detected in a similar fashion and occurs when a package is removed from an area of interest. In such a situation, care must be taken not to mistake package occlusion for object removal. Specifically, if a person steps in front of a package, then the system detects the motion and shape of the person. After the person moves away from the front of the package, the image processor 122 detects whether the identified package was removed. Note that the user typically scans a package when moving it, so taking a package from a location without scanning it may trigger a flag to the user to scan or identify the package.
In many situations, a second package may be placed so as to partially occlude a first registered package. In those instances, the system 100 looks for evidence based on depth and size information that the first package is still in its original location. Such evidence can be a corner of the package remaining visible behind the second package. If the first package is fully occluded, but not scanned to indicate its removal, then the system 100 may be designed to assume the first package is sitting behind the larger second package.
As previously described, the system 100 detects changes in a field of view to build a database of known packages. The database is used to locate and disregard these registered packages while looking for and identifying new objects being placed into the field of view. While the registered packages are “disregarded” when looking for new packages that are being loaded, they are continually monitored to see if they have moved or been removed.
The process 600 may run continuously or be triggered upon user startup, detection of motion, or other triggers. Allowing the system 100 to drop to a lower state of analysis may be desirable in some instances to reduce bandwidth and power consumption. For example, if a delivery vehicle is being loaded, then the system 100 can run at full speed, processing images at the maximum frame rate delivered by the camera. However, after loading is complete, the system 100 can operate at intervals (for example, by processing images once every 3 seconds) to conserve power, data storage, and bandwidth while meeting the requirements of the specific application.
Augmented Package Loading Techniques
Package tracking systems described herein can track packages within conventional delivery systems wherein loaders place packages on vehicles according to their perception of proper loading protocols. This perception may vary by loader, region, delivery vehicle, or other factors. Such package tracking systems can also be configured to optimize package loading in addition to delivery. In one example, the central server 204 determines where each package is to be placed within the delivery vehicle.
In one embodiment, when the loader scans a package and enters the delivery vehicle with the package, the CPU 122 activates a light that shines on the location for that package. The location and matching of the package may be confirmed as previously described. A focused light may be used to identify the proper loading place for the package. The source of the light can be the same light as that used to identify a package for a driver.
In the various embodiments detailed herein, the location of a package may be “marked” or indicated in a variety of manners: by projecting light on the package of interest (unidentified package, package to be delivered, etc.), by projecting light where the package is to be loaded, by marking the position of the package on a live camera feed of the cargo bay, in a representational view of the cargo bay with the package location identified, or in a projection of the marking in augmented reality glasses. For example, consider an embodiment of a package tracking system wherein one or more shelves in a package room have a strip of lights along the front edge of that shelf. The package tracking system can be configured to illuminate a particular light in a given light strip to show the location on the shelving of a package to be removed or placed.
As an example of light-based guidance for package loading, consider a system that employs conveyor belts to move packages inside a facility. As the packages are transported on the conveyor belt, they are scanned for identification, either by optical, magnetic, or electromagnetic means. After each package is identified, the system continually monitors the position of the package as it moves from one area of the facility to the end destination for transportation vehicle loading. As packages reach areas for vehicle loading, the system uses a form of light guidance to help loaders identify the proper vehicle package assignment. For example, if a package is assigned to a particular truck, that truck could be assigned a particular color, say blue. The package designated for the blue truck is then illuminated with a blue light, through LED, laser, or related light guidance means, thus making package vehicle identification easy for loaders. After the loader places the package in the identified delivery truck, the package tracking system can detect its presence and register its location as previously described.
One of ordinary skill in the art will recognize that other cues (visual, auditory or the like) using various technologies may be used to mark package location for easy loading, delivery or tracking of packages.
Augmented Tracking
Various embodiments of the package tracking systems described herein may benefit from additional tracking technology. For example, in larger areas (e.g., freight, air cargo, large shipping containers), one may incorporate other techniques to make tracking more interactive, such as Ultra-wideband (UWB) or Wireless LAN (including, but not limited to, 802.11 protocol communications or the like). Example implementations of techniques for tracking can be found in U.S. patent application Ser. No. 14/614,734, filed Feb. 5, 2015, titled “Virtual Reality and Augmented Reality Functionality for Mobile Devices,” the entirety of which is hereby incorporated by reference.
In a package tracking system that augments optical tracking with UWB tracking, the driver, the driver's tablet, the packages, or all of the above are actively tracked as described in U.S. patent application Ser. No. 15/041,405, filed Feb. 11, 2016, titled “Accurate Geographic Tracking of Mobile Devices,” the entirety of which is incorporated by reference herein. In one embodiment, the position of the driver's tablet is tracked so that the viewpoint of the tablet's camera, determined from the tablet's location and orientation, is streamed to the tablet with digital images overlaid onto the tablet's camera view, and is used for navigation or package identification. In this example, as the tablet camera views a stack of packages, the accurate tracking of the physical position and orientation of the tablet allows the system to overlay a digital image, for example, a flashing red light, on top of the package that is seen by the tablet camera. In this case, digital images are shown on the tablet camera view, not projected onto the actual package by an external light source.
Small-package delivery (and other delivery modes, like airfreight and cargo containers) may make use of UWB or RF (radio frequency) tracking to improve positional accuracy for when and where packages are scanned. The packages may be tracked using UWB with tags on the packages until a handoff to the camera for optical tracking inside the delivery vehicle becomes possible. This is a benefit because it reduces or eliminates the need to do optical image processing in the delivery vehicle, but still provides package ID confirmation and tracking (which may then also be re-registered via dimension data inside the delivery vehicle by the cameras).
In addition, cumulative tracking methods (i.e., optics and UWB) help track the driver and packages. For example, in dark environments, large environments or in situations involving other issues with optical coverage, it may be preferable to use UWB or related RF-based tracking to identify initial package location, and to switch to optical scanning after package location is generally identified. In such situations, UWB tracking may augment or supplant optical tracking.
Also, in some situations, one may want to track the loader using a tag physically associated with that person. In such an environment, one may scan a package and then track the loader using UWB to make sure the package goes to the correct delivery vehicle (for instance, the loader may be loading multiple trucks) or, in other use cases, track the driver as the driver leaves the delivery vehicle to ensure a proper delivery drop-off location. In the scenario where a driver is being tracked, the driver is tracked as he leaves the delivery vehicle, with the GPS position known either on the delivery vehicle or on the driver. As the driver leaves the delivery vehicle, the driver is tracked, and when the package is dropped off, the package is scanned and its position in relation to the delivery vehicle is recorded to show proof of delivery. As described in the aforementioned U.S. patent application Ser. No. 15/041,405, augmented reality (AR) glasses can be used to track a driver. In this scenario, the AR glasses are tracked by a form of RF tracking, and the orientation and position of the driver may be determined from the glasses.
Example implementations of UWB or other wireless tracking systems are disclosed in U.S. patent application Ser. No. 13/975,724, filed Aug. 26, 2013, titled “Radio Frequency Communication System,” the entirety of which is incorporated by reference herein. Tracking may be implemented outside the delivery vehicle to confirm that a package that was scanned by glasses or a finger scanner is the same package that gets loaded into the delivery vehicle. In such scenarios, a loader scans the package off a conveyor belt, and the loader is tracked by the UWB system to ensure that the package scanned is the package placed in the truck or is at the proper loading area of the delivery vehicle. Thereafter, the optical tracking system tracks packages within the area of coverage.
The RF positioning system 704 includes four RF nodes 712-1, 712-2, 712-3, and 712-4 (generally, 712) and an RF tag 714. The RF positioning system 704 operates to track the position of the RF tag 714, which can be affixed to the package or worn by personnel, such as a driver or package loader. In general, the RF nodes 712 provide an interface over Wi-Fi to the user device 706. The RF nodes 712 are in communication with the user device 706 via Wi-Fi, and the user device 706 is in communication with the hub 702 via Wi-Fi; in effect, the hub 702 provides an ad hoc Wi-Fi hotspot to the user device 706 and RF nodes 712.
The user device 706 is any computing device capable of running applications and wireless communications. Examples of the user device 706 include, but are not limited to, tablets and smart phones. The user device 706 can be in communication with the hub 702 over a wireless communications link 718, with the server system 708 over a wireless communications link 720, or both. An example implementation of the communication links 718, 720 is Wi-Fi.
The area 710 for holding assets can be stationary or mobile. A stationary holding area can be disposed anywhere along the delivery chain, from a warehouse to a package delivery center. Examples of stationary holding areas include, but are not limited to, package rooms, closets, warehouses, inventory rooms, storage rooms, and trailers. Examples of mobile holding areas include, but are not limited to, delivery trucks, tractor trailers, railway cars, shipping containers, and airplane cargo bays. Each holding area (i.e., each facility, truck, etc.) is equipped with an optical tracking hub 702. An example of a delivery truck that can be equipped with an optical tracking hub 702 is the standard Ford® P1000.
The RF tag 714 is in communication with the user device 706 over a wireless communication link 722, for example, Bluetooth, and with the RF nodes 712 by way of RF signals 724.
During operation, in general the hub 702 provides interior tracking (e.g., inside a delivery vehicle) of a package using optical techniques and the RF positioning system 704 provides exterior tracking (e.g., outside of the delivery vehicle) of the RF tag 714 using RF signals. In one embodiment, the user device 706 directly communicates with the server system 708 (e.g., in the cloud). In another embodiment, the user device 706 provides data to the hub 702, and the hub 702 communicates with the server system 708. In this embodiment, any feedback information from the server system 708 goes through the hub 702, which communicates such information to the user device 706 by Wi-Fi.
The hub and power subsystem 804 includes an image processor 814, a power subsystem 816 connected to a power source 818, and an optional charger 820. The power subsystem 816 provides power to the image processor 814 and charger 820 by a power bus. In one embodiment, the power source 818 is a battery (e.g., 12 VDC, 55 aH). An accessory power source 838 is connected to the power subsystem 816. In communication with the image processor 814 are a cellular antenna 822, a GPS antenna 824, and a Wi-Fi antenna 826. The image processor 814 is also in communication with the cameras 808 by communication links 828 and with the optional display device 810 by communication link 830. Also shown are the user device 832, RF tag 834, and scanner 836. The scanner 836 can be separate from the computing system that embodies the hub and power subsystem 804, as shown, or be integral to the computing system (e.g., a built-in barcode scanner). An optional light projector external to the holding area 802 (not shown) can be used to shine light on a package before the package is loaded, for purposes of guiding a loader to the location where the package is to be loaded (e.g., a particular delivery truck).
In one embodiment, the image processor 814 is implemented with a bCOM6-L1400 Express Module produced by General Electric of Fairfield, Conn. The interfaces of the image processor 814 include: at least three USB3 ports for connecting to the cameras 808 and a USB2 port for connecting to an optional light-projector gimbal; an HDMI port for connecting to the display device 810; an integral GPS unit with the external GPS antenna; a cellular PHY card/interface (e.g., LTE, GSM, UMTS, CDMA or WCDMA, or WiMAX) with a cellular antenna jack (for an appropriate multiband cellular antenna operating at 800-950 MHz, 1800-1900, 1900-2000, 2100-2200 MHz bands, and can be a different physical antenna depending on the cellular PHY provider chosen for the given area) to enable a wireless connection to a cellular data service provider; and a Wi-Fi module with a Wi-Fi antenna jack (the antenna is omni-directional, providing 500 m of range, and operating over the 2400-2480 MHz range).
The holding area 802 can be stationary or mobile. For a mobile holding area 802, such as a delivery truck, the RF nodes 806 can be mounted externally on the roof of the cargo area at the four corners, with the cameras 808 and display device 810 mounted internally within the holding area 802. All of the cameras 808 are mounted near the ceiling of the truck box, facing towards the back of the truck, one camera at each front corner of the truck box, with the third camera at the front of the truck box disposed between the other two cameras. The cellular antenna 822 and Wi-Fi antenna 826 are mounted inside the truck and the GPS antenna 824 is mounted on the roof. In addition, a standard small form factor 2-axis gimbal can be mounted to the ceiling or rafter of the truck box. The gimbal provides azimuth (180 degree) and elevation angle (90 degree) positioning of the optional interior light projector (e.g., a laser pointer), which can be turned on and off. A USB2 interface of the image processor to a light projector sets the azimuth, elevation, and on/off state of the light.
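By way of illustration only, the azimuth and elevation commands for such a gimbal could be derived geometrically from the package's registered position relative to the gimbal; the coordinate convention and function below are assumptions for the sketch, not the actual command interface.

    # Sketch: compute gimbal azimuth and elevation angles that aim the light
    # projector at a package location, with both positions expressed in the
    # same cargo-area coordinate frame (x forward, y to the right, z up).
    import math

    def aim_angles(gimbal_xyz, package_xyz):
        dx = package_xyz[0] - gimbal_xyz[0]
        dy = package_xyz[1] - gimbal_xyz[1]
        dz = package_xyz[2] - gimbal_xyz[2]
        azimuth = math.degrees(math.atan2(dy, dx))                  # -180..180 degrees
        elevation = math.degrees(math.atan2(dz, math.hypot(dx, dy)))
        return azimuth, elevation

    # Example: gimbal on the ceiling at (0, 0, 2.5) m, package on a low shelf
    # at (3.0, -1.0, 0.5) m -> azimuth of about -18 degrees, elevation of
    # about -32 degrees (pointing downward toward the shelf).
    az, el = aim_angles((0.0, 0.0, 2.5), (3.0, -1.0, 0.5))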
The hub and power subsystem 804 can be placed within the cab of the truck, for example, behind the driver's seat. The system 800 is not attached directly to the vehicle DC power terminals, or directly to the battery of the vehicle, to avoid draining the battery of the delivery vehicle. The power subsystem 816 can connect to the accessory power 838 of the vehicle through a fuse. When the delivery vehicle is parked and off, the accessory power 838 is turned off, and the system 800 runs on the internal battery 818. The battery 818 thus ensures that when the delivery vehicle is off (such as during package loading) the various components of the system 800 remain powered. When the vehicle is idling or in motion, the system 800 charges the battery 818. The power subsystem 816 also provides 12 VDC and 5 VDC dedicated for the RF nodes 806 and the cameras 808.
For a stationary holding area 802, the RF nodes 806 can be mounted externally near an entrance to the area 802, with the cameras 808 and display device 810 installed inside. The hub and power subsystem 804 can also be installed inside or outside of the holding area 802. For a stationary holding area 802, the cellular antenna 822 and GPS antenna 824 are optional.
People interact with package tracking systems described herein in a variety of ways, as carriers, couriers, store personnel, package room managers, and package recipients. Couriers and store personnel, for example, interact with a package tracking system when bringing (i.e., “dropping off”) packages to a package room (a term used herein to refer generally to any area designated for holding packages, inclusive of conveyor belts). Whether a courier is bringing a package to a drop-off location, or a store clerk is carrying a customer-bought item to a designated holding area within a business enterprise, in each instance the person brings the package to a data acquisition site, where the person enters package identification information about the package being dropped off, for example, by scanning a bar code on the package, taking a picture of the shipping label and having character recognition software automatically recognize and input the package recipient information, or manually entering the recipient's information. This data acquisition site can be in the vicinity of or in the package room. Afterwards, the person places the package on a surface (e.g., shelf) in the package room, where the package tracking system detects placement of the package, and confirms (i.e., registers) whether the detected package corresponds to the package from which package identification information was last obtained.
In addition, couriers, package recipients, and “last mile” delivery personnel, for example, interact with a package tracking system when taking a package from the package room (i.e., referred to as “pick-up”). “Last mile” delivery personnel refers to a person, other than the intended package recipient, who is authorized to take a package from the package room and bring it one step closer to the package recipient. Such last mile delivery personnel may act as a courier and bring the package directly to the intended recipient or to a person authorized to take the package on behalf of the intended recipient. Alternatively, the last mile delivery personnel may drop off the package at another authorized designated holding area (where the package recipient or authorized individual can later pick up the package). Such authorized designated holding areas may or may not be configured with a package tracking system. Examples of designated holding areas include, but are not limited to, package rooms, locker rooms, vehicles, loading docks, warehouses, marked regions of open areas, closets, hallways, and walls. At the designated holding area are surfaces for receiving the packages; such surfaces can be stationary, such as tabletops or shelves, or moving, such as a conveyor belt.
Many advantages derive from the use of cameras in the embodiments of package tracking systems described herein. Cameras strategically situated can capture images or take video during any stage of system operation: when a package arrives at a holding area; when a person enters information about the package into the system; when a person places the package in the holding area; while the package resides in the holding area; when a person comes to retrieve the package from the holding area; and when a person removes the package from the holding area. The holding area can be any designated area used for the placement of packages. These cameras may be active continuously or be turned on in response to detection of a person's presence, for example, by a motion detector.
The video taken by the cameras can serve to confirm package drop-off at a package room. For instance, when bringing a package to the package room, the person initially brings the package to a data acquisition site, for example, embodied by a kiosk. When the person scans the package to acquire the package identification information (or manually enters the information through a data input device), the package tracking system generates a record associated with the package. Video thereafter captured by an optical sensing device becomes associated with this package record. If the data acquisition site is outside of the package room, a camera positioned outside of the room captures the video, and the processor associates this video with the package. When the person then carries the package into the package room, a camera disposed within the package room captures video of the person putting the package on a shelf. After the processor detects this package and determines it to be the one just scanned, the processor also associates this interior video with the package by making it part of the record. The record associated with this package thus includes two videos, a first video captured outside the package room and a second video captured on the inside, or one video in which the external and internal videos are stitched together. Whether stitched together or kept separate, the package record that contains these videos (or links to them) provides visual confirmation that the package was indeed deposited in the package room. If, alternatively, the data acquisition site is within or part of the package room, the video captured by an appropriately positioned camera suffices to provide visual confirmation of the package being registered and dropped off.
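A minimal sketch of how exterior and interior clips might be kept together in one package record follows; the VideoClip fields, camera labels, and storage URIs are illustrative assumptions only, not elements of the embodiments described herein.

```python
from dataclasses import dataclass, field

@dataclass
class VideoClip:
    source: str          # e.g., "exterior_cam_1" or "interior_cam_2" (assumed labels)
    uri: str             # link to the stored footage (assumed storage location)
    start_epoch_s: float

@dataclass
class PackageVideoRecord:
    package_id: str
    videos: list = field(default_factory=list)

    def attach_clip(self, clip: VideoClip) -> None:
        """Associate a clip with this package; clips taken outside and inside the
        package room are simply kept together (or could be stitched downstream)."""
        self.videos.append(clip)

record = PackageVideoRecord("PKG-0001")
record.attach_clip(VideoClip("exterior_cam_1", "s3://footage/ext-0001.mp4", 1_700_000_000.0))
record.attach_clip(VideoClip("interior_cam_2", "s3://footage/int-0001.mp4", 1_700_000_060.0))
```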
The captured images or video can serve to track each user's interactions with the system and to gather metrics (i.e., data) related to such interactions. Such metrics can provide a basis for issuing alerts, for example, in the event people are not properly interacting with the package tracking system in accordance with established procedure. The aggregate interactions of multiple users, tallied in the form of metrics, empirical data, or metadata, can serve to improve procedures for and interfaces to the package tracking system. Examples of such user interactions include, but are not limited to: the amount of time elapsed between when information about a package is entered into the system and when the package is brought to (dropped off at) a package room or picked up (retrieved) from the room; the amount of time elapsed between when a package is dropped off at a package room and when it is retrieved from the room; the speed with which a person handles the package at the time of package pickup or package drop-off; the movement of a package and of a person in the package room; and the time spent by a person in the package room or warehouse picking up or dropping off a package. Examples of procedural improvements based on collected data include improvements to package placement or placement sequence, improvements to retrieval procedures and protocols, and improvements to navigation to packages.
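As a simple illustration of such metrics, the sketch below computes elapsed times between timestamped interaction events; the event names (info_entered, dropped_off, picked_up) are hypothetical labels chosen for illustration, not terms defined by the system.

```python
from datetime import datetime

def elapsed(events, start_key, end_key):
    """Elapsed time between two recorded interaction events, if both exist.

    `events` maps event names to datetimes, e.g. "info_entered", "dropped_off",
    "picked_up" (hypothetical event names used only for illustration)."""
    if start_key in events and end_key in events:
        return events[end_key] - events[start_key]
    return None

events = {
    "info_entered": datetime(2023, 5, 1, 9, 0, 0),
    "dropped_off":  datetime(2023, 5, 1, 9, 4, 30),
    "picked_up":    datetime(2023, 5, 2, 18, 12, 0),
}
print(elapsed(events, "info_entered", "dropped_off"))  # time from data entry to drop-off
print(elapsed(events, "dropped_off", "picked_up"))     # dwell time in the package room
```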
Because cameras can capture video of all aspects of a user's interaction with the package tracking system and packages, the system can determine through a set of protocol events that a certain interaction is not following procedures or is occurring counter to interaction rules, such as throwing a package. Should an action be flagged as against the rules, such as throwing packages, standing on packages or shelves, pushing packages too forcefully, or taking the wrong package, the system can issue an audible alert, for example, a voice command or alarm, or the system can notify building staff through electronic messaging that the system is being improperly used. Video of this interaction can then be recorded for verification of the improper use, sent to security or appropriate building staff for viewing, or both, in real-time or at a later time.
Cameras and imaging can also be utilized to provide better user interface options and features. For example, when a package is received in the room, an electronic text message, email, or similar notification can be sent to the package recipient indicating that their package is in the package room. The notification can include an image of the package in the package room. This feature of notification with package images can assure the recipient that the package has arrived safely, provide proof of package safe handling and delivery, and help direct the recipient to the package through such visualization before the recipient even arrives at the package room.
This package imaging can also be used in transit during the loading and unloading of a package from a truck. Such images can serve to verify proper package handling, which reassures the intended recipient or serves as evidence for insurance companies or delivery companies in the event a claim is made on broken or damaged packages or personnel injuries. Truck and transit video recording can also be used to improve procedure efficiency as noted in the package room environment.
Video and tracking can also aid and guide delivery personnel and recipients on package lifting and placement should a package be heavy. Heavy or awkwardly shaped packages can be flagged, and the cameras can record video confirming whether a driver or package recipient follows directions or protocols to prevent injury or package damage because of the package weight or shape.
Video capability in a room, facility, conveyor belt, or other areas designated for holding or moving packages can provide evidence of undamaged or damaged packages at certain times. Captured video or images of each side of the package can serve to verify damage. The package tracking system saves video or images, with the time when such video or images were taken, in a record associated with the package. The video or series of images can serve to illustrate when damage occurred, if any, and whether such damage occurred as a result of personnel, equipment, or a vehicle mishandling, throwing, dropping, or abruptly moving the package. The record produced by these images can help resolve matters, such as insurance claims and liability disputes, providing evidence of who handled the package and when.
In one embodiment, the package tracking system notifies a recipient when their package has arrived at the package room. Notification can occur in response to entry of package identification information into the system or registration of the package in the package room. The data capture, followed by package registration, operates to acquire the recipient's name and address information, for example, through character recognition of the shipping label on the package, a bar code scan, or manual input of the recipient's name and address information. An electronic file maintains a list of residents in a particular building, complex, or residence, including contact information (e.g., telephone number, email address). Using the recipient name and address information acquired from the package, the package tracking system searches the electronic file containing the list of residents for a match, in order to find the intended recipient and obtain the contact information of the recipient. After confirming the intended recipient, a notification is sent using the acquired contact information, by email, an electronic text message to a cell phone, or any other related communication means. The notification can include an alert that the package has been delivered, the time of delivery, a picture of the package in the room, and a code for gaining access to the package room when retrieving the package.
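The lookup-and-notify step might be sketched as follows, assuming a hypothetical resident file, an injected send_email callable, and a short random access code; none of these names or formats come from the embodiments described herein.

```python
from dataclasses import dataclass
import secrets

@dataclass
class Resident:
    name: str
    address: str
    email: str

# Illustrative resident file; in practice this would be loaded from the building's records.
RESIDENTS = [
    Resident("A. Smith", "Apt 4B, 10 Main St", "a.smith@example.com"),
]

def find_recipient(name, address):
    """Match the name/address read from the shipping label against the resident list."""
    for r in RESIDENTS:
        if r.name.lower() == name.lower() and r.address.lower() == address.lower():
            return r
    return None

def notify_recipient(resident, package_image_uri, send_email):
    """Send an arrival notification with a room access code; returns the code.

    `send_email` is an injected callable taking (to, subject, body); it is an
    assumption standing in for whatever messaging channel is actually used."""
    access_code = secrets.token_hex(3)  # short one-time code for package room access
    body = (f"Your package has arrived. Image: {package_image_uri}. "
            f"Access code: {access_code}")
    send_email(resident.email, "Package delivered", body)
    return access_code
```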
The package tracking system can also send a notification to a package delivery service in the event someone leaves the package at a location (i.e., package room) that is not the final destination (i.e., the residence of the package recipient). Such a package requires additional delivery to reach the recipient. For example, if the package resides at a package room at a retail location or at a remote drop off (pickup location), a notification can be sent to a ride sharing service or package delivery sharing service to get the package. Provided authorization has been given to the service for delivery sharing, the delivery service can obtain the package from the package room and deliver the package to the addressee. This service can be part of the package tracking system, which improves the last mile delivery process for both package recipients and couriers.
For authentication of delivery service drivers, additional security features can be added, for example, fingerprint or retina scanning, face recognition, and PIN or ID code registration. A database stores a record of all user transactions for each package delivered, including, but not limited to, who handled the package, who placed the package in the package room, who removed the package from the package room, when each user transaction occurred, and the handling conditions under which the transaction took place.
As previously described, when a person or courier service delivers a package to the package room, the recipient receives a notification that their package has arrived. The notification includes a code needed to obtain the package from the package room. The recipient may authorize another individual or service to retrieve the package from the package room on behalf of the recipient. To accomplish this authorization, the recipient accesses the package tracking system (remotely, through an application program running on a computer or mobile device, such as a smart phone), submits the code received in the notification (or a related means of verification) to the package tracking system, and notifies the package tracking system that another individual or service, other than the recipient, will be coming to pick up the package. The recipient can require that the package tracking system provide a new, different code to be used by the authorized individual or service when picking up the package. Alternatively, the recipient can give the original code (the one received in the notification) to the deputized individual so that this individual can gain access to the package room holding the package. That deputized individual submits this code to the package tracking system when retrieving the package from the package room on behalf of the recipient. In either instance, the package tracking system can capture video of the person who picks up the package. In the event of a dispute, this video record can prove when the package was retrieved and by whom.
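One possible way to model the code reissue and delegation logic is sketched below; the PickupAuthorization class and its methods are illustrative assumptions, not an interface defined by the package tracking system.

```python
import secrets

class PickupAuthorization:
    """Track which codes may be used to retrieve a given package (illustrative only)."""

    def __init__(self, original_code):
        self.valid_codes = {original_code}

    def delegate(self, reuse_original=False):
        """Authorize another person to pick up the package.

        If reuse_original is False, issue a new one-time code for the delegate;
        otherwise the delegate simply presents the recipient's original code."""
        if reuse_original:
            return next(iter(self.valid_codes))
        new_code = secrets.token_hex(3)
        self.valid_codes.add(new_code)
        return new_code

    def verify(self, presented_code):
        return presented_code in self.valid_codes

auth = PickupAuthorization(original_code="a1b2c3")
delegate_code = auth.delegate()          # new code for the deputized individual
assert auth.verify(delegate_code)
```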
This authorization feature can also be used to redirect package delivery locations (i.e., package rooms). For example, if the facility with the package room is inconveniently located (e.g., the recipient has changed their address of residence), the recipient can remotely send a notification to the package tracking system identifying another location (i.e., package room) preferred by the recipient for package delivery. In real-time, the package tracking system can change the location of the package delivery to the location chosen by the recipient.
When a delivery service or a designated authorized driver delivers a package to a package room, the package tracking system can record certain details useful for evaluating the delivery service. For example, the cameras of the package tracking system can capture on video a package delivery service driver who shakes, drops, or forcefully removes the package from the package room. The package tracking system can then send this video record to the package recipient, to the service company employing the driver, or both, to bring to attention that the driver improperly handled the package. In addition to monitoring package handling, the package tracking system can maintain a record of the time from when the package is removed from the package room to when the package reaches the final destination of the package recipient. Retailers, for example, can use this record to rate the delivery service, with such ratings being used for quality assurance and tied to fees paid for delivery.
The package tracking system can also be used for outbound packages, that is, for packages awaiting removal (i.e., pick-up) from the package room. When a package is placed into the room, a notification is sent automatically to the courier delivery service, indicating that the package is in the package room, ready for pick-up and delivery to another location. This drop-off feature of the package tracking system can be used for outbound packages, returned packages, and packages that were damaged or were not supposed to be delivered. Drop-off can also be accomplished without a notification feature by designating a drop-off area within the package room that the couriers or delivery service understands is specifically intended for outbound packages.
The package tracking system can use environmental factors for scheduling package pickup and drop off, and for alerting recipients and delivery personnel of inclement weather or traffic conditions that may delay or risk safe package transport. Industry has produced various publicly available apps that provide weather or traffic conditions, for example, The Weather Channel™, provided by IBM, and Google Maps™, offered by Google. The package tracking system has interfaces to weather or traffic apps, such as these, from which the package tracking system acquires relevant data. Weather factors and traffic conditions can be important to help coordinate the timing and routing of package deliveries and retrievals. Rain or snow conditions may be factored in to determine if a package is damaged by water, ice, or snow.
Through data management and identification of package shipments and deliveries, an embodiment of the package tracking system monitors for suspicious or potentially illegal shipments or activities by recognizing certain characteristics of deliveries of goods to certain locations from flagged manufacturers or flagged locations. Locations or manufacturers known to source illegal or dangerous materials, for example, could be recognized as such and, if the number of packages from these sources sent to a particular address exceeds a certain threshold, the address can be flagged and the appropriate security organizations alerted.
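A minimal sketch of the threshold check might look like the following, where the flagged sources and the alert threshold are placeholder values chosen for illustration.

```python
from collections import Counter

FLAGGED_SOURCES = {"source_x", "source_y"}   # hypothetical flagged origins
ALERT_THRESHOLD = 5                          # hypothetical count per destination address

def flag_suspicious_addresses(shipments):
    """Count packages from flagged sources per destination address and return
    any address whose count exceeds the threshold.

    `shipments` is an iterable of (source, destination_address) pairs."""
    counts = Counter(dest for src, dest in shipments if src in FLAGGED_SOURCES)
    return [addr for addr, n in counts.items() if n > ALERT_THRESHOLD]
```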
The package tracking system, in addition to monitoring weather and traffic conditions, can monitor package rooms and related package storage and transportation environments to protect against temperature, humidity, elevation, vibration, or other factors that may compromise the safe storage conditions of packages. Sensors can be placed within the storage or transportation spaces to help identify safe conditions for package storage and to alert appropriate personnel should conditions fall beyond a predetermined safe range, for example, for a package room that has become too hot or too cold.
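The range check itself can be illustrated with the short sketch below; the quantities, units, and safe ranges shown are assumed placeholder values rather than limits specified by the system.

```python
# Hypothetical safe ranges per measured quantity; units are assumptions.
SAFE_RANGES = {
    "temperature_c": (2.0, 30.0),
    "humidity_pct":  (10.0, 70.0),
    "vibration_g":   (0.0, 1.5),
}

def out_of_range_readings(readings):
    """Return the readings that fall outside their predetermined safe range.

    `readings` maps quantity names to measured values, e.g. {"temperature_c": 34.2}."""
    alerts = []
    for quantity, value in readings.items():
        low, high = SAFE_RANGES.get(quantity, (float("-inf"), float("inf")))
        if not (low <= value <= high):
            alerts.append((quantity, value, (low, high)))
    return alerts

# Example: a package room that has become too hot triggers one alert.
print(out_of_range_readings({"temperature_c": 34.2, "humidity_pct": 45.0}))
```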
In one embodiment, the package tracking system deploys weight sensors in the package room to help confirm the placement and location of a package in the package room. An example of the use of weight sensors in package tracking systems appears in U.S. application Ser. No. 15/259,474, filed Sep. 8, 2016, titled “System and Method of Object Tracking Using Weight Confirmation,” the entirety of which is incorporated by reference herein. The package tracking system can collect the weight data measured by the weight sensors along the shipping route of the package: when personnel at the distribution hub first place the package in the delivery truck, then as the truck travels along the route to the package room, then again when personnel remove the package from the truck and deposit it in the package room. While the package sits on a shelf, whether within the truck or in the package room, the package tracking system can take weight measurements periodically. The collected package weight can serve to verify the condition of the package, to identify changes to the package along the shipping route, to suggest the type of handling and delivery needed for the package (e.g., heavy packages require careful lifting to reduce the risk of injury), and to suggest the proper placement for a package (e.g., heavy packages should not be placed atop other packages, especially lighter ones). The package tracking system can store such collected package weight data to enhance the recorded history of the package, such as its delivery, quality, and status.
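For illustration, the following sketch compares successive weight measurements taken at checkpoints along the route and reports any change beyond a tolerance; the checkpoint names and the tolerance are assumptions made for the example.

```python
def weight_changes(measurements, tolerance_kg=0.05):
    """Compare successive weight measurements taken along the shipping route and
    report any change larger than the tolerance.

    `measurements` is a list of (checkpoint_name, weight_kg) tuples in route order,
    e.g. [("hub_load", 4.20), ("in_transit", 4.19), ("package_room", 3.70)]."""
    changes = []
    for (prev_point, prev_w), (point, w) in zip(measurements, measurements[1:]):
        if abs(w - prev_w) > tolerance_kg:
            changes.append((prev_point, point, round(w - prev_w, 3)))
    return changes

# A drop of roughly 0.5 kg between transit and the package room would be flagged for review.
print(weight_changes([("hub_load", 4.20), ("in_transit", 4.19), ("package_room", 3.70)]))
```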
The package tracking system can build a record of various aspects of those packages registered with the system. A record can include images of the package during its shipping route and upon final destination delivery, the speed at which the package moved when placed in a package holding area that is being monitored by camera, the package location in the package holding area, the placement of the package with respect to other packages (e.g., whether other packages are placed on top of the package), and images of personnel who delivered or handled the package and of residents or designates who picked up the package. In addition, the record can include data measured by any sensors employed by the package tracking system. The package tracking system can monitor this data, which provides a record of package delivery, quality, and status. Personnel may find this data useful for improving delivery quality, verifying or denying insurance claims on damaged packages, and improving security on drop-off or pickup.
This data about the packages can help resolve matters with customers, for example, in the event a customer reports receiving a damaged package. Further, customers can photograph a damaged package and send the image to the package tracking system, for example, by attaching the image to an electronic reply to a previously received package notification, or by some other communication means. The customer's smart phone can run an application program (i.e., App) that provides an interface to the platform of the package tracking system and provides a simple means to communicate any issues, check on the delivery status of the package, or review any information of interest about the package, like current location. An image submitted by a customer serves as evidence of the package condition, and includes the time when the image was taken. Personnel can review the data collected for the package by the package tracking system, including the photograph taken of the package by the customer. Data and image comparisons may prove package damage.
Delivery-Sharing System
One embodiment of the package tracking system may improve the efficiency of last-mile package delivery by incorporating a crowd-sourced ride-sharing model. The package tracking system can capitalize on existing ride-sharing services, like Uber℠ and Lyft℠, by establishing a marketplace where bids may be placed on the delivery of packages to their final destinations. For example, consider a retailer that uses the package tracking system to sort and manage online purchases or out-of-stock purchases. The retailer typically receives and holds these purchases at the premises for later pickup by customers. But rather than have the customer travel to the store to get a package, the package tracking system can bid out the delivery to the final destination (i.e., the premises of the customer). Any authorized driver can accept a bid, pick up the package from the store, and deliver the package to the customer.
For instance, if the retailer is holding a package for a customer in a neighboring town, and an authorized driver is presently in the store and planning to drive to that town, the driver can accept the bid and be paid a predetermined amount to make that final destination delivery, often in their own personal vehicle. The customer benefits by not having to make the trip to the store and by typically receiving delivery sooner than would have been otherwise; the retailer benefits economically by reduced delivery costs, and from a satisfied customer; and the community benefits through lighter traffic achieved by leveraging the crowd-sourced drivers who are already planning to travel near the final destinations, in vehicles that require less fuel than typical shipping trucks.
In addition or in the alternative, the package tracking system can permit customers of the retailer to join a delivery sharing network. If a customer who is a member of the delivery sharing network is presently at the store to pick up a package, the customer can determine, through the package tracking system, whether the store is holding any packages for neighbors or other members of town. Upon request from the customer interested in determining what packages are in the package room awaiting delivery and the addresses of their intended package recipients, the computing system may display a list of packages and the associated bids. The computing system may be configured to filter the list of packages geographically. The customer can thus choose to deliver any such package to its final destination by removing the package from the package room as previously described. In this instance where the customer is part of the delivery sharing network, the package tracking system further leverages efficiencies in demographic similarities in customer travel, shopping habits, and residential address proximities. The computing system (e.g., 804 in
In one embodiment, the package tracking system provides an access code to the smartphone of the person who accepts the bid. This smartphone can then act as an automatic identification verifier. When this person comes to the package room to retrieve the package, the person submits the access code to the computing system in any one of a variety of ways. For example, the smartphone can transfer the access code in a RF transmission (e.g., WiFi, Bluetooth™), provided the computing system is configured with an RF receiver. Alternatively, optical code verification can be used for the computing system to acquire the code from the smartphone. In this instance, the screen of the smartphone displays a barcode or a QR code, and the computing system is configured with an optical reader to scan the code displayed on the screen. When the computing system recognizes the code, the person is permitted to access the package room and retrieve the package.
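As an illustration of the verification gate, the sketch below parses an assumed payload format (package identifier plus access code, whether received over an RF link or decoded from a barcode or QR code displayed on the smartphone screen) and unlocks the room on a match; the payload format and the unlock_door callable are hypothetical.

```python
def parse_code_payload(payload):
    """Split a scanned payload of the assumed form '<package_id>:<access_code>'.

    The payload could arrive over an RF transmission or from decoding a barcode/QR
    code shown on the smartphone screen; the format here is an assumption."""
    package_id, _, access_code = payload.partition(":")
    return package_id, access_code

def verify_pickup(payload, issued_codes, unlock_door):
    """Grant access when the presented code matches the one issued for that package."""
    package_id, access_code = parse_code_payload(payload)
    if issued_codes.get(package_id) == access_code:
        unlock_door()
        return True
    return False

issued = {"PKG-0001": "a1b2c3"}          # codes issued to accepted bidders
verify_pickup("PKG-0001:a1b2c3", issued, unlock_door=lambda: print("door unlocked"))
```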
When the package tracking system detects that the person has removed the package from the package room, the computing system is configured to notify the particular package recipient that the package has been taken. The notification may include the identity of the person delivering the package and the estimated delivery time. The computing system may also notify personnel responsible for managing the package room that the person who accepted the bid has taken the package from the package room. Such personnel may require the person to demonstrate authorization for removing the package (e.g., the access code). In response to receiving the package, the particular package recipient may send a confirmation to the computing system (e.g., through an app on a smartphone that can communicate with the package tracking system, for instance, over the Internet). The person bringing the package to the intended package recipient may also communicate with the package tracking system to indicate completion of the delivery. The computing system can then compute the amount of time between removal of the given package from the package room and receipt of the package by the intended package recipient, which can serve as a metric for evaluating the delivery performance of that member of the package delivery-sharing system. In embodiments wherein the computing system has access to weather services, traffic conditions, or both, the computing system can take current conditions into account when evaluating the person's delivery performance, or when scheduling or estimating when such delivery can be completed.
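The delivery-time metric, optionally adjusted for current conditions, might be computed as in the following sketch; the expected-duration and weather-allowance inputs are assumed placeholders for values that could come from a route estimate and a weather or traffic feed.

```python
from datetime import datetime

def delivery_duration_minutes(removed_at, confirmed_at):
    """Minutes between removal from the package room and the recipient's confirmation."""
    return (confirmed_at - removed_at).total_seconds() / 60.0

def score_delivery(duration_min, expected_min, weather_delay_min=0.0):
    """Simple performance ratio; current conditions can be credited as extra allowance.

    `expected_min` and `weather_delay_min` are hypothetical inputs (e.g., from a
    route estimate and a weather/traffic feed); lower scores are better."""
    allowance = expected_min + weather_delay_min
    return duration_min / allowance if allowance > 0 else float("inf")

removed = datetime(2023, 5, 2, 17, 0)
confirmed = datetime(2023, 5, 2, 17, 55)
print(score_delivery(delivery_duration_minutes(removed, confirmed),
                     expected_min=45, weather_delay_min=10))
```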
Light-Based Features
As previously described, some embodiments of package tracking systems described herein include light guidance features, wherein a light, laser, or light projector shines onto or near the package to be picked up or at the area where the package is to be placed. Use of the laser or light source can further provide functionality beyond light guidance. For example, in an embodiment of a package tracking system that includes a light projector, the light projector can superimpose images or notifications on the package or across the area where the package is to be placed. For example, when coming to pick up a package, a resident can see a notification of an upcoming event to be held on the premises as a text image superimposed on the package being picked up. For delivery, the driver may see a text message projected onto the area where the package is to be placed, for example, warning of a storm approaching the area or indicating that the package is fragile and requires handling care.
As another example, the superimposed image displays an outline around an area within the package room. This outline corresponds to a field of view of a given optical sensing device (i.e., camera). When setting up a package room, personnel can judge from the location of this visible outline whether the field of view of the camera is properly covering the desired area, and can adjust the camera, if need be, to achieve the desired coverage. In one embodiment, the light source is coupled directly to a camera. The outline produced by the light source is predetermined by this physical coupling; the coupling determines where the outline appears. In another embodiment, the light source is separate from and independent of the camera. Determining where the light source should display the outline, so that the outline accurately corresponds to the field of view of the camera, requires calibration between the camera and the light source.
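One common way to carry out such a calibration is to estimate a homography that maps camera pixel coordinates to light-source (projector) coordinates. The sketch below assumes such a 3x3 homography has already been obtained and simply maps the corners of the camera frame into projector coordinates to trace the field-of-view outline; the homography approach is offered as one possible illustration, not as the calibration method required by the embodiments herein.

```python
import numpy as np

def camera_fov_outline_in_projector(H, width_px, height_px):
    """Map the four corners of the camera frame into projector coordinates.

    H is a 3x3 homography from camera pixels to projector coordinates, obtained
    beforehand by calibrating the independent camera/projector pair (the
    calibration procedure itself is not shown here). Returns four (x, y) points
    tracing the outline that the light source should draw."""
    corners = np.array([
        [0, 0, 1],
        [width_px - 1, 0, 1],
        [width_px - 1, height_px - 1, 1],
        [0, height_px - 1, 1],
    ], dtype=float)
    projected = (H @ corners.T).T            # apply homography to homogeneous corners
    projected /= projected[:, 2:3]           # normalize back to 2-D coordinates
    return projected[:, :2]

# Placeholder homography (a simple scaling); a real one comes from calibration.
H = np.diag([0.5, 0.5, 1.0])
print(camera_fov_outline_in_projector(H, width_px=1920, height_px=1080))
```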
As will be appreciated by one skilled in the art, aspects of the systems described herein may be embodied as a system, method, or computer program product. Thus, aspects of the systems described herein may be embodied entirely in hardware, entirely in software (including, but not limited to, firmware, program code, resident software, and microcode), or in a combination of hardware and software. All such embodiments may generally be referred to herein as a circuit, a module, or a system. In addition, aspects of the systems described herein may take the form of a computer program product embodied in one or more computer readable media having computer readable program code embodied thereon.
Any combination of one or more computer readable medium(s) may be utilized. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. The computer readable medium may be a non-transitory computer readable storage medium, examples of which include, but are not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination thereof.
As used herein, a computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, device, computer, computing system, computer system, or any programmable machine or device that inputs, processes, and outputs instructions, commands, or data. A non-exhaustive list of specific examples of a computer readable storage medium includes an electrical connection having one or more wires, a portable computer diskette, a floppy disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), a USB flash drive, a non-volatile RAM (NVRAM or NOVRAM), an erasable programmable read-only memory (EPROM or Flash memory), a flash memory card, an electrically erasable programmable read-only memory (EEPROM), an optical fiber, a portable compact disc read-only memory (CD-ROM), a DVD-ROM, an optical storage device, a magnetic storage device, or any suitable combination thereof.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. As used herein, a computer readable storage medium is not a computer readable propagating signal medium or a propagated signal.
Program code may be embodied as computer-readable instructions stored on or in a computer readable storage medium as, for example, source code, object code, interpretive code, executable code, or combinations thereof. Any standard or proprietary programming or interpretive language can be used to produce the computer-executable instructions. Examples of such languages include C, C++, Pascal, JAVA, BASIC, Smalltalk, Visual Basic, and Visual C++.
Transmission of program code embodied on a computer readable medium can occur using any appropriate medium including, but not limited to, wireless, wired, optical fiber cable, radio frequency (RF), or any suitable combination thereof.
The program code may execute entirely on a user's device, partly on the user's device, as a stand-alone software package, partly on the user's device and partly on a remote computer, or entirely on a remote computer or server. Any such remote computer may be connected to the user's device through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
Additionally, the methods described herein can be implemented on a special purpose computer, a programmed microprocessor or microcontroller and peripheral integrated circuit element(s), an ASIC or other integrated circuit, a digital signal processor, a hard-wired electronic or logic circuit such as a discrete element circuit, or a programmable logic device such as a PLD, PLA, FPGA, PAL, or the like. In general, any device capable of implementing a state machine that is in turn capable of implementing the methods proposed herein can be used to implement the principles described herein.
Furthermore, the disclosed methods may be readily implemented in software using object or object-oriented software development environments that provide portable source code usable on a variety of computer or workstation platforms. Alternatively, the disclosed system may be implemented partially or fully in hardware using standard logic circuits or a VLSI design. Whether software or hardware is used to implement the systems in accordance with the principles described herein depends on the speed and/or efficiency requirements of the system, the particular function, and the particular software or hardware systems or microprocessor or microcomputer systems being utilized. The methods illustrated herein, however, can be readily implemented in hardware and/or software using any known or later-developed systems or structures, devices, and/or software by those of ordinary skill in the applicable art from the functional description provided herein and with a general basic knowledge of the computer and image processing arts.
Moreover, the disclosed methods may be readily implemented in software executed on a programmed general-purpose computer, a special purpose computer, a microprocessor, or the like. In these instances, the systems and methods of the principles described herein may be implemented as a program embedded on a personal computer, such as a JAVA® or CGI script, as a resource residing on a server or graphics workstation, as a plug-in, or the like. The system may also be implemented by physically incorporating the system and method into a software and/or hardware system.
While the aforementioned principles have been described in conjunction with a number of embodiments, it is evident that many alternatives, modifications, and variations would be or are apparent to those of ordinary skill in the applicable arts. References to “one embodiment,” “an embodiment,” or “another embodiment” mean that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment described herein. References to a particular embodiment within the specification do not necessarily all refer to the same embodiment. The features illustrated or described in connection with one exemplary embodiment may be combined with the features of other embodiments. Accordingly, it is intended to embrace all such alternatives, modifications, equivalents, and variations that are within the spirit and scope of the principles described herein.
This application is a continuation-in-part of U.S. application Ser. No. 15/091,180, filed Apr. 5, 2016, titled “Package and Asset Tracking System”, which claims the benefit of and priority to U.S. provisional application No. 62/143,332, filed Apr. 6, 2015, titled “Package Tracking System using Sensors,” and to U.S. provisional application No. 62/221,855, filed Sep. 22, 2015, titled “Package Tracking System using Sensors,” the entireties of which non-provisional and provisional applications are incorporated by reference herein.
Number | Name | Date | Kind |
---|---|---|---|
2408122 | Wirkler | Sep 1946 | A |
3824596 | Guion et al. | Jul 1974 | A |
3940700 | Fischer | Feb 1976 | A |
4018029 | Safranski et al. | Apr 1977 | A |
4328499 | Anderson et al. | May 1982 | A |
4570416 | Shoenfeld | Feb 1986 | A |
5010343 | Andersson | Apr 1991 | A |
5343212 | Rose | Aug 1994 | A |
5426438 | Peavey et al. | Jun 1995 | A |
5510800 | McEwan | Apr 1996 | A |
5545880 | Bu et al. | Aug 1996 | A |
5574468 | Rose | Nov 1996 | A |
5592180 | Yokev et al. | Jan 1997 | A |
5600330 | Blood | Feb 1997 | A |
5657026 | Culpepper et al. | Aug 1997 | A |
5671362 | Cowe et al. | Sep 1997 | A |
5923286 | Divakaruni | Jul 1999 | A |
5953683 | Hansen et al. | Sep 1999 | A |
6088653 | Sheikh et al. | Jul 2000 | A |
6101178 | Beal | Aug 2000 | A |
6167347 | Lin | Dec 2000 | A |
6255991 | Hedin | Jul 2001 | B1 |
6285916 | Kadaba | Sep 2001 | B1 |
6292750 | Lin | Sep 2001 | B1 |
6409290 | Lin | Jun 2002 | B1 |
6409687 | Folin | Jun 2002 | B1 |
6417802 | Diesel | Jul 2002 | B1 |
6496778 | Lin | Dec 2002 | B1 |
6512748 | Mizuki et al. | Jan 2003 | B1 |
6593885 | Wisherd et al. | Jul 2003 | B2 |
6619550 | Good et al. | Sep 2003 | B1 |
6630904 | Gustafson et al. | Oct 2003 | B2 |
6634804 | Toste | Oct 2003 | B1 |
6683568 | James et al. | Jan 2004 | B1 |
6697736 | Lin | Feb 2004 | B2 |
6720920 | Breed et al. | Apr 2004 | B2 |
6721657 | Ford et al. | Apr 2004 | B2 |
6744436 | Chirieleison, Jr. et al. | Jun 2004 | B1 |
6750816 | Kunysz | Jun 2004 | B1 |
6861982 | Forstrom | Mar 2005 | B2 |
6867774 | Halmshaw et al. | Mar 2005 | B1 |
6988079 | Or-Bach | Jan 2006 | B1 |
6989789 | Ferreol et al. | Jan 2006 | B2 |
7009561 | Menache | Mar 2006 | B2 |
7104453 | Zhu et al. | Sep 2006 | B1 |
7143004 | Townsend et al. | Nov 2006 | B2 |
7168618 | Schwartz | Jan 2007 | B2 |
7190309 | Hill | Mar 2007 | B2 |
7193559 | Ford et al. | Mar 2007 | B2 |
7236091 | Kiang et al. | Jun 2007 | B2 |
7236092 | Joy | Jun 2007 | B1 |
7292189 | Orr | Nov 2007 | B2 |
7295925 | Breed et al. | Nov 2007 | B2 |
7315281 | Dejanovic et al. | Jan 2008 | B2 |
7336078 | Merewether et al. | Feb 2008 | B1 |
7353994 | Farrall et al. | Apr 2008 | B2 |
7409290 | Lin | Aug 2008 | B2 |
7443342 | Shirai et al. | Oct 2008 | B2 |
7499711 | Hoctor et al. | Mar 2009 | B2 |
7533569 | Sheynblat | May 2009 | B2 |
7612715 | Macleod | Jul 2009 | B2 |
7646330 | Karr | Jan 2010 | B2 |
7689465 | Shakes | Mar 2010 | B1 |
7844507 | Levy | Nov 2010 | B2 |
7868760 | Smith et al. | Jan 2011 | B2 |
7876268 | Jacobs | Jan 2011 | B2 |
7933730 | Li et al. | Apr 2011 | B2 |
7995109 | Kamada et al. | Aug 2011 | B2 |
8009918 | Van Droogenbroeck et al. | Aug 2011 | B2 |
8189855 | Opalach et al. | May 2012 | B2 |
8201737 | Palacios Durazo et al. | Jun 2012 | B1 |
8219438 | Moon et al. | Jul 2012 | B1 |
8269624 | Chen et al. | Sep 2012 | B2 |
8295542 | Albertson et al. | Oct 2012 | B2 |
8406470 | Jones et al. | Mar 2013 | B2 |
8457655 | Zhang et al. | Jun 2013 | B2 |
8492905 | Suh et al. | Jul 2013 | B2 |
8619144 | Chang et al. | Dec 2013 | B1 |
8749433 | Hill | Jun 2014 | B2 |
8843231 | Ragusa et al. | Sep 2014 | B2 |
8860611 | Anderson et al. | Oct 2014 | B1 |
8957812 | Hill et al. | Feb 2015 | B1 |
9063215 | Perthold et al. | Jun 2015 | B2 |
9092898 | Fraccaroli et al. | Jul 2015 | B1 |
9120621 | Curlander | Sep 2015 | B1 |
9141194 | Keyes et al. | Sep 2015 | B1 |
9171278 | Kong et al. | Oct 2015 | B1 |
9174746 | Bell et al. | Nov 2015 | B1 |
9269022 | Rhoads et al. | Feb 2016 | B2 |
9349076 | Liu et al. | May 2016 | B1 |
9424493 | He et al. | Aug 2016 | B2 |
9482741 | Min | Nov 2016 | B1 |
9497728 | Hill | Nov 2016 | B2 |
9500396 | Yoon et al. | Nov 2016 | B2 |
9514389 | Erhan et al. | Dec 2016 | B1 |
9519344 | Hill | Dec 2016 | B1 |
9544552 | Takahashi | Jan 2017 | B2 |
9594983 | Alattar et al. | Mar 2017 | B2 |
9656749 | Hanlon | May 2017 | B1 |
9740937 | Zhang et al. | Aug 2017 | B2 |
9782669 | Hill | Oct 2017 | B1 |
9872151 | Puzanov et al. | Jan 2018 | B1 |
9904867 | Fathi et al. | Feb 2018 | B2 |
9933509 | Hill et al. | Apr 2018 | B2 |
9961503 | Hill | May 2018 | B2 |
9996818 | Ren | Jun 2018 | B1 |
10001833 | Hill | Jun 2018 | B2 |
10148918 | Seiger et al. | Dec 2018 | B1 |
10163149 | Famularo et al. | Dec 2018 | B1 |
10180490 | Schneider et al. | Jan 2019 | B1 |
10257654 | Hill | Apr 2019 | B2 |
10324474 | Hill et al. | Jun 2019 | B2 |
10332066 | Palaniappan et al. | Jun 2019 | B1 |
10373322 | Buibas et al. | Aug 2019 | B1 |
10399778 | Shekhawat et al. | Sep 2019 | B1 |
10416276 | Hill et al. | Sep 2019 | B2 |
10444323 | Min et al. | Oct 2019 | B2 |
10455364 | Hill | Oct 2019 | B2 |
10605904 | Min et al. | Mar 2020 | B2 |
10853757 | Hill | Dec 2020 | B1 |
20010027995 | Patel | Oct 2001 | A1 |
20020021277 | Kramer | Feb 2002 | A1 |
20020095353 | Razumov | Jul 2002 | A1 |
20020140745 | Ellenby | Oct 2002 | A1 |
20020177476 | Chou | Nov 2002 | A1 |
20030024987 | Zhu | Feb 2003 | A1 |
20030053492 | Matsunaga | Mar 2003 | A1 |
20030110152 | Hara | Jun 2003 | A1 |
20030115162 | Konick | Jun 2003 | A1 |
20030120425 | Stanley et al. | Jun 2003 | A1 |
20030176196 | Hall et al. | Sep 2003 | A1 |
20030184649 | Mann | Oct 2003 | A1 |
20030195017 | Chen et al. | Oct 2003 | A1 |
20040002642 | Dekel et al. | Jan 2004 | A1 |
20040095907 | Agee et al. | May 2004 | A1 |
20040107072 | Dietrich et al. | Jun 2004 | A1 |
20040176102 | Lawrence et al. | Sep 2004 | A1 |
20040203846 | Carronni et al. | Oct 2004 | A1 |
20040267640 | Bong et al. | Dec 2004 | A1 |
20050001712 | Yarbrough | Jan 2005 | A1 |
20050057647 | Nowak | Mar 2005 | A1 |
20050062849 | Foth et al. | Mar 2005 | A1 |
20050074162 | Tu et al. | Apr 2005 | A1 |
20050143916 | Kim et al. | Jun 2005 | A1 |
20050154685 | Mundy | Jul 2005 | A1 |
20050184907 | Hall | Aug 2005 | A1 |
20050275626 | Mueller et al. | Dec 2005 | A1 |
20060013070 | Holm et al. | Jan 2006 | A1 |
20060022800 | Krishna et al. | Feb 2006 | A1 |
20060061469 | Jaeger et al. | Mar 2006 | A1 |
20060066485 | Min | Mar 2006 | A1 |
20060101497 | Hirt | May 2006 | A1 |
20060192709 | Schantz et al. | Aug 2006 | A1 |
20060279459 | Akiyama et al. | Dec 2006 | A1 |
20060290508 | Moutchkaev et al. | Dec 2006 | A1 |
20070060384 | Dohta | Mar 2007 | A1 |
20070138270 | Reblin | Jun 2007 | A1 |
20070205867 | Kennedy et al. | Sep 2007 | A1 |
20070210920 | Panotopoulos | Sep 2007 | A1 |
20070222560 | Posamentier | Sep 2007 | A1 |
20070237356 | Dwinell et al. | Oct 2007 | A1 |
20080007398 | DeRose et al. | Jan 2008 | A1 |
20080035390 | Wurz | Feb 2008 | A1 |
20080048913 | Macias et al. | Feb 2008 | A1 |
20080122926 | Zhou | May 2008 | A1 |
20080143482 | Shoarinejad et al. | Jun 2008 | A1 |
20080150678 | Giobbi et al. | Jun 2008 | A1 |
20080154691 | Wellman et al. | Jun 2008 | A1 |
20080156619 | Patel et al. | Jul 2008 | A1 |
20080174485 | Carani et al. | Jul 2008 | A1 |
20080183328 | Danelski | Jul 2008 | A1 |
20080204322 | Oswald et al. | Aug 2008 | A1 |
20080266253 | Seeman et al. | Oct 2008 | A1 |
20080281618 | Mermet et al. | Nov 2008 | A1 |
20080316324 | Rofougaran | Dec 2008 | A1 |
20090043504 | Bandyopadhyay et al. | Feb 2009 | A1 |
20090073428 | Magnus et al. | Mar 2009 | A1 |
20090114575 | Carpenter | May 2009 | A1 |
20090121017 | Cato et al. | May 2009 | A1 |
20090149202 | Hill et al. | Jun 2009 | A1 |
20090224040 | Kushida et al. | Sep 2009 | A1 |
20090243932 | Moslifeghi | Oct 2009 | A1 |
20090245573 | Saptharishi et al. | Oct 2009 | A1 |
20090323586 | Hohl et al. | Dec 2009 | A1 |
20100019905 | Boddie | Jan 2010 | A1 |
20100076594 | Salour et al. | Mar 2010 | A1 |
20100090852 | Eitan et al. | Apr 2010 | A1 |
20100097208 | Rosing et al. | Apr 2010 | A1 |
20100103173 | Lee | Apr 2010 | A1 |
20100103989 | Smith et al. | Apr 2010 | A1 |
20100123664 | Shin | May 2010 | A1 |
20100159958 | Naguib et al. | Jun 2010 | A1 |
20110002509 | Nobori et al. | Jan 2011 | A1 |
20110006774 | Baiden | Jan 2011 | A1 |
20110037573 | Choi | Feb 2011 | A1 |
20110066086 | Aarestad et al. | Mar 2011 | A1 |
20110166694 | Griffits et al. | Jul 2011 | A1 |
20110187600 | Landt | Aug 2011 | A1 |
20110208481 | Slastion | Aug 2011 | A1 |
20110210843 | Kummetz | Sep 2011 | A1 |
20110241942 | Hill | Oct 2011 | A1 |
20110256882 | Markhovsky et al. | Oct 2011 | A1 |
20110264520 | Puhakka | Oct 2011 | A1 |
20110286633 | Wang et al. | Nov 2011 | A1 |
20110313893 | Weik, III | Dec 2011 | A1 |
20110315770 | Patel et al. | Dec 2011 | A1 |
20120013509 | Wisherd et al. | Jan 2012 | A1 |
20120020518 | Taguchi | Jan 2012 | A1 |
20120081544 | Wee | Apr 2012 | A1 |
20120087572 | Dedeoglu et al. | Apr 2012 | A1 |
20120127088 | Pance et al. | May 2012 | A1 |
20120176227 | Nikitin | Jul 2012 | A1 |
20120184285 | Sampath | Jul 2012 | A1 |
20120257061 | Edwards | Oct 2012 | A1 |
20120286933 | Hsiao | Nov 2012 | A1 |
20120319822 | Hansen | Dec 2012 | A1 |
20130018582 | Miller et al. | Jan 2013 | A1 |
20130021417 | Ota et al. | Jan 2013 | A1 |
20130029685 | Mehran | Jan 2013 | A1 |
20130036043 | Faith | Feb 2013 | A1 |
20130051624 | Iwasaki et al. | Feb 2013 | A1 |
20130063567 | Burns et al. | Mar 2013 | A1 |
20130073093 | Songkakul | Mar 2013 | A1 |
20130113993 | Dagit, III | May 2013 | A1 |
20130182114 | Zhang et al. | Jul 2013 | A1 |
20130191193 | Calman et al. | Jul 2013 | A1 |
20130226655 | Shaw | Aug 2013 | A1 |
20130281084 | Batada et al. | Oct 2013 | A1 |
20130293684 | Becker et al. | Nov 2013 | A1 |
20130293722 | Chen | Nov 2013 | A1 |
20130314210 | Schoner | Nov 2013 | A1 |
20130335318 | Nagel et al. | Dec 2013 | A1 |
20130335415 | Chang | Dec 2013 | A1 |
20140022058 | Striemer et al. | Jan 2014 | A1 |
20140108136 | Zhao | Apr 2014 | A1 |
20140139426 | Kryze et al. | May 2014 | A1 |
20140253368 | Holder | Sep 2014 | A1 |
20140270356 | Dearing | Sep 2014 | A1 |
20140300516 | Min et al. | Oct 2014 | A1 |
20140317005 | Balwani | Oct 2014 | A1 |
20140330603 | Corder | Nov 2014 | A1 |
20140357295 | Skomra et al. | Dec 2014 | A1 |
20140361078 | Davidson | Dec 2014 | A1 |
20150009949 | Khoryaev et al. | Jan 2015 | A1 |
20150012396 | Puerini et al. | Jan 2015 | A1 |
20150019391 | Kumar et al. | Jan 2015 | A1 |
20150029339 | Kobres et al. | Jan 2015 | A1 |
20150039458 | Reid | Feb 2015 | A1 |
20150055821 | Fotland | Feb 2015 | A1 |
20150059374 | Hebel | Mar 2015 | A1 |
20150085096 | Smits | Mar 2015 | A1 |
20150091757 | Shaw et al. | Apr 2015 | A1 |
20150130664 | Hill et al. | May 2015 | A1 |
20150133162 | Meredith et al. | May 2015 | A1 |
20150134418 | Leow et al. | May 2015 | A1 |
20150169916 | Hill | Jun 2015 | A1 |
20150170002 | Szegedy et al. | Jun 2015 | A1 |
20150202770 | Palron et al. | Jul 2015 | A1 |
20150210199 | Payne | Jul 2015 | A1 |
20150221135 | Hill et al. | Aug 2015 | A1 |
20150227890 | Bednarek et al. | Aug 2015 | A1 |
20150248765 | Criminisi | Sep 2015 | A1 |
20150254906 | Berger et al. | Sep 2015 | A1 |
20150278759 | Harris | Oct 2015 | A1 |
20150310539 | McCoy et al. | Oct 2015 | A1 |
20150323643 | Hill et al. | Nov 2015 | A1 |
20150341551 | Perrin | Nov 2015 | A1 |
20150362581 | Friedman et al. | Dec 2015 | A1 |
20150371178 | Abhyanker et al. | Dec 2015 | A1 |
20150371319 | Argue | Dec 2015 | A1 |
20150379366 | Nomura | Dec 2015 | A1 |
20160035078 | Lin | Feb 2016 | A1 |
20160063610 | Argue | Mar 2016 | A1 |
20160093184 | Locke et al. | Mar 2016 | A1 |
20160098679 | Levy | Apr 2016 | A1 |
20160140436 | Yin et al. | May 2016 | A1 |
20160142868 | Kulkarni et al. | May 2016 | A1 |
20160150196 | Horvath | May 2016 | A1 |
20160156409 | Chang | Jun 2016 | A1 |
20160178727 | Bottazzi | Jun 2016 | A1 |
20160195602 | Meadow | Jul 2016 | A1 |
20160232857 | Tamaru | Aug 2016 | A1 |
20160238692 | Hill et al. | Aug 2016 | A1 |
20160248969 | Hurd | Aug 2016 | A1 |
20160256100 | Jacofsky et al. | Sep 2016 | A1 |
20160286508 | Khoryaev et al. | Sep 2016 | A1 |
20160300187 | Kashi et al. | Oct 2016 | A1 |
20160335593 | Clarke et al. | Nov 2016 | A1 |
20160366561 | Min et al. | Dec 2016 | A1 |
20160370453 | Boker et al. | Dec 2016 | A1 |
20160371574 | Nguyen et al. | Dec 2016 | A1 |
20170030997 | Hill | Feb 2017 | A1 |
20170031432 | Hill | Feb 2017 | A1 |
20170066597 | Hiroi | Mar 2017 | A1 |
20170117233 | Inayama et al. | Apr 2017 | A1 |
20170123426 | Hill et al. | May 2017 | A1 |
20170140329 | Bernhardt et al. | May 2017 | A1 |
20170234979 | Mathews et al. | Aug 2017 | A1 |
20170261592 | Min et al. | Sep 2017 | A1 |
20170280281 | Pandey et al. | Sep 2017 | A1 |
20170293885 | Grady et al. | Oct 2017 | A1 |
20170313514 | Lert, Jr. et al. | Nov 2017 | A1 |
20170323174 | Joshi et al. | Nov 2017 | A1 |
20170323376 | Glaser et al. | Nov 2017 | A1 |
20170350961 | Hill | Dec 2017 | A1 |
20170351255 | Anderson et al. | Dec 2017 | A1 |
20170359573 | Kim et al. | Dec 2017 | A1 |
20170372524 | Hill | Dec 2017 | A1 |
20170374261 | Teich et al. | Dec 2017 | A1 |
20180003962 | Urey et al. | Jan 2018 | A1 |
20180033151 | Matsumoto | Feb 2018 | A1 |
20180068100 | Seo | Mar 2018 | A1 |
20180068266 | Kirmani et al. | Mar 2018 | A1 |
20180094936 | Jones et al. | Apr 2018 | A1 |
20180108134 | Venable et al. | Apr 2018 | A1 |
20180139431 | Simek et al. | May 2018 | A1 |
20180164103 | Hill | Jun 2018 | A1 |
20180197139 | Hill | Jul 2018 | A1 |
20180197218 | Mallesan et al. | Jul 2018 | A1 |
20180231649 | Min et al. | Aug 2018 | A1 |
20180242111 | Hill | Aug 2018 | A1 |
20180339720 | Singh | Nov 2018 | A1 |
20190029277 | Dderdal et al. | Jan 2019 | A1 |
20190053012 | Hill | Feb 2019 | A1 |
20190073785 | Hafner et al. | Mar 2019 | A1 |
20190090744 | Mahfouz | Mar 2019 | A1 |
20190098263 | Seiger et al. | Mar 2019 | A1 |
20190138849 | Zhang | May 2019 | A1 |
20190295290 | Schena et al. | Sep 2019 | A1 |
20190394448 | Ziegler et al. | Dec 2019 | A1 |
20200005116 | Kuo | Jan 2020 | A1 |
20200011961 | Hill et al. | Jan 2020 | A1 |
20200012894 | Lee | Jan 2020 | A1 |
20200097724 | Chakravarty et al. | Mar 2020 | A1 |
Number | Date | Country |
---|---|---|
102017205958 | Oct 2018 | DE |
2001006401 | Jan 2001 | WO |
2005010550 | Feb 2005 | WO |
2009007198 | Jan 2009 | WO |
2020061276 | Mar 2020 | WO |
Entry |
---|
Raza, Rana Hammad, Three Dimensional Localization and Tracking for Site Safety Using Fusion of Computer Vision and RFID, Michigan State University, 2013. |
Farrell & Barth, “The Global Positioning System & Inertial Navigation”, 1999, McGraw-Hill; pp. 246-252. |
Grewal & Andrews, “Global Positioning Systems, Inertial Navigation and Integration”, 2001, John Wiley and Sons, pp. 252-256. |
Jianchen Gao, “Development of a Precise GPS/INS/On-Board Vehicle Sensors Integrated Vehicular Positioning System”, Jun. 2007, UCGE Reports No. 20555; 245 pages. |
Kong Yang, “Tightly Coupled MEMS INS/GPS Integration with INS Aided Receiver Tracking Loops”, Jun. 2008, UCGE Reports No. 20270; 205 pages. |
Goodall, Christopher L., “Improving Usability of Low-Cost INS/GPS Navigation Systems using Intelligent Techniques”, Jan. 2009, UCGE Reports No. 20276; 234 pages. |
Debo Sun, “Ultra-Tight GPS/Reduced IMU for Land Vehicle Navigation”, Mar. 2010, UCGE Reports No. 20305; 254 pages. |
Sun, et al., “Analysis of the Kalman Filter With Different INS Error Models for GPS/INS Integration in Aerial Remote Sensing Applications”, Beijing, 2008, The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol. XXXVII, Part B5; 8 pages. |
Schmidt & Phillips, “INS/GPS Integration Architectures”, NATO RTO Lecture Series, First Presented Oct. 20-21, 2003; 24 pages. |
Adrian Schumacher, “Integration of a GPS aided Strapdown Inertial Navigation System for Land Vehicles”, Master of Science Thesis, KTH Electrical Engineering, 2006; 67 pages. |
Vikas Numar N., “Integration of Inertial Navigation System and Global Positioning System Using Kalman Filtering”, M.Tech Dissertation, Indian Institute of Technology, Bombay, Mumbai, Jul. 2004; 69 pages. |
Jennifer Denise Gautier, “GPS/INS Generalized Evaluation Tool (Giget) for the Design and Testing of Integrated Navigation Systems”, Dissertation, Stanford University, Jun. 2003; 160 pages. |
Farrell, et al., “Real-Time Differential Carrier Phase GPS-Aided INS”, Jul. 2000, IEEE Transactions on Control Systems Technology, vol. 8, No. 4; 13 pages. |
Filho, et al., “Integrated GPS/INS Navigation System Based on a Gyroscope-Free IMU”, DINCON Brazilian Conference on Dynamics, Control, and Their Applications, May 22-26, 2006; 6 pages. |
Santiago Alban, “Design and Performance of a Robust GPS/INS Attitude System for Automobile Applications”, Dissertation, Stanford University, Jun. 2004; 218 pages. |
International Search Report & Written Opinion in International Patent Application No. PCT/US12/64860, dated Feb. 28, 2013; 14 pages. |
U.S. Appl. No. 13/918,295, filed Jun. 14, 2013, entitled “RF Tracking with Active Sensory Feedback”; 31 pages. |
U.S. Appl. No. 13/975,724, filed Aug. 26, 2013, entitled “Radio Frequency Communication System”; 22 pages. |
Proakis, John G. and Masoud Salehi, “Communication Systems Engineering”, Second Edition, Prentice-Hall, Inc., Upper Saddle River, New Jersey, 2002; 815 pages. |
Pourhomayoun, Mohammad and Mark Fowler, “Improving WLAN-based Indoor Mobile Positioning Using Sparsity,” Conference Record of the Forty Sixth Asilomar Conference on Signals, Systems and Computers, Nov. 4-7, 2012, pp. 1393-1396, Pacific Grove, California. |
Final Office Action in U.S. Appl. No. 16/206,745 dated May 22, 2019; 9 pages. |
Non-Final Office Action in U.S. Appl. No. 15/416,366 dated Jun. 13, 2019; 11 pages. |
Non-Final Office Action in U.S. Appl. No. 15/259,474 dated May 29, 2019; 19 pages. |
Wilde, Andreas, “Extended Tracking Range Delay-Locked Loop,” Proceedings IEEE International Conference on Communications, Jun. 1995, pp. 1051-1054. |
Notice of Allowance in U.S. Appl. No. 15/270,749 dated Oct. 4, 2019; 5 pages. |
Dictionary Definition for Peripheral Equipment. (2001). Hargrave's Communications Dictionary, Wiley. Hoboken, NJ: Wiley. Retrieved from Https://search.credorefemce.com/content/entry/hargravecomms/peripheral_equioment/0 (Year:2001). |
Non-Final Office Action in U.S. Appl. No. 16/206,745, dated Jan. 7, 2019; 10 pages. |
Restriction Requirement in U.S. Appl. No. 15/091,180 dated Mar. 19, 2019; 8 pages. |
Szeliski, R., “Image Alignment and Stitching: A Tutorial”, Technical Report, MSR-TR-2004-92, Dec. 10, 2006. |
Brown et al., “Automatic Panoramic Image Stitching Using Invariant Features”, International Journal of Computer Vision, vol. 74, No. 1, pp. 59-73, 2007. |
Xu et al., “Performance Evaluation of Color Correction Approaches for Automatic Multi-view Image and Video Stitching”, International Conference on Computer Vision and Pattern Recognition (CVPR10), San Francisco, CA, 2010. |
Li, et al. “Multifrequency-Based Range Estimation of RFID Tags,” IEEE International Conference on RFID, 2009. |
Welch, Greg and Gary Bishop, “An Introduction to the Kalman Filter,” Department of Computer Science, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599-3175, Updated: Monday, Jul. 24, 2006. |
Min, et al. “Systems and Methods of Wireless Position Tracking” U.S. Appl. No. 15/953,798, filed Apr. 16, 2018. |
Hill, Edward L. “Wireless Relay Station for Radio Frequency-Based Tracking System” U.S. Appl. No. 15/961,274, filed Apr. 24, 2018. |
Hill, et al. “Spatial Diversity for Relative Position Tracking” U.S. Appl. No. 15/404,668, filed Jan. 12, 2017. |
Hill, et al. “Package Tracking Systems and Methods” U.S. Appl. No. 15/091,180, filed Apr. 5, 2016. |
Seiger, et al. “Modular Shelving Systems for Package Tracking” U.S. Appl. No. 15/270,749, filed Sep. 20, 2016. |
Hill, et al. “Video for Real-Time Confirmation in Package Tracking Systems” U.S. Appl. No. 15/416,366, filed Jan. 26, 2017. |
Min, et al. “Expandable, Decentralized Position Tracking Systems and Methods” U.S. Appl. No. 15/446,602, filed Mar. 1, 2017. |
Hill, et al. “Position Tracking System and Method Using Radio Signals and Inertial Sensing” U.S. Appl. No. 14/600,025, filed Jan. 20, 2015. |
Non-Final Office Action in U.S. Appl. No. 15/270,749 dated Apr. 4, 2018; 8 pages. |
J. Farrell & M. Barth, “The Global Positioning System & Inertial Navigation” pp. 245-252 (McGraw-Hill,1999). |
“ADXL202/ADXL210 Product Sheet,” Analog Devices, Inc., Analog.com, 1999; 11 pages. |
Morbella N50: 5-inch GPS Navigator User's Manual, Maka Technologies Group, May 2012. |
Non-Final Office Action in U.S. Appl. No. 15/091,180, dated Jun. 27, 2019; 11 pages. |
Non-Final Office Action in U.S. Appl. No. 16/206,745 dated Oct. 18, 2019; 8 pages. |
Final Office Action in U.S. Appl. No. 15/416,366 dated Oct. 7, 2019; 14 pages. |
Corrected Notice of Allowability in U.S. Appl. No. 15/270,749 dated Oct. 30, 2018; 5 pages. |
Notice of Allowance in U.S. Appl. No. 15/416,366 dated Aug. 19, 2020; 13 pages. |
International Search Report and Written Opinion in PCT/US2019/051874 dated Dec. 13, 2020; 9 pages. |
International Search Report and Written Opinion in PCT/US2020/013280 dated Mar. 10, 2020; 9 pages. |
Raza, Rana Hammad “Three Dimensional Localization and Tracking for Site Safety Using Fusion of Computer Vision and RFID,” 2013, Dissertation, Michigan State University. |
Final Office Action in U.S. Appl. No. 15/259,474, dated Jan. 10, 2020; 19 pages. |
Final Office Action in U.S. Appl. No. 15/091,180, dated Jan. 23, 2020; 17 pages. |
Final Office Action in U.S. Appl. No. 16/206,745 dated Feb. 5, 2020; 15 pages. |
Non-Final Office Action in U.S. Appl. No. 15/416,366 dated Apr. 6, 2020; 13 pages. |
Non-Final Office Action in U.S. Appl. No. 15/861,414 dated Apr. 6, 2020; 14 pages. |
Non-Final Office Action in U.S. Appl. No. 16/437,767 dated Jul. 15, 2020; 19 pages. |
Non-Final Office Action in U.S. Appl. No. 15/091,180 dated Oct. 1, 2020. |
Non-Final Office Action in U.S. Appl. No. 15/259,474, dated Sep. 1, 2020; 17 pages. |
Final Office Action in U.S. Appl. No. 15/861,414 dated Nov. 17, 2020. |
Non-Final Office Action in U.S. Appl. No. 16/206,745 dated Sep. 23, 2020. |
Non-Final Office Action in U.S. Appl. No. 16/740,679, dated Jan. 6, 2021; 15 pages. |
Final Office Action in U.S. Appl. No. 15/091,180, dated Mar. 10, 2021; 24 pages. |
Notice of Allowance and Fees Due in U.S. Appl. No. 16/206,745, dated Mar. 12, 2021; 9 pages. |
Final Office Action in U.S. Appl. No. 15/259,474, dated Mar. 9, 2021; 23 pages. |
Final Office Action in U.S. Appl. No. 15/861,414 dated Feb. 8, 2021. |
Final Office Action in U.S. Appl. No. 16/437,767 dated Feb. 5, 2021. |
Non-Final Office Action in U.S. Appl. No. 16/575,837, dated Apr. 21, 2021; 18 pages. |
Notice of Allowance and Fees Due in U.S. Appl. No. 16/740,679, dated Apr. 20, 2021; 15 pages. |
International Preliminary Report on Patentability in PCT/US2019/051874 dated Apr. 1, 2021. |
Notice of Allowance and Fees Due in U.S. Appl. No. 16/437,767, dated May 14, 2021; 8 pages. |
International Preliminary Report on Patentability in PCT/US2020/013280, dated Jul. 22, 2021; 8 pages. |
Non-Final Office Action in U.S. Appl. No. 15/259,474 dated Aug. 26, 2021. |
Non-Final Office Action in U.S. Appl. No. 15/861,414 dated Aug. 26, 2021. |
Final Office Action in U.S. Appl. No. 16/575,837 dated Sep. 3, 2021. |
Non-Final Office Action in U.S. Appl. No. 15/091,180 dated Sep. 1, 2021. |
Final Office Action in U.S. Appl. No. 15/861,414 dated Mar. 16, 2022. |
Final Office Action in U.S. Appl. No. 15/259,474 dated Feb. 8, 2022. |
Final Office Action in U.S. Appl. No. 15/091,180 dated May 12, 2022; 7 pages. |
Number | Date | Country | |
---|---|---|---|
62221855 | Sep 2015 | US | |
62143332 | Apr 2015 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 15091180 | Apr 2016 | US |
Child | 15416379 | US |