The invention relates generally to machine-learning systems and methods for identifying objects.
Various consumer goods and retail operations are attempting to improve customers' shopping experience by automating the purchase and checkout process. Such automation entails deploying systems that can identify what items a customer has taken from a shelf. Some systems employ video monitoring and image processing techniques to identify those items. However, the proper detection and identification of an item in captured images can be affected by various factors, for example, lighting conditions, shadows, obstructed views, and the location and position of the item on the shelf. Inconsistent results render such systems ineffectual.
All examples and features mentioned below can be combined in any technically possible way.
In one aspect, the invention is related to an object-identification system comprising an image sensor configured to capture images of objects disposed in an area designated for holding objects, a deep neural network trained to detect and recognize objects in images like those objects held in the designated area, and a controller in communication with the image sensor to receive images captured by the image sensor and with the deep neural network. The controller includes one or more processors configured to register an identity of a person who visits the area designated for holding objects, to submit an image to the deep neural network, to associate the registered identity of the person with an object detected in the image submitted to the deep neural network, to retrain the deep neural network using the submitted image if the deep neural network is unable to recognize the object detected in the submitted image, and to track a location of the detected object while the detected object is in the area designated for holding objects.
The controller may be further configured to acquire labeling information for the detected object in response to the deep neural network being unable to recognize the detected object in the submitted image, to associate the labeling information with the version of the image submitted to the deep neural network, and to store the version of the image and associated labeling information in an image database used to retrain the deep neural network. A human-input acquisition module may be configured to acquire the labeling information from a user in response to a request from the controller when the deep neural network is unable to recognize the detected object in the submitted image.
The controller may be further configured to find an area within the image in which a change appears if the deep neural network is unable to recognize the detected object, to produce a second version of the image that focuses upon the area of change, and to submit the second version of the image to the deep neural network to determine whether the deep neural network is able to recognize the detected object in the second version of the image. The controller may be further configured to acquire labeling information for the detected object irrespective of whether the deep neural network recognizes the detected object in the second version of the image, to associate the acquired labeling information with the version of the image submitted to the deep neural network, and to store that version of the image and associated labeling information in an image database used to retrain the deep neural network. In addition, the controller may be further configured to acquire the labeling information from the deep neural network when the deep neural network recognizes the detected object in the second version of the image.
The deep neural network may be a first deep neural network, and the system may further comprise a second deep neural network configured to operate in parallel to the first deep neural network. Each of the first and second deep neural networks produces an output based on image data obtained from the image, wherein the image data obtained by the first deep neural network are different from the image data obtained by the second deep neural network.
The object-identification system may further comprise a depth sensor with a field of view that substantially matches a field of view of the image sensor. The depth sensor acquires depth pixel values for images within its field of view, wherein a depth pixel value and less than three pixel values taken from the group consisting of R (red), G (green), and B (blue) are submitted as image data to the deep neural network when the image is submitted to the deep neural network during training or object recognition.
The deep neural network may reside on a remote server system, and the controller may further comprise a network interface to communicate with the deep neural network on the server system.
In another aspect, the invention is related to a method of identifying and tracking objects. The method comprises the steps of registering an identity of a person who visits an area designated for holding objects, capturing an image of the area designated for holding objects, submitting a version of the image to a deep neural network trained to detect and recognize objects in images like those objects held in the designated area, detecting an object in the version of the image, associating the registered identity of the person with the detected object, retraining the deep neural network using the version of the image if the deep neural network is unable to recognize the detected object, and tracking a location of the detected object while the detected object is in the area designated for holding objects.
The method may further comprise acquiring labeling information for the object detected in the version of the image in response to the deep neural network being unable to recognize the detected object in the version of the image, associating the labeling information with the version of the image, and storing the version of the captured image and associated labeling information in an image database used to retrain the deep neural network. The step of acquiring labeling information for the object detected in the version of the image in response to the deep neural network being unable to recognize the detected object in the version of the image may comprise acquiring the labeling information from user-supplied input.
The method may further comprise finding an area within the first version of the image in which a change appears when the deep neural network is unable to recognize the object detected in the first version of the image, producing a second version of the image that focuses upon the found area of change, and submitting the second version of the image to the deep neural network to determine whether the deep neural network can recognize the detected object in the second version of the image. The method may further comprise acquiring labeling information for the object detected in the first version of the image irrespective of whether the deep neural network recognizes the detected object in the second version of the image, associating the labeling information with the first version of the image, and storing the first version of the captured image and associated labeling information in an image database used to retrain the deep neural network. The step of acquiring labeling information for the object detected in the first version of the image may comprise acquiring the labeling information from the deep neural network when the deep neural network recognizes the detected object in the second version of the image.
The step of submitting a version of the image to the deep neural network may comprise submitting a depth pixel value and less than three pixel values taken from the group consisting of R (red), G (green), and B (blue) as image data to the deep neural network.
The method may further comprise the step of submitting image data, acquired from the version of the image, to a first deep neural network and a second deep neural network in parallel, wherein the image data submitted to the first deep neural network are different from the image data submitted to the second deep neural network.
In another aspect, the invention is related to a sensor module comprising an image sensor configured to capture an image within its field of view and a depth sensor having a field of view that substantially matches the field of view of the image sensor. The depth sensor is configured to acquire estimated depth values for an image captured by the depth sensor. The sensor module further comprises a controller in communication with the image sensor and depth sensor to receive image data associated with the image captured by the image sensor and estimated depth values associated with the image captured by the depth sensor. The controller includes one or more processors configured to register an identity of a person who visits an area designated for holding objects, to submit the image data associated with the image captured by the image sensor and the estimated depth values associated with the image captured by the depth sensor to a deep neural network trained to detect and recognize objects in images like those objects held in the designated area, to associate the registered identity of the person with an object detected in the image data and estimated depth values submitted to the deep neural network, and to save a version of the images captured by the image sensor and the depth sensor for use in subsequent retraining of the deep neural network if the deep neural network is unable to recognize the detected object.
The controller may further comprise a cloud interface to communicate with the deep neural network over a network.
The sensor module may further comprise a human-input acquisition module configured to acquire labeling information from a user in response to a request from the controller when the deep neural network is unable to recognize the detected object based on the submitted image data and estimated depth values.
The above and further advantages of this invention may be better understood by referring to the following description in conjunction with the accompanying drawings, in which like numerals indicate like structural elements and features in various figures. The drawings are not necessarily to scale, emphasis instead being placed upon illustrating the principles of the invention.
Object-identification systems and methods described herein combine computer vision, machine learning, and a deep neural network (DNN) to enable the accurate identification and tracking of people and objects. Initially, the DNN may be a blank slate, incapable of object identification without human assistance, or it may be trained with a predetermined set of images to give it a baseline. To give the DNN baseline object-identification capabilities, a human trains the DNN with a predetermined training set of images. After its initial training, the DNN's ability to identify objects continuously improves through subsequent retraining. These subsequent trainings are based on images in which the DNN could not initially identify an object. Objects in these images have become identifiable, and thus valuable for retraining the DNN, because of human-supplied information that identifies objects in the images or because of a multi-pass authentication process that focuses the DNN's detection efforts on a region in the images where change has been detected.
In one embodiment, each sensor module 102 is a self-contained electronic unit capable of registering persons who visit the object-holding area, capturing images, image processing, detecting objects, machine-learning-assisted self-improving object recognition, object tracking, and, when so configured, providing light guidance. In other embodiments, one or more of these functions takes place remotely (i.e., not at the sensor module); for example, the functions of object detection, machine-learning-assisted self-improving object recognition, and object tracking can occur at a remote computing site with which the sensor module is in communication over a network.
The sensor module 102 may be deployed in a fixed position near a support surface 116 in an object-holding area, or it may be mobile, embodied in a mobile device. As an example of a fixed deployment, the sensor modules 102 may drop down from the ceilings in a surveillance configuration so that all corners of an enterprise site are covered. These sensor modules are small and non-intrusive and can track the identifications and paths of individuals through the enterprise, for example, as described in U.S. Pat. Pub. No. US-2018-0164112-A1, published Jun. 14, 2018, the entirety of which application is incorporated by reference herein.
Mobile embodiments of the sensor module include, but are not limited to, a smartphone, tablet computer, wearable computing device, or any other portable computing device configured with one or more processors, an RGB camera, wireless communication capabilities, an optional depth sensor, an optional light source, and software for performing the image-processing, object-detection, object-tracking, object-recognition, self-improving machine-learning, and optional light-guidance functions described herein. The software can be embodied in a downloaded application (app) that can be stored on the mobile device. Because the device is portable, a person or machine can, in effect, carry an object-identification device capable of recognizing objects captured by the camera(s) of the mobile device. For example, a person with such a device can run the software, approach a table (i.e., support surface) holding various objects, point the device (i.e., its camera(s)) at each object, capture an image of an object, and be told the type (identity) of the object. To obtain the identity of the object, the mobile device may communicate with a remote server that hosts the DNN, sending the image to the remote server and receiving the identity of the object in return.
Each image sensor 106, which may also be referred to herein as an optical sensor, provides color information; each depth sensor 108 provides estimated depth for each pixel of a captured image. The image sensor 106 and depth sensor 108 may be embodied in a single camera, such as, for example, Microsoft's Kinect™, or be embodied in separate cameras. The image and optional depth sensors are disposed to face the support surface 116. Examples of the support surface include, but are not limited to, desktops, tables, shelves, and floor space. In general, the support surface is disposed in or at an object-holding area. The object-holding area can be, for example, a supermarket, warehouse, inventory room, closet, hallway, cupboard, or locker, each with or without secured access. Examples of identified and tracked objects include, but are not limited to, packages, parcels, boxes, equipment, tools, food products, bottles, jars, and cans. (People may also be identified and tracked.) Each image sensor 106 has a field of view (FOV) that covers a portion of, or all of, the area occupied by the support surface 116; the field of view of an optional depth sensor matches at least that of an image sensor. Each separate sensor has its own perspective of the area and of the objects placed on the support surface 116.
The controller 104 may be configured to control the light source 110 to provide light guidance to objects located on the support surface 116 or to certain regions of the support surface, depending upon the object or region of interest. Examples of the light source 110 include, but are not limited to, lasers, projectors, LEDs, light bulbs, flashlights, and lights. The light source 110 may be disposed on or remote from and directed at the support surface 116.
A display 118 may be included in the object-identification system 100, to provide, for example, a visual layout of the objects on the support surface, visual guidance to objects or regions on the surface, and a user interface for use by persons who enter and leave the object-holding area. The display 118 may be conveniently located at the threshold of or within the holding area. The display 118 may be part of an electronic device (e.g., a computer, smartphone, mobile device) configured with input/output devices, for example, a physical or virtual keyboard, keypad, barcode scanner, microphone, camera, and may be used to register the identities of persons entering the object-holding area and/or to scan object labels.
The controller 104 may also be in communication with one or more servers 120 (i.e., a server system) over a network connection. These server(s) 120 may perform third-party services, such as “cloud services,” or be implemented locally or onsite at the enterprise. As used herein, the “cloud” refers to software and services that run on a remote network, such as the Internet, instead of at the sensor module or at a local computer. The cloud may be public, private, or a combination thereof. An example of cloud services suitable for the principles described herein is Azure™ cloud services provided by Microsoft® of Redmond, WA. The server(s) 120 can run a virtual machine that provides the cloud services required by the sensor module 102.
During operation of the object-identification system 100, persons arrive at the object-holding area to perform any one or more of at least four object-handling activities: depositing an object, removing an object, moving an object to another spot in the holding area, and alerting personnel of an object warranting inspection. In general, the object-identification system registers the identities of persons who arrive at the holding area (i.e., who interact with the object-identification system) and associates each registered person with one or more objects that the person is handling. Using image-processing techniques, the object-identification system continuously monitors and acquires real-time image data of the holding area. From the real-time image data, the object-identification system detects when each such object is placed on the support surface 116, moved to another region of the support surface, or removed from the support surface. Techniques for detecting and tracking objects disposed on a support surface in a holding area can be found in U.S. patent application Ser. No. 15/091,180, filed Apr. 5, 2016, titled “Package Tracking Systems and Methods,” the entirety of which patent application is incorporated by reference herein. In addition, the object-identification system may identify a perishable item and send a notification to staff of its expiration, or recognize damaged goods on a shelf and notify staff accordingly. In response to the notifications, staff can then inspect the item in question to remove it if it is past its expiration date or to confirm the extent of damaged packaging.
The object-identification system further recognizes each object on the support surface or involved in a handling activity. Object recognition serves to identify the type of object detected and tracked (e.g., a package from a certain carrier, a jar of pickles, a microscope). Such object recognition may involve human interaction to initially identify or to confirm, correct, or fine tune the recognition of a given object. The object-identification system employs machine-learning techniques to improve its object recognition capabilities. Recognition of a given object can facilitate the tracking of the object while the object is in the holding area, serving to confirm the presence or movement of the object.
Upon occasion, the sensor module 102 will capture an image for which object recognition falls below a threshold, namely, the object-identification system is unable to recognize an object in the image. Despite being unable to recognize the object (at least initially), the object-identification system can still track the object, namely, its initial placement and any subsequent location within the holding area, based on visual characteristics of the object. The unidentifiable image is retained for purposes of later retraining of the DNN 112 so that the DNN will become able to recognize a previously unrecognizable object when that object is present in subsequently processed images. Human interaction with the object-identification system, through voice recognition, gesture recognition, or keyboard input, can specifically identify an object in an unidentifiable image, giving the image a proper label. An example of gesture recognition is a person holding up three fingers to identify the object as type number 3, where the object-identification system has stored the association of a three-finger gesture with a specific object (e.g., three fingers correspond to a microscope). After an object in the previously unidentifiable image becomes recognized, with the help of the human input, the image and associated proper label are stored in an image database 122. The object-identification system 100 uses these stored images and labels to retrain the deep neural network 112. By retraining the deep neural network with previously unidentifiable images, now made identifiable by human-provided information, the neural network 112 increasingly grows “smarter”. Over time, the probability of the neural network recognizing objects in later captured images approaches one hundred percent.
The image database 122 may be kept in local storage 124, accessed through a central computer 126 in proximity of the sensor module 102. In this embodiment, the central computer 126 provides access to the image database 122 for all deployed sensor modules 102. In another embodiment, shown in phantom in
The one or more processors 200 are in communication with a video interface 204, an optional light source interface 206, an optional audio interface 208, a network interface 210, and interfaces 212 to I/O components (e.g., the display 118). By the video interface 204, the controller 104 communicates with each image sensor 106 and depth sensor 108, if any, in the sensor module 102; by the light source interface 206, the controller 104 controls activation of the light source 110, and, depending upon the type of light source, the direction in which to point an emitted light beam; by the audio interface 208, the controller 104 communicates with audio devices that capture or play sound.
In addition to conventional software, such as an operating system and input/output routines, the memory 202 stores program code for configuring the one or more processors 200 to implement the deep neural network (DNN) 112, and to perform personnel registration 214, object detection 216 in images, object tracking 218 in the holding area, object recognition 220 in images, neural network training 222, image-preprocessing 224, change tracking 226 in images, and, optionally, light guidance 228. The one or more processors 200 and memory 202 can be implemented together or individually, on a single or multiple integrated circuit (IC) devices. In addition, the program code stored in memory 202 can reside at different sites. For example, the program code for implementing the DNN 112 can reside at a remote location (e.g., on the cloud) while the program code for user recognition can reside and execute locally (i.e., on the sensor module).
In brief overview, the program code for personnel registration 214 records the identities and activities of individuals who use the object-identification system 100 and associates such individuals with the objects they affect; the program code for object detection 216 uses image-processing techniques to detect the presence of objects in images; the program code for object tracking 218 tracks the locations of detected objects within the holding area; the program code for object recognition 220 employs the DNN 112 to recognize (i.e., identify or classify) objects in images; the program code for neural network training 222 trains the DNN 112 to become capable of recognizing particular types of objects; the program code for image pre-processing 224 applies image editing techniques to captured images to improve object detection and recognition efforts in such images; the program code for change tracking 226 detects changes in images and assists in labeling images; and, optionally, the program code for light guidance 228 guides humans to objects and/or locations in the object-holding area using the light source 110. As later described in more detail, various elements or functionality of the controller 104 may reside remotely; that is, in some embodiments, some elements or functionality of the controller 104 are not part of the sensor module 102 (
The image-acquisition module 304 of the AI module 300 is configured to acquire images from the image sensor 106 and optional depth sensor 108. Captured images pass to the image-preprocessing module 306, and the image-preprocessing module 306 forwards the images to the object-tracking module 308. The image-preprocessing module 306 sends each image (line 316) to the computer-vision module 114 and a copy of that image (line 318) to the DNN 112 (alternatively, the computer-vision module 114 receives the copy of the image).
In general, the object-tracking module 308 is configured to detect objects in images, to track such objects, and to perform object recognition using the DNN 112 of
Based on the information received from the object-tracking module 308, the QMM 312 determines whether the DNN 112 was successful in identifying an object (or objects) in an image. If successful, the QMM 312 signals (line 322) success. The controller 104 can receive this success signal and respond to it accordingly, depending upon the end-user application that seeks to determine the identification of objects, such as a package-tracking application.
If an object is not identifiable within an image, the QMM 312 notifies (line 324) the computer-vision module 114. The computer-vision module 114 optionally sends an image (line 326) to the DNN 112; this image is derived from the original image and focuses on a region in the original image in which a change was detected. The DNN 112 may attempt to identify an object in this focused image (line 326), that is, the DNN 112 performs a second pass. If the DNN is unsuccessful during the second pass, the QMM 312 sends a request (line 327) to the human-input-acquisition module 310, seeking labeling information for the unidentifiable object in the original image. Irrespective of the success or failure of the DNN 112 to recognize an object in this focused image, the computer-vision module 114 sends (line 328) the original image within which an object was not initially recognized to the local storage 124. The image being stored is joined/associated (box 330) with a human-provided label (line 332) from the human-input-acquisition module 310 or with a label (line 334) produced by the DNN 112 (line 320), sent to the QMM 312, and then forwarded by the QMM 312. The DNN trainer 314 uses those images in the local storage 124 and their associated ID information (i.e., labels) to retrain (line 336) the DNN 112.
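For illustration only, the following Python sketch outlines this multi-pass flow; the function names, the callable interfaces, and the 0.5 confidence threshold are assumptions made for the sketch and are not part of the embodiment described above.

```python
from typing import Callable, Optional, Tuple

def multi_pass_identify(
    image,                                                     # original captured image
    detect: Callable[[object], Tuple[Optional[str], float]],   # one DNN pass: returns (label, confidence)
    focus_on_change: Callable[[object], object],               # computer-vision crop of the region of change
    ask_human: Callable[[], str],                              # human-input-acquisition module
    store: Callable[[object, str], None],                      # image database used for retraining
    threshold: float = 0.5,                                    # assumed recognition threshold
) -> str:
    """Two-pass recognition with a human fallback, as outlined above."""
    label, confidence = detect(image)              # first pass on the full image
    if label is not None and confidence >= threshold:
        return label                               # success: the QMM reports the identification

    focused = focus_on_change(image)               # second pass on the region where change appeared
    label, confidence = detect(focused)
    if label is None or confidence < threshold:
        label = ask_human()                        # DNN failed both passes: request labeling information

    store(image, label)                            # keep the original image and its label for retraining
    return label
```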
Each sensor module 102 (
Specifically, the AI module 400 includes the image-acquisition module 304 (
The computer-vision module 114, image-acquisition module 304, image-preprocessing module 306, object-tracking module 404, human-input-acquisition module 310, tracking QMM 312, cloud (i.e., remote) storage 128, DNN trainer 314, and DNN 112 operate like their counterpart modules in
If the QMM 312 determines the DNN 112 was successful in identifying an object (or objects) in an image, the QMM 312 signals (line 322) success. If an object is not identifiable within an image, the QMM 312 notifies (line 324) the computer-vision module 114. The computer-vision module 114 optionally sends an image (line 326) to the cloud interface 414, for transmission to the remote DNN 112. This image is derived from the original image and is focused on a region in the original image in which the computer-vision module 114 detected change. The DNN 112 may attempt to identify an object in this focused image. If the DNN attempts but is unsuccessful during the second pass, the QMM 312 sends a request (not shown) to the human-input-acquisition module 310, seeking labeling information for the unidentifiable object in the original image.
Irrespective of the success or failure of the DNN 112 to recognize an object in this focused image during the DNN's second attempt, the computer-vision module 114 forwards (line 328) the original image (or an edited version of the original image) to the cloud storage 128. The image to be stored is joined or associated with (box 330) a human-provided label (line 332) acquired by the human-input-acquisition module 310 (in the event of a DNN failure) or with a label (line 320) produced by the DNN 112 on a successful second pass and forwarded by the QMM 312 (in the event of the DNN success). The DNN trainer 314 uses those images in the remote storage 128 and their associated ID information (i.e., labeling information) to retrain the DNN 112.
Each sensor module 102 (
If an object is not identifiable within an image, the QMM 312 signals (line 324) the computer-vision module 114. In response to the “DNN FAILS” signal, the computer-vision module 114 may send an image (line 326), derived from the original image (or an edited version of it) that is focused on a region in the original image in which the computer-vision module 114 detects a change, to the DNN 112 for an attempt to identify an object in this focused image, in effect, performing a second pass at authentication. The DNN 112 sends (line 320) the results of this second attempt to the QMM 312.
Irrespective of the success or failure of the DNN 112 to recognize an object in the focused image during the second attempt, the remote computer-vision module 114 forwards (line 328) the original image (or an edited version thereof), in which the DNN 112 was initially unable to recognize an object, to the cloud storage 128.
If an object is not identifiable within this focused image, the QMM 312 signals the AI module 500 (line 327), telling the AI module 500 to request human input. When the human-input-acquisition module 310 receives the human input, the cloud interface 504 sends (line 332) a human-input label to the cloud storage 128. Before being stored, the human input label (line 332) is combined or associated with (box 330) the image coming from the remote computer-vision module 114.
If an object is identifiable within the focused image, the QMM 312 sends a label (line 334) produced by the DNN 112 that is combined or associated with (box 330) the image sent to the cloud storage 128 by the computer-vision module 114. As previously described, the DNN trainer 314 uses those images and their associated labels in image database 122 maintained in the remote storage 128 to retrain (line 336) the DNN 112.
In one embodiment, the DNN 112 has a deep-learning architecture, for example, a deep convolutional neural network, having an input layer 602, an output layer 604, and multiple hidden layers (not shown). Hidden layers may comprise one or more convolutional layers, one or more fully connected layers, and one or more max pooling layers. Each convolutional and fully connected layer receives inputs from its preceding layer and applies a transformation to these inputs based on current parameter values for that layer. Example architectures upon which to implement a deep-learning neural network include, but are not limited to, the Darknet open-source deep neural network framework available at the website pjreddie.com and the Caffe framework available at the website caffe.berkeleyvision.org.
The DNN 112 is involved in two processes: object detection/recognition and training. For purposes of object detection and recognition, images 606 are provided as input 608 to the DNN 112 from the image-acquisition module. The images 606 include color images (e.g., RGB) and, optionally, depth images. Color and depth images captured at a given instant in real time are linked as a pair. Such images may pass through the image preprocessor 306, which produces image data 608 based on the processed images. The image preprocessor 306 may or may not modify an image before the image 606 passes to the DNN. In one embodiment, the image preprocessor 306 is configured to apply one or more image-editing techniques determined to enhance the DNN's ability to detect objects in images by making such images robust (i.e., invariant) to illumination changes. For RGB, one pre-processing algorithm uses a series of steps to counter the effects of illumination variation, local shadowing, and highlights. Steps in the algorithm include gamma correction, difference-of-Gaussians filtering, masking, and contrast equalization. Depth data can be noisy and can have missing data, depending on the circumstances under which the depth data are captured; ambient light and highly reflective surfaces are major sources of noise and missing data. Pre-processing steps for depth include ambient-light filtering, edge-preserving smoothing, Gaussian blurring, and time-variant blurring; this pre-filtering corrects for those artifacts while preserving the underlying data. When depth images and RGB images both pass to the image preprocessor 306, the image preprocessor performs a blending transformation process that blends the RGB data with the depth data to produce the image data 608. Examples of blending transformation processes include, but are not limited to, blending by concatenation and blending by interleaving, both of which are described in more detail below.
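For illustration only, the following Python sketch (using OpenCV and NumPy, which the embodiment does not prescribe) approximates two of the pre-processing steps named above, namely illumination normalization for RGB and edge-preserving smoothing for depth; the filter choices and parameter values are assumptions, not part of the described algorithm.

```python
import cv2
import numpy as np

def normalize_illumination(rgb, gamma=0.2, sigma_narrow=1.0, sigma_wide=2.0):
    """Approximate the RGB steps above: gamma correction, difference-of-Gaussians
    filtering, and a simplified contrast equalization (masking omitted)."""
    gray = cv2.cvtColor(rgb, cv2.COLOR_RGB2GRAY).astype(np.float32) / 255.0
    corrected = np.power(gray, gamma)                          # gamma correction
    dog = (cv2.GaussianBlur(corrected, (0, 0), sigma_narrow)
           - cv2.GaussianBlur(corrected, (0, 0), sigma_wide))  # difference of Gaussians
    dog -= dog.min()
    if dog.max() > 0:
        dog /= dog.max()                                       # simplified contrast equalization
    return (dog * 255).astype(np.uint8)

def smooth_depth(depth_u16, diameter=5, sigma_color=50.0, sigma_space=50.0):
    """Edge-preserving smoothing of a 16-bit depth map; a bilateral filter
    stands in here for the unspecified edge-preserving filter."""
    depth = depth_u16.astype(np.float32)
    smoothed = cv2.bilateralFilter(depth, diameter, sigma_color, sigma_space)
    return smoothed.astype(np.uint16)
```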
The image data 608 passes to the input layer 602 of the DNN 112. (Though not shown in
Alternatively, blending by interleaving can blend the RGB image with the depth image. In this blending technique, instead of concatenating the RGB and depth images and gaining channels, the channels of both images are blended in a manner that retains the original structure, that is, the number of channels in the resulting image do not increase after blending from the number of channels in the original RGB image. One such example follows:
Consider an eight-bit, three-channel RGB image, that is, the R-channel has eight bits, the G-channel has eight bits, and the B-channel has eight bits. Further, consider that the depth image is a single channel of 16-bit data; that is, the D-channel has 16 bits.
One method of combining data from multiple dimensions (i.e., channels) and packing the data into fewer dimensions (i.e., channels) is Morton order interleaving.
For example, a color pixel value [R, G, B] of [255, 125, 0] has an eight-bit binary representation of [11111111, 01111101, 00000000], where the three eight-bit values represent the three eight-bit R, G, and B channels, respectively.
For the 16-bit depth value, three eight-bit values are derived. The first eight-bit value, referred to as D1, entails a conversion of the 16-bit value to an eight-bit value. This conversion is done by normalizing the decimal equivalent of the 16-bit depth value and multiplying the normalized value by the maximum value of an eight-bit number (i.e., 255). For example, consider an original 16-bit depth value [D] that has a decimal value of [1465]. Normalizing the decimal value [1465] entails dividing this decimal value by the maximum decimal value that can be represented by 16 bits, namely [65535]. Accordingly, the multiplied, normalized decimal value for D1=(1465/65535)*255=6 (rounded up). The eight-bit binary representation of D1 is [00000110].
The next two bytes are obtained by partitioning the original 16-bit depth value [D] into two eight-bit bytes, called D2 and D3. For example, the previously noted 16-bit depth value [D] of [1465] has a binary representation of [0000010110111001]. The 8-bit D2 byte corresponds to the first byte of the 16-bit depth value [D], which is [00000101], and the 8-bit D3 byte corresponds to the second byte of the 16-bit depth value [D], which is [10111001]. Accordingly, [D2, D3]=[00000101, 10111001].
The three bytes [D1, D2, D3] derived from the original depth value [D] are [00000110, 00000101, 10111001]. As previously mentioned, the three-channel, eight-bit RGB values are [11111111, 01111101, 00000000].
Morton order interleaving produces a 16-bit, three-channel image from the three channels of depth values bytes [D1, D2, D3] and the three channels of RGB values [R, G, B] bytes by appending the depth values to the RGB values as such: [RD1, GD2, BD3]. With regards to the previous example, the Morton order interleaving produces three 16-bit channels of [1111111100000110, 0111110100000101, 0000000010111001]. The technique executes for each pixel of the corresponding images 606 (i.e., RGB image and its associated depth image). The result is a three-channel image that has both depth and color information. It is to be understood that Morton order interleaving is just an example of a technique for interleaving depth data with color data for a given pixel; other interleaving techniques may be employed without departing from the principles described herein.
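For illustration only, the following Python (NumPy) sketch packs an eight-bit RGB image and a 16-bit depth image into the three 16-bit channels [RD1, GD2, BD3] of the example above; the function name and the use of NumPy are assumptions made for the sketch, not part of the described embodiment.

```python
import numpy as np

def blend_by_interleaving(rgb_u8, depth_u16):
    """Produce a three-channel, 16-bit image in which each channel appends one
    derived depth byte (D1, D2, D3) to one color byte (R, G, B)."""
    d = depth_u16.astype(np.uint16)
    d1 = np.ceil(d.astype(np.float64) / 65535.0 * 255.0).astype(np.uint16)  # normalized depth byte
    d2 = (d >> 8) & 0xFF                                # high byte of the 16-bit depth value
    d3 = d & 0xFF                                       # low byte of the 16-bit depth value
    r, g, b = (rgb_u8[..., i].astype(np.uint16) for i in range(3))
    return np.stack(((r << 8) | d1,                     # channel 1: [RD1]
                     (g << 8) | d2,                     # channel 2: [GD2]
                     (b << 8) | d3),                    # channel 3: [BD3]
                    axis=-1)

# Reproducing the worked example for a single pixel:
pixel = blend_by_interleaving(np.array([[[255, 125, 0]]], dtype=np.uint8),
                              np.array([[1465]], dtype=np.uint16))
assert pixel[0, 0].tolist() == [0b1111111100000110, 0b0111110100000101, 0b0000000010111001]
```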
As with the blending by concatenation technique, less than all of the color values R, G, and B may be interleaved with a depth value. For example, R+D, G+D, B+D, R+G+D, R+B+D, and G+B+D are instances in which less than all three color (RGB) values are submitted as input together with a D value. In these cases, there is a separate channel for each interleave of color and depth. When less than three RGB channels are used, any of the D1, D2, and D3 depth channels can serve for interleaving. For example, combinations such as R+D, G+D, and B+D each require only one channel; combinations such as R+G+D, R+B+D, and G+B+D each have two channels. If only one RGB channel is used, D1 is the preferred choice, because the D1 depth channel contains the whole depth information. If two color channels are used, then two depth channels are used in the interleaving: for example, D2 and D3 (D2 and D3 together have the whole depth information). To illustrate, again using the color pixel value [R, G, B] of [255, 125, 0] and the original depth value of [1465], the combination of R+G+D produces the following 16-bit, two-channel [RD2, GD3] input data: [1111111100000101, 0111110110111001], where D2 and D3 are the chosen depth channels. In general, object detection benefits from having more information available rather than less; accordingly, blending by concatenation, which retains all of the available color and, potentially, depth data, may produce better detection outcomes than blending by interleaving, which reduces the number of channels and may use less than all of the color and depth data. Blending by interleaving, however, may be more advantageous than blending by concatenation with respect to training speed.
The output layer 604 produces an output 320 that passes to the QMM, which may be by way of a cloud interface 406 (
The DNN 112 is also in communication with the DNN trainer for purposes of receiving parameter value updates used in retraining.
In one embodiment, the DNN 112 is comprised of two deep neural networks (not shown) that operate in parallel. One neural network receives the R, G, and B pixel values, while the other receives the R, G, B, and D values. Each neural network attempts to recognize one or more objects in the supplied image based on the image data 608 submitted. Each produces an output. The two outputs can be compared and/or combined for purposes of confirming and/or augmenting each other's determination. For example, consider that the RGB neural network produces a result of having detected one package in a specific area of the image and the RGBD neural network produces a result of having detected two packages in the same specific area. A comparison of the probabilities produced by the two neural networks (and a logic circuit) would resolve the difference and finalize the result as either one package or two.
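For illustration only, a simple reconciliation of the two parallel networks' outputs might resemble the following Python sketch; the (count, probability) output format and the higher-probability rule are assumptions for the sketch, not the embodiment's actual comparison logic.

```python
def resolve_parallel_outputs(rgb_result, rgbd_result):
    """Each result is an assumed (package_count, probability) pair from one network."""
    rgb_count, rgb_prob = rgb_result
    rgbd_count, rgbd_prob = rgbd_result
    if rgb_count == rgbd_count:
        return rgb_count, max(rgb_prob, rgbd_prob)   # the networks agree; keep the stronger score
    # The networks disagree (e.g., one package versus two): favor the more confident output.
    return (rgb_count, rgb_prob) if rgb_prob >= rgbd_prob else (rgbd_count, rgbd_prob)
```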
The computer-vision module 114 is in communication with the QMM to receive a “DNN FAILS” signal in the event the DNN 112 fails to successfully recognize an object in the image. Upon receiving such a signal, the computer-vision module 114 outputs (line 328) an image corresponding to the original image in which the DNN could not identify an object. This image can become associated with labeling information 332 supplied by a human (e.g., in response to a prompt from the AI module when the DNN's object identification fails). This combination 610 of labeling information and image passes to storage, where it becomes part of the image database 122. Alternatively, the combination 610 includes the image and labeling information coming (line 334) from the QMM 312 (produced by the DNN 112) when the DNN successfully identifies an object during the second pass.
In addition, the image-preprocessing module 306 sends the original image 606-1 and the resized image 606-2 to the computer-vision module 114. The computer-vision module 114 includes a change-tracking module 700 in communication with a change-localization module 702. In one embodiment, the computer-vision module 114 performs a multi-pass authentication process when the DNN 112 fails to detect an object in the image 606-2. In the event of an unsuccessful object detection, the QMM signals the change-tracking module 700, which, in response, executes the change-tracking program code 226 (
The change-localization module 702 uses this information to produce an image 606-3 that focuses on the region 704 in the original image with the detected change. The focused image 606-3 has a resolution that matches the input resolution of the DNN 112. In order to attain this resolution, the change-localization module 702 may have to reduce or enlarge the size of the region 704 of change. The focused image 606-3 passes to the DNN 112, which attempts to detect an object in this image. The computer-vision module 114 sends the resized image 606-2 to the storage (local or remote) and marks the boundaries of the focus region 704 as those boundaries translate to the resized image 606-2. The boundary information includes a row, column, height, and width of the pertinent region within the resized image 606-2.
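For illustration only, the cropping and resizing performed by the change-localization module might resemble the following Python (OpenCV) sketch; the 416x416 input resolution is an assumption (a common detector input size), not a value taken from the embodiment.

```python
import cv2

def focus_on_region(image, row, col, height, width, dnn_input_size=(416, 416)):
    """Crop the region in which change was detected and resize it to the DNN's
    input resolution, producing the 'focused' image described above."""
    crop = image[row:row + height, col:col + width]
    return cv2.resize(crop, dnn_input_size, interpolation=cv2.INTER_LINEAR)
```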
Within the storage, the resized image 606-2 is associated with the label name provided by human input (when the DNN fails to recognize an object in the focused image 606-3) or with the label produced by the DNN 112 (when the DNN successfully recognizes an object in the focused image 606-3). The resized image 606-2, the marked boundaries, and label information are used together in subsequent retraining of the DNN 112.
The object-identification system 100 registers (step 804) an identification of the person. The registration can occur automatically, that is, without the person's conscious involvement. For example, a sensor module 102 can wirelessly communicate with a device carried by the person, such as, for example, a key fob or a smartphone. Alternatively, the controller 104 can perform facial recognition. As other examples of techniques for obtaining the person's identification, the person can deliberately identify him or herself, such as by offering a name tag for scanning, entering a PIN code or password, submitting biometric information (e.g., a fingerprint or retinal scan), or speaking to allow for voice recognition. In another embodiment, the object-identification system 100 identifies the individual using skeletal tracking (i.e., the skeletal structure of the individual) and registers the skeletal structure. In addition to registering the person, the object-identification system 100 can record the person's time of arrival at the holding area.
At step 806, the object-identification system 100 associates the person with one or more objects in the holding area. The association can occur directly, from user input, or indirectly, based on an activity performed by the user and observed by the system. As an example of direct association, the system can expressly request that the person provide information about the purpose of the visit, such as depositing or removing an object, and the identity of each object the purpose involves. The person can provide this information through any number of input techniques, for example, scanning the label on a package to be deposited. Alternatively, the person can identify what the object is by typing in the name of the object or by speaking to the system, which uses voice recognition and speech-to-text conversion techniques. After receiving the information about each affected object, the system associates that object with the identity of the registered person.
As an example of indirect association, the object-identification system 100 can detect the activity performed by the person in the holding area. For example, through image processing, the system can detect that an object has been placed on or removed from a shelf and then associate the newly placed object, or the removed object, with the identity of the registered person.
At step 808, the object-identification system 100 attempts to recognize what the object is. Recognition may result from information supplied directly to the system by the user, for example, when the user enters that the “item is a microscope”; from a previous determination, for example, the system detects the removal of an object with an already known identity; or from object recognition, for example, the system executes its object recognition algorithm upon an image of the newly detected object. In one embodiment, the system automatically requests human interaction, namely, to ask the human to identify an object being deposited, moved, or removed. Such request can occur before, during, or after the system attempts its own object recognition.
A decision to request human interaction may be based on a confidence value derived by the controller 104 in its attempt at object recognition from a captured image. For example, if, at step 810, the confidence value exceeds a first (e.g., upper) threshold, the system considers an object to have been recognized with a high degree of confidence and may dispense with human interaction; if the confidence value is less than the first threshold but greater than a second (e.g., lower) threshold, the system considers an object to have been recognized, but with a moderate degree of confidence; if the confidence value falls below the second threshold, the system concludes it has not recognized any object in the image. The system may request that the person confirm or correct (step 812) the system's identification if the determined confidence value is below the upper threshold but above the lower threshold, and request (step 814) that the person provide the identification if the determined confidence value is below the lower threshold. Fewer or more than two thresholds may be used without departing from the principles described herein. Further, the system may request confirmation even if the confidence value exceeds the upper threshold, or request the object's identity in the event of an imprecise, incorrect, or unsuccessful object recognition.
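For illustration only, the two-threshold decision described above might be expressed as the following Python sketch; the threshold values of 0.8 and 0.4 are assumptions, not values taken from the embodiment.

```python
def decide_human_interaction(confidence, upper=0.8, lower=0.4):
    """Map a recognition confidence value to one of the three outcomes above."""
    if confidence >= upper:
        return "accept"      # high confidence: human interaction may be dispensed with
    if confidence >= lower:
        return "confirm"     # moderate confidence: ask the person to confirm or correct (step 812)
    return "identify"        # below the lower threshold: ask the person to identify the object (step 814)
```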
The activity of the person in the holding area may change the layout of objects on the support surface: a new object has been placed, an object has been removed, an object has been moved to another location, or any combination thereof. The new arrangement of the objects produces different perspectives and varied angular irregularities in relation to the image and depth sensors of one or more sensor modules. Machine learning not only learns what an object looks like through both color and depth; it can also learn various perspectives of each object as the object is placed in different locations in the area. This machine learning compensates for the dynamic perspectives of objects seen by the image sensor and learns that an identified object can be the same object when placed in different areas within the viewing area, at different angles, and at different depths in the shelving. Accordingly, images newly captured by the image sensors provide an opportunity to improve object recognition through machine-learning techniques. The system retrains the neural network with those newly captured images for which the neural network was unable to identify the object (at least initially) and needed the labeling information about the object provided by the user or by the neural network during a multi-pass authentication. The system can also record the person's time of departure when the person leaves the holding area, and then associate the person's time of arrival and time of departure with the object.
Consider the following illustrations as examples of operation of one embodiment of the object-identification system 100. Alice enters a room having several shelves. She is carrying a microscope and a smartphone. The smartphone is running Bluetooth®. The controller 104 connects to and communicates with the smartphone to establish the identity of the person as Alice. In addition, the controller establishes the time of Alice's entry into the room, for example, as 1:42 p.m., Thursday, Apr. 16, 2019. Alice places the microscope on one of the shelves. Through image processing of images captured by the image sensor, the controller detects the object and location of the microscope. In addition, the controller may employ machine learning to recognize the object as a microscope. The controller may ask Alice to confirm its determination, whether the controller has recognized the object correctly or not. If the controller was unable to recognize the placed object, the controller may ask Alice to identify the object, which she may input electronically or verbally, depending upon the configuration of the object-identification system. Alternatively, the system may be configured to ask Alice the identity of the object, irrespective of its own recognition of the object. The system can then, locally or remotely on the server, immediately, or later, train its neural network with the images captured of the microscope and with the information, if any, provided by Alice. Alice then departs the room, and the controller records the time of departure as 1:48 p.m., Thursday, Apr. 16, 2019.
Bob enters the room and submits his identification to the controller using a PIN code. The controller registers Bob and his time of entry as, for example, 2:54 p.m., Thursday, Apr. 16, 2019. The controller identifies Bob and, from his pattern of past practices, recognizes his regular use of the microscope. The controller asks, audibly or by a message displayed on a display screen, if Bob is looking for the microscope. If Bob answers in the affirmative, the controller illuminates the light source and directs a light beam at the location on the shelves where the microscope resides. Bob removes the microscope from the shelf and departs the room with it. The system records Bob's time of departure as 2:56 p.m., Thursday, Apr. 16, 2019, and that Bob has taken the microscope. By linking the arrival of the microscope with Alice, the removal of the microscope with Bob, the times of such operations, and the presence of the microscope in the interim, all confirmed by video recordings, the system has thus established a chain of custody of the microscope. This chain-of-custody principle can extend to other fields of endeavor, such as processes for handling evidence. In the present context, chain of custody means a chronological recording of the sequence of custody (possession) and locations of physical objects coming into, moving within, and going out of the holding area. The object-identification system knows who has brought certain pieces of evidence into the evidence room, who has taken evidence from the room, and the precise locations of the evidence within the room in the interim, even if moved to another section within sight of the image sensor.
If the QMM 312 determines (step 910) that the DNN successfully identified one or more objects in the image, the object-identification system 100 uses (step 912) the information about each identified object, for example, for object-tracking purposes. The specific use of the object information depends upon the application for which the object-identification system is being used.
If, instead, the QMM determines (step 910) that the DNN was unsuccessful in the attempt to identify an object in the image, the AI module asks (step 914) the human to identify the object. After the human supplies the requested information, the optionally preprocessed image (produced in step 904) is stored (step 916) in the image database 122 with the human-provided labeling information, for later use in retraining the DNN.
In one embodiment, shown in phantom in
For purposes of retraining the DNN 112 (
Based on the images in the image database, the DNN trainer 314 runs program code for neural network training 222 (
In general, the DNN trainer maintains a copy of the current weight file for the DNN. The retraining of the DNN can occur in whole or in part. When retraining in whole, the entire DNN is trained from scratch; that is, the current weight file is erased and replaced with a newly generated weight file, as though the DNN were again a blank slate being initially trained. This retraining uses the initial training set of images and each additional image added to the image database because an object in it was not initially identified.
When retraining in part, the retraining can focus on certain layers of the DNN. For example, consider a DNN with ten hidden layers; retraining can be performed on the seventh, eighth, and ninth hidden layers only, the operative principle being to avoid performing a full DNN training, which can be time consuming, when a focused retraining can suffice. In this example, only those parameter values in the current weight file that are associated with the neurons of the seventh, eighth, and ninth hidden layers are changed. The new weight file, produced by the DNN trainer and sent to the DNN, is a mix of the new parameter values for the neurons of the seventh, eighth, and ninth hidden layers and old parameter values for the remainder of the DNN.
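For illustration only, the following Python sketch shows one way to restrict retraining to selected hidden layers by freezing the others; it uses PyTorch, which the embodiment does not prescribe, and the layer indices are the sketch's own assumption standing in for the seventh, eighth, and ninth hidden layers of the example.

```python
import torch.nn as nn

def select_layers_for_retraining(model: nn.Sequential, layers_to_train=(6, 7, 8)):
    """Freeze every layer except the chosen ones (zero-based indices)."""
    for index, layer in enumerate(model):
        trainable = index in layers_to_train
        for param in layer.parameters():
            param.requires_grad = trainable      # only unfrozen layers receive new parameter values
    # An optimizer built over the returned parameters updates only the unfrozen layers;
    # the resulting weight file then mixes new and old parameter values.
    return [p for p in model.parameters() if p.requires_grad]
```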
Consider that, as an example of the machine learning in operation, Alice places a microscope on a shelf. At the time of registering Alice, if the object-identification system 100 does not recognize the object, the system asks Alice to identify the placed object; she may respond that the object is a microscope. The object-identification system 100 further captures one or more images of the object on the shelf and associates each captured image with the information provided by Alice (i.e., the object is a microscope). The DNN trainer uses each captured image and the information provided by Alice to train the neural network 112. This training may be the system's initial training for identifying microscopes, or cumulative to the system's present capability. In either case, after the training, the object-identification system is better suited for identifying microscopes.
Although described with respect to detecting, tracking, and recognizing objects, the machine-learning techniques described herein extend to detecting, tracking, and recognizing faces, skeletal structure, body position, and movement of people in the captured images. In similar fashion as images of objects are used to train the deep neural networks to improve object recognition, images of faces can be used to train such networks to improve facial recognition for purposes of user registration, and images of skeletal features, such as hands, arms, and legs, can be used to train such networks for purposes of identifying and tracking individual persons and objects.
As will be appreciated by one skilled in the art, aspects of the systems described herein may be embodied as a system, method, or computer program product. Thus, aspects of the systems described herein may be embodied entirely in hardware, entirely in software (including, but not limited to, firmware, program code, resident software, and microcode), or in a combination of hardware and software. All such embodiments may generally be referred to herein as a circuit, a module, or a system. In addition, aspects of the systems described herein may be in the form of a computer program product embodied in one or more computer readable media having computer readable program code embodied thereon.
Any combination of one or more computer readable medium(s) may be utilized. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. The computer readable medium may be a non-transitory computer readable storage medium, examples of which include, but are not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination thereof.
As used herein, a computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, device, computer, computing system, computer system, or any programmable machine or device that inputs, processes, and outputs instructions, commands, or data. A non-exhaustive list of specific examples of a computer readable storage medium includes an electrical connection having one or more wires, a portable computer diskette, a floppy disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), a USB flash drive, a non-volatile RAM (NVRAM or NOVRAM), an erasable programmable read-only memory (EPROM or Flash memory), a flash memory card, an electrically erasable programmable read-only memory (EEPROM), an optical fiber, a portable compact disc read-only memory (CD-ROM), a DVD-ROM, an optical storage device, a magnetic storage device, or any suitable combination thereof.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. As used herein, a computer readable storage medium is not a computer readable propagating signal medium or a propagated signal.
Program code may be embodied as computer-readable instructions stored on or in a computer readable storage medium as, for example, source code, object code, interpretive code, executable code, or combinations thereof. Any standard or proprietary programming or interpretive language can be used to produce the computer-executable instructions. Examples of such languages include Python, C, C++, Pascal, JAVA, BASIC, Smalltalk, Visual Basic, and Visual C++.
Transmission of program code embodied on a computer readable medium can occur using any appropriate medium including, but not limited to, wireless, wired, optical fiber cable, radio frequency (RF), or any suitable combination thereof.
The program code may execute entirely on a user's device, partly on the user's device as a stand-alone software package, partly on the user's device and partly on a remote computer, or entirely on a remote computer or server. Any such remote computer may be connected to the user's device through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider (ISP)).
Additionally, the methods described herein can be implemented on a special purpose computer, a programmed microprocessor or microcontroller and peripheral integrated circuit element(s), an ASIC or other integrated circuit, a digital signal processor, a hard-wired electronic or logic circuit such as a discrete element circuit, or a programmable logic device such as a PLD, PLA, FPGA, or PAL, or the like. In general, any device capable of implementing a state machine that is in turn capable of implementing the methods proposed herein can be used to implement the principles described herein.
Furthermore, the disclosed methods may be readily implemented in software using object or object-oriented software development environments that provide portable source code usable on a variety of computer or workstation platforms. Alternatively, the disclosed system may be implemented partially or fully in hardware using standard logic circuits or a VLSI design. Whether software or hardware is used to implement the systems in accordance with the principles described herein depends on the speed and/or efficiency requirements of the system, the particular function, and the particular software or hardware systems or microprocessor or microcomputer systems being utilized. The methods illustrated herein, however, can be readily implemented in hardware and/or software using any known or later-developed systems or structures, devices, and/or software by those of ordinary skill in the applicable art from the functional description provided herein and with a general knowledge of the computer and image-processing arts.
Moreover, the disclosed methods may be readily implemented in software executed on a programmed general-purpose computer, a special purpose computer, a microprocessor, or the like. In these instances, the systems and methods of the principles described herein may be implemented as a program embedded on a personal computer, such as a JAVA® or CGI script, as a resource residing on a server or graphics workstation, as a plug-in, or the like. The system may also be implemented by physically incorporating the system and method into a software and/or hardware system.
While the aforementioned principles have been described in conjunction with a number of embodiments, it is evident that many alternatives, modifications, and variations would be or are apparent to those of ordinary skill in the applicable arts. References to “one embodiment,” “an embodiment,” or “another embodiment” mean that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment described herein. References to a particular embodiment within the specification do not necessarily all refer to the same embodiment. The features illustrated or described in connection with one exemplary embodiment may be combined with the features of other embodiments. Accordingly, it is intended to embrace all such alternatives, modifications, equivalents, and variations that are within the spirit and scope of the principles described herein.
This application is a continuation of U.S. patent application Ser. No. 16/575,837, filed on Sep. 19, 2019 and titled “Machine-Learning-Assisted Self-Improving Object-identification System and Method,” which claims priority to and the benefit of U.S. Provisional Application No. 62/734,491, filed on Sep. 21, 2018 and titled “Machine-Learning-Assisted Self-Improving Object-identification System and Method,” the entirety of each of which is incorporated by reference herein for all purposes.
Number | Date | Country
---|---|---
20220309783 A1 | Sep 2022 | US

Number | Date | Country
---|---|---
62734491 | Sep 2018 | US

Relation | Number | Date | Country
---|---|---|---
Parent | 16575837 | Sep 2019 | US
Child | 17838543 | | US