Environments in which objects are managed, such as retail facilities, warehousing and distribution facilities, and the like, may store such objects in regions such as aisles of shelf modules. For example, a retail facility may include objects such as products for purchase, while a distribution facility may include objects such as parcels or pallets. A given environment may contain a wide variety of objects with different sizes, shapes, and other attributes, supported on shelves in a variety of positions and orientations. The variable position and orientation of the objects, as well as variations in lighting and in the placement of labels and other indicia on the objects and the shelves, can render detection of structural features, such as the ends of the aisles, difficult.
The accompanying figures, where like reference numerals refer to identical or functionally similar elements throughout the separate views, together with the detailed description below, are incorporated in and form part of the specification, and serve to further illustrate embodiments of concepts that include the claimed invention, and explain various principles and advantages of those embodiments.
Skilled artisans will appreciate that elements in the figures are illustrated for simplicity and clarity and have not necessarily been drawn to scale. For example, the dimensions of some of the elements in the figures may be exaggerated relative to other elements to help to improve understanding of embodiments of the present invention.
The apparatus and method components have been represented where appropriate by conventional symbols in the drawings, showing only those specific details that are pertinent to understanding the embodiments of the present invention so as not to obscure the disclosure with details that will be readily apparent to those of ordinary skill in the art having the benefit of the description herein.
Examples disclosed herein are directed to a method of detecting an end of an aisle of shelf modules in an imaging controller of a mobile automation apparatus, the method comprising: obtaining image data captured by an image sensor and a plurality of depth measurements captured by a depth sensor, the image data and the depth measurements corresponding to an area containing a portion of the aisle of shelf modules; obtaining locomotive data of the apparatus; generating a dynamic trust region based on the locomotive data; detecting an edge segment based on the image data and the plurality of depth measurements, the edge segment representing an edge of a support surface; and when the edge segment is located at least partially in the dynamic trust region, updating an estimated end of the aisle based on the detected edge segment.
Additional examples disclosed herein are directed to a mobile automation apparatus comprising: a locomotive assembly; an image sensor and a depth sensor; and an imaging controller configured to: obtain image data captured by the image sensor and a plurality of depth measurements captured by the depth sensor, the image data and the depth measurements corresponding to an area containing a portion of an aisle of shelf modules; obtain locomotive data of the apparatus; generate a dynamic trust region based on the locomotive data; detect an edge segment based on the image data and the plurality of depth measurements, the edge segment representing an edge of a support surface; and when the edge segment is located at least partially in the dynamic trust region, update an estimated end of the aisle based on the detected edge segment.
The client computing device 104 is illustrated in
The system 100 is deployed, in the illustrated example, in a retail facility including a plurality of support structures such as shelf modules 110-1, 110-2, 110-3 and so on (collectively referred to as shelf modules 110 or shelves 110, and generically referred to as a shelf module 110 or shelf 110—this nomenclature is also employed for other elements discussed herein). Each shelf module 110 supports a plurality of products 112. Each shelf module 110 includes a shelf back 116-1, 116-2, 116-3 and a support surface (e.g. support surface 117-3 as illustrated in
The shelf modules 110 (also referred to as sub-regions of the facility) are typically arranged in a plurality of aisles (also referred to as regions of the facility), each of which includes a plurality of modules 110 aligned end-to-end. In such arrangements, the shelf edges 118 face into the aisles, through which customers in the retail facility, as well as the apparatus 103, may travel. As will be apparent from
The apparatus 103 is equipped with a plurality of navigation and data capture sensors 108, such as image sensors (e.g. one or more digital cameras) and depth sensors (e.g. one or more Light Detection and Ranging (LIDAR) sensors, one or more depth cameras employing structured light patterns, such as infrared light, or the like). The apparatus 103 is deployed within the retail facility and, via communication with the server 101 and use of the sensors 108, navigates autonomously or partially autonomously along a length 119 of at least a portion of the shelves 110.
While navigating among the shelves 110, the apparatus 103 can capture images, depth measurements and the like, representing the shelves 110 (generally referred to as shelf data or captured data). Navigation may be performed according to a frame of reference 102 established within the retail facility. The apparatus 103 therefore tracks its pose (i.e. location and orientation) in the frame of reference 102. The apparatus 103 can navigate the facility by generating paths from origin locations to destination locations. For example, to traverse an aisle while capturing data representing the shelves 110 of that aisle, the apparatus 103 can generate a path that traverses the aisle.
The server 101 includes a special purpose controller, such as a processor 120, specifically designed to control and/or assist the mobile automation apparatus 103 to navigate the environment and to capture data. The processor 120 is interconnected with a non-transitory computer readable storage medium, such as a memory 122, having stored thereon computer readable instructions for performing various functionality, including control of the apparatus 103 to navigate the modules 110 and capture shelf data, as well as post-processing of the shelf data. The memory 122 can also store data for use in the above-mentioned control of the apparatus 103, such as a repository 123 containing a map of the retail environment and any other suitable data (e.g. operational constraints for use in controlling the apparatus 103, data captured by the apparatus 103, and the like).
The memory 122 includes a combination of volatile memory (e.g. Random Access Memory or RAM) and non-volatile memory (e.g. read only memory or ROM, Electrically Erasable Programmable Read Only Memory or EEPROM, flash memory). The processor 120 and the memory 122 each comprise one or more integrated circuits. In some embodiments, the processor 120 is implemented as one or more central processing units (CPUs) and/or graphics processing units (GPUs).
The server 101 also includes a communications interface 124 interconnected with the processor 120. The communications interface 124 includes suitable hardware (e.g. transmitters, receivers, network interface controllers and the like) allowing the server 101 to communicate with other computing devices—particularly the apparatus 103, the client device 104 and the dock 106—via the links 105 and 107. The links 105 and 107 may be direct links, or links that traverse one or more networks, including both local and wide-area networks. The specific components of the communications interface 124 are selected based on the type of network or other links that the server 101 is required to communicate over. In the present example, as noted earlier, a wireless local-area network is implemented within the retail facility via the deployment of one or more wireless access points. The links 105 therefore include either or both wireless links between the apparatus 103 and the client device 104 and the above-mentioned access points, and a wired link (e.g. an Ethernet-based link) between the server 101 and the access point.
The processor 120 can therefore obtain data captured by the apparatus 103 via the communications interface 124 for storage (e.g. in the repository 123) and subsequent processing (e.g. to detect objects such as shelved products in the captured data, and detect status information corresponding to the objects). The server 101 may also transmit status notifications (e.g. notifications indicating that products are out-of-stock, in low stock or misplaced) to the client device 104 responsive to the determination of product status data. The client device 104 includes one or more controllers (e.g. central processing units (CPUs) and/or field-programmable gate arrays (FPGAs) and the like) configured to process (e.g. to display) notifications received from the server 101.
Turning now to
The mast 205 also supports at least one depth sensor 209, such as a 3D digital camera capable of capturing both depth data and image data. The apparatus 103 also includes additional depth sensors, such as LIDAR sensors 211. In the present example, the mast 205 supports two LIDAR sensors 211-1 and 211-2. As shown in
The mast 205 also supports a plurality of illumination assemblies 213, configured to illuminate the fields of view of the respective cameras 207. That is, the illumination assembly 213-1 illuminates the field of view of the camera 207-1, and so on. The cameras 207 and LIDAR sensors 211 are oriented on the mast 205 such that the fields of view of the sensors each face a shelf 110 along the length 119 of which the apparatus 103 is traveling. As noted earlier, the apparatus 103 is configured to track a pose of the apparatus 103 (e.g. a location and orientation of the center of the chassis 201) in the frame of reference 102, permitting data captured by the apparatus 103 to be registered to the frame of reference 102 for subsequent processing.
Referring to
The processor 300, when so configured by the execution of the application 308, may also be referred to as a controller 300. Those skilled in the art will appreciate that the functionality implemented by the processor 300 via the execution of the application 308 may also be implemented by one or more specially designed hardware and firmware components, such as FPGAs, ASICs and the like in other embodiments.
The memory 304 may also store a repository 312 containing, for example, a map of the environment in which the apparatus 103 operates, for use during the execution of the application 308 (i.e. during the detection of the end of the aisle). The apparatus 103 also includes a communications interface 316 enabling the apparatus 103 to communicate with the server 101 (e.g. via the link 105 or via the dock 106 and the link 107), for example to receive instructions to navigate to specified locations and initiate data capture operations. The application 308 can include a segment detector 320 configured to detect shelf edge segments, a trust region generator 324 to generate dynamic trust regions and determine whether the shelf edge segments are acceptable, and a feature detector 328 configured to detect end-of-aisle features, such as a vertical segment of a shelf module.
In addition to the sensors mentioned earlier, the apparatus 103 includes a motion sensor 318, such as one or more wheel odometers coupled to the locomotive assembly 203. The motion sensor 318 can also include, in addition to or instead of the above-mentioned wheel odometer(s), an inertial measurement unit (IMU) configured to measure acceleration along a plurality of axes.
The actions performed by the apparatus 103, and specifically by the processor 300 as configured via execution of the application 308, to detect ends of aisles will now be discussed in greater detail with reference to
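Before turning to the individual blocks, the overall flow of the method 400 can be summarized in the following Python sketch. This is a minimal illustration only: every function name and the loop structure are hypothetical placeholders standing in for the per-block operations described below, not identifiers or an implementation from the apparatus 103.

```python
# Hedged sketch of the overall flow of method 400. Every name here is a
# hypothetical placeholder for the per-block operations described below.

THRESHOLD_M = 0.30  # end-of-aisle distance threshold (e.g. about 30 cm)

def run_method_400(capture, read_locomotive, generate_region, detect_segment,
                   in_region, extend, distance_to_end):
    accumulated = None
    while True:
        image, depths = capture()                     # block 405: sensor data
        loco = read_locomotive()                      # block 405: locomotive data
        region = generate_region(loco, accumulated)   # block 410: trust region
        segment = detect_segment(image, depths)       # block 415: edge segment
        if segment is not None and in_region(segment, region):   # block 420
            accumulated = extend(accumulated, segment)            # block 425
        if accumulated is not None and \
           distance_to_end(loco, accumulated) < THRESHOLD_M:      # block 435
            return accumulated                        # block 440: signal end
```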
At block 405, the processor 300 is configured to obtain image data and depth measurements captured, respectively, by an image sensor and a depth sensor and corresponding to an area containing a portion of the aisle. In particular, the area may contain shelf modules and support surfaces on the shelf modules. The image data and depth measurements obtained at block 405 are, for example, captured by the apparatus 103 and stored in the repository 312. The processor 300 is therefore configured, in the above example, to obtain the image data and the depth measurements by retrieving the image data and the depth measurements from the repository 312.
In some examples, the processor 300 can also be configured to perform one or more filtering operations on the depth measurements. For example, depth measurements greater than a predefined threshold may be discarded from the data captured at block 405. Such measurements may be indicative of surfaces beyond the shelf backs 116 (e.g. a ceiling, or a wall behind a shelf back 116). The predefined threshold may be selected, for example, as the sum of the known depth of a shelf 110 and the known width of an aisle.
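As a concrete illustration of this filtering operation, the following sketch discards depth measurements beyond the sum of an assumed shelf depth and aisle width. The numeric values, and the convention that depth lies along the first coordinate, are assumptions for illustration only.

```python
import numpy as np

# Illustrative shelf depth and aisle width; the predefined threshold is
# their sum, per the example above.
SHELF_DEPTH_M = 0.5
AISLE_WIDTH_M = 2.0
MAX_DEPTH_M = SHELF_DEPTH_M + AISLE_WIDTH_M

def filter_depths(points: np.ndarray) -> np.ndarray:
    """Keep only points whose depth (assumed first coordinate) is plausible."""
    return points[points[:, 0] <= MAX_DEPTH_M]

points = np.array([[1.2, 0.1, 0.5], [3.4, 0.0, 1.0]])  # [depth, y, z] per row
print(filter_depths(points))  # the 3.4 m point (e.g. a wall) is discarded
```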
The processor 300 is further configured, at block 405, to obtain locomotive data related to the movement and position of the apparatus 103. The locomotive data can include a velocity of the apparatus 103 and a pose of the apparatus 103. In particular, the pose can include a distance and a yaw of the apparatus 103 relative to an estimated shelf edge. Additionally, the pose can include a confidence level indicating a level of confidence (e.g. expressed as a fraction, percentage, or the like) in the accuracy of the distance and yaw values provided.
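The locomotive data may be represented, for example, by a simple structure such as the following; the field names are illustrative assumptions rather than identifiers from the application 308.

```python
from dataclasses import dataclass

# Illustrative container for the locomotive data described above.
@dataclass
class LocomotiveData:
    velocity: float    # m/s along the aisle
    distance: float    # m from the estimated shelf edge
    yaw: float         # radians relative to the estimated shelf edge
    confidence: float  # 0.0-1.0 confidence in the distance and yaw values
```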
At block 410, the processor 300, and in particular the trust region generator 324, generates a dynamic trust region based on the locomotive data obtained at block 405. Generally, the dynamic trust region is a three-dimensional space for assessing the usability of shelf edge segments in detecting the estimated end of the aisle, as will be described in greater detail below. The dynamic trust region may have a base represented by a two-dimensional shape, and a predetermined height (e.g. extending from the floor of the aisle to a top of the shelf module). Thus, the processor 300 may control the shape and size of the dynamic trust region based on the locomotive data obtained at block 405 to accept data points which are expected to be more accurate.
For example, referring to
In particular, the width 512 of the base 510 may vary based on the current velocity of the apparatus 103 and the current pose of the apparatus 103. For example, when the apparatus 103 is travelling at a high velocity, the data collected is expected to be of poorer quality relative to the data collected when travelling at a low velocity. Thus, as the velocity of the apparatus 103 increases, the width 512 decreases, thereby reducing the area of the base 510. That is, the size of the dynamic trust region 500 varies inversely with the velocity of the apparatus 103. Similarly, when the locomotive data of the apparatus 103 includes a pose having a low confidence level (i.e. higher uncertainty), the data collected is expected to be of poorer quality relative to the data collected with a high-confidence pose. Thus, as the confidence level decreases, the width 512 also decreases. That is, the size of the dynamic trust region 500 varies directly with the confidence level of the pose of the apparatus 103. More generally, when the locomotive data is indicative of high-quality data, the dynamic trust region 500 increases in size and allows more data points to be accepted. In contrast, when the locomotive data is indicative of low-quality data, the dynamic trust region 500 decreases in size.
In the present example, the altitude 514 of the base 510 is defined based on a predefined relationship with the width (i.e. ⅓ of the width 512). In other examples, the altitude 514 may also vary based on the locomotive parameters of the apparatus 103 or may be fixed.
The angle α of the base 510 may vary based on the current yaw of the apparatus 103 relative to the estimated shelf edge. For example, when the yaw is low relative to the estimated shelf edge (i.e. the apparatus 103 is travelling substantially parallel to the estimated shelf edge), the processor 300 may expect that the detected shelf edge segments are more likely to represent shelf edges than when the yaw is high. In particular, when the yaw is high, the apparatus 103 is more likely to detect segments representing, for example, edges of products on the shelves, in addition to the shelf edge segments. Thus, when yaw is high, the angle α may be increased, thereby skewing the base 510 to accept data points further along the length of the aisle, where detected edge segments are more likely to be shelf edge segments.
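A minimal sketch of this shape computation is shown below. The disclosure does not specify particular formulas, so the scaling constants and the exact dependence of the width and skew angle on velocity, confidence, and yaw are illustrative assumptions; only the qualitative relationships (width inversely related to velocity, directly related to confidence; altitude one third of the width; skew growing with yaw) follow the description above.

```python
import math
from dataclasses import dataclass

@dataclass
class TrustRegion:
    width: float     # of the base, along the aisle
    altitude: float  # of the base (1/3 of the width, per the example above)
    skew: float      # angle alpha of the base, in radians
    height: float    # predetermined vertical extent (floor to top of module)

BASE_WIDTH_M = 1.5     # illustrative nominal width
MODULE_HEIGHT_M = 2.0  # illustrative module height

def generate_trust_region(velocity: float, confidence: float,
                          yaw: float) -> TrustRegion:
    # Width shrinks as velocity rises and grows with pose confidence, so
    # lower-quality locomotive data admits fewer data points.
    width = BASE_WIDTH_M * confidence / (1.0 + velocity)
    # Skew the base further along the aisle when yaw is high, where detected
    # edge segments are more likely to be true shelf edges.
    skew = min(abs(yaw) * 2.0, math.radians(45))
    return TrustRegion(width=width, altitude=width / 3.0, skew=skew,
                       height=MODULE_HEIGHT_M)

print(generate_trust_region(velocity=0.5, confidence=0.9, yaw=0.1))
```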
The processor 300 is further configured, at block 410, to determine a placement of the dynamic trust region. Specifically, the processor 300 obtains an accumulated segment representing the current shelf edge estimate, as determined based on previously detected and accepted shelf edge estimates. The processor 300 then situates the dynamic trust region relative to the accumulated segment.
For example, referring to
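One possible placement computation, assuming the region is anchored at the leading endpoint of the accumulated segment and oriented along it, is sketched below; the disclosure does not prescribe this particular parameterization.

```python
import numpy as np

# Hedged sketch of the placement step of block 410: anchor the trust region
# at the leading endpoint of the accumulated segment, oriented along it.

def place_region(acc_start, acc_end):
    a, b = np.asarray(acc_start, float), np.asarray(acc_end, float)
    direction = (b - a) / np.linalg.norm(b - a)
    return b, direction  # region origin at the current edge estimate's end

print(place_region((0, 0), (3, 0)))  # origin (3, 0), direction (1, 0)
```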
Returning to
In an embodiment, the processor 300 may detect the shelf edge segment based on detecting Hough lines and segmenting the depth measurements using the Hough lines as seeds. Specifically, the processor 300 first detects preliminary edges, for example using Canny edge detection, on the image data. In some examples, the processor 300 may first apply preprocessing operations, such as converting the image to greyscale and blurring it, to retain only strong edges. The processor 300 is then configured to detect Hough lines based on the preliminary edges and filter out the Hough lines within a threshold angle of vertical (i.e. filter out Hough lines which are unlikely to represent shelf edges). The processor 300 overlays the Hough lines with the depth measurements (e.g. using a predefined correspondence between the image sensor and the depth sensor) and uses the corresponding depth measurements as seeds for segmenting the depth measurements into different object classes, where each object class represents a distinct object in the aisle (e.g. different shelf edges, products, or the like). The processor 300 selects the largest class satisfying predefined constraints (e.g. expected minimum point density, size, and shape) as representing a shelf edge. The processor 300 then applies a line-fitting model to that class of depth measurements. The resulting line segment fitted to the class defines the detected shelf edge segment.
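A simplified sketch of this pipeline, using OpenCV's Canny and probabilistic Hough transform on a synthetic image, is shown below. The parameter values are illustrative assumptions, and the depth-seeded segmentation described above is abbreviated here to a least-squares line fit over the endpoints of the surviving Hough lines.

```python
import cv2
import numpy as np

image = np.zeros((200, 400), dtype=np.uint8)
cv2.line(image, (10, 120), (390, 125), 255, 3)   # synthetic shelf edge

blurred = cv2.GaussianBlur(image, (5, 5), 0)     # preprocessing: blur
edges = cv2.Canny(blurred, 50, 150)              # preliminary edges
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=60,
                        minLineLength=40, maxLineGap=10)

VERTICAL_THRESHOLD_DEG = 30.0
candidates = []
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        angle = abs(np.degrees(np.arctan2(y2 - y1, x2 - x1)))
        # discard lines within the threshold angle of vertical
        if abs(angle - 90.0) > VERTICAL_THRESHOLD_DEG:
            candidates.append((x1, y1, x2, y2))

if candidates:
    # stand-in for the depth-seeded segmentation and line fit
    pts = np.array(candidates, dtype=np.float32).reshape(-1, 2)
    vx, vy, x0, y0 = cv2.fitLine(pts, cv2.DIST_L2, 0, 0.01, 0.01).ravel()
    print("fitted shelf edge direction:", float(vx), float(vy))
```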
In other embodiments, other methods of detecting shelf edge segments are contemplated.
In some embodiments, the processor 300 may further be configured, at block 415, to detect additional aisle features defining the end of the aisle. In particular, the feature detector 328 may use the image data and the depth measurements to detect a vertical edge representing a vertical edge of a shelf module 110, which thus defines the end of the aisle.
At block 605, the processor 300 is configured to detect preliminary edges, for example using Canny edge detection, in the image data. In some examples, the processor 300 may first apply preprocessing operations, such as converting the image data to greyscale and blurring it, to retain only strong edges.
At block 610, the processor 300 is configured to apply a filter, such as a convolutional filter or a dilation filter, to the preliminary edges to increase the thickness of the preliminary edges.
At block 615, the processor 300 is configured to detect Hough lines based on the preliminary edges and select Hough lines representative of the end-of-aisle features. For example, the processor 300 may select Hough lines within a threshold angle of vertical for further processing and discard other Hough lines (e.g. Hough lines representing shelf edges).
At block 620, the processor 300 is configured to grow vertical segments based on the Hough lines and the filtered image data (i.e. the filtered preliminary edges). Specifically, the processor 300 overlays the Hough lines with the filtered image data and uses the corresponding pixels as seeds. The processor 300 then grows the segments vertically by determining whether the pixels above and/or below a seed are also edge pixels in the filtered image data. In particular, using the filtered image data increases the likelihood that edge pixels will have upward and downward neighbors that are also edge pixels, allowing shorter vertical segments (e.g. representing objects further away) to be connected, as illustrated in the sketch below.
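The following is a minimal sketch of this growing step, assuming a boolean edge mask and a single seed pixel; the function name and the single-column walk are illustrative simplifications.

```python
import numpy as np

# Hedged sketch of block 620: grow a vertical segment from a seed pixel by
# walking up and down through edge pixels of the filtered edge image.

def grow_vertical_segment(edge_mask: np.ndarray, seed_row: int, seed_col: int):
    rows = [seed_row]
    # walk upwards, then downwards, while neighbors remain edge pixels
    for step in (-1, 1):
        r = seed_row + step
        while 0 <= r < edge_mask.shape[0] and edge_mask[r, seed_col]:
            rows.append(r)
            r += step
    return min(rows), max(rows)  # top and bottom of the grown segment

mask = np.zeros((10, 5), dtype=bool)
mask[2:9, 2] = True                       # a vertical run of edge pixels
print(grow_vertical_segment(mask, 5, 2))  # -> (2, 8)
```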
At block 625, the processor 300 determines whether any of the vertical segments are within a predefined threshold height. Specifically, the processor 300 overlays the vertical segments obtained at block 620 with the depth measurements and uses the depth measurements to determine the relative height of the vertical segments.
When one of the vertical segments is within a threshold height (e.g. approximately a known height of the shelf modules 110), the processor 300 proceeds to block 630 to identify the vertical segment as the end-of-aisle feature.
When none of the vertical segments is within the threshold height, the processor 300 ends the method 600 and returns to block 420.
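The height comparison at blocks 625 and 630 can be illustrated as follows; the module height and tolerance are illustrative assumptions standing in for the known height of the shelf modules 110 and the "approximately" of the text.

```python
# Hedged sketch of blocks 625/630: compare a grown segment's metric height,
# recovered from the overlaid depth measurements, to the known module height.
MODULE_HEIGHT_M = 2.0
HEIGHT_TOLERANCE_M = 0.2  # illustrative tolerance for "approximately"

def is_end_of_aisle_feature(segment_height_m: float) -> bool:
    return abs(segment_height_m - MODULE_HEIGHT_M) <= HEIGHT_TOLERANCE_M

print(is_end_of_aisle_feature(1.95))  # True: identified as end-of-aisle feature
print(is_end_of_aisle_feature(0.6))   # False: e.g. a product edge
```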
At block 420, the processor 300, and in particular the trust region generator 324, determines whether the shelf edge segment detected at block 415 is within the dynamic trust region generated at block 410. In particular, the processor 300 determines whether the shelf edge segment is located at least partially in the dynamic trust region, based on the depth measurements. For example, shelf edge segments which have one endpoint within the dynamic trust region, or which pass through the dynamic trust region, may be accepted at block 420. In some embodiments, at least a threshold proportion (e.g. above 50%) of the detected shelf edge segment must be contained in the dynamic trust region for the segment to be accepted at block 420.
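One way to implement the threshold-proportion variant is to sample points along the segment and count how many fall inside the region, as in the sketch below; the axis-aligned box stands in for the skewed trapezoidal prism described above, as a simplifying assumption.

```python
import numpy as np

ACCEPT_PROPORTION = 0.5  # e.g. above 50%, per the text

def segment_in_region(p0, p1, box_min, box_max, samples: int = 50) -> bool:
    t = np.linspace(0.0, 1.0, samples)[:, None]
    pts = (1.0 - t) * np.asarray(p0) + t * np.asarray(p1)  # points on segment
    inside = np.all((pts >= box_min) & (pts <= box_max), axis=1)
    return inside.mean() >= ACCEPT_PROPORTION

# 75% of this segment lies inside the box, so it is accepted.
print(segment_in_region((0, 0, 1), (2, 0, 1), (0.5, -1, 0), (3, 1, 2)))
```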
If the determination at block 420 is affirmative, the method 400 proceeds to block 425. At block 425, the processor 300 is configured to update an estimated end of the aisle based on the detected edge segment. Specifically, the processor 300 adds the shelf edge segment detected at block 415 to the accumulated segment. For example, the processor 300 may be configured to extend the accumulated segment to the current shelf edge segment, for example by connecting nearest endpoints of the accumulated segment and the current shelf edge segment. In other examples, the processor 300 may be configured to employ one or more line-fitting models based on the current shelf edge segment and at least a portion of the accumulated segment to extend the accumulated segment. The processor 300 may then estimate the endpoint of the extended accumulated segment as the estimated end of the aisle.
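The nearest-endpoint variant of this extension might look as follows; the representation of a segment as a pair of 3D endpoints is an assumption for illustration.

```python
import numpy as np

# Hedged sketch of block 425: extend the accumulated segment to the newly
# accepted shelf edge segment by joining the nearest pair of endpoints.

def extend_accumulated(acc, new):
    """acc and new are (2, 3) arrays of segment endpoints."""
    acc, new = np.asarray(acc, float), np.asarray(new, float)
    dists = np.linalg.norm(acc[:, None, :] - new[None, :, :], axis=2)
    ai, ni = np.unravel_index(np.argmin(dists), dists.shape)
    far_acc = acc[1 - ai]  # keep the opposite end of the accumulated segment
    far_new = new[1 - ni]  # extend out to the opposite end of the new segment
    return np.stack([far_acc, far_new])  # far end estimates the aisle end

acc = [[0, 0, 0], [2, 0, 0]]
new = [[2.1, 0.05, 0], [3.0, 0.05, 0]]
print(extend_accumulated(acc, new))  # roughly [[0, 0, 0], [3, 0.05, 0]]
```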
For example, referring to
Returning again to
At block 435, the processor 300 determines whether the current distance to the estimated end of the aisle is less than a threshold distance (e.g. about 30 cm). Specifically, the processor 300 projects the pose of the apparatus 103 onto the accumulated segment and determines the current distance from the projected pose to the endpoint defining the estimated end of the aisle. If the current distance is not less than the threshold distance, the method 400 returns to block 405 to continue iteratively extending the accumulated segment representing the aisle edge until the accumulated segment cannot be extended further and the estimated end of the aisle is reached.
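The projection and distance check at block 435 can be sketched as follows, assuming a planar pose position and a segment given by two endpoints; the threshold value follows the example above.

```python
import numpy as np

THRESHOLD_M = 0.30  # e.g. about 30 cm

def distance_to_end(pose_xy, seg_start, seg_end) -> float:
    p, a, b = (np.asarray(v, float) for v in (pose_xy, seg_start, seg_end))
    d = b - a
    t = np.clip(np.dot(p - a, d) / np.dot(d, d), 0.0, 1.0)
    projected = a + t * d                        # pose projected onto segment
    return float(np.linalg.norm(b - projected))  # distance to estimated end

print(distance_to_end((2.8, 0.4), (0, 0), (3, 0)) < THRESHOLD_M)  # True
```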
In some embodiments, the processor 300 may check the current distance to the estimated end of the aisle based on the end-of-aisle feature. Thus, for example, the processor 300 may identify a positive result if the current distance to the end of the aisle, as estimated by either the accumulated segment or the end-of-aisle feature, is less than a threshold distance.
At block 440, the processor 300 generates an indication that the apparatus 103 is within the threshold distance from the end of the aisle. The indication may be propagated to other components of the apparatus 103, such as a navigational controller, to initiate end-of-aisle operations, such as stopping, turning around, turning off lights, sensors, and the like. In particular, the processor 300 may generate different indications based on the apparatus 103 being within the threshold distance of the end-of-aisle feature, the estimated end of aisle based on the accumulated segment, or both.
The method 400 allows the end of the aisle to be estimated based on the estimated shelf edge, thus mitigating the risk of false-positive and false-negative end-of-aisle identifications as compared to point cloud density methods. In particular, the effect of sparsely populated shelves having low point density (a source of false positives) and of clutter beyond the end of an aisle having higher point density (a source of false negatives) is reduced.
In the foregoing specification, specific embodiments have been described.
However, one of ordinary skill in the art appreciates that various modifications and changes can be made without departing from the scope of the invention as set forth in the claims below. Accordingly, the specification and figures are to be regarded in an illustrative rather than a restrictive sense, and all such modifications are intended to be included within the scope of present teachings.
The benefits, advantages, solutions to problems, and any element(s) that may cause any benefit, advantage, or solution to occur or become more pronounced are not to be construed as critical, required, or essential features or elements of any or all the claims. The invention is defined solely by the appended claims including any amendments made during the pendency of this application and all equivalents of those claims as issued.
Moreover, in this document, relational terms such as first and second, top and bottom, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. The terms “comprises,” “comprising,” “has,” “having,” “includes,” “including,” “contains,” “containing,” or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises, has, includes, or contains a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. An element preceded by “comprises . . . a”, “has . . . a”, “includes . . . a”, or “contains . . . a” does not, without more constraints, preclude the existence of additional identical elements in the process, method, article, or apparatus that comprises, has, includes, or contains the element. The terms “a” and “an” are defined as one or more unless explicitly stated otherwise herein. The terms “substantially”, “essentially”, “approximately”, “about”, or any other version thereof are defined as being close to as understood by one of ordinary skill in the art, and in one non-limiting embodiment the term is defined to be within 10%, in another embodiment within 5%, in another embodiment within 1%, and in another embodiment within 0.5%. The term “coupled” as used herein is defined as connected, although not necessarily directly and not necessarily mechanically. A device or structure that is “configured” in a certain way is configured in at least that way, but may also be configured in ways that are not listed.
It will be appreciated that some embodiments may be comprised of one or more specialized processors (or “processing devices”) such as microprocessors, digital signal processors, customized processors and field programmable gate arrays (FPGAs) and unique stored program instructions (including both software and firmware) that control the one or more processors to implement, in conjunction with certain non-processor circuits, some, most, or all of the functions of the method and/or apparatus described herein. Alternatively, some or all functions could be implemented by a state machine that has no stored program instructions, or in one or more application specific integrated circuits (ASICs), in which each function or some combinations of certain of the functions are implemented as custom logic. Of course, a combination of the two approaches could be used.
Moreover, an embodiment can be implemented as a computer-readable storage medium having computer readable code stored thereon for programming a computer (e.g., comprising a processor) to perform a method as described and claimed herein. Examples of such computer-readable storage mediums include, but are not limited to, a hard disk, a CD-ROM, an optical storage device, a magnetic storage device, a ROM (Read Only Memory), a PROM (Programmable Read Only Memory), an EPROM (Erasable Programmable Read Only Memory), an EEPROM (Electrically Erasable Programmable Read Only Memory) and a Flash memory. Further, it is expected that one of ordinary skill, notwithstanding possibly significant effort and many design choices motivated by, for example, available time, current technology, and economic considerations, when guided by the concepts and principles disclosed herein will be readily capable of generating such software instructions and programs and ICs with minimal experimentation.
The Abstract of the Disclosure is provided to allow the reader to quickly ascertain the nature of the technical disclosure. It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. In addition, in the foregoing Detailed Description, it can be seen that various features are grouped together in various embodiments for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed embodiments require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separately claimed subject matter.
Number | Name | Date | Kind |
---|---|---|---|
5209712 | Ferri | May 1993 | A |
5214615 | Bauer | May 1993 | A |
5408322 | Hsu et al. | Apr 1995 | A |
5414268 | McGee | May 1995 | A |
5423617 | Marsh et al. | Jun 1995 | A |
5534762 | Kim | Jul 1996 | A |
5566280 | Fukui et al. | Oct 1996 | A |
5704049 | Briechle | Dec 1997 | A |
5953055 | Huang et al. | Sep 1999 | A |
5988862 | Kacyra et al. | Nov 1999 | A |
6026376 | Kenney | Feb 2000 | A |
6034379 | Bunte et al. | Mar 2000 | A |
6075905 | Herman et al. | Jun 2000 | A |
6115114 | Berg et al. | Sep 2000 | A |
6141293 | Amorai-Moriya et al. | Oct 2000 | A |
6304855 | Burke | Oct 2001 | B1 |
6442507 | Skidmore et al. | Aug 2002 | B1 |
6549825 | Kurata | Apr 2003 | B2 |
6580441 | Schileru-Key | Jun 2003 | B2 |
6711293 | Lowe | Mar 2004 | B1 |
6721723 | Gibson et al. | Apr 2004 | B1 |
6721769 | Rappaport et al. | Apr 2004 | B1 |
6836567 | Silver et al. | Dec 2004 | B1 |
6995762 | Pavlidis et al. | Feb 2006 | B1 |
7090135 | Patel | Aug 2006 | B2 |
7137207 | Armstrong et al. | Nov 2006 | B2 |
7245558 | Willins et al. | Jul 2007 | B2 |
7248754 | Cato | Jul 2007 | B2 |
7277187 | Smith et al. | Oct 2007 | B2 |
7373722 | Cooper et al. | May 2008 | B2 |
7474389 | Greenberg et al. | Jan 2009 | B2 |
7487595 | Armstrong et al. | Feb 2009 | B2 |
7493336 | Noonan | Feb 2009 | B2 |
7508794 | Feather et al. | Mar 2009 | B2 |
7527205 | Zhu et al. | May 2009 | B2 |
7605817 | Zhang et al. | Oct 2009 | B2 |
7647752 | Magnell | Jan 2010 | B2 |
7693757 | Zimmerman | Apr 2010 | B2 |
7726575 | Wang et al. | Jun 2010 | B2 |
7751928 | Antony et al. | Jul 2010 | B1 |
7783383 | Eliuk et al. | Aug 2010 | B2 |
7839531 | Sugiyama | Nov 2010 | B2 |
7845560 | Emanuel et al. | Dec 2010 | B2 |
7885865 | Benson et al. | Feb 2011 | B2 |
7925114 | Mai et al. | Apr 2011 | B2 |
7957998 | Riley et al. | Jun 2011 | B2 |
7996179 | Lee et al. | Aug 2011 | B2 |
8009864 | Linaker et al. | Aug 2011 | B2 |
8049621 | Egan | Nov 2011 | B1 |
8091782 | Cato et al. | Jan 2012 | B2 |
8094902 | Crandall et al. | Jan 2012 | B2 |
8094937 | Teoh et al. | Jan 2012 | B2 |
8132728 | Dwinell et al. | Mar 2012 | B2 |
8134717 | Pangrazio et al. | Mar 2012 | B2 |
8189855 | Opalach et al. | May 2012 | B2 |
8199977 | Krishnaswamy et al. | Jun 2012 | B2 |
8207964 | Meadow et al. | Jun 2012 | B1 |
8233055 | Matsunaga et al. | Jul 2012 | B2 |
8260742 | Cognigni et al. | Sep 2012 | B2 |
8265895 | Willins et al. | Sep 2012 | B2 |
8277396 | Scott et al. | Oct 2012 | B2 |
8284988 | Sones et al. | Oct 2012 | B2 |
8423431 | Rouaix et al. | Apr 2013 | B1 |
8429004 | Hamilton et al. | Apr 2013 | B2 |
8463079 | Ackley et al. | Jun 2013 | B2 |
8479996 | Barkan et al. | Jul 2013 | B2 |
8520067 | Ersue | Aug 2013 | B2 |
8542252 | Perez et al. | Sep 2013 | B2 |
8571314 | Tao et al. | Oct 2013 | B2 |
8599303 | Stettner | Dec 2013 | B2 |
8630924 | Groenevelt et al. | Jan 2014 | B2 |
8743176 | Stettner et al. | Jun 2014 | B2 |
8757479 | Clark et al. | Jun 2014 | B2 |
8812226 | Zeng | Aug 2014 | B2 |
8923893 | Austin et al. | Dec 2014 | B2 |
8939369 | Olmstead et al. | Jan 2015 | B2 |
8954188 | Sullivan et al. | Feb 2015 | B2 |
8958911 | Wong et al. | Feb 2015 | B2 |
8971637 | Rivard | Mar 2015 | B1 |
8989342 | Liesenfelt et al. | Mar 2015 | B2 |
9007601 | Steffey et al. | Apr 2015 | B2 |
9037287 | Grauberger et al. | May 2015 | B1 |
9064394 | Trundle | Jun 2015 | B1 |
9070285 | Ramu et al. | Jun 2015 | B1 |
9072929 | Rush et al. | Jul 2015 | B1 |
9120622 | Elazary et al. | Sep 2015 | B1 |
9129277 | Macintosh | Sep 2015 | B2 |
9135491 | Morandi et al. | Sep 2015 | B2 |
9159047 | Winkel | Oct 2015 | B2 |
9171442 | Clements | Oct 2015 | B2 |
9247211 | Zhang et al. | Jan 2016 | B2 |
9329269 | Zeng | May 2016 | B2 |
9349076 | Liu et al. | May 2016 | B1 |
9367831 | Besehanic | Jun 2016 | B1 |
9380222 | Clayton et al. | Jun 2016 | B2 |
9396554 | Williams et al. | Jul 2016 | B2 |
9400170 | Steffey | Jul 2016 | B2 |
9424482 | Patel et al. | Aug 2016 | B2 |
9517767 | Kentley et al. | Dec 2016 | B1 |
9542746 | Wu et al. | Jan 2017 | B2 |
9549125 | Goyal et al. | Jan 2017 | B1 |
9562971 | Shenkar et al. | Feb 2017 | B2 |
9565400 | Curlander et al. | Feb 2017 | B1 |
9589353 | Mueller-Fischer et al. | Mar 2017 | B2 |
9600731 | Yasunaga et al. | Mar 2017 | B2 |
9600892 | Patel et al. | Mar 2017 | B2 |
9612123 | Levinson et al. | Apr 2017 | B1 |
9639935 | Douady-Pleven et al. | May 2017 | B1 |
9660338 | Wild et al. | May 2017 | B2 |
9697429 | Patel et al. | Jul 2017 | B2 |
9766074 | Roumeliotis et al. | Sep 2017 | B2 |
9778388 | Connor | Oct 2017 | B1 |
9779205 | Namir | Oct 2017 | B2 |
9791862 | Connor | Oct 2017 | B1 |
9805240 | Zheng et al. | Oct 2017 | B1 |
9811754 | Schwartz | Nov 2017 | B2 |
9827683 | Hance et al. | Nov 2017 | B1 |
9880009 | Bell | Jan 2018 | B2 |
9928708 | Lin et al. | Mar 2018 | B2 |
9953420 | Wolski et al. | Apr 2018 | B2 |
9980009 | Jiang et al. | May 2018 | B2 |
9994339 | Colson et al. | Jun 2018 | B2 |
9996818 | Ren et al. | Jun 2018 | B1 |
10019803 | Venable et al. | Jul 2018 | B2 |
10111646 | Nycz et al. | Oct 2018 | B2 |
10121072 | Kekatpure | Nov 2018 | B1 |
10127438 | Fisher et al. | Nov 2018 | B1 |
10197400 | Jesudason et al. | Feb 2019 | B2 |
10210603 | Venable et al. | Feb 2019 | B2 |
10229386 | Thomas | Mar 2019 | B2 |
10248653 | Blassin et al. | Apr 2019 | B2 |
10262294 | Hahn et al. | Apr 2019 | B1 |
10265871 | Hance et al. | Apr 2019 | B2 |
10289990 | Rizzolo et al. | May 2019 | B2 |
10336543 | Sills et al. | Jul 2019 | B1 |
10349031 | Deluca | Jul 2019 | B2 |
10352689 | Brown et al. | Jul 2019 | B2 |
10373116 | Medina et al. | Aug 2019 | B2 |
10394244 | Song et al. | Aug 2019 | B2 |
20010031069 | Kondo et al. | Oct 2001 | A1 |
20010041948 | Ross et al. | Nov 2001 | A1 |
20020006231 | Jayant et al. | Jan 2002 | A1 |
20020059202 | Hadzikadic et al. | May 2002 | A1 |
20020097439 | Braica | Jul 2002 | A1 |
20020146170 | Rom | Oct 2002 | A1 |
20020158453 | Levine | Oct 2002 | A1 |
20020164236 | Fukuhara et al. | Nov 2002 | A1 |
20030003925 | Suzuki | Jan 2003 | A1 |
20030094494 | Blanford et al. | May 2003 | A1 |
20030174891 | Wenzel et al. | Sep 2003 | A1 |
20040021313 | Gardner et al. | Feb 2004 | A1 |
20040084527 | Bong et al. | May 2004 | A1 |
20040131278 | Imagawa et al. | Jul 2004 | A1 |
20040240754 | Smith et al. | Dec 2004 | A1 |
20050016004 | Armstrong et al. | Jan 2005 | A1 |
20050114059 | Chang et al. | May 2005 | A1 |
20050174351 | Chang | Aug 2005 | A1 |
20050213082 | DiBernardo et al. | Sep 2005 | A1 |
20050213109 | Schell et al. | Sep 2005 | A1 |
20060032915 | Schwartz | Feb 2006 | A1 |
20060045325 | Zavadsky et al. | Mar 2006 | A1 |
20060106742 | Bochicchio et al. | May 2006 | A1 |
20060279527 | Zehner et al. | Dec 2006 | A1 |
20060285486 | Roberts et al. | Dec 2006 | A1 |
20070036398 | Chen | Feb 2007 | A1 |
20070074410 | Armstrong et al. | Apr 2007 | A1 |
20070272732 | Hindmon | Nov 2007 | A1 |
20080002866 | Fujiwara | Jan 2008 | A1 |
20080025565 | Zhang et al. | Jan 2008 | A1 |
20080027591 | Lenser et al. | Jan 2008 | A1 |
20080077511 | Zimmerman | Mar 2008 | A1 |
20080159634 | Sharma et al. | Jul 2008 | A1 |
20080164310 | Dupuy et al. | Jul 2008 | A1 |
20080175513 | Lai et al. | Jul 2008 | A1 |
20080181529 | Michel et al. | Jul 2008 | A1 |
20080183730 | Enga | Jul 2008 | A1 |
20080238919 | Pack | Oct 2008 | A1 |
20080294487 | Nasser | Nov 2008 | A1 |
20090009123 | Skaff | Jan 2009 | A1 |
20090024353 | Lee et al. | Jan 2009 | A1 |
20090057411 | Madej et al. | Mar 2009 | A1 |
20090059270 | Opalach et al. | Mar 2009 | A1 |
20090060349 | Linaker et al. | Mar 2009 | A1 |
20090063306 | Fano et al. | Mar 2009 | A1 |
20090063307 | Groenovelt et al. | Mar 2009 | A1 |
20090074303 | Filimonova et al. | Mar 2009 | A1 |
20090088975 | Sato et al. | Apr 2009 | A1 |
20090103773 | Wheeler et al. | Apr 2009 | A1 |
20090125350 | Lessing et al. | May 2009 | A1 |
20090125535 | Basso et al. | May 2009 | A1 |
20090152391 | McWhirk | Jun 2009 | A1 |
20090160975 | Kwan | Jun 2009 | A1 |
20090192921 | Hicks | Jul 2009 | A1 |
20090206161 | Olmstead | Aug 2009 | A1 |
20090236155 | Skaff | Sep 2009 | A1 |
20090252437 | Li et al. | Oct 2009 | A1 |
20090287587 | Bloebaum et al. | Nov 2009 | A1 |
20090323121 | Valkenburg et al. | Dec 2009 | A1 |
20100017407 | Beniyama et al. | Jan 2010 | A1 |
20100026804 | Tanizaki et al. | Feb 2010 | A1 |
20100070365 | Siotia et al. | Mar 2010 | A1 |
20100082194 | Yabushita et al. | Apr 2010 | A1 |
20100091094 | Sekowski | Apr 2010 | A1 |
20100118116 | Tomasz et al. | May 2010 | A1 |
20100131234 | Stewart et al. | May 2010 | A1 |
20100141806 | Uemura et al. | Jun 2010 | A1 |
20100161569 | Schreter | Jun 2010 | A1 |
20100171826 | Hamilton et al. | Jul 2010 | A1 |
20100208039 | Setettner | Aug 2010 | A1 |
20100214873 | Somasundaram et al. | Aug 2010 | A1 |
20100235033 | Yamamoto et al. | Sep 2010 | A1 |
20100241289 | Sandberg | Sep 2010 | A1 |
20100257149 | Cognigni et al. | Oct 2010 | A1 |
20100295850 | Katz et al. | Nov 2010 | A1 |
20100315412 | Sinha et al. | Dec 2010 | A1 |
20100326939 | Clark et al. | Dec 2010 | A1 |
20110047636 | Stachon et al. | Feb 2011 | A1 |
20110052043 | Hyung et al. | Mar 2011 | A1 |
20110093306 | Nielsen et al. | Apr 2011 | A1 |
20110137527 | Simon et al. | Jun 2011 | A1 |
20110168774 | Magal | Jul 2011 | A1 |
20110172875 | Gibbs | Jul 2011 | A1 |
20110216063 | Hayes | Sep 2011 | A1 |
20110242286 | Pace et al. | Oct 2011 | A1 |
20110246503 | Bender et al. | Oct 2011 | A1 |
20110254840 | Halstead | Oct 2011 | A1 |
20110286007 | Pangrazio et al. | Nov 2011 | A1 |
20110288816 | Thierman | Nov 2011 | A1 |
20110310088 | Adabala et al. | Dec 2011 | A1 |
20120017028 | Tsirkin | Jan 2012 | A1 |
20120019393 | Wolinsky et al. | Jan 2012 | A1 |
20120022913 | Volkmann et al. | Jan 2012 | A1 |
20120051730 | Cote et al. | Mar 2012 | A1 |
20120069051 | Hagbi et al. | Mar 2012 | A1 |
20120075342 | Choubassi et al. | Mar 2012 | A1 |
20120133639 | Kopf et al. | May 2012 | A1 |
20120169530 | Padmanabhan et al. | Jul 2012 | A1 |
20120179621 | Moir et al. | Jul 2012 | A1 |
20120185112 | Sung et al. | Jul 2012 | A1 |
20120194644 | Newcombe et al. | Aug 2012 | A1 |
20120197464 | Wang et al. | Aug 2012 | A1 |
20120201466 | Funayama et al. | Aug 2012 | A1 |
20120209553 | Doytchinov et al. | Aug 2012 | A1 |
20120236119 | Rhee et al. | Sep 2012 | A1 |
20120249802 | Taylor | Oct 2012 | A1 |
20120250978 | Taylor | Oct 2012 | A1 |
20120269383 | Bobbitt et al. | Oct 2012 | A1 |
20120287249 | Choo et al. | Nov 2012 | A1 |
20120307108 | Forutanpour | Dec 2012 | A1 |
20120323620 | Hofman et al. | Dec 2012 | A1 |
20130030700 | Miller et al. | Jan 2013 | A1 |
20130076586 | Karhuketo et al. | Mar 2013 | A1 |
20130090881 | Janardhanan et al. | Apr 2013 | A1 |
20130119138 | Winkel | May 2013 | A1 |
20130132913 | Fu et al. | May 2013 | A1 |
20130134178 | Lu | May 2013 | A1 |
20130138246 | Gutmann et al. | May 2013 | A1 |
20130142421 | Silver et al. | Jun 2013 | A1 |
20130144565 | Miller et al. | Jun 2013 | A1 |
20130154802 | O'Haire et al. | Jun 2013 | A1 |
20130156292 | Chang et al. | Jun 2013 | A1 |
20130162806 | Ding et al. | Jun 2013 | A1 |
20130176398 | Bonner et al. | Jul 2013 | A1 |
20130178227 | Vartanian et al. | Jul 2013 | A1 |
20130182114 | Zhang et al. | Jul 2013 | A1 |
20130226344 | Wong et al. | Aug 2013 | A1 |
20130228620 | Ahem et al. | Sep 2013 | A1 |
20130232039 | Jackson et al. | Sep 2013 | A1 |
20130235165 | Gharib et al. | Sep 2013 | A1 |
20130235206 | Smith et al. | Sep 2013 | A1 |
20130236089 | Litvak et al. | Sep 2013 | A1 |
20130278631 | Border et al. | Oct 2013 | A1 |
20130299306 | Jiang et al. | Nov 2013 | A1 |
20130299313 | Baek, IV et al. | Nov 2013 | A1 |
20130300729 | Grimaud | Nov 2013 | A1 |
20130303193 | Dharwada et al. | Nov 2013 | A1 |
20130321418 | Kirk | Dec 2013 | A1 |
20130329013 | Metois et al. | Dec 2013 | A1 |
20130341400 | Lancaster-Larocque | Dec 2013 | A1 |
20140002597 | Taguchi et al. | Jan 2014 | A1 |
20140003655 | Gopalakrishnan et al. | Jan 2014 | A1 |
20140003727 | Lortz et al. | Jan 2014 | A1 |
20140006229 | Birch et al. | Jan 2014 | A1 |
20140016832 | Kong et al. | Jan 2014 | A1 |
20140019311 | Tanaka | Jan 2014 | A1 |
20140025201 | Ryu et al. | Jan 2014 | A1 |
20140028837 | Gao et al. | Jan 2014 | A1 |
20140047342 | Breternitz et al. | Feb 2014 | A1 |
20140049616 | Stettner | Feb 2014 | A1 |
20140052555 | Macintosh | Feb 2014 | A1 |
20140086483 | Zhang et al. | Mar 2014 | A1 |
20140098094 | Neumann et al. | Apr 2014 | A1 |
20140100813 | Showering | Apr 2014 | A1 |
20140104413 | McCloskey et al. | Apr 2014 | A1 |
20140129027 | Schnittman | May 2014 | A1 |
20140156133 | Cullinane et al. | Jun 2014 | A1 |
20140161359 | Magri et al. | Jun 2014 | A1 |
20140192050 | Qiu et al. | Jul 2014 | A1 |
20140195374 | Bassemir et al. | Jul 2014 | A1 |
20140214547 | Signorelli et al. | Jul 2014 | A1 |
20140214600 | Argue et al. | Jul 2014 | A1 |
20140267614 | Ding et al. | Sep 2014 | A1 |
20140267688 | Aich et al. | Sep 2014 | A1 |
20140277691 | Jacobus et al. | Sep 2014 | A1 |
20140277692 | Buzan et al. | Sep 2014 | A1 |
20140279294 | Field-Darragh et al. | Sep 2014 | A1 |
20140300637 | Fan et al. | Oct 2014 | A1 |
20140316875 | Tkachenko et al. | Oct 2014 | A1 |
20140330835 | Boyer | Nov 2014 | A1 |
20140344401 | Varney et al. | Nov 2014 | A1 |
20140351073 | Murphy et al. | Nov 2014 | A1 |
20140369607 | Patel et al. | Dec 2014 | A1 |
20150015602 | Beaudoin | Jan 2015 | A1 |
20150019391 | Kumar et al. | Jan 2015 | A1 |
20150029339 | Kobres et al. | Jan 2015 | A1 |
20150032304 | Nakamura et al. | Jan 2015 | A1 |
20150039458 | Reid | Feb 2015 | A1 |
20150052029 | Wu et al. | Feb 2015 | A1 |
20150088618 | Basir et al. | Mar 2015 | A1 |
20150088701 | Desmarais et al. | Mar 2015 | A1 |
20150088703 | Yan | Mar 2015 | A1 |
20150092066 | Geiss et al. | Apr 2015 | A1 |
20150106403 | Haverinen et al. | Apr 2015 | A1 |
20150117788 | Patel et al. | Apr 2015 | A1 |
20150139010 | Jeong et al. | May 2015 | A1 |
20150154467 | Feng et al. | Jun 2015 | A1 |
20150161793 | Takahashi | Jun 2015 | A1 |
20150170256 | Pettyjohn et al. | Jun 2015 | A1 |
20150181198 | Baele et al. | Jun 2015 | A1 |
20150212521 | Pack et al. | Jul 2015 | A1 |
20150235157 | Avegliano et al. | Aug 2015 | A1 |
20150245358 | Schmidt | Aug 2015 | A1 |
20150262116 | Katircioglu et al. | Sep 2015 | A1 |
20150279035 | Wolski et al. | Oct 2015 | A1 |
20150298317 | Wang et al. | Oct 2015 | A1 |
20150310601 | Rodriguez et al. | Oct 2015 | A1 |
20150332368 | Vartiainen et al. | Nov 2015 | A1 |
20150352721 | Wicks et al. | Dec 2015 | A1 |
20150363625 | Wu et al. | Dec 2015 | A1 |
20150363758 | Wu et al. | Dec 2015 | A1 |
20150365660 | Wu et al. | Dec 2015 | A1 |
20150379704 | Chandrasekar et al. | Dec 2015 | A1 |
20160012588 | Taguchi et al. | Jan 2016 | A1 |
20160026253 | Bradski et al. | Jan 2016 | A1 |
20160044862 | Kocer | Feb 2016 | A1 |
20160061591 | Pangrazio et al. | Mar 2016 | A1 |
20160070981 | Sasaki et al. | Mar 2016 | A1 |
20160092943 | Vigier et al. | Mar 2016 | A1 |
20160104041 | Bowers et al. | Apr 2016 | A1 |
20160107690 | Oyama et al. | Apr 2016 | A1 |
20160112628 | Super et al. | Apr 2016 | A1 |
20160114488 | Mascorro Medina et al. | Apr 2016 | A1 |
20160129592 | Saboo et al. | May 2016 | A1 |
20160132815 | Itoko et al. | May 2016 | A1 |
20160134930 | Swafford | May 2016 | A1 |
20160150217 | Popov | May 2016 | A1 |
20160156898 | Ren et al. | Jun 2016 | A1 |
20160163067 | Williams et al. | Jun 2016 | A1 |
20160171336 | Schwartz | Jun 2016 | A1 |
20160171429 | Schwartz | Jun 2016 | A1 |
20160171707 | Schwartz | Jun 2016 | A1 |
20160185347 | Lefevre et al. | Jun 2016 | A1 |
20160191759 | Somanath et al. | Jun 2016 | A1 |
20160224927 | Pettersson | Aug 2016 | A1 |
20160253735 | Scudillo et al. | Sep 2016 | A1 |
20160253844 | Petrovskaya et al. | Sep 2016 | A1 |
20160259983 | Tani | Sep 2016 | A1 |
20160260054 | High et al. | Sep 2016 | A1 |
20160271795 | Vicenti | Sep 2016 | A1 |
20160313133 | Zeng et al. | Oct 2016 | A1 |
20160328618 | Patel et al. | Nov 2016 | A1 |
20160328767 | Bonner et al. | Nov 2016 | A1 |
20160353099 | Thomson et al. | Dec 2016 | A1 |
20160364634 | Davis et al. | Dec 2016 | A1 |
20170004649 | Collet Romea et al. | Jan 2017 | A1 |
20170011281 | Dijkman et al. | Jan 2017 | A1 |
20170011308 | Sun et al. | Jan 2017 | A1 |
20170032311 | Rizzolo et al. | Feb 2017 | A1 |
20170041553 | Cao et al. | Feb 2017 | A1 |
20170054965 | Raab et al. | Feb 2017 | A1 |
20170066459 | Singh | Mar 2017 | A1 |
20170074659 | Giurgiu et al. | Mar 2017 | A1 |
20170109940 | Guo et al. | Apr 2017 | A1 |
20170147966 | Aversa et al. | May 2017 | A1 |
20170150129 | Pangrazio | May 2017 | A1 |
20170178060 | Schwartz | Jun 2017 | A1 |
20170178227 | Gornish | Jun 2017 | A1 |
20170178310 | Gornish | Jun 2017 | A1 |
20170193434 | Shah et al. | Jul 2017 | A1 |
20170219338 | Brown et al. | Aug 2017 | A1 |
20170219353 | Alesiani | Aug 2017 | A1 |
20170227645 | Swope et al. | Aug 2017 | A1 |
20170227647 | Baik | Aug 2017 | A1 |
20170228885 | Baumgartner | Aug 2017 | A1 |
20170261993 | Venable et al. | Sep 2017 | A1 |
20170262724 | Wu et al. | Sep 2017 | A1 |
20170280125 | Brown et al. | Sep 2017 | A1 |
20170286773 | Skaff et al. | Oct 2017 | A1 |
20170286901 | Skaff | Oct 2017 | A1 |
20170323253 | Enssle et al. | Nov 2017 | A1 |
20170323376 | Glaser et al. | Nov 2017 | A1 |
20170337508 | Bogolea et al. | Nov 2017 | A1 |
20170372481 | Onuki | Dec 2017 | A1 |
20180001481 | Shah et al. | Jan 2018 | A1 |
20180005035 | Bogolea et al. | Jan 2018 | A1 |
20180005176 | Williams et al. | Jan 2018 | A1 |
20180020145 | Kotfis et al. | Jan 2018 | A1 |
20180051991 | Hong | Feb 2018 | A1 |
20180053091 | Savvides et al. | Feb 2018 | A1 |
20180053305 | Gu et al. | Feb 2018 | A1 |
20180075403 | Mascorro Medina et al. | Mar 2018 | A1 |
20180089613 | Chen et al. | Mar 2018 | A1 |
20180101813 | Paat et al. | Apr 2018 | A1 |
20180108120 | Venable | Apr 2018 | A1 |
20180108134 | Venable et al. | Apr 2018 | A1 |
20180114183 | Howell | Apr 2018 | A1 |
20180130011 | Jacobsson | May 2018 | A1 |
20180143003 | Clayton et al. | May 2018 | A1 |
20180174325 | Fu et al. | Jun 2018 | A1 |
20180190160 | Bryan et al. | Jul 2018 | A1 |
20180197139 | Hill | Jul 2018 | A1 |
20180201423 | Drzewiecki et al. | Jul 2018 | A1 |
20180204111 | Zadeh et al. | Jul 2018 | A1 |
20180251253 | Taira et al. | Sep 2018 | A1 |
20180276596 | Murthy et al. | Sep 2018 | A1 |
20180281191 | Sinyavskiy et al. | Oct 2018 | A1 |
20180293442 | Fridental et al. | Oct 2018 | A1 |
20180293543 | Tiwari | Oct 2018 | A1 |
20180306958 | Goss et al. | Oct 2018 | A1 |
20180313956 | Rzeszutek et al. | Nov 2018 | A1 |
20180314260 | Jen et al. | Nov 2018 | A1 |
20180314908 | Lam | Nov 2018 | A1 |
20180315007 | Kingsford et al. | Nov 2018 | A1 |
20180315065 | Zhang et al. | Nov 2018 | A1 |
20180315173 | Phan et al. | Nov 2018 | A1 |
20180315865 | Haist et al. | Nov 2018 | A1 |
20180370727 | Hance et al. | Dec 2018 | A1 |
20190034864 | Skaff | Jan 2019 | A1 |
20190057588 | Savvides et al. | Feb 2019 | A1 |
20190065861 | Savvides et al. | Feb 2019 | A1 |
20190073554 | Rzeszutek | Mar 2019 | A1 |
20190073559 | Rzeszutek et al. | Mar 2019 | A1 |
20190073627 | Nakdimon et al. | Mar 2019 | A1 |
20190077015 | Shibasaki et al. | Mar 2019 | A1 |
20190087663 | Yamazaki et al. | Mar 2019 | A1 |
20190094876 | Moore et al. | Mar 2019 | A1 |
20190108606 | Komiyama | Apr 2019 | A1 |
20190178436 | Mao et al. | Jun 2019 | A1 |
20190180150 | Taylor | Jun 2019 | A1 |
20190197439 | Wang | Jun 2019 | A1 |
20190197728 | Yamao | Jun 2019 | A1 |
20190236530 | Cantrell et al. | Aug 2019 | A1 |
20190282000 | Swafford | Sep 2019 | A1 |
20190304132 | Yoda et al. | Oct 2019 | A1 |
20190310652 | Cao | Oct 2019 | A1 |
20190311486 | Phan | Oct 2019 | A1 |
20190392212 | Sawhney et al. | Dec 2019 | A1 |
20200118063 | Fu | Apr 2020 | A1 |
20200249692 | Thode | Aug 2020 | A1 |
20200279113 | Yanagi | Sep 2020 | A1 |
20200293766 | Huang | Sep 2020 | A1 |
20200314333 | Liang et al. | Oct 2020 | A1 |
20200334620 | Yanagi | Oct 2020 | A1 |
20200380706 | Gorodetsky | Dec 2020 | A1 |
20200380715 | Chan | Dec 2020 | A1 |
20210004610 | Huang | Jan 2021 | A1 |
Number | Date | Country |
---|---|---|
2835830 | Nov 2012 | CA |
3028156 | Jan 2018 | CA |
104200086 | Dec 2014 | CN |
107067382 | Aug 2017 | CN |
206952978 | Feb 2018 | CN |
766098 | Apr 1997 | EP |
1311993 | May 2007 | EP |
2309378 | Apr 2011 | EP |
2439487 | Apr 2012 | EP |
2472475 | Jul 2012 | EP |
2562688 | Feb 2013 | EP |
2662831 | Nov 2013 | EP |
2693362 | Feb 2014 | EP |
2323238 | Sep 1998 | GB |
2330265 | Apr 1999 | GB |
2014170431 | Sep 2014 | JP |
101234798 | Jan 2009 | KR |
1020190031431 | Mar 2019 | KR |
WO 9923600 | May 1999 | WO |
WO 2003002935 | Jan 2003 | WO |
WO 2003025805 | Mar 2003 | WO |
WO 2006136958 | Dec 2006 | WO |
WO 2007042251 | Apr 2007 | WO |
WO 2008057504 | May 2008 | WO |
WO 2008154611 | Dec 2008 | WO |
WO 2012103199 | Aug 2012 | WO |
WO 2012103202 | Aug 2012 | WO |
WO 2012154801 | Nov 2012 | WO |
WO 2013165674 | Nov 2013 | WO |
WO 2014066422 | May 2014 | WO |
WO 2014092552 | Jun 2014 | WO |
WO 2014181323 | Nov 2014 | WO |
WO 2015127503 | Sep 2015 | WO |
WO 2016020038 | Feb 2016 | WO |
WO 2017187106 | Nov 2017 | WO |
WO 2018018007 | Jan 2018 | WO |
WO 2018204308 | Nov 2018 | WO |
WO 2018204342 | Nov 2018 | WO |
WO 2019023249 | Jan 2019 | WO |
Entry |
---|
Dubois, M., et al., 'A comparison of geometric and energy-based point cloud semantic segmentation methods, European Conference on Mobile Robots (ECMR), pp. 88-93, 25-27, Sep. 2013. |
Duda, et al., “Use of the Hough Transformation to Detect Lines and Curves in Pictures”, Stanford Research Institute, Menlo Park, California, Graphics and Image Processing, Communications of the ACM, vol. 15, No. 1 (Jan. 1972). |
F.C.A. Groen et al., “The smallest box around a package,” Pattern Recognition, vol. 14, No. 1-6, Jan. 1, 1981, pp. 173-176, XP055237156, GB, ISSN: 0031-3203, DOI: 10.1016/0031-3203(81(90059-5 p. 176-p. 178. |
Federico Tombari et al. “Multimodal cue integration through Hypotheses Verification for RGB-D object recognition and 6DOF pose estimation”, IEEE International Conference on Robotics and Automation, Jan. 2013. |
Flores, et al., “Removing Pedestrians from Google Street View Images”, Computer Vision and Pattern Recognition Workshops, 2010 IEEE Computer Society Conference On, IEE, Piscataway, NJ, pp. 53-58 (Jun. 13, 2010). |
Glassner, “Space Subdivision for Fast Ray Tracing.” IEEE Computer Graphics and Applications, 4.10, pp. 15-24, 1984. |
Golovinskiy, Aleksey, et al. “Min-Cut based segmentation of point clouds.” Computer Vision Workshops (ICCV Workshops), 2009 IEEE 12th International Conference on. IEEE, 2009. |
Hackel et al., “Contour Detection in unstructured 3D point clouds,”IEEE, 2016 Conference on Computer vision and Pattern recognition (CVPR), Jun. 27-30, 2016, pp. 1-9. |
Hao et al., “Structure-based object detection from scene point clouds,” Science Direct, V191, pp. 148-160 (2016). |
Hu et al., “An improved method of discrete point cloud filtering based on complex environment,” International Journal of Applied Mathematics and Statistics, v48, il8 (2013). |
International Search Report and Written Opinion for International Patent Application No. PCT/US2013/070996 dated Apr. 2, 2014. |
International Search Report and Written Opinion for International Patent Application No. PCT/US2013/053212 dated Dec. 1, 2014. |
International Search Report and Written Opinion for corresponding International Patent Application No. PCT/US2016/064110 dated Mar. 20, 2017. |
International Search Report and Written Opinion for corresponding International Patent Application No. PCT/US2017/024847 dated Jul. 7, 2017. |
International Search Report and Written Opinion for International Application No. PCT/US2018/030419 dated Aug. 31, 2018. |
International Search Report and Written Opinion from International Patent Application No. PCT/US2018/030345 dated Sep. 17, 2018. |
International Search Report and Written Opinion from International Patent Application No. PCT/US2018/030360 dated Jul. 9, 2018. |
International Search Report and Written Opinion from International Patent Application No. PCT/US2018/030363 dated Jul. 9, 2018. |
International Search Report and Written Opinion for International Application No. PCT/US2019/025859 dated Jul. 3, 2019. |
International Search Report and Written Opinion from International Patent Application No. PCT/US2019/025849 dated Jul. 9, 2019. |
International Search Report and Written Opinion from International Patent Application No. PCT/US2019/049761 dated Nov. 15, 2019. |
International Search Report and Written Opinion from International Patent Application No. PCT/US2019/051312 dated Nov. 15, 2019. |
International Search Report and Written Opinion from International Patent Application No. PCT/US2019/054103 dated Jan. 6, 2020. |
International Search Report and Written Opinion from International Patent Application No. PCT/US2019/064020 dated Feb. 19, 2020. |
International Search Report and Written Opinion for International Patent Application No. PCT/US2020/028133 dated Jul. 24, 2020. |
International Search Report and Written Opinion from International Patent Application No. PCT/US2020/029134 dated Jul. 27, 2020. |
International Search Report and Written Opinion from International Patent Application No. PCT/US2020/028183 dated Jul. 24, 2020. |
International Search Report and Written Opinion from International Patent Application No. PCT/US2020/035285 dated Aug. 27, 2020. |
Jadhav et al. “Survey on Spatial Domain dynamic template matching technique for scanning linear barcode,” International Journal of science and research v 5 n 3, Mar. 2016)(Year: 2016). |
Jian Fan et al., "Shelf detection via vanishing point and radial projection," 2014 IEEE International Conference on Image Processing (ICIP), IEEE, Oct. 27, 2014, pp. 1575-1578.
Kang et al., "Kinematic Path-Tracking of Mobile Robot Using Iterative Learning Control," Journal of Robotic Systems, 2005, pp. 111-121.
Kay et al., "Ray Tracing Complex Scenes," ACM SIGGRAPH Computer Graphics, vol. 20, no. 4, ACM, pp. 269-278, 1986.
Kelly et al., "Reactive Nonholonomic Trajectory Generation via Parametric Optimal Control," International Journal of Robotics Research, vol. 22, no. 7-8, pp. 583-601 (Jul. 30, 2013).
Lari, Z., et al., "An adaptive approach for segmentation of 3D laser point cloud," International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol. XXXVIII-5/W12, ISPRS Calgary 2011 Workshop, Aug. 29-31, 2011, Calgary, Canada.
Lecking et al., "Localization in a wide range of industrial environments using relative 3D ceiling features," IEEE, pp. 333-337 (Sep. 15, 2008).
Lee et al., "Statistically Optimized Sampling for Distributed Ray Tracing," ACM SIGGRAPH Computer Graphics, vol. 19, no. 3, ACM, pp. 61-67, 1985.
Li et al., "An improved RANSAC for 3D point cloud plane segmentation based on normal distribution transformation cells," Remote Sensing, vol. 9, no. 433, pp. 1-16 (2017).
Likhachev, Maxim, and Dave Ferguson, "Planning Long Dynamically Feasible Maneuvers for Autonomous Vehicles," The International Journal of Robotics Research, 28.8 (2009): 933-945 (Year: 2009).
Marder-Eppstein et al., "The Office Marathon: robust navigation in an indoor office environment," IEEE 2010 International Conference on Robotics and Automation, May 3-7, 2010, pp. 300-307.
McNaughton, Matthew, et al., "Motion planning for autonomous driving with a conformal spatiotemporal lattice," Robotics and Automation (ICRA), 2011 IEEE International Conference on, IEEE, 2011 (Year: 2011).
Meyersohn, "Walmart turns to robots and apps in stores," https://www.cnn.com/2018/12/07/business/walmart-robot-janitors-dotcom-store/index.html, Oct. 29, 2019.
Mitra et al., "Estimating surface normals in noisy point cloud data," International Journal of Computational Geometry & Applications, Jun. 8-10, 2003, pp. 322-328.
N.D.F. Campbell et al., "Automatic 3D Object Segmentation in Multiple Views using Volumetric Graph-Cuts," Journal of Image and Vision Computing, vol. 28, no. 1, Jan. 2010, pp. 14-25.
Ni et al., "Edge Detection and Feature Line Tracing in 3D-Point Clouds by Analyzing Geometric Properties of Neighborhoods," Remote Sensing, vol. 8, no. 9, pp. 1-20 (2016).
Norrlöf et al., "Experimental comparison of some classical iterative learning control algorithms," IEEE Transactions on Robotics and Automation, Jun. 2002, pp. 636-641.
Notice of Allowance for U.S. Appl. No. 13/568,175 dated Sep. 23, 2014.
Notice of Allowance for U.S. Appl. No. 13/693,503 dated Mar. 11, 2016.
Notice of Allowance for U.S. Appl. No. 14/068,495 dated Apr. 25, 2016.
Notice of Allowance for U.S. Appl. No. 14/518,091 dated Apr. 12, 2017.
Notice of Allowance for U.S. Appl. No. 15/211,103 dated Apr. 5, 2017.
Olson, Clark F., et al., "Wide-Baseline Stereo Vision for Terrain Mapping," Machine Vision and Applications, Aug. 2010.
Oriolo et al., "An iterative learning controller for nonholonomic mobile robots," The International Journal of Robotics Research, Aug. 1997, pp. 954-970.
Ostafew et al., "Visual Teach and Repeat, Repeat, Repeat: Iterative learning control to improve mobile robot path tracking in challenging outdoor environment," IEEE/RSJ International Conference on Intelligent Robots and Systems, Nov. 2013, pp. 176-.
Park et al., "Autonomous mobile robot navigation using passive RFID in indoor environment," IEEE Transactions on Industrial Electronics, vol. 56, no. 7, pp. 2366-2373 (Jul. 2009).
Perveen et al., "An overview of template matching methodologies and its application," International Journal of Research in Computer and Communication Technology, vol. 2, no. 10, Oct. 2013 (Year: 2013).
Pivtoraiko et al., "Differentially constrained mobile robot motion planning in state lattices," Journal of Field Robotics, vol. 26, no. 3, 2009, pp. 308-333.
Pratt, W. K., Ed., "Digital Image Processing, 10-Image Enhancement, 17-Image Segmentation," Jan. 1, 2001, Digital Image Processing: PIKS Inside, New York: John Wiley & Sons, US, pp. 243-258, 551.
Puwein, J., et al., "Robust Multi-view camera calibration for wide-baseline camera networks," IEEE Workshop on Applications of Computer Vision (WACV), Jan. 2011.
Rusu et al., "How to incrementally register pairs of clouds," PCL Library, retrieved from internet on Aug. 22, 2016 [http://pointclouds.org/documentation/tutorials/pairwise_incremental_registration.php].
Rusu et al., "Spatial change detection on unorganized point cloud data," PCL Library, retrieved from internet on Aug. 19, 2016 [http://pointclouds.org/documentation/tutorials/octree_change.php].
Schnabel et al., "Efficient RANSAC for Point-Cloud Shape Detection," vol. 0, no. 0, pp. 1-12 (2007).
Senthilkumaran et al., "Edge Detection Techniques for Image Segmentation - A Survey of Soft Computing Approaches," International Journal of Recent Trends in Engineering, vol. 1, no. 2 (May 2009).
Szeliski, "Modified Hough Transform," Computer Vision, Copyright 2011, pp. 251-254, retrieved on Aug. 17, 2017 [http://szeliski.org/book/drafts/SzeliskiBook_20100903_draft.pdf].
Tahir, Rabbani, et al., "Segmentation of point clouds using smoothness constraint," International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, 36.5 (Sep. 2006): 248-253.
Trevor et al., "Tables, Counters, and Shelves: Semantic Mapping of Surfaces in 3D," retrieved from internet Jul. 3, 2018 [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.703.5365&rep=rep1&type=p].
Tseng et al., "A Cloud Removal Approach for Aerial Image Visualization," International Journal of Innovative Computing, Information & Control, vol. 9, no. 6, pp. 2421-2440 (Jun. 2013).
Uchiyama et al., "Removal of Moving Objects from a Street-View Image by Fusing Multiple Image Sequences," Pattern Recognition (ICPR), 2010 20th International Conference on, IEEE, Piscataway, NJ, pp. 3456-3459 (Aug. 23, 2010).
United Kingdom Intellectual Property Office, Combined Search and Examination Report for GB Patent Application No. 1813580.6 dated Feb. 21, 2019.
United Kingdom Intellectual Property Office, Combined Search and Examination Report for GB Patent Application No. 1417218.3 dated Jan. 22, 2016.
United Kingdom Intellectual Property Office, Combined Search and Examination Report for GB Patent Application No. 1521272.3 dated Jan. 22, 2016.
United Kingdom Intellectual Property Office, Combined Search and Examination Report for GB Patent Application No. 1417218.3 dated Mar. 11, 2015.
United Kingdom Intellectual Property Office, Combined Search and Examination Report for GB Patent Application No. 1917864.9 dated May 13, 2020.
Varol Gul et al., "Product placement detection based on image processing," 2014 22nd Signal Processing and Communication Applications Conference (SIU), IEEE, Apr. 23, 2014.
Varol Gul et al., "Toward Retail product recognition on Grocery shelves," Visual Communications and Image Processing, Jan. 20, 2004, San Jose (Mar. 4, 2015).
Weber et al., "Methods for Feature Detection in Point Clouds," Visualization of Large and Unstructured Data Sets - IRTG Workshop, pp. 90-99 (2010).
Zhao Zhou et al., "An Image Contrast Enhancement Algorithm Using PLIP-based Histogram Modification," 2017 3rd IEEE International Conference on Cybernetics (CYBCONF), IEEE, Jun. 21, 2017.
Ziang Xie et al., "Multimodal Blending for High-Accuracy Instance Recognition," 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 2214-2221.
Fan Zhang et al., "Parallax-tolerant Image Stitching," 2014 Computer Vision Foundation, pp. 4321-4328.
Kaimo Lin et al., "SEAGULL: Seam-guided Local Alignment for Parallax-tolerant Image Stitching," retrieved on Nov. 16, 2020 [http://publish.illinois.edu/visual-modeling-and-analytics/files/2016/08/Seagull.pdf].
Julio Zaragoza et al., "As-Projective-As-Possible Image Stitching with Moving DLT," 2013 Computer Vision Foundation, pp. 2339-2346.
Zeng et al., "Multi-view Self Supervised Deep Learning for 6D Pose Estimation in the Amazon Picking Challenge," May 7, 2017, retrieved on Nov. 16, 2019 [https://arxiv.org/pdf/1609.09475.pdf].
"Fair Billing with Automatic Dimensioning," pp. 1-4, undated, Copyright Mettler-Toledo International Inc.
"Plane Detection in Point Cloud Data," dated Jan. 25, 2010, by Michael Ying Yang and Wolfgang Förstner, Technical Report 1, 2010, University of Bonn.
"Swift Dimension" Trademark, Omniplanar, Copyright 2014.
Ajmal S. Mian et al., "Three-Dimensional Model Based Object Recognition and Segmentation in Cluttered Scenes," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, no. 10, Oct. 2006.
Batalin et al., "Mobile robot navigation using a sensor network," IEEE International Conference on Robotics and Automation, Apr. 26-May 1, 2004, pp. 636-641.
Bazazian et al., "Fast and Robust Edge Extraction in Unorganized Point Clouds," IEEE 2015 International Conference on Digital Image Computing: Techniques and Applications (DICTA), Nov. 23-25, 2015, pp. 1-8.
Biswas et al., "Depth Camera Based Indoor Mobile Robot Localization and Navigation," Robotics and Automation (ICRA), 2012 IEEE International Conference on, IEEE, 2012.
Böhm, "Multi-Image Fusion for Occlusion-Free Façade Texturing," International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, pp. 867-872 (Jan. 2004).
Bristow et al., "A Survey of Iterative Learning Control," IEEE Control Systems, Jun. 2006, pp. 96-114.
Buenaposada et al., "Realtime tracking and estimation of plane pose," Proceedings of the ICPR, Aug. 2002, vol. II, IEEE, pp. 697-700.
Carreira et al., "Enhanced PCA-based localization using depth maps with missing data," IEEE, pp. 1-8, Apr. 24, 2013.
Chen et al., "Improving Octree-Based Occupancy Maps Using Environment Sparsity with Application to Aerial Robot Navigation," Robotics and Automation (ICRA), 2017 IEEE.
Cleveland Jonas et al., "Automated System for Semantic Object Labeling with Soft-Object Recognition and Dynamic Programming Segmentation," IEEE Transactions on Automation Science and Engineering, IEEE Service Center, New York, NY (Apr. 1, 2017).
Cook et al., "Distributed Ray Tracing," ACM SIGGRAPH Computer Graphics, vol. 18, no. 3, ACM, pp. 137-145, 1984.
Datta, A., et al., "Accurate camera calibration using iterative refinement of control points," Computer Vision Workshops (ICCV Workshops), 2009.
Deschaud et al., "A Fast and Accurate Plane Detection algorithm for large noisy point clouds using filtered normals and voxel growing," 3DPVT, May 2010, Paris, France [hal-01097361].
Douillard, Bertrand, et al., "On the segmentation of 3D Lidar point clouds," Robotics and Automation (ICRA), 2011 IEEE International Conference on, IEEE, 2011.
Publication Number: 20200380715 A1; Date: Dec. 2020; Country: US.