The invention generally relates to automated, robotic and other processing systems, and relates in particular to automated and robotic systems intended for use in environments requiring, for example, that a variety of objects (e.g., articles, parcels or packages) be processed, e.g., sorted and/or otherwise distributed to several output destinations.
Many object distribution systems receive objects in an organized or disorganized stream that may be provided as individual objects or objects aggregated in groups such as in bags, arriving on any of several different conveyances, commonly a conveyor, a truck, a pallet, a Gaylord, or a bin. Each object must then be distributed to the correct destination container, as determined by identification information associated with the object, which is commonly determined by a label printed on the object. The destination container may take many forms, such as a bag or a bin.
The processing of such objects has traditionally been done by human workers who scan the objects, e.g., with a hand-held barcode scanner, and then place the objects at assigned locations. For example, many order fulfillment operations achieve high efficiency by employing a process called wave picking. In wave picking, orders are picked from warehouse shelves and placed at locations (e.g., into bins) containing multiple orders that are sorted downstream. At the processing stage, individual objects are identified, and multi-object orders are consolidated, for example into a single bin or shelf location, so that they may be packed and then shipped to customers. The processing (e.g., sorting) of these objects has traditionally been done by hand. A human sorter picks an object from an incoming bin, finds a barcode on the object, scans the barcode with a handheld barcode scanner, determines from the scanned barcode the appropriate bin or shelf location for the object, and then places the object in the so-determined bin or shelf location where all objects for that order have been defined to belong. Automated systems for order fulfillment have also been proposed. See, for example, U.S. Patent Application Publication No. 2014/0244026, which discloses the use of a robotic arm together with an arcuate structure that is movable to within reach of the robotic arm.
In conventional parcel sortation systems, human workers or automated systems typically retrieve objects in their arrival order and sort each object into a collection bin based on a set of given heuristics. For instance, all objects of like type might go to one collection bin, or all objects in a single customer order, or all objects destined for the same shipping destination. The human workers or automated systems are required to receive objects and to move each to its assigned collection bin. If the number of different types of input (received) objects is large, a large number of collection bins is required.
Such a system has inherent inefficiencies as well as inflexibilities, since the desired goal is to match incoming objects to assigned collection bins. Such systems may require a large number of collection bins (and therefore a large amount of physical space, large capital costs, and large operating costs), in part because sorting all objects to all destinations at once is not always most efficient.
Certain partially automated sortation systems involve the use of recirculating conveyors and tilt trays, where the tilt trays receive objects by human sortation (human induction), and each tilt tray moves past a scanner. Each object is then scanned and moved to a pre-defined location assigned to the object. The tray then tilts to drop the object into the location. Other partially automated systems, such as bomb-bay style recirculating conveyors, involve trays that open doors on their bottoms when positioned over a predefined chute, dropping the object from the tray into the chute. Again, the objects are scanned while in the tray, which assumes that any identifying code is visible to the scanner.
Such partially automated systems are lacking in key areas. As noted, these conveyors have discrete trays that can be loaded with an object; they then pass through scan tunnels that scan the object and associate it with the tray in which it is riding. When the tray passes the correct bin, a trigger mechanism causes the tray to dump the object into the bin. A drawback of such systems, however, is that every divert requires an actuator, which increases mechanical complexity, and the cost per divert can therefore be very high.
An alternative is to use human labor to increase the number of diverts, or collection bins, available in the system. This decreases system installation costs but increases operating costs. Multiple cells may then work in parallel, effectively multiplying throughput linearly while keeping the number of expensive automated diverts at a minimum. Such diverts do not identify an object and cannot divert it to a particular spot; rather, they work with beam breaks or other sensors to seek to ensure that indiscriminate bunches of objects get appropriately diverted. The lower cost of such diverts, coupled with the low number of diverts, keeps the overall system divert cost low.
Unfortunately, these systems do not address the limitation on the total number of system bins. The system is simply diverting an equal share of the total objects to each parallel manual cell. Thus, each parallel sortation cell must have all the same collection bin designations; otherwise an object might be delivered to a cell that does not have a bin to which that object is mapped. There remains a need for a more efficient and more cost-effective object sortation system that sorts objects of a variety of sizes and weights into appropriate collection bins or trays of fixed sizes, and that remains efficient in handling objects of such varying sizes and weights.
In accordance with an embodiment, the invention provides a semi-autonomous processing system for processing objects. The semi-autonomous processing system includes an input conveyance system for moving objects to a presentation area, a perception system including perception units that are directed toward a detection area for providing perception data regarding an object in the presentation area, at least two transport systems, each of which is adapted to receive the object and move the object in either of reciprocal directions, and a manual workstation area between the presentation area and the at least two transport systems.
In accordance with another embodiment, the invention provides a semi-autonomous processing system for processing objects. The semi-autonomous processing system includes an input conveyance system for moving objects to a presentation area, a perception system including perception units that are directed toward a detection area for providing perception data regarding an object in the presentation area, and at least two transport systems, each of which is adapted to receive the object and move the object in either of reciprocal directions, wherein the semi-autonomous system includes no automated system for moving the object from the presentation area to either of the two transport systems.
In accordance with a further embodiment, the invention provides a method for providing semi-autonomous processing of objects. The method includes the steps of moving objects on an input conveyance system to a presentation area, providing perception data regarding an object in the presentation area, receiving the object in one of at least two transport systems, and moving the object in either of reciprocal directions, wherein the method includes no automated system for moving the object from the presentation area to either of the two transport systems.
The following description may be further understood with reference to the accompanying drawings. The drawings are shown for illustrative purposes only.
Processing objects in a distribution center (e.g., sorting) is one application for automatically identifying and moving objects. In a shipping distribution center, for example, objects commonly arrive in trucks, are conveyed to sortation stations where they are processed, e.g., sorted, according to desired destinations, aggregated in bags, and then loaded in trucks for transport to the desired destinations. Another application may be in the shipping department of a retail store or order fulfillment center, which may require that objects be processed for transport to different shippers, or to different distribution centers of a particular shipper. In a shipping or distribution center, the objects may take the form of plastic bags, boxes, tubes, envelopes, or any other suitable container, and in some cases may also include objects not in a container. In a shipping or distribution center, the desired destination is commonly obtained by reading identifying information printed on the object or on an attached label. In this scenario, the destination corresponding to the identifying information is commonly obtained by querying the customer's information system. In other scenarios, the destination may be written directly on the object, or may be known through other means.
Applicants have discovered that when automating sortation of objects, there are a few main things to consider: 1) the overall system throughput (objects sorted per hour), 2) the number of diverts (i.e., number of discrete locations to which an object can be routed), 3) the total area of the sortation system (square feet), and 4) the annual costs to run the system (man-hours, electrical costs, cost of disposable components).
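By way of illustration only, these four characteristics can be captured in a small comparison structure. The following Python sketch uses hypothetical names, units, and placeholder numbers, none of which come from the disclosure.

```python
from dataclasses import dataclass

@dataclass
class SortationMetrics:
    """The four key characteristics of a sortation system (hypothetical units)."""
    throughput_objects_per_hour: float
    num_diverts: int               # discrete locations an object can be routed to
    footprint_sq_ft: float
    annual_operating_cost_usd: float

    def cost_per_divert(self) -> float:
        """A derived figure of merit for comparing candidate configurations."""
        return self.annual_operating_cost_usd / self.num_diverts

# Example: compare two entirely hypothetical configurations.
tilt_tray = SortationMetrics(12_000, 200, 15_000, 1_200_000)
semi_auto = SortationMetrics(2_400, 40, 1_500, 150_000)
print(tilt_tray.cost_per_divert(), semi_auto.cost_per_divert())
```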
In accordance with various embodiments, therefore, the invention provides a method of taking individual objects from an organized or disorganized stream of objects, providing a generally singulated stream of objects, identifying individual objects, and processing them to desired destinations. The invention further provides methods for identifying an object being processed by a human worker, for conveying objects from one point to the next, and for transferring objects from one conveyance system to another for placement at destination locations.
Important components of a semi-automated object identification and processing system, in accordance with an embodiment of the present invention, include an input conveyance system, a perception system, a primary transport system, and secondary transport systems.
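To visualize how these components relate, the sketch below composes them into a single system object. All class and attribute names are illustrative assumptions rather than identifiers from the disclosure.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class InputConveyanceSystem:
    """Moves objects from receiving (e.g., a Gaylord dumper) to the presentation area."""
    feed_rate_objects_per_min: float = 10.0

@dataclass
class PerceptionSystem:
    """Perception units directed toward the detection area above the presentation area."""
    num_units: int = 4

@dataclass
class TransportSystem:
    """A shuttle wing: a carriage riding a track between rows of destination bins."""
    num_destination_bins: int = 10

@dataclass
class SemiAutonomousProcessingSystem:
    infeed: InputConveyanceSystem
    perception: PerceptionSystem
    wings: List[TransportSystem] = field(default_factory=list)

system = SemiAutonomousProcessingSystem(
    infeed=InputConveyanceSystem(),
    perception=PerceptionSystem(),
    wings=[TransportSystem(), TransportSystem(), TransportSystem()],
)
```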
The system also includes an identification system 18 that includes a depth detection system and a perception system as discussed in more detail below. Generally, a human worker in a workstation area 21 lifts an object from the sloped surface 16, and once the object is identified (as optionally indicated by a feedback device 20 such as a light or a speaker), a pair of lights (e.g., pair 22, pair 24 or pair 26) is illuminated to show the worker where to place the object. Each pair of lights 22, 24, 26 is associated with a shuttle wing 32, 34, 36 that includes a shuttle carriage 28, 38, 48 that rides on a track 30, 40, 50 between rows of destination bins 42, 44, 46 that may be provided on carts 54. For example, each cart may support two destination bins as shown. Once a pair of lights (22, 24, 26) is illuminated, the human worker places the object in the associated carriage. The system then detects this placement, moves the shuttle carriage to be adjacent to the desired destination bin, and tilts the carriage to drop the object into the bin, as discussed in more detail below. Operation of the system may be governed by a processing system 52 that includes one or more computer processing systems.
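The routing sequence just described (illuminate a light pair, detect placement, move the carriage, tilt to drop) can be rendered as pseudologic. The sketch below assumes a hypothetical hardware interface; it is one plausible rendering of the sequence, not the disclosed implementation.

```python
import time

class ShuttleWing:
    """Hypothetical interface to one shuttle wing; method bodies are stubs."""
    def __init__(self, wing_id: int):
        self.wing_id = wing_id

    def illuminate_lights(self) -> None:
        pass                          # e.g., light pair 22, 24 or 26

    def placement_detected(self) -> bool:
        return False                  # e.g., wired to carriage beam break sensors

    def move_carriage_to(self, bin_index: int) -> None:
        pass                          # drive the carriage along its track

    def tilt_carriage(self) -> None:
        pass                          # tilt to drop the object into the bin

def route_object(wing: ShuttleWing, bin_index: int, timeout_s: float = 30.0) -> bool:
    """Illuminate the wing's lights, wait for the worker to place the object,
    then carry the object to the destination bin and tilt to drop it there."""
    wing.illuminate_lights()
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if wing.placement_detected():
            wing.move_carriage_to(bin_index)
            wing.tilt_carriage()
            return True
        time.sleep(0.05)
    return False  # placement never confirmed; signal an error upstream
```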
With reference to the drawings, the system will continue to scan the field until it detects that an object has been lifted from the sloped surface 16 and is being moved closer to the detection system 60.
Once the area of the object 64 is identified, the system will then maintain a view of this perception area 63 of the object 64 until identifying indicia is perceived or the object 64 is removed from view.
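One plausible way to implement the lift detection described above is to watch successive depth frames for a region that moves toward the overhead sensor. The following sketch assumes NumPy depth images and placeholder thresholds; it is illustrative only.

```python
from typing import Optional
import numpy as np

def find_lifted_region(prev_depth: np.ndarray, curr_depth: np.ndarray,
                       lift_mm: float = 50.0,
                       min_pixels: int = 100) -> Optional[np.ndarray]:
    """Return a boolean mask of pixels that moved at least lift_mm closer to
    the overhead detection system between frames, or None if too few did.

    A region moving toward the sensor (decreasing depth) indicates an object
    being lifted from the presentation surface; the system can then restrict
    its search for indicia to that region, excluding objects still at rest.
    """
    closer = (prev_depth - curr_depth) > lift_mm
    if int(closer.sum()) < min_pixels:   # too few pixels: noise, not a lift
        return None
    return closer
```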
In addition to indicating when an identifying indicia is detected, the feedback system 20 can provide other information to the worker, such as an indication that the system has isolated a lifted object and is searching for identifying indicia, a status indicator showing that more than one object is present in the presentation area 16, or an indication that the lifted object has been removed from the presentation area 16. These indications can be provided through a color changing light, a series of lights aligned with respective text, a display screen, a projection on the presentation area, auditory cues, or a combination thereof.
An important aspect of certain embodiments of the present invention is the ability to identify objects via barcodes or other visual markings by employing a perception system that may quickly scan the object as held by a human worker. Automated scanning systems would be unable to see barcodes on objects that are presented with their barcodes unexposed or not visible without rotation. The system therefore is designed to view an object from a large number of different views very quickly, reducing or eliminating the possibility that the system cannot view identifying indicia on an object.
A key feature of the perception system is its specific design, which maximizes the probability of a successful scan while minimizing the average scan time. The probability of a successful scan and the average scan time are key performance characteristics, determined by the configuration and properties of the perception system as well as by the object set and how it is marked. These key performance characteristics may be optimized for a given item set and method of labeling. Parameters of the optimization include how many scanners to use, where and in what orientation to place them, and what sensor resolutions and fields of view to use. Optimization can be done through trial and error, or by simulation with models of the objects.
Optimization through simulation may employ a scanner performance model. A scanner performance model provides the range of positions, orientations, and barcode element sizes at which an identifying symbol can be detected and decoded by the scanner, where the barcode element size is the size of the smallest feature on the symbol. Scanners are typically rated with a minimum and maximum range, a maximum skew angle, a maximum pitch angle, and a minimum and maximum tilt angle.
Performance requirements for such camera-based scanners are that they be able to detect symbols within some range of distances as long as both the pitch and the skew of the plane of the symbol are within plus or minus 45 degrees, while the tilt of the symbol may be arbitrary (between 0 and 360 degrees). The scanner performance model predicts whether a given symbol in a given position and orientation will be detected.
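The rated limits described in the last two paragraphs translate naturally into a detection predicate. The sketch below encodes a hypothetical scanner performance model with the stated plus-or-minus 45 degree pitch and skew limits and arbitrary tilt; the range and element-size values are placeholders, not rated figures from any real scanner.

```python
from dataclasses import dataclass

@dataclass
class ScannerPerformanceModel:
    """Rated detection envelope of a camera-based scanner (placeholder values)."""
    min_range_mm: float = 150.0
    max_range_mm: float = 600.0
    max_skew_deg: float = 45.0
    max_pitch_deg: float = 45.0
    min_element_mm: float = 0.25   # smallest barcode element the sensor resolves

    def detects(self, range_mm: float, skew_deg: float, pitch_deg: float,
                tilt_deg: float, element_mm: float) -> bool:
        """Predict whether a symbol at this pose will be detected and decoded.
        Tilt may be arbitrary (0-360 degrees), so it is accepted but unused."""
        return (self.min_range_mm <= range_mm <= self.max_range_mm
                and abs(skew_deg) <= self.max_skew_deg
                and abs(pitch_deg) <= self.max_pitch_deg
                and element_mm >= self.min_element_mm)
```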
The scanner performance model is coupled with a model of where symbols would be expected to be positioned and oriented. A symbol pose model is the range of all positions and orientations, in other words poses, in which a symbol would be expected to be found. The symbol pose model may be a combination of an article holding model, which predicts how objects will be held (e.g., by the robotic system), together with a symbol-item appearance model, which describes the possible placements of the symbol on the object; alternatively, the symbol pose model may combine the symbol-item appearance model with an inbound-object pose model, which models the distribution of poses over which inbound articles are presented to the scanner. These models may be constructed empirically, modeled using an analytical model, or approximated using simple sphere models for objects and uniform distributions over the sphere as a symbol-item appearance model.
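Optimization by simulation, as described above, amounts to sampling symbol poses from a symbol pose model and scoring them against the scanner performance model. Below is a minimal Monte Carlo sketch, continuing the ScannerPerformanceModel from the previous sketch; the uniform ranges are placeholder distributions standing in for the sphere-based approximation the text mentions.

```python
import random

def sample_symbol_pose():
    """Sample one symbol pose from a placeholder pose model. Uniform ranges
    stand in for the combined holding and symbol-item appearance models."""
    range_mm = random.uniform(100.0, 800.0)
    skew_deg = random.uniform(-90.0, 90.0)
    pitch_deg = random.uniform(-90.0, 90.0)
    tilt_deg = random.uniform(0.0, 360.0)
    return range_mm, skew_deg, pitch_deg, tilt_deg

def scan_success_probability(model, element_mm: float = 0.3,
                             trials: int = 100_000) -> float:
    """Monte Carlo estimate of the probability that a sampled symbol pose
    falls within the scanner's rated detection envelope."""
    hits = sum(model.detects(*sample_symbol_pose(), element_mm)
               for _ in range(trials))
    return hits / trials

# Example: estimate the key performance characteristic for one scanner, then
# search over scanner count, placement, and fields of view to optimize it.
# p = scan_success_probability(ScannerPerformanceModel())
```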
With reference to the drawings, the feedback system 20 indicates the status of the pick to the worker through audio or visual cues. For example, distinct cues can be provided for when motion is detected, when one object has been detected, when multiple objects are detected, when the identity of an object is detected (which can indicate which object is identified, for example, by projecting a light onto the object, or using speech to identify the object), when an object is lifted, and where to route a lifted object if it has been identified. If any identifying indicia is found, the system will indicate that the object has been identified (step 81), and indicate a routing location for the object by, for example, illuminating a pair of wing location lights and prompting the human worker to move the object to the carriage of the associated wing location (step 82). In certain embodiments, the system confirms that the object has been placed in the routing location, for example, with beam breaks or force sensors on the carriage (step 83). Once the object is confirmed to be at the routing location, the feedback system is reset (step 86), and the process can end (step 88). If, after a predetermined amount of time, the object is not confirmed to be at the routing location, the feedback system 20 can indicate an error, and the process will halt until the worker resolves the error, either by placing the object in the routing location or otherwise updating the system with the status of the object (e.g., damaged, sent to manual sortation, reintroduced to input area, lost, etc.).
The feedback system can also instruct or otherwise indicate to a worker that the object has not yet been identified (step 84), and other areas of the object need to be presented to the perception units 62, such as by turning the object or flattening the indicia, in order to capture the identifying indicia. The system will maintain a view of the general area of the object to permit this rotation. The process continues to loop until either the object is removed from the view (step 85), or any identifying indicia is found and the process continues to step 81. If the object is removed from view (for example, if placed in a manual sortation location, or in some embodiments, if placed back onto the surface of the presentation area 16), the feedback system will be reset, and the process will end.
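The identify-and-route loop of the preceding two paragraphs (steps 81 through 88) can be summarized as a small state machine. In the sketch below, the sense_* callables stand in for the perception and sensor subsystems; the names and structure are assumptions for illustration, not the disclosed control logic.

```python
from enum import Enum, auto

class PickStatus(Enum):
    IDENTIFIED = auto()   # step 81: indicia found; routing indicated (step 82)
    PLACED = auto()       # step 83: placement confirmed (beam break / force)
    REMOVED = auto()      # step 85: object left the view unidentified
    ERROR = auto()        # placement not confirmed within the allotted time

def run_pick_cycle(sense_indicia, sense_removed, sense_placed,
                   max_place_checks: int = 1000) -> PickStatus:
    """One pass of the identify-and-route loop; each callable returns a bool."""
    while True:
        if sense_indicia():        # any identifying indicia found?
            break                  # -> steps 81/82: indicate routing location
        if sense_removed():        # step 85: e.g., object sent to manual sort
            return PickStatus.REMOVED
        # step 84: otherwise prompt the worker to rotate the object and retry
    for _ in range(max_place_checks):   # step 83: confirm placement
        if sense_placed():
            return PickStatus.PLACED    # feedback reset (86); process ends (88)
    return PickStatus.ERROR             # worker must resolve before continuing
```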
Once the process is ended, the infeed conveyor can advance and provide the presentation area with another object, and the process can begin again.
As referred to above in connection with step 83, each carriage (e.g., 28, 38, 48) may include beam break sensors 92, 94, as shown in the drawings, that detect when an object has been placed in the carriage.
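A placement check against such a pair of beam break sensors might look like the following sketch, in which read_beam is a hypothetical sensor accessor rather than a disclosed interface.

```python
import time

def confirm_placement(read_beam, sensor_ids=(92, 94),
                      timeout_s: float = 10.0) -> bool:
    """Poll a carriage's beam break sensors until either beam reports an
    interruption (an object entering the carriage) or the timeout elapses.

    read_beam(sensor_id) is a hypothetical accessor returning True while
    the corresponding beam is broken.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if any(read_beam(s) for s in sensor_ids):
            return True
        time.sleep(0.01)
    return False  # step 83 failed: the feedback system indicates an error
```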
With reference to the drawings, in accordance with further embodiments, the destination bins (e.g., boxes) may be provided in a box tray that includes inner sides for receiving a box, and a kicker plate that is engageable with a box kicker.
The bin may then be displaced onto the conveyor 100, as shown in the drawings, for further processing.
If a next location is available (step 208), the system then assigns a next location to the object (step 216), and the object is then placed in that location (step 218). The number of objects at the location is then updated (step 220), and if the location is then full (step 222), the system identifies that the location is ready for further processing (step 226). The further processing may, for example, include collecting the objects at the location for transport to a shipping location. If the location is not full, the system then determines, based on prior knowledge and/or heuristics, whether the location is unlikely to receive a further object (step 224). If it is not likely to receive a further object, the system identifies that the location is ready for further processing (step 226). If it is likely to receive a further object, the system returns to receiving a new object (step 202).
If in step 208 a next location is not available, the system may (either with or without input from a human) determine whether to retry identifying the object (step 210). If so, then the system would return the object to the input stream (step 212) to be again received at a later time (step 202). If it is determined that the object would not be reintroduced to the input stream for identification, the system would place the object in a manual sorting area for sortation by a human (step 214), and the process would continue with a new object.
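The flow of the last two paragraphs (steps 202 through 226) is essentially a dynamic location-assignment routine. A minimal sketch with assumed data structures follows; the early-closing heuristic of step 224 is omitted for brevity, and all names are illustrative.

```python
from typing import Dict, Optional

class LocationAllocator:
    """Dynamically maps object groups (e.g., order IDs) to destination locations."""

    def __init__(self, num_locations: int, capacity: int):
        self.free = list(range(num_locations))
        self.assigned: Dict[str, int] = {}    # group id -> location index
        self.counts: Dict[int, int] = {}      # location index -> objects placed
        self.capacity = capacity

    def place(self, group_id: str) -> Optional[int]:
        """Return the location for this object, or None (retry / manual sort)."""
        loc = self.assigned.get(group_id)
        if loc is None:                        # step 208: next location available?
            if not self.free:
                return None                    # steps 210-214: retry or manual sort
            loc = self.free.pop(0)             # step 216: assign a next location
            self.assigned[group_id] = loc
            self.counts[loc] = 0
        self.counts[loc] += 1                  # steps 218-220: place, update count
        if self.counts[loc] >= self.capacity:  # step 222: location full?
            self.ready_for_processing(loc)     # step 226
        return loc

    def ready_for_processing(self, loc: int) -> None:
        """E.g., collect the bin for transport to shipping, then recycle the slot."""
        group = next(g for g, l in self.assigned.items() if l == loc)
        del self.assigned[group]
        self.counts.pop(loc)
        self.free.append(loc)
```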
The system also includes an identification system 318 that includes a depth detection system and a perception system, as discussed above.
The identification system 318 includes a depth detection system and a plurality of perception units, as discussed above, that are generally directed toward the sloped surface 316.
The system will then continue to scan the field until it detects an object being moved closer to the detection system. The significance of this is that the system will thereby singulate the object that a human worker has lifted and thereby selected for processing. At this time, the system will concentrate on the area of the object identified as being lifted, and thereby exclude other areas of the field of view, as discussed above. In particular, the system will exclude from view any other object, even if an indicia label is visible on that other object while none is yet visible on the lifted object, as discussed above.
Once the area of the object is identified, the system will then maintain a view of the general area of the object until identifying indicia is perceived or the object is removed from view. In particular, if identifying indicia is not facing the perception units, the human worker may rotate the item, as discussed above, until identifying indicia is detected by the perception units. In this way, a human worker may lift an object and rotate the object if needed until the system detects identifying indicia, and an optional light may be illuminated or change color (e.g., to green), or a display device 328 may provide information to indicate that the object is identified.
With further reference to the drawings, the identification system 418 of a further embodiment includes a depth detection system and a plurality of perception units, as discussed above, that are generally directed toward the presentation area 416.
The system will then continue to scan the field until it detects an object being moved closer to the detection system. Again, the significance of this is that the system will thereby singulate the object that a human worker has lifted and thereby selected for processing. At this time, the system will concentrate on the area of the object identified as being lifted, and thereby exclude other areas of the field of view, as discussed above. In particular, the system will exclude from view any other object, even if an indicia label is visible on that other object while none is yet visible on the lifted object, as discussed above.
Once the area of the object is identified, the system will then maintain a view of the general area of the object until identifying indicia is perceived or the object is removed from view. In particular, if identifying indicia is not facing the perception units, the human worker may rotate the item, as discussed above, until identifying indicia is detected by the perception units. In this way, a human worker may lift an object and rotate the object if needed until the system detects identifying indicia, and an optional light may be illuminated or change color (e.g., to green), or a display device 428 may provide information to indicate that the object is identified.
Systems of various embodiments provide numerous advantages because of the inherent dynamic flexibility. The flexible correspondence between sorter outputs and destinations provides that there may be fewer sorter outputs than destinations, so the entire system may require less space. The flexible correspondence between sorter outputs and destinations also provides that the system may choose the most efficient order in which to handle objects, in a way that varies with the particular mix of objects and downstream demand. The system is also easily scalable, by adding shuttle wings and destination stations, and more robust since the failure (or off-line status) of a single destination location might be handled dynamically without even stopping the system. It should be possible for sorters to exercise discretion in the order of objects, favoring objects that need to be handled quickly.
Systems of the invention are highly scalable in terms of sorts-per-hour as well as the number of storage bins and destination bins that may be available. In a specific embodiment, the system provides an input system that interfaces with the customer's conveyors and containers, stores objects for feeding into the system, and feeds those objects into the system at a moderate and controllable rate. In one embodiment, the interface to the customer's process takes the form of a dumper from a Gaylord, but many other embodiments are possible. In one embodiment, feeding into the system is by an inclined cleated conveyor with overhead flow restrictors, e.g., baffles. In accordance with certain embodiments, the system feeds objects in at a modest controlled rate. Many options are available, including variations in the conveyor slope and speed, the presence, size and structure of cleats and baffles, and the use of sensors to monitor and control the feed rate.
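The modest controlled feed rate described above could, for example, be maintained by a simple proportional controller on conveyor speed driven by sensed object counts. The sketch below is one assumed approach, not the disclosed mechanism; all parameters are placeholders.

```python
def update_conveyor_speed(current_speed: float, observed_rate: float,
                          target_rate: float, gain: float = 0.1,
                          min_speed: float = 0.0, max_speed: float = 1.0) -> float:
    """Proportional control: slow the cleated infeed conveyor when objects
    arrive faster than the target rate, and speed it up when they arrive
    slower. Rates are in objects per minute; speed is a normalized 0-1 value."""
    error = target_rate - observed_rate
    new_speed = current_speed + gain * error
    return max(min_speed, min(max_speed, new_speed))

# e.g., called once per minute with a beam-break count from an infeed sensor:
# speed = update_conveyor_speed(speed, counted_this_minute, target_rate=12)
```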
The system includes in a specific embodiment a primary perception system that monitors the stream of objects on the primary conveyor. Where possible the primary perception system may identify the object to speed or simplify subsequent operations. For example, knowledge of the objects on the primary conveyor may enable the system to make better choices regarding which objects to move to provide a singulated stream of objects.
The operations of the systems described herein are coordinated by the central control systems 52, 358 and 458, as shown in the drawings.
Those skilled in the art will appreciate that numerous modifications and variations may be made to the above disclosed embodiments without departing from the spirit and scope of the present invention.
The present application is a continuation of U.S. patent application Ser. No. 16/668,880, filed Oct. 30, 2019, which claims priority to U.S. Provisional Patent Application Ser. No. 62/752,607 filed Oct. 30, 2018, the disclosures of which are hereby incorporated by reference in their entireties.
Number | Name | Date | Kind |
---|---|---|---|
4537554 | Collins, Jr. | Aug 1985 | A |
4722653 | Williams et al. | Feb 1988 | A |
4759439 | Hartlepp | Jul 1988 | A |
4846335 | Hartlepp | Jul 1989 | A |
4895242 | Michel | Jan 1990 | A |
4963251 | Böhm et al. | Oct 1990 | A |
5190162 | Hartlepp | Mar 1993 | A |
5242045 | Kakida | Sep 1993 | A |
5352081 | Tanaka | Oct 1994 | A |
5419457 | Ross et al. | May 1995 | A |
5595263 | Pignataro | Jan 1997 | A |
5685687 | Frye | Nov 1997 | A |
5794788 | Massen | Aug 1998 | A |
5794789 | Payson et al. | Aug 1998 | A |
5839566 | Bonnet | Nov 1998 | A |
5860900 | Dunning et al. | Jan 1999 | A |
5979633 | Bonnet | Nov 1999 | A |
6079570 | Oppliger et al. | Jun 2000 | A |
6189702 | Bonnet | Feb 2001 | B1 |
6208908 | Boyd et al. | Mar 2001 | B1 |
6323452 | Bonnet | Nov 2001 | B1 |
6499604 | Kitson | Dec 2002 | B1 |
6513641 | Affaticati | Feb 2003 | B1 |
7650982 | Tachibana | Jan 2010 | B2 |
7887130 | Zvolena | Feb 2011 | B1 |
9102336 | Rosenwinkel | Aug 2015 | B2 |
10438034 | Wagner et al. | Oct 2019 | B2 |
10494192 | DeWitt et al. | Dec 2019 | B2 |
10576621 | Wagner et al. | Mar 2020 | B2 |
10583553 | Wagner et al. | Mar 2020 | B2 |
10596696 | Wagner et al. | Mar 2020 | B2 |
10639678 | Cherry et al. | May 2020 | B2 |
10646991 | Wagner et al. | May 2020 | B2 |
10730078 | Wagner et al. | Aug 2020 | B2 |
10843333 | Wagner et al. | Nov 2020 | B2 |
10875057 | Wagner et al. | Dec 2020 | B2 |
10913614 | Wagner et al. | Feb 2021 | B2 |
11055504 | Wagner et al. | Jul 2021 | B2 |
11080496 | Wagner et al. | Aug 2021 | B2 |
11200390 | Wagner et al. | Dec 2021 | B2 |
11219311 | Kondziela | Jan 2022 | B1 |
11472633 | Wagner | Oct 2022 | B2 |
20020092801 | Dominguez | Jul 2002 | A1 |
20020113365 | Britton et al. | Aug 2002 | A1 |
20020170850 | Bonham et al. | Nov 2002 | A1 |
20020179502 | Cerulti et al. | Dec 2002 | A1 |
20040195320 | Ramsager | Oct 2004 | A1 |
20060182543 | Schaefer | Aug 2006 | A1 |
20070215435 | Tachibana | Sep 2007 | A1 |
20110094854 | Hayduchok | Apr 2011 | A1 |
20120128454 | Hayduchok et al. | May 2012 | A1 |
20140244026 | Neiser | Aug 2014 | A1 |
20150081090 | Dong | Mar 2015 | A1 |
20150104286 | Hansl et al. | Apr 2015 | A1 |
20150114799 | Hansl et al. | Apr 2015 | A1 |
20160167227 | Wellman et al. | Jun 2016 | A1 |
20160274586 | Stubbs et al. | Sep 2016 | A1 |
20170021499 | Wellman et al. | Jan 2017 | A1 |
20170057756 | Dugat et al. | Mar 2017 | A1 |
20170066594 | Milo et al. | Mar 2017 | A1 |
20170066597 | Hiroi | Mar 2017 | A1 |
20170106532 | Wellman et al. | Apr 2017 | A1 |
20170121113 | Wagner et al. | May 2017 | A1 |
20170136632 | Wagner et al. | May 2017 | A1 |
20170157649 | Wagner | Jun 2017 | A1 |
20170225330 | Wagner et al. | Aug 2017 | A1 |
20170349385 | Moroni | Dec 2017 | A1 |
20180057264 | Wicks et al. | Mar 2018 | A1 |
20180085788 | Engel et al. | Mar 2018 | A1 |
20180127219 | Wagner et al. | May 2018 | A1 |
20180148272 | Wagner et al. | May 2018 | A1 |
20180186572 | Issing | Jul 2018 | A1 |
20180244473 | Mathi et al. | Aug 2018 | A1 |
20180273298 | Wagner et al. | Sep 2018 | A1 |
20180327198 | Wagner et al. | Nov 2018 | A1 |
20180330134 | Wagner et al. | Nov 2018 | A1 |
20180333749 | Wagner et al. | Nov 2018 | A1 |
20190127147 | Wagner et al. | May 2019 | A1 |
20190337723 | Wagner et al. | Nov 2019 | A1 |
20200016746 | Yap et al. | Jan 2020 | A1 |
20200017314 | Rose et al. | Jan 2020 | A1 |
20200130951 | Wagner | Apr 2020 | A1 |
20200143127 | Wagner et al. | May 2020 | A1 |
20200151407 | Wagner et al. | May 2020 | A1 |
20210039140 | Geyer | Feb 2021 | A1 |
20220363488 | Wagner | Nov 2022 | A1 |
Number | Date | Country |
---|---|---|
102005032533 | Jan 2007 | DE |
102014111396 | Feb 2016 | DE |
H0619602 | Mar 1994 | JP |
20160148397 | Dec 2016 | KR |
2008089150 | Jul 2008 | WO |
2008091733 | Jul 2008 | WO |
2010099873 | Sep 2010 | WO |
2015035300 | Mar 2015 | WO |
2016100235 | Jun 2016 | WO |
2018175466 | Sep 2018 | WO |
Entry |
---|
Communication pursuant to Rules 161(1) and 162 EPC issued by the European Patent Office in related European Application No. 19809224.9 dated Jun. 9, 2021, 3 pages. |
International Search Report and Written Opinion issued by the International Searching Authority dated Feb. 13, 2020 in related International Application No. PCT/US2019/058845, 15 pages. |
Non-Final Office Action issued by the United States Patent and Trademark Office in related U.S. Appl. No. 16/668,880 dated Feb. 2, 2022, 11 pages. |
Notice on the First Office Action, and its English translation, issued by the China National Intellectual Property Administration in related Chinese Patent Application No. 201980072702.1 dated Mar. 23, 2022, 27 pages. |
Notification Concerning Transmittal of International Preliminary Report on Patentability (Chapter I of the Patent Cooperation Treaty) and the International Preliminary Report on Patentability issued by the International Bureau of WIPO in related International Application No. PCT/US2019/058845 dated May 14, 2021, 9 pages. |
Examiner's Report issued by the Innovation, Science and Economic Development Canada (Canadian Intellectual Property Office) in related Canadian Patent Application No. 3,117,829 dated Sep. 21, 2022, 4 pages. |
Number | Date | Country | |
---|---|---|---|
20220363488 A1 | Nov 2022 | US |
Number | Date | Country | |
---|---|---|---|
62752607 | Oct 2018 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 16668880 | Oct 2019 | US |
Child | 17873824 | US |