The present disclosure relates generally to capturing dimension data indicative of the dimensions of an object associated with an electro-optically readable code and, more particularly, to an apparatus for, and a method of, estimating the dimensions or volume of the object in automatic response to reading the code associated with the object.
Determining the dimensions or volume of an object, such as a shipping package, a mailing parcel, or a pallet loaded with a plurality of objects as freight or cargo, is desirable, especially in the transportation and shipping industries, where the cost for transporting and delivering the objects is at least partially dependent on their dimensions. Each such object is generally associated with an electro-optically readable code that identifies the object when read by an electro-optical scanner or reader. Three-dimensional (3D) cameras have also been employed in both handheld and fixed devices to capture dimension data indicative of the dimensions of an object over a field of view. Although generally satisfactory for its intended purpose, the known 3D camera is not altogether satisfactory when multiple objects are contained in its field of view, since the camera cannot readily distinguish between the object to be dimensioned, i.e., the primary object or main target of interest, and other secondary objects whose dimensions are not wanted.
Accordingly, there is a need to estimate the dimensions or volume of a main object of interest, especially when other secondary objects are nearby, in an accurate, rapid, and efficient manner.
The accompanying figures, where like reference numerals refer to identical or functionally similar elements throughout the separate views, together with the detailed description below, are incorporated in and form part of the specification, and serve to further illustrate embodiments of concepts that include the claimed invention, and explain various principles and advantages of those embodiments.
Skilled artisans will appreciate that elements in the figures are illustrated for simplicity and clarity and have not necessarily been drawn to scale. For example, the dimensions and locations of some of the elements in the figures may be exaggerated relative to other elements to help to improve understanding of embodiments of the present invention.
The apparatus and method components have been represented where appropriate by conventional symbols in the drawings, showing only those specific details that are pertinent to understanding the embodiments of the present invention so as not to obscure the disclosure with details that will be readily apparent to those of ordinary skill in the art having the benefit of the description herein.
One aspect of the present disclosure relates to an apparatus for estimating the dimensions of an object associated with an electro-optically readable code, e.g., a Universal Product Code (UPC) bar code symbol, an Electronic Product Code (EPC) symbol, or a like code standard, that identifies the object. A handheld device is aimable at a scene containing the object supported on a base surface. A scanner, e.g., an electro-optical reader, is supported by the device, scans the scene over a field of view to obtain a position of a reference point of the code associated with the object, and reads the code. A dimensioning sensor, e.g., a three-dimensional (3D) camera, is supported by the device and, in automatic response to the reading of the code, captures a 3D point cloud of data points of the scene. A controller clusters the point cloud into data clusters, locates the reference point of the code in one of the data clusters, extracts from the point cloud the data points of the one data cluster belonging to the object, and processes the extracted data points belonging to the object to estimate the dimensions of the object.
In a preferred embodiment, the controller detects a base plane indicative of the base surface from the point cloud, processes the extracted data points belonging to the object to obtain a convex hull, and fits a bounding box of minimum volume to enclose the convex hull. The bounding box has a pair of mutually orthogonal planar faces. The controller orients one of the faces to be generally perpendicular to the base plane, and simultaneously orients the other of the faces to be generally parallel to the base plane. Advantageously, the dimensioning sensor captures each data point to include data indicative of a length (x), a width (y), and a depth (z) of the object, and the controller locates the length (x) and width (y) coordinates of the reference point of the code.
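Although the disclosure recites no formula, the constrained fit just described admits a compact statement, offered here only as an illustration: holding one face parallel to the base plane fixes the box height $h$ as the extent of the data points along the plane normal, so the minimum-volume search reduces to a one-parameter minimization over the in-plane rotation $\theta$ of the box footprint:

$$
\min_{\theta\in[0,\,\pi/2)} \; h\,\ell(\theta)\,w(\theta),
\qquad
\ell(\theta)=\max_i x_i(\theta)-\min_i x_i(\theta),
\quad
w(\theta)=\max_i y_i(\theta)-\min_i y_i(\theta),
$$

where $(x_i(\theta), y_i(\theta))$ are the coordinates of the extracted data points projected onto the base plane and rotated by $\theta$.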
In accordance with another aspect of this disclosure, a method of estimating dimensions of an object associated with an electro-optically readable code is performed by aiming a handheld device at a scene containing the object supported on a base surface; scanning the scene over a field of view of a scanner supported by the device to obtain a position of a reference point of the code associated with the object; and reading the code. The method is further performed by capturing, in automatic response to the reading of the code, a three-dimensional (3D) point cloud of data points of the scene with a dimensioning sensor supported by the device; clustering the point cloud into data clusters; locating the reference point of the code in one of the data clusters; extracting from the point cloud the data points of the one data cluster belonging to the object; and processing the extracted data points belonging to the object to estimate the dimensions of the object.
Turning now to the drawings, a device 10 for estimating the dimensions of an object is depicted, together with a computer 14. The device 10 is a handheld, portable device having a handle that can be gripped by a user, and a manually actuatable trigger 22. The handheld device 10 is thus held by the user and aimed at a scene containing the object. Although the computer 14 has been illustrated as a desktop computer, it will be understood that the computer 14 could also be a laptop computer, a smartphone, or a tablet. Although the handheld device 10 and the computer 14 have been illustrated as separate units, they can also be integrated into a single unit.
As shown in the drawings, the device 10 supports a scanner, e.g., an electro-optical reader, that is operative for scanning the scene over a field of view to read the code 24 associated with the object and to obtain a position of a reference point P of the code 24. As also shown, the device 10 supports a dimensioning sensor, e.g., a three-dimensional (3D) camera 12, that is operative for capturing a 3D point cloud of data points over its field of view, and a controller that is operatively connected to the scanner and to the camera 12 for processing the captured data points as described below.
Turning now to the flow chart of the method, the handheld device 10 is aimed at the scene containing the main object 30 and the secondary object 32, the trigger 22 is manually actuated, and the scanner scans the scene over its field of view, reads the code 24 associated with the main object 30, and obtains the position of the reference point P of the code 24.
In automatic response to reading the code 24, the camera 12 captures, in step 104, a three-dimensional (3D) point cloud of data points over a field of view of the scene containing the objects 30, 32 and the background 34 on which the objects 30, 32 are positioned. Each data point includes data indicative of a length (x), a width (y), and a depth (z) of the scene. For ease of visualization, the captured point cloud is depicted in the drawings.
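Purely as an illustrative sketch of this automatic triggering, and not as the disclosed implementation, a controller might register a decode callback that captures a point cloud the moment the scanner reports a successful read. The `camera.capture_point_cloud()` call and the callback wiring below are hypothetical APIs; the dimensioning pipeline itself is sketched step by step in the paragraphs that follow.

```python
import numpy as np

def make_on_decode(camera, pipeline):
    """Build a decode callback: when the scanner reports a decoded code 24
    together with the (x, y) position of its reference point P, capture a
    3D point cloud at once (step 104) and hand both to the pipeline."""
    def on_decode(code, ref_xy):
        cloud = camera.capture_point_cloud()   # (N, 3) array of x, y, z
        return code, pipeline(cloud, np.asarray(ref_xy, dtype=float))
    return on_decode
```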
In step 106, a base plane indicative of the base surface 34 is detected from the data points. In a preferred embodiment, the detecting of the base plane is performed by determining from the data points the plane having the largest area in the field of view, e.g., by executing a random sample consensus (RANSAC) algorithm. Details of plane detection by using the RANSAC algorithm can be had by reference to “Plane Detection in Point Cloud Data”, by Yang et al., Technical Report No. 1, Department of Photogrammetry, University of Bonn, Jan. 25, 2010, the entire contents of which are incorporated herein by reference. Once the base plane has been detected, its data points can be removed from the 3D point cloud. This leaves only the data points corresponding to the main object 30 and the secondary object 32 for further processing.
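As a minimal sketch of this base-plane detection, assuming the captured cloud is an (N, 3) NumPy array, a dominant plane can be found with a hand-rolled RANSAC loop; the iteration count and inlier threshold are illustrative choices, not values taken from the disclosure or from Yang et al.

```python
import numpy as np

def detect_base_plane(points, iters=500, threshold=0.01, rng=None):
    """RANSAC-style search for the dominant (largest-inlier) plane in an
    (N, 3) cloud. Returns ((n, d), mask) with the plane n.x + d = 0 and a
    boolean inlier mask; `threshold` is in the cloud's distance units."""
    rng = np.random.default_rng() if rng is None else rng
    best_plane, best_mask = None, None
    for _ in range(iters):
        # Hypothesize a plane through three randomly sampled points.
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-12:                  # degenerate (collinear) sample
            continue
        n /= norm
        d = -n.dot(p0)
        mask = np.abs(points @ n + d) < threshold
        if best_mask is None or mask.sum() > best_mask.sum():
            best_plane, best_mask = (n, d), mask
    return best_plane, best_mask

# Removing the base-plane inliers leaves only the objects' data points:
# plane, mask = detect_base_plane(cloud)
# remaining = cloud[~mask]
```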
In step 108, the remaining data points are clustered, e.g., by Euclidean clustering. Clustering is a well-established technique in which a multitude of data points is organized into groups or data clusters that share some similarity, e.g., a distance or closeness to one another. Once the data points have been clustered, each of the multiple objects 30, 32 in the field of view has been located. In step 110, the main object 30 is extracted by locating the reference point P in one of the data clusters; that data cluster is the one belonging to the main object 30. Thus, the data points of the object of interest, i.e., the main object 30, are extracted, and all the data points of the secondary object 32 are discarded.
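By way of a sketch of how steps 108 and 110 might be realized, assuming SciPy is available, a simple Euclidean clustering can be done by region growing over a KD-tree, after which the cluster nearest the length (x) and width (y) coordinates of the reference point P is kept; the 2 cm neighbor tolerance, the minimum cluster size, and the helper names are assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def euclidean_clusters(points, tol=0.02, min_size=50):
    """Group an (N, 3) cloud into clusters whose member points lie within
    `tol` of some neighbor in the same cluster, via region growing."""
    tree = cKDTree(points)
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        frontier, cluster = [seed], [seed]
        while frontier:
            idx = frontier.pop()
            for nbr in tree.query_ball_point(points[idx], tol):
                if nbr in unvisited:
                    unvisited.discard(nbr)
                    frontier.append(nbr)
                    cluster.append(nbr)
        if len(cluster) >= min_size:
            clusters.append(np.asarray(cluster))
    return clusters

def extract_main_object(points, clusters, ref_xy):
    """Keep the cluster whose points lie closest, in (x, y), to the
    reference point P of the code; all other clusters are discarded."""
    dists = [np.min(np.linalg.norm(points[c][:, :2] - ref_xy, axis=1))
             for c in clusters]
    return points[clusters[int(np.argmin(dists))]]
```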
The extracted data points of the data cluster belonging to the main object 30 are depicted in the drawings. In step 112, the extracted data points are processed to obtain a convex hull, i.e., the smallest convex set that contains all of the extracted data points belonging to the main object 30.
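For the convex hull, assuming SciPy and an (N, 3) NumPy array of the extracted points, the hull can be obtained directly; the helper name is illustrative.

```python
import numpy as np
from scipy.spatial import ConvexHull

def convex_hull_vertices(target):
    """Vertices of the 3D convex hull of the extracted object points; the
    hull is a compact, convex stand-in for the full data cluster."""
    return target[ConvexHull(target).vertices]
```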
In step 114, a bounding box 38 is fitted to enclose the convex hull with a minimum volume. The bounding box 38 has a pair of mutually orthogonal planar faces; one of the faces is oriented to be generally perpendicular to the detected base plane, while the other of the faces is simultaneously oriented to be generally parallel to the base plane. The dimensions of the fitted bounding box 38, i.e., its length, width, and depth, serve as the estimated dimensions of the main object 30, from which its volume can also be obtained.
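One way (an assumption of this sketch, not a recitation of the disclosure) to realize the constrained, minimum-volume fit is to rotate the points so that the base-plane normal becomes the z-axis, take the box height from the extent along that axis, and then search the in-plane rotation using the edge directions of the 2D convex hull, since a minimum-area enclosing rectangle of a convex polygon is always aligned with one of its edges.

```python
import numpy as np
from scipy.spatial import ConvexHull

def fit_bounding_box(points, normal):
    """Fit a minimum-volume box around `points` with one face parallel to
    the base plane (normal `normal`) and the others perpendicular to it."""
    # Build an orthonormal frame (u, v, n) with n along the plane normal.
    n = normal / np.linalg.norm(normal)
    a = np.array([1.0, 0.0, 0.0]) if abs(n[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    u = np.cross(n, a)
    u /= np.linalg.norm(u)
    v = np.cross(n, u)
    pts = points @ np.stack([u, v, n]).T        # coordinates in (u, v, n)
    height = pts[:, 2].max() - pts[:, 2].min()  # extent along the normal
    # Minimum-area footprint rectangle: test each hull edge direction.
    hull2d = pts[ConvexHull(pts[:, :2]).vertices, :2]
    edges = np.diff(np.vstack([hull2d, hull2d[:1]]), axis=0)
    best = None
    for ex, ey in edges / np.linalg.norm(edges, axis=1, keepdims=True):
        rot = np.array([[ex, ey], [-ey, ex]])   # align this edge with x
        proj = hull2d @ rot.T
        span = proj.max(axis=0) - proj.min(axis=0)
        if best is None or span.prod() < best.prod():
            best = span
    length, width = sorted(best, reverse=True)
    return length, width, height

# End-to-end chain of the described steps, under the same assumptions:
# plane, mask = detect_base_plane(cloud)
# clusters = euclidean_clusters(cloud[~mask])
# target = extract_main_object(cloud[~mask], clusters, ref_xy)
# length, width, height = fit_bounding_box(target, normal=plane[0])
```

Searching only the hull-edge directions is exact for the footprint rectangle; a coarse sweep of angles over [0°, 90°) would be a simpler, approximate alternative.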
In the foregoing specification, specific embodiments have been described. However, one of ordinary skill in the art appreciates that various modifications and changes can be made without departing from the scope of the invention as set forth in the claims below. Accordingly, the specification and figures are to be regarded in an illustrative rather than a restrictive sense, and all such modifications are intended to be included within the scope of present teachings.
The benefits, advantages, solutions to problems, and any element(s) that may cause any benefit, advantage, or solution to occur or become more pronounced are not to be construed as critical, required, or essential features or elements of any or all the claims. The invention is defined solely by the appended claims including any amendments made during the pendency of this application and all equivalents of those claims as issued.
Moreover, in this document, relational terms such as first and second, top and bottom, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. The terms “comprises,” “comprising,” “has,” “having,” “includes,” “including,” “contains,” “containing,” or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises, has, includes, contains a list of elements does not include only those elements, but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. An element preceded by “comprises . . . a,” “has . . . a,” “includes . . . a,” or “contains . . . a,” does not, without more constraints, preclude the existence of additional identical elements in the process, method, article, or apparatus that comprises, has, includes, or contains the element. The terms “a” and “an” are defined as one or more unless explicitly stated otherwise herein. The terms “substantially,” “essentially,” “approximately,” “about,” or any other version thereof, are defined as being close to as understood by one of ordinary skill in the art, and in one non-limiting embodiment the term is defined to be within 10%, in another embodiment within 5%, in another embodiment within 1%, and in another embodiment within 0.5%. The term “coupled” as used herein is defined as connected, although not necessarily directly and not necessarily mechanically. A device or structure that is “configured” in a certain way is configured in at least that way, but may also be configured in ways that are not listed.
It will be appreciated that some embodiments may be comprised of one or more generic or specialized processors (or “processing devices”) such as microprocessors, digital signal processors, customized processors, and field programmable gate arrays (FPGAs), and unique stored program instructions (including both software and firmware) that control the one or more processors to implement, in conjunction with certain non-processor circuits, some, most, or all of the functions of the method and/or apparatus described herein. Alternatively, some or all functions could be implemented by a state machine that has no stored program instructions, or in one or more application specific integrated circuits (ASICs), in which each function or some combinations of certain of the functions are implemented as custom logic. Of course, a combination of the two approaches could be used.
Moreover, an embodiment can be implemented as a computer-readable storage medium having computer readable code stored thereon for programming a computer (e.g., comprising a processor) to perform a method as described and claimed herein. Examples of such computer-readable storage mediums include, but are not limited to, a hard disk, a CD-ROM, an optical storage device, a magnetic storage device, a ROM (Read Only Memory), a PROM (Programmable Read Only Memory), an EPROM (Erasable Programmable Read Only Memory), an EEPROM (Electrically Erasable Programmable Read Only Memory) and a Flash memory. Further, it is expected that one of ordinary skill, notwithstanding possibly significant effort and many design choices motivated by, for example, available time, current technology, and economic considerations, when guided by the concepts and principles disclosed herein, will be readily capable of generating such software instructions and programs and ICs with minimal experimentation.
The Abstract of the Disclosure is provided to allow the reader to quickly ascertain the nature of the technical disclosure. It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. In addition, in the foregoing Detailed Description, it can be seen that various features are grouped together in various embodiments for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed embodiments require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus, the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separately claimed subject matter.
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/US2015/055982 | 10/16/2015 | WO | 00 |
Publishing Document | Publishing Date | Country | Kind |
---|---|---|---|
WO2016/089483 | 6/9/2016 | WO | A |
Entry |
---|
International Search Report and Written Opinion for International Patent Application No. PCT/US2017/024847 dated Jul. 7, 2017. |
“Fair Billing with Automatic Dimensioning” pp. 1-4, undated, Copyright Mettler-Toledo International Inc. |
“Swift Dimension” Trademark Omniplanar, Copyright 2014. |
“Plane Detection in Point Cloud Data” dated Jan. 25, 2010 by Michael Ying Yang and Wolfgang Forstner, Technical Report 1, 2010, University of Bonn. |
Brown et al., U.S. Appl. No. 15/078,074, filed Mar. 23, 2016. |
Brown et al., U.S. Appl. No. 15/008,710, filed Jan. 28, 2016. |
Lecking et al., “Localization in a wide range of industrial environments using relative 3D ceiling features,” IEEE, pp. 333-337, Sep. 15, 2008. |
Carreira et al., “Enhanced PCA-based localization using depth maps with missing data,” IEEE, pp. 1-8, Apr. 24, 2013. |
Clayton et al., U.S. Appl. No. 15/358,810, filed Nov. 22, 2016. |
Swope et al., U.S. Appl. No. 15/015,228, filed Feb. 4, 2016. |
Ziang Xie et al., “Multimodal Blending for High-Accuracy Instance Recognition”, 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 2214-2221. |
N.D.F. Campbell et al. “Automatic 3D Object Segmentation in Multiple Views using Volumetric Graph-Cuts”, Journal of Image and Vision Computing, vol. 28, Issue 1, Jan. 2010, pp. 14-25. |
Federico Tombari et al. “Multimodal cue integration through Hypotheses Verification for RGB-D object recognition and 6DOF pose estimation”, IEEE International Conference on Robotics and Automation, Jan. 2013. |
Ajmal S. Mian et al., “Three-Dimensional Model Based Object Recognition and Segmentation in Cluttered Scenes”, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, No. 10, Oct. 2006. |
Gu et al., U.S. Appl. No. 15/242,126, filed Aug. 19, 2016. |
Dubois, M., et al., “A comparison of geometric and energy-based point cloud semantic segmentation methods,” European Conference on Mobile Robots (ECMR), vol., No., pp. 88-93, Sep. 25-27, 2013. |
Lari, Z., et al., “An adaptive approach for segmentation of 3D laser point cloud.” International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol. XXXVIII-5/W12, 2011, ISPRS Calgary 2011 Workshop, Aug. 29-31, 2011, Calgary, Canada. |
Rusu, et al., “Spatial change detection on unorganized point cloud data,” PCL Library, retrieved from the Internet on Aug. 19, 2016 from <http://pointclouds.org/documentation/tutorials/octree_change.php>. |
Tahir, Rabbani, et al., “Segmentation of point clouds using smoothness constraint,” International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences 36.5 (Sep. 2006): 248-253. |
Golovinskiy, Aleksey, et al. “Min-cut based segmentation of point clouds.” Computer Vision Workshops (ICCV Workshops), 2009 IEEE 12th International Conference on IEEE, 2009. |
Douillard, Bertrand, et al. “On the segmentation of 3D LIDAR point clouds.” Robotics and Automation (ICRA), 2011 IEEE International Conference on. IEEE, 2011. |
Puwein, J., et al., “Robust multi-view camera calibration for wide-baseline camera networks,” in IEEE Workshop on Applications of Computer Vision (WACV), Jan. 2011. |
Datta, A., et al., “Accurate camera calibration using iterative refinement of control points,” in Computer Vision Workshops (ICCV Workshops), 2009. |
Olson, Clark F., et al. “Wide-Baseline Stereo Vision for Terrain Mapping” in Machine Vision and Applications, Aug. 2010. |
Rusu, et al., “How to incrementally register pairs of clouds,” PCL Library, retrieved from the Internet on Aug. 22, 2016 from <http://pointclouds.org/documentation/tutorials/pairwise_incremental_registration.php>. |
Zheng et al., U.S. Appl. No. 15/131,856, filed Apr. 18, 2016. |
F.C.A. Groen et al., “The smallest box around a package,” Pattern Recognition, vol. 14, No. 1-6, Jan. 1, 1981, pp. 173-176, XP055237156, GB, ISSN: 0031-3203, DOI: 10.1016/0031-3203(81)90059-5. |
Schnabel et al. “Efficient RANSAC for Point-Cloud Shape Detection”, vol. 0, No. 0, pp. 1-12. |
Buenaposada et al. “Real-time tracking and estimation of plane pose” Proceedings of the ICPR (Aug. 2002) vol. II, IEEE pp. 697-700. |
Fu et al., U.S. Appl. No. 15/385,113, filed Dec. 20, 2016. |
International Search Report and Written Opinion for corresponding International Patent Application No. PCT/US2015/055982 dated Jan. 4, 2016. |
Number | Date | Country | |
---|---|---|---|
20170337704 A1 | Nov 2017 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 14561329 | Dec 2014 | US |
Child | 15533294 | US |