This invention relates to vision systems that generate three-dimensional (3D) dimensions for objects in a scene, and more particularly to 3D vision systems adapted to operate on a moving line of differing-sized, generally rectangular objects.
Machine vision systems (also termed herein, “vision systems”) that perform measurement, inspection, alignment of objects and/or decoding of symbology (e.g. bar codes—also termed “ID Codes”) are used in a wide range of applications and industries. These systems are based around the use of an image sensor, which acquires images (typically grayscale or color, and in one, two or three dimensions) of the subject or object, and processes these acquired images using an on-board or interconnected vision system processor. The processor generally includes both processing hardware and non-transitory computer-readable program instructions that perform one or more vision system processes to generate a desired output based upon the image's processed information. This image information is typically provided within an array of image pixels each having various colors and/or intensities.
As described above, one or more vision system camera(s) can be arranged acquire two-dimensional (2D) or three-dimensional (3D) images of objects in an imaged scene. 2D images are typically characterized as pixels with an x and y component within an overall N×M image array (often defined by the pixel array of the camera image sensor). Where images are acquired in 3D, there is a height or z-axis component, in addition to the x and y components. 3D image data can be acquired using a variety of mechanisms/techniques, including triangulation of stereoscopic cameras, LiDAR, time-of-flight sensors and (e.g.) laser displacement profiling.
A common use for vision systems is to track and sort objects moving along a line (e.g. a conveyor) in manufacturing and logistics operations. The vision system camera(s) can be positioned over the line at an appropriate viewing angle to acquire any expected IDs on respective objects as they each move through the field of view. The focal distance of the reader with respect to the object can vary, depending on the placement of the reader with respect to the line and the size of the object.
In various logistics tasks, determining the size and relative shape, including the maximum thickness or height, of parcels (e.g. relatively cuboidal/rectangular-sided boxes, jiffy mailers, polybags, envelopes, etc.) on a conveyor is desirable. Such dimensions are used to provide proper handling as they are sent down the conveyor to further processes. However, it is often challenging to obtain accurate measurements where the 3D camera(s) overlying the conveyor experience(s) noise.
This invention overcomes disadvantages of the prior art by providing a system and method that employs a 3D vision system to accurately measure the length (sometimes termed “depth”), width and height of a typically cuboidal object (e.g. boxes, jiffy mailers, polybags, etc.) in the field of view in the presence of noise. Existing methods, such as a 3D blob tool, allow the detection of which points of an acquired 3D point cloud belong to the object and the calculation of their compact bounding box. Owing to the noisiness, coarse resolution, and other limitations of the imperfect 3D imagers, direct use of the dimensions of this bounding box proves unsuitable for meeting certain users' tight accuracy requirements, which can be in the range of 2.5 mm, or less. The invention consists of methods to compute refined estimates of the length, width and height dimensions of objects of rectangular footprint given the bounding box and the 3D points it contains. The system and method can employ statistical measures that are applied to the boundary points or faces of the objects. In addition, the system and method can detect whether an object (box) top is relatively flat, and/or how much it swells, thereby providing a useful indication (i.e. box-top bulginess) of which statistics would produce more accurate box dimensions. The system and method further include an intuitive and straightforward user interface for setting up the 3D vision system camera with respect to a moving conveyor so as to ensure that the field of view is adequately sized and other camera parameters are sufficient to accurately image objects having a range of expected dimensions.
In an illustrative embodiment, a system and method for estimating dimensions of an approximately cuboidal object from a 3D image of the object, acquired by an image sensor of the vision system processor, is provided. An identification module, associated with the vision system processor, automatically identifies a 3D region in the 3D image that contains the cuboidal object. A selection module, associated with the vision system processor, automatically selects 3D image data from the 3D image that corresponds to approximate faces or boundaries of the cuboidal object. An analysis module statistically analyzes, and generates statistics for, the selected 3D image data that correspond to approximate cuboidal object faces or boundaries. A refinement module, responsive to the analysis module, then chooses statistics that correspond to improved cuboidal dimensions from among cuboidal object length, width and height, the improved cuboidal dimensions being provided as dimensions for the object. Illustratively, the identification module identifies the 3D region using a 3D connected component analysis and/or the selection module selects the 3D region by testing the 3D image data using the 3D connected component analysis. The 3D connected component analysis can be constructed and arranged to identify groups of voxels of the 3D image that are adjacent to each other and that excludes, from each one of the groups, any voxels whose distance from a respective one of the groups exceeds an adjacency threshold. A length dimension and a width dimension of the bounding box can be refined using at least one of a points statistical analysis (PSA) and a boundary statistical analysis (BSA). The refinement module can use a least squares surface fitting process to refine a height dimension of the bounding box. Illustratively, a convexity process measures a degree of a convex shape along at least one surface of the object. The convexity process is constructed and arranged to determine a bulge in height along the at least one surface of the object. Additionally, the refinement process includes a height from bulginess process that refines the height dimension based on a bulginess estimate for the object. The convexity process can be constructed and arranged to (a) fit a plane with respect to boundary edges in the 3D image of the object that correspond to the top surface, (b) obtain a tallest point on the top surface, (c) obtain a tallest point on the boundary edges, and (d) determine a measure of convexity of the top surface using the relative tallest points. The refinement module can be constructed and arranged to adjust the improved cuboidal dimensions based on the determined convexity. Illustratively, the at least one surface is a top surface.
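By way of a non-limiting software sketch (not the claimed implementation), the grouping of adjacent voxels under an adjacency threshold can be approximated as follows; the quantization of points to a voxel grid whose cell size equals the adjacency threshold, and all function and parameter names, are illustrative assumptions.

```python
# Minimal sketch (assumption: points are quantized to a voxel grid whose cell
# size equals the adjacency threshold, so "adjacent" means touching cells).
import numpy as np
from scipy import ndimage

def connected_voxel_groups(points, adjacency_threshold):
    """Group 3D points into connected components of occupied voxels.

    points: (N, 3) array of x, y, z coordinates.
    adjacency_threshold: voxel edge length; points farther apart than this
    along every axis fall into non-touching cells and thus separate groups.
    Returns a list of point-index arrays, one per group, largest group first.
    """
    vox = np.floor(points / adjacency_threshold).astype(int)
    vox -= vox.min(axis=0)                      # shift to non-negative indices
    grid = np.zeros(vox.max(axis=0) + 1, dtype=bool)
    grid[tuple(vox.T)] = True                   # occupancy grid
    structure = np.ones((3, 3, 3), dtype=bool)  # 26-connectivity
    labels, n = ndimage.label(grid, structure=structure)
    point_labels = labels[tuple(vox.T)]         # label of each point's voxel
    groups = [np.flatnonzero(point_labels == k) for k in range(1, n + 1)]
    return sorted(groups, key=len, reverse=True)
```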
In exemplary embodiments, a user interface is provided, which displays a plurality of interface screens for setup and runtime operation of the system. The object therein moves along a conveyor surface with respect to a field of view of the image sensor. The displays for setup can include an application details display for determining optimal distance between the image sensor and the conveyor surface based upon at least one of: parameters of a camera assembly having the image sensor, speed of the conveyor surface, width of the conveyor surface, and a range of minimum and maximum size measurements for the object. Illustratively, the displays for setup can include a baseline display that determines a reference measurement based upon acquisition of a 3D image of the conveyor surface by the image sensor. The displays for setup can also include an AutoTune display that operates a process for determining measurements of a plurality of objects moving through the field of view of the image sensor on the conveyor surface and thereby refines estimation of object dimensions by the system.
The invention description below refers to the accompanying drawings, of which:
The camera 110 includes an internal (to its housing) and/or external vision system process(or) that receives image data 141 from the camera 110, and performs various vision system tasks upon the data in accordance with the system and method herein. The process(or) 140 includes underlying processes/processors or functional modules, including a set of vision system tools 142, which can comprise a variety of standard and custom tools that identify and analyze features in image data, including, but not limited to, edge detectors, blob tools, pattern recognition tools, deep learning networks, etc. The vision system process(or) can further include a dimensioning process(or) 144 in accordance with the system and method. This process(or) 144 performs various analysis and measurement tasks on features identified in the 3D image data so as to determine the size and orientation of objects on the conveyor—as described in detail below. A user interface process(or) 146 is associated with the dimensioning process(or) 144, and can be part of the overall vision system processor, or can be provided on a separate computing device 150, such as a server (e.g. cloud-based or local), PC, laptop, tablet and/or smartphone. The computing device 150 is depicted (by way of non-limiting example) with a conventional display or touchscreen 152, keyboard 154 and mouse 156, which collectively provide a graphical user interface (GUI) functionality. A variety of interface devices and/or form factors can be provided in alternate implementations of the device 150. The GUI can be driven, in part, by a web browser application 158, which resides over a device operating system and displays web pages with control and data information from the process(or) 140 in accordance with an exemplary arrangement herein.
Note that the process(or) 140 can reside fully or partially on-board the housing of the camera assembly 110 and various process modules 142, 144 and 146 can be instantiated entirely or partially in either the on-board process(or) 140 or the remote computing device 150 as appropriate. In an exemplary embodiment, all vision system and interface functions can be instantiated on the on-board process(or) 140, and the computing device 150 can be employed primarily for training, monitoring and related operations with interface web pages (e.g. HTML) generated by the on-board process(or) 140 and transmitted to the computing device via a wired or wireless network link 160. Alternatively, all or part of the process(or) 140 can reside in the computing device 150. The link 160 can provide vision system results (e.g. object/package dimensions represented by width W, length L and height H) 162 to a downstream utilization device or process. Such device/process can use results 162 to handle objects/packages—for example gating the conveyor 130 to direct objects/packages to differing destinations based on package size.
The conveyor 130 can include various sensors, such as a presence detector 170 to notify the process(or) 140 that an object has passed into the field of view FOV, and thereby trigger image acquisition by the camera assembly 110 with appropriate timing. Additionally, the conveyor 130 can include an encoder or other motion-measurement device that (optionally) transmits general speed and/or motion data/information 172 to the process(or) 140 that can be used to control operations in a manner clear to those of skill.
The operation of the system and method herein, as implemented by the camera assembly 110, with process(or) 140 and computing device 150 is described with reference to the procedure 200 of
Once setup is complete, the system can be operated in runtime, in which objects/packages are driven along the conveyor 130 (or other mechanism for presenting a stream of objects to the FOV, for example, moving carriages), and 3D images are acquired by the camera assembly 110 while such objects/packages each reside within the FOV (step 220). In step 230, the dimensioning process(or) 144, in combination with other vision tools 142, is used to determine the approximate length, width and height of the object from one or more acquired 3D images thereof. These dimensions can be characterized in terms of a local X, Y and Z coordinate system with respect to the individual object/package. The axes can follow the edges/corners of the object's generally cuboidal shape as shown in
A. Overall Procedure
The dimensioning process(or) employs various statistical procedures to generate dimensions relative to 3D-imaged objects (e.g. packages shown herein) in both training and runtime. The procedures can be used variously in combination, or individually. They include (a) Points Statistical Analysis (PSA), (b) Boundary Statistical Analysis (BSA), (c) Height from Least Squares Surface Fitting and (d) Height from Bulginess. The first two procedures, PSA (a) and BSA (b), are independent alternatives for refining the length and width dimensions of a bounding box for the object. The third procedure (c) is for refining the height dimension, and can be applied after applying either PSA or BSA. The fourth procedure (d) also refines the height dimension based on a bulginess estimate for the object. Notably, each of these procedures can be performed exclusively using acquired 3D point clouds of the object, free of any 2D images. The elements of the procedures (a)-(d) include (i) projections, (ii) statistics, (iii) histogram analysis, (iv) probability density estimation, and (v) regularized least squares fitting of planes.
With reference to the procedure 300 of
In
B. Statistical Analysis
The above-described statistical analysis procedures (a)-(d), employed by the dimensioning process(or) in accordance with the overall procedures 300 and 400, are described further below, with reference to corresponding
1. Points Statistical Analysis (PSA)
In a noisy point cloud, the 3D points relating to an object are often spread over a region larger than the true extent of the object. PSA constructs histograms of values based on the locations of the points and, by their analysis, estimates the likely true extent of the object. Its steps are shown and described in the procedure 500 of
In step 510, the acquired 3D points of the object are projected onto a reference plane representing the surface on which the object rests (e.g. the conveyor), thereby obtaining an array of 2D points. The dimensions of the rectangle that best fits these 2D points are desired. Then in step 520, the two orthogonal directions in the reference plane along which will lie the sides of the desired rectangle are determined. This can be performed more particularly as follows: (a) starting with an orthogonal pair of directions along which lie sides of the input initial bounding box, consider a set of orthogonal pairs of directions by applying to the starting pair a sequence of small rotations in the reference plane; (b) along the directions in all these pairs, compute the 1D projections of the 2D points; and (c) identify the best orthogonal pair as the one in which the sum of the spans of its 1D projections is minimum.
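A non-limiting sketch of steps (a)-(c) follows, assuming the object's 3D points have already been projected to 2D and the initial bounding box orientation is supplied as an angle; the rotation sweep and step size are assumed values, not parameters drawn from the specification.

```python
# Illustrative sketch of searching for the orthogonal direction pair whose
# 1D projection spans have the smallest sum (assumed sweep of +/-5 degrees
# in 0.25-degree steps; other schedules are possible).
import numpy as np

def best_orthogonal_directions(points_2d, start_angle_rad,
                               sweep_rad=np.radians(5.0),
                               step_rad=np.radians(0.25)):
    """Return the angle whose axis pair minimizes the sum of projection spans."""
    candidates = start_angle_rad + np.arange(-sweep_rad, sweep_rad + step_rad,
                                             step_rad)
    best_angle, best_cost = start_angle_rad, np.inf
    for theta in candidates:
        u = np.array([np.cos(theta), np.sin(theta)])    # first direction
        v = np.array([-np.sin(theta), np.cos(theta)])   # orthogonal direction
        proj_u = points_2d @ u
        proj_v = points_2d @ v
        cost = np.ptp(proj_u) + np.ptp(proj_v)          # sum of the two spans
        if cost < best_cost:
            best_angle, best_cost = theta, cost
    return best_angle
```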
In step 530, the procedure 500 marks three overlapping rectangular sub-regions that extend some distance into the interior of the footprint along each of the four sides of the initial bounding rectangle that bounds the footprint. In step 540, the procedure then constructs a histogram of the values of one of the 1D projections of the points lying in each one of these rectangular sub-regions. This step obtains a total of twelve histograms.
In step 550 of the PSA procedure 500, the histograms are each separately analyzed to estimate the location of one of the sides of the footprint. The analysis includes: (a) calculating thresholds based on the frequencies of histogram bins; (b) locating a local maximum in the histogram that also exceeds a threshold; (c) locating the bin whose frequency is half-down from that of the local maximum; (d) calculating the slope of the histogram at the bin of half-down frequency; (e) localizing, to sub-bin precision, the point where the frequency crosses the half-down point, and (f) declaring this cross-over point to mark the transition from inside the object to outside, thus obtaining a candidate estimate of the location of one of the sides of the footprint of the object.
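The half-down localization of steps (a)-(f) can be sketched, in a non-limiting manner, as follows; the histogram is assumed to be ordered from the object interior toward the side being estimated, and the threshold fraction is an assumed parameter.

```python
# Illustrative sketch: locate, to sub-bin precision, where a histogram of 1D
# projections drops to half of a qualifying local maximum.
import numpy as np

def half_down_crossing(counts, bin_centers, threshold_fraction=0.2):
    """Return the coordinate where frequency falls to half of a local peak."""
    counts = np.asarray(counts, dtype=float)
    threshold = threshold_fraction * counts.max()
    # (a)-(b) first interior local maximum that also exceeds the threshold
    peak = next(i for i in range(1, len(counts) - 1)
                if counts[i] >= counts[i - 1]
                and counts[i] >= counts[i + 1]
                and counts[i] > threshold)
    half = 0.5 * counts[peak]
    # (c) first bin beyond the peak whose frequency drops below half-down
    below = next(i for i in range(peak + 1, len(counts)) if counts[i] < half)
    # (d)-(e) the local slope gives the sub-bin crossing location
    c0, c1 = counts[below - 1], counts[below]
    frac = (c0 - half) / (c0 - c1)
    # (f) this cross-over marks the inside-to-outside transition
    return bin_centers[below - 1] + frac * (bin_centers[below] - bin_centers[below - 1])
```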
In step 560, for each side of the desired rectangle, the above-described analysis steps (a)-(f) yield three (3) candidate estimates for the location of that side. From those three, the outermost candidate estimate is selected so as to mitigate the possible effect of unwarranted erosion of portions of footprint sides due to imperfections in 3D imaging. Differencing the location estimates of opposite sides of the footprint yields the desired refined estimates of length and width dimensions of the object (step 570). This result is then delivered to the dimensioning process(or) 144.
2. Boundary Statistical Analysis (BSA)
With reference to the procedure 600 of
In step 630, the procedure 600 finds the rectangle with the minimum area among all possible rectangles that enclose the 2D boundary points. It then obtains the pair of orthogonal directions that correspond to sides of the minimum area rectangle. The procedure 600 constructs an estimate of the probability density of the locations of the sides of the rectangle representing the object's footprint (step 640). It regards the orthogonal directions identified above as being divided into bins, each with an associated frequency that is initially zero. Along each orthogonal direction, a set of 1D points is obtained by projecting the 2D boundary points. Then, for each 1D point, the bins lying within a neighborhood of a chosen size are identified (step 650). A determination is made whether each 1D point makes a contribution (decision step 660) to each neighboring bin. If yes, the frequency value of each such neighboring bin is incremented by one (1) in step 662, and step 650 is repeated. After all 1D points have been considered (decision step 660), the frequency of each bin is proportional to an estimate of the probability that the boundary of the footprint falls in that bin.
When the incrementing of values yields a result in which all 1D points have contributed, then for each of the two arrays of frequencies, one per direction, the procedure 600 computes the associated cumulative distribution (step 670). This computation can proceed as follows: (a) find the locations closest to the first and last occupied bins where the frequency is a local maximum; (b) form two sequences of partial sums of frequencies from the first and last bins up to their respective nearest local maximum; and (c) normalize these two sequences by the final sum value of each sequence, obtaining values monotonically increasing from 0 to 1 as one moves in from either end of the array of bins.
Separately, according to step 680, for each orthogonal direction, the procedure finds, to sub-bin precision, the two locations where its cumulative distribution crosses a user-specified cutoff fraction, one near the first occupied bin and the other near the last. The distance between these two locations yields the desired estimate of one of the dimensions of the object. The cutoff points along the other orthogonal direction thereby yield the other dimension (step 690). Thus, both length and width estimates are obtained, and this result is provided to the dimensioning process(or) 144.
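A non-limiting sketch of the BSA estimate along one orthogonal direction follows; the bin width, neighborhood radius and cutoff fraction are assumed parameters, and the nearest local maximum is found by a simple inward walk from each occupied end bin.

```python
# Illustrative sketch of one-direction BSA: neighborhood-binned frequency,
# end-anchored cumulative distributions, and sub-bin cutoff crossings.
import numpy as np

def bsa_extent(proj_1d, bin_width=1.0, neighborhood=2, cutoff=0.5):
    """Estimate one footprint dimension from 1D projections of boundary points."""
    lo = proj_1d.min()
    n_bins = max(int(np.ceil((proj_1d.max() - lo) / bin_width)) + 1, 1)
    freq = np.zeros(n_bins)
    idx = np.clip(((proj_1d - lo) / bin_width).astype(int), 0, n_bins - 1)
    for i in idx:                      # each point contributes to nearby bins
        freq[max(0, i - neighborhood):min(n_bins, i + neighborhood + 1)] += 1

    occupied = np.flatnonzero(freq > 0)
    first, last = int(occupied[0]), int(occupied[-1])

    def walk_to_peak(start, step):
        """Nearest local maximum, moving inward from an occupied end bin."""
        i = start
        while 0 <= i + step < n_bins and freq[i + step] >= freq[i]:
            i += step
        return i

    def cutoff_location(start, step):
        """Sub-bin location where the end-anchored cumulative sum crosses cutoff."""
        peak = walk_to_peak(start, step)
        bins = list(range(start, peak + step, step))   # start ... peak inclusive
        cdf = np.cumsum(freq[bins])
        cdf = cdf / cdf[-1]
        k = int(np.searchsorted(cdf, cutoff))
        prev = cdf[k - 1] if k > 0 else 0.0
        frac = (cutoff - prev) / (cdf[k] - prev) if cdf[k] > prev else 0.0
        return lo + (bins[0] + step * (k - 1 + frac) + 0.5) * bin_width

    left = cutoff_location(first, +1)
    right = cutoff_location(last, -1)
    return abs(right - left)
```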
3. Height from Least Squares Surface Fitting
After determining length and width dimensions using PSA and/or BSA, described above, the relative height of the object/bounding box is determined. One exemplary procedure 700, described in
The procedure 700 begins in step 710, using (e.g.) a 3D blob tool to output a rectangle corresponding to the bottom face of the object's bounding box. In step 720 this rectangular domain is tessellated by an array of square sub-domains. With the intent of fitting a function to the 2D domain, and with the form of the function being planar in each sub-domain, the procedure sets up a system of simultaneous linear equations in step 730. The equations seek the minimum of the sum of the squares of the distances of the fitted surface to the 3D points and enforce continuity at the edges of the sub-domains. Then, a parameter is provided in step 740 to balance between minimizing the distance from the measured 3D points to the surface, and minimizing the surface curvature. The procedure 700 then invokes an iterative solver of the linear system in step 750. Upon convergence, the maximum residual error of the surface fit is computed in this step.
If the residual error is larger than a specified (fixed or user-input) threshold (decision step 760), indicating an object whose top surface has high curvature, the procedure 700 reduces the size of the square sub-domains (step 770). The procedure then repeats steps 730-760. Once the maximum residual error falls beneath the threshold (decision step 760), the fitting concludes (step 780) and the procedure reports the maximum height attained by the fitted surface as the object's height dimension estimate to the dimensioning process(or) 144.
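The balance between data fidelity and surface smoothness can be illustrated with the following non-limiting sketch. It is a simplified variant that fits a height value at each node of a regular grid under a discrete-Laplacian curvature penalty, rather than the piecewise-planar sub-domains with continuity constraints described above; the grid size, smoothing weight and all names are assumptions.

```python
# Simplified illustrative variant of the regularized least-squares surface fit.
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import lsqr

def fit_top_surface(points, rect_min, rect_max, grid=(32, 32), lam=1.0):
    """Fit node heights over the footprint rectangle; return (heights, max residual).

    points: (N, 3) array; columns 0-1 are in-plane coordinates within the
    bounding rectangle, column 2 is height above the reference plane
    (an assumption about how the data has been preprocessed).
    """
    nx, ny = grid
    xs = (points[:, 0] - rect_min[0]) / (rect_max[0] - rect_min[0]) * (nx - 1)
    ys = (points[:, 1] - rect_min[1]) / (rect_max[1] - rect_min[1]) * (ny - 1)
    ix = np.clip(np.round(xs), 0, nx - 1).astype(int)
    iy = np.clip(np.round(ys), 0, ny - 1).astype(int)
    node = ix * ny + iy                              # nearest grid node per point

    # Data term: height at the nearest node should match each measured Z.
    n_pts, n_nodes = len(points), nx * ny
    A_data = sparse.csr_matrix((np.ones(n_pts), (np.arange(n_pts), node)),
                               shape=(n_pts, n_nodes))
    b_data = points[:, 2]

    # Curvature term: 4-neighbor discrete Laplacian should be near zero.
    rows, cols, vals = [], [], []
    r = 0
    for i in range(1, nx - 1):
        for j in range(1, ny - 1):
            c = i * ny + j
            for cc, v in [(c, 4.0), (c - ny, -1.0), (c + ny, -1.0),
                          (c - 1, -1.0), (c + 1, -1.0)]:
                rows.append(r); cols.append(cc); vals.append(lam * v)
            r += 1
    A_smooth = sparse.csr_matrix((vals, (rows, cols)), shape=(r, n_nodes))

    A = sparse.vstack([A_data, A_smooth])
    b = np.concatenate([b_data, np.zeros(r)])
    z = lsqr(A, b)[0]                                # iterative least-squares solve
    residual = np.max(np.abs(A_data @ z - b_data))   # worst fit error at any point
    return z.reshape(nx, ny), residual
```

The residual-threshold loop of steps 760-770 would wrap such a fit, reducing the cell size and re-solving until the maximum residual falls beneath the threshold.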
4. Measuring Box Top Bulginess
The procedure then determines the measurement of convexity by using the voxel peak Z (tallest point) on the top surface versus the boundary peak Z in step 850. This procedure includes the following steps: (a) compute the distance (termed dis1) from the voxel peak to the boundary plane; (b) compute the Z distance (termed dis2) from the voxel peak's Z to the boundary peak Z; (c) compute the bulginess value using (e.g.) a fuzzy inference system (FIS) with some fuzzy rules, which can be characterized as:
(i) if dis1 is Small and dis2 is Small then box top's bulge is Small;
(ii) if dis1 is Large and dis2 is Large then box top's bulge is Large; and
(iii) if dis1 is Large and dis2 is Small then box top's bulge is Large;
(d) for each input variable (dis1 and dis2), define two fuzzy sets Small and Large with membership functions respectively formulated by Z-shaped and S-shaped functions. For the output variable bulge, define two fuzzy sets Small and Large with linear membership functions. The maxima-based defuzzification result of the FIS gives the value of bulginess in the range from 0 to 1. This value is used to adjust the overall height dimension (step 860), which is reported as results by the dimensioning process(or) 144.
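A non-limiting sketch of such a fuzzy inference follows; the membership breakpoints and the mean-of-maxima defuzzification are assumptions, while the Z-shaped/S-shaped input sets, the three rules and the 0-to-1 output range follow the description above.

```python
# Minimal sketch of the bulginess fuzzy inference (breakpoints are assumptions).
import numpy as np

def zmf(x, a, b):
    """Z-shaped membership: 1 below a, 0 above b, smooth spline in between."""
    if x <= a: return 1.0
    if x >= b: return 0.0
    m = (a + b) / 2.0
    return 1 - 2 * ((x - a) / (b - a)) ** 2 if x <= m else 2 * ((x - b) / (b - a)) ** 2

def smf(x, a, b):
    """S-shaped membership: mirror image of the Z-shaped function."""
    return 1.0 - zmf(x, a, b)

def bulginess(dis1, dis2, d1_range=(2.0, 15.0), d2_range=(2.0, 15.0)):
    d1_small, d1_large = zmf(dis1, *d1_range), smf(dis1, *d1_range)
    d2_small, d2_large = zmf(dis2, *d2_range), smf(dis2, *d2_range)
    # Rule strengths (AND = min):
    w_small = min(d1_small, d2_small)                   # rule (i)
    w_large = max(min(d1_large, d2_large),              # rule (ii)
                  min(d1_large, d2_small))              # rule (iii)
    # Linear output memberships on [0, 1], clipped by rule strength,
    # aggregated by max, then defuzzified by the mean of the maxima.
    y = np.linspace(0.0, 1.0, 101)
    agg = np.maximum(np.minimum(1.0 - y, w_small), np.minimum(y, w_large))
    maxima = y[agg >= agg.max() - 1e-9]
    return float(maxima.mean())
```

For example, with the assumed breakpoints, bulginess(dis1=12.0, dis2=3.0) fires rule (iii) strongly and returns roughly 0.95, indicating a markedly bulged box top.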
5. Adapted Bounding Box Height Computation
In an exemplary implementation, an adapted bounding box height computation can be employed to compute the height of the object/bounding box. The following sub-steps can be employed: (a) compute the first object/box top surface Z value as follows:
(i) compute the histogram of Z values of the surface voxels;
(ii) search from the high end of the histogram to locate the first segment consisting of non-empty consecutive bins and including a sufficient number of voxels; and
(iii) identify the maximum peak inside this segment and use the peak's corresponding location as the first Z value of the object/box top surface;
(b) Compute the second object/box top surface candidate Z as follows:
(i) perform patch segmentation on surface voxels;
(ii) identify the largest-size patch and locate its peak voxel that has the largest Z value; and
(iii) collect the (e.g.) 15×15 nearest neighbors of the peak voxel and compute their majority Z value using histogram analysis;
(c) compute the combined object/box top surface Z by a weighted average with the following equation:
Z=first candidate Z*(1−bulginess)+second candidate Z*bulginess;
and, (d) determine the object/box height using the Z distance from the combined Z to the box base which is aligned with workspace's X-Y plane.
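A non-limiting sketch of this computation follows; the bin width and the threshold for a "sufficient number of voxels" are assumptions, and the second candidate Z (from patch segmentation) is taken as an input rather than re-derived here.

```python
# Illustrative sketch of the adapted bounding box height computation.
import numpy as np

def first_candidate_z(surface_z, bin_width=2.0, min_voxels=30):
    """Peak Z inside the highest run of non-empty bins containing enough voxels."""
    counts, edges = np.histogram(
        surface_z, bins=int(np.ceil(np.ptp(surface_z) / bin_width)) + 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    hi = len(counts)
    while hi > 0:
        while hi > 0 and counts[hi - 1] == 0:            # skip empty bins downward
            hi -= 1
        lo = hi
        while lo > 0 and counts[lo - 1] > 0:             # extent of the non-empty run
            lo -= 1
        if counts[lo:hi].sum() >= min_voxels:            # sufficient voxels in the run
            peak = lo + int(np.argmax(counts[lo:hi]))    # maximum peak inside the run
            return float(centers[peak])
        hi = lo                                          # keep searching downward
    return float(surface_z.max())                        # fallback

def box_height(first_z, second_z, bulginess, base_z=0.0):
    """Weighted combination from the specification, measured from the box base."""
    combined_z = first_z * (1.0 - bulginess) + second_z * bulginess
    return combined_z - base_z
```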
Setup and operation of the vision system herein can be accomplished using the user interface process(or) 146 and linked computing device 150, with its various display and interface components 152, 154 and 156. At setup, the display 152 provides the user with various GUI screens that provide information and prompt the user for inputs. Notably, the GUI delivers interactive data to the user in relatively straightforward terminology that is adapted to the user (e.g. a logistics manager), rather than a vision system expert. The GUI presents a graphical representation that attempts to mimic the user's environment in which the vision system is employed. The GUI, thereby, assists and guides the user in the setup of the vision system, in part, by showing the limitations of the actual environment (conveyor width, camera parameters, camera distance, etc.) and directing the user where to locate and mount system components, such as the camera(s) and triggering device(s) (e.g. photodetectors).
The Application Details display 1000 (
With reference to the exemplary display 1001 of
With reference to the exemplary display 1011 of
Once all parameters are set, the user clicks the Next button 1090 to advance to the next step, the baseline procedure, which is shown in display 1100 of
An exemplary version of a baseline screen 1100 (
With reference to
Referring briefly to the display 1261 of
With the understanding of these criteria (display 1261), the user enters display 1270 of
In
The setup procedure follows the device mounting procedure, and is shown in the display 1278. The right hand window 1279 provides operational (e.g. toggle switch) settings that the user can specify based upon the desired physical setup of the system, which can include, but are not limited to, (a) the direction of conveyor travel 1279a, (b) the device orientation 1279b, (c) trigger position 1279c, (d) camera front or back trigger 1279d and (e) object leading or trailing edge trigger. Additional settings (not shown) can be used to direct the destination for image data or results. Each toggle affects a representation in the left hand window 1280. For example, the depicted position of toggle 1279a causes the object 1281 to move down the conveyor 1256, as represented by dashed line 1282. Switching the toggle switch 1279a causes the object movement to be depicted in the opposite direction. This ensures that the system expects an object to enter the scene from a given direction, and affects when triggers will cause acquisition to occur. Likewise, the toggle 1279 changes the orientation of the camera. This generally affects whether an image is acquired upright or inverted in the field of view, but can affect other aspects of image acquisition and analysis by the vision system process(or). As shown, the representative wire leads 1283 are on the right side of the camera 1284. In
As shown in
In
Having completed the setup procedure 1267, the user can now enter the trigger alignment procedure 1268, as shown in the GUI display 1287 of
The display 1293 of
Note that the various automated procedures carried out by the system can be performed employing the above-described vision system processes operating on cuboidal objects via statistical analysis, and/or by other techniques known to those of skill. Once adjustment of measurement area and other environmental parameters is concluded, the user can click the Next button 1250 and move to the optimize stage and associated screen(s) 1300, shown in
The optimize screen carries out an AutoTune function as shown generally in GUI display screen 1300 (
In operation, the AutoTune GUI screen 1300 prompts the user to conduct a test run, typically operating the conveyor at the specified speed and directing cuboidal objects having a plurality of sizes, shapes and/or bulginess (characteristics), one-by-one along the conveyor. As each object is placed, the user clicks the AutoTune button 1310. Buttons 1312 and 1314, respectively, allow the user to start the process over or discard data for a given test object. The objects can vary widely in characteristics within the overall min/max range(s) specified by the user during the initial Application Details phase (screen 1000 in
After the first object (Box #1) is tuned, the display screen 1300 reenables the buttons as shown in
In applying the AutoTune procedure, the user obtains an overall optimal exposure. Note that the final selected optimal exposure is calculated as the average of two procedures/techniques, the exposure-with-maximum-points procedure and the exposure using ⅓ of the plateau region procedure, where each procedure's result is itself an average of the optimal exposures computed over all the objects/boxes provided by the user. If the user is not satisfied with this value, he/she can also overwrite this value with a different 3D exposure using the override input block 1330. Having described the user interface associated with the AutoTune procedure, a more detailed description of the underlying computational procedures and techniques is now provided.
A. Overall Optimal Exposure for Multiple Objects/Boxes
Each object/box can produce a different optimal exposure, and it is desirable to derive a trade-off that provides the global/overall optimal exposure(s). Some candidate computational procedures/techniques include: averaging all objects (assuming every object is approximately equal in shape/size), taking the maximum exposure of all objects (trying to capture the darkest object), and taking the minimum exposure of all objects (trying to capture the most reflective/lightest object). A more flexible procedure/technique, however, is to tie these procedures together with a mathematical representation. In this exemplary arrangement, a well-known Gaussian function is employed to aggregate all computational procedures. The Gaussian function is defined below:
where x is the exposure value for the various objects/boxes, and μ is the center point, defined as a function of the biasing setting (0-100). If biasing is smaller than 50 (biasing toward the lighter object or smaller exposure values), μ is the minimum of the exposures. If biasing is equal to or larger than 50 (biasing toward the darker object or larger exposure values), μ is the maximum of the exposures. σ is the variance of the Gaussian function, defined as biasing+0.01 if biasing is smaller than 50, and 100-biasing+0.01 if biasing is equal to or larger than 50.
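A non-limiting sketch of how such a Gaussian weighting could aggregate the per-box exposures follows; the standard form exp(-(x-μ)²/(2σ²)) and the weighted-average combination are assumptions consistent with, but not stated explicitly in, the description above.

```python
# Minimal sketch of Gaussian-weighted aggregation of per-box optimal exposures.
import numpy as np

def overall_exposure(exposures, biasing):
    """Aggregate per-box optimal exposures into one global exposure.

    exposures: iterable of per-object optimal exposure values.
    biasing:   0..100 setting; <50 biases toward lighter objects (smaller
               exposures), >=50 biases toward darker objects (larger exposures).
    """
    x = np.asarray(exposures, dtype=float)
    if biasing < 50:
        mu, sigma = x.min(), biasing + 0.01
    else:
        mu, sigma = x.max(), (100 - biasing) + 0.01
    # The text terms sigma the "variance"; it is used here as a spread parameter.
    weights = np.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2))
    return float(np.sum(weights * x) / np.sum(weights))
```

For example, overall_exposure([8.0, 12.0, 20.0], biasing=75) centers the weighting on the largest exposure, biasing the result toward the darkest box.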
B. AutoTune Blob Tool Parameter
Besides simplifying the acquisition of the point cloud by automatically determining the 3D exposure, there are other (secondary) tool parameters that are typically difficult for the user to select. A particularly significant secondary tool parameter is one or more of the blob tool parameter(s), more specifically, the voxel size (also termed “voxelSize”). This parameter is roughly determined by the working distance of the object to the image sensor, and can also be determined by the object/box type, color and/or material. To perform AutoTune on this parameter, the exemplary arrangement employs two datasets to evaluate the correlation between the voxelSize and the blob tool performance, i.e., measurement error.
1. Image Set 1: Blue Box Different Working Distance
By way of non-limiting example, in an experimental arrangement, a blue calibration box with known dimensions (280 mm by 140 mm by 90 mm) is imaged using a 3D image sensor at a plurality of differing working distances, in which the working distance is defined as the distance from the camera assembly (e.g. camera face or image sensor plane) to the surface of the exemplary box. The relationship between the voxelSize and the blob bounding box volume, histogram frequency, and measurement error can be graphed and/or analyzed for the experimental arrangement. It is observed that, for a small voxelSize, the vision system blob tool typically cannot detect any blobs, and the measurement error is 1, or 100%. As the voxelSize increases, there is a turning point where the blob can be detected, and the blob bounding box volume approaches its steady state. The measurement error, on the other hand, reaches its minimum. As the voxelSize further increases, the measurement error and blob bounding box volume fluctuate around their steady states. In general, the histogram frequency can be used as a measurement for such a turning point. The maximum of the histogram frequency normally indicates the turning point for the blob detection. It is also observed that there are some fluctuations around the turning point. Thus, if the turning-point voxelSize is selected as the voxelSize for blob detection, the sensitivity of the dimension measurement would be high, which is generally not preferred. As a trade-off, the voxelSize that generates the maximum of the histogram frequency, plus an offset (e.g. an offset of 1 in this example), is used as the optimal voxelSize in the exemplary arrangement of the system.
The relationship between the optimal voxelSize and working distance within the experimental data set can also be analyzed. Notably, if a robust correlation is determined, then it may be unnecessary to tune voxelSize for every box, and a fixed relationship can be employed to determine the optimal voxelSize for a median box size given a fixed working distance. This relationship is, in turn, determined by the system installation.
The table below shows the data, and
2. Image Set 2: Different Box on Conveyor Belt
Another dataset is generated experimentally, which contains 56 boxes on the same conveyor. The measurement result of five (5) sample boxes is graphed and analyzed. It is determined that similar conclusions can be drawn as for image set 1. The scatter plot 1374 for this dataset is shown in
As can be observed, a rough linear relationship (superimposed line 1376) exists, similar to image set 1 (
optimalVoxelSize=0.00396*WD+2.2157
optimalVoxelSize=0.00396*(WD′−400)+2.2157.
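Presumably, these coefficients derive from a least-squares line fit of the experimental (working distance, optimal voxelSize) pairs; a non-limiting sketch of deriving and applying such a relationship follows, with the fitting routine and sample usage being illustrative assumptions.

```python
# Illustrative sketch: derive and apply a linear working-distance model
# for the optimal voxel size (coefficient values shown are from the text).
import numpy as np

def fit_voxel_size_model(working_distances_mm, optimal_voxel_sizes):
    """Least-squares line: voxelSize = slope * WD + intercept."""
    slope, intercept = np.polyfit(working_distances_mm, optimal_voxel_sizes, 1)
    return slope, intercept

def optimal_voxel_size(wd_mm, slope=0.00396, intercept=2.2157):
    """Apply the fitted relationship, e.g. with the image set 1 coefficients."""
    return slope * wd_mm + intercept
```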
3. Effect of Box Orientation on Optimal VoxelSize
The orientation of the boxes, as well as its impact on the optimal voxelSize, is also analyzed herein, using Set30 from image set 2, from which ten (10) images are obtained. The measurement results reveal that, despite some noise in the histogram frequency, the actual turning point for all 10 orientations is relatively stable (around 3.7 to 3.8). Adding the offset of 1 to make the voxelSize more robust, the optimal voxelSize is around 4.7 to 4.8, regardless of box orientation, with minimal noise occurring, which can be addressed.
4. Effect of VoxelSize on Blob Tool Time
Another characteristic that is analyzed is the impact of the voxelSize on the blob tool execution time. In an exemplary arrangement, three (3) boxes with different sizes are selected from Image Set 2: a large box (Set41: 417×411×425), a medium box (Set3: 367×206×190), and a small box (Set2: 162×153×168).
5. Optimal Blob Voxel Size Procedure
It is recognized that, for some use cases, using the histogram frequency alone can yield an incorrect result in determining the optimal blob voxel size. Thus, a more robust procedure/technique is contemplated according to an exemplary implementation. This is described further in the diagram 1386 of
C. Overall AutoTune Procedure
If the results, or the user, determine that use of default values is preferable (decision step 1391), then these values for exposure and voxel size are set for subsequent runtime use (step 1392). Otherwise, the decision step 1391 branches to procedure step 1393, and the exposure and voxel size are set for runtime use based upon the computations of steps 1389 and 1390.
Other parameters (such as blob tool parameters including blob voxel size, blob noise threshold, and blob refinement parameters) can be optimized, using conventional or custom techniques clear to those of skill, given the point cloud acquired using the optimal 3D exposure.
The above-described process is repeated for additional cuboidal objects/boxes having differing colors, surface patterns, shapes and/or sizes, as appropriate to the actual runtime environment, and the settings computed by steps 1389 and 1390 are continually updated with the computed values until one or more settings for each parameter are achieved.
After completing AutoTune, the user can enter the communications procedure. In
The second part of the communication screen 1500 (
As shown in the Run screen 1600 (
An additional interface screen 1700 is shown in
The above-described system, method and interface provide an effective, robust and user-friendly arrangement for dimensioning cuboidal objects, such as packages, moving along a conveyor. The arrangement accommodates bulginess and other minor imperfections in such objects and generates accurate dimension results in a relatively rapid manner, using existing 3D camera equipment.
The foregoing has been a detailed description of illustrative embodiments of the invention. Various modifications and additions can be made without departing from the spirit and scope of this invention. Features of each of the various embodiments described above may be combined with features of other described embodiments as appropriate in order to provide a multiplicity of feature combinations in associated new embodiments. Furthermore, while the foregoing describes a number of separate embodiments of the apparatus and method of the present invention, what has been described herein is merely illustrative of the application of the principles of the present invention. For example, as used herein, the terms “process” and/or “processor” should be taken broadly to include a variety of electronic hardware and/or software based functions and components (and can alternatively be termed functional “modules” or “elements”). Moreover, a depicted process or processor can be combined with other processes and/or processors or divided into various sub-processes or processors. Such sub-processes and/or sub-processors can be variously combined according to embodiments herein. Likewise, it is expressly contemplated that any function, process and/or processor herein can be implemented using electronic hardware, software consisting of a non-transitory computer-readable medium of program instructions, or a combination of hardware and software. Additionally, as used herein various directional and dispositional terms such as “vertical”, “horizontal”, “up”, “down”, “bottom”, “top”, “side”, “front”, “rear”, “left”, “right”, and the like, are used only as relative conventions and not as absolute directions/dispositions with respect to a fixed coordinate space, such as the acting direction of gravity. Additionally, where the term “substantially” or “approximately” is employed with respect to a given measurement, value or characteristic, it refers to a quantity that is within a normal operating range to achieve desired results, but that includes some variability due to inherent inaccuracy and error within the allowed tolerances of the system (e.g. 1-5 percent). Accordingly, this description is meant to be taken only by way of example, and not to otherwise limit the scope of this invention.