The present disclosure relates to an elevator 3D data processing device.
PTL 1 discloses an elevator data processing device. With the processing device, it is possible to calculate dimensions of structures in data of a shaft.
[PTL 1] Japanese Patent No. 6322544
However, the processing device described in PTL 1 needs to perform 3D model generation, which places a heavy load on the processing device.
The present disclosure has been made to solve the problem described above. An object of the present disclosure is to provide an elevator 3D data processing device that can reduce a load of calculation.
An elevator 3D data processing device according to the present disclosure includes: a projection image generating unit that, when 3D data of a shaft of an elevator is aligned with a preset coordinate system, generates a 2D projection image from the 3D data; a projection image feature extracting unit that extracts a feature of the projection image generated by the projection image generating unit; and a reference position specifying unit that specifies a reference position for processing of the 3D data from the feature extracted by the projection image feature extracting unit.
According to the present disclosure, the processing device generates the 2D projection image from the 3D data of the shaft. The processing device extracts the feature from the 2D projection image. The processing device specifies the reference position for the processing of the 3D data of the shaft from the feature of the 2D projection image. Therefore, it is possible to reduce the load of calculation.
Embodiments are explained with reference to the accompanying drawings. Note that, in the figures, the same or equivalent portions are denoted by the same reference numerals and signs. Redundant explanation of the portions is simplified or omitted as appropriate.
As shown in
The processing device 1 includes a reference straight line extracting unit 2, a first aligning unit 3, a reference plane extracting unit 4, a second aligning unit 5, a projection image generating unit 6, a projection image feature extracting unit 7, a reference position specifying unit 8, and a dimension calculating unit 9.
Subsequently, the 3D data input unit A is explained with reference to
In
Subsequently, the reference straight line extracting unit 2 and the first aligning unit 3 are explained with reference to
The reference straight line extracting unit 2 extracts a reference straight line for aligning the 3D point group of the shaft according to any one axis among an X axis, a Y axis, and a Z axis.
For example, as shown in
When the size of the projection area is represented as Lx [m] in the X-axis direction (the shaft lateral width direction), Ly [m] in the Y-axis direction (the shaft height direction), and Lz [m] in the Z-axis direction (the shaft depth direction), the reference straight line extracting unit 2 determines the size of the projection image as width αLx [pix] and height αLy [pix]. Here, α is a scaling coefficient at the projection time.
For example, when the scaling coefficient α and the sizes Lx, Ly, and Lz of the projection area are determined beforehand, when the setting position of the 3D data input unit A at a measurement time substantially coincides with the center of the shaft, and when the center of the 3D data input unit A substantially coincides with the height of the landing sill, the reference straight line extracting unit 2 sets each of the center X and Y coordinates Cx and Cy of the projection area to 0 and determines only the remaining Cz based on information concerning a bounding box of a point group.
Specifically, the reference straight line extracting unit 2 calculates a bounding box targeting a point group obtained by applying down-sampling, noise removal processing, and the like to a measured point group and calculates a minimum value of a Z coordinate value. In the structure of the shaft, the vicinity of the minimum value of the Z coordinate value is a wall surface or the hatch door. Therefore, the reference straight line extracting unit 2 sets, as Cz, a Z coordinate value increased from the minimum value of the Z coordinate value by a margin decided beforehand.
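As a minimal illustrative sketch of this projection step (not part of the disclosure), the following Python/NumPy code projects the points inside a projection area with center (Cx, Cy, Cz) and sizes Lx, Ly, Lz onto an image of width αLx and height αLy; the function name, the row/column orientation, and the rule that later points overwrite earlier ones are assumptions.

```python
import numpy as np

def project_to_image(points, values, center, Lx, Ly, Lz, alpha):
    """Project 3D points inside the projection area onto a 2D image.

    points: (N, 3) XYZ coordinates [m]; values: (N,) per-point pixel values
    center: (Cx, Cy, Cz) of the projection area [m]; alpha: [pix/m]
    """
    c = np.asarray(center, dtype=float)
    half = np.array([Lx, Ly, Lz]) / 2.0
    inside = np.all(np.abs(points - c) <= half, axis=1)  # clip to the area
    p, v = points[inside], values[inside]

    w, h = int(alpha * Lx), int(alpha * Ly)
    img = np.zeros((h, w), dtype=np.float32)
    # Map X to columns and Y to rows (image row 0 at the top of the shaft).
    col = ((p[:, 0] - (c[0] - Lx / 2)) * alpha).astype(int).clip(0, w - 1)
    row = ((c[1] + Ly / 2 - p[:, 1]) * alpha).astype(int).clip(0, h - 1)
    img[row, col] = v  # no depth ordering: later points overwrite
    return img
```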
For example, as a method not depending on the setting position of the 3D data input unit A at the measurement time, when α, Lx, Ly, and Lz are determined beforehand, the reference straight line extracting unit 2 determines the positions of Cx, Cy, and Cz by urging a user to optionally designate point group data presented through a point group display device (not shown).
When an original point group has color information and the projection image is a color image, the reference straight line extracting unit 2 directly adopts, as a pixel value, the RGB value of the point serving as a projection source. For example, when the projection image is a gray scale image, the reference straight line extracting unit 2 adopts, as a pixel value, a value obtained by converting the RGB value into a gray scale. For example, the reference straight line extracting unit 2 adopts a Z coordinate value as a pixel value. For example, when the original point group has a gray scale value such as a laser reflection intensity value, the reference straight line extracting unit 2 adopts the gray scale value as a pixel value. For example, the reference straight line extracting unit 2 separately creates a projection image for each of the plurality of pixel value types described above.
As shown in
For example, as shown in
For example, the reference straight line extracting unit 2 applies an edge detector in the horizontal direction to the projection image to detect an edge. The reference straight line extracting unit 2 extracts the boundary line between the landing sill and the hatch door by applying a straight line model to an edge image with RANSAC (Random Sample Consensus) or the like.
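A sketch of this extraction under the assumption that OpenCV and NumPy are available; the edge threshold, iteration count, and helper name are illustrative, not prescribed by the disclosure.

```python
import cv2
import numpy as np

def extract_sill_upper_line(img, iters=500, tol=2.0):
    """Fit a near-horizontal line to horizontal edges with RANSAC."""
    # A Sobel derivative in the vertical direction responds to horizontal edges.
    edges = np.abs(cv2.Sobel(img, cv2.CV_32F, dx=0, dy=1, ksize=3))
    ys, xs = np.nonzero(edges > edges.max() * 0.5)
    pts = np.column_stack([xs, ys]).astype(float)

    best_inliers, best_model = 0, None
    rng = np.random.default_rng(0)
    for _ in range(iters):
        i, j = rng.choice(len(pts), 2, replace=False)
        (x1, y1), (x2, y2) = pts[i], pts[j]
        if abs(x2 - x1) < 1e-6:
            continue
        a = (y2 - y1) / (x2 - x1)   # slope
        b = y1 - a * x1             # intercept
        resid = np.abs(pts[:, 1] - (a * pts[:, 0] + b))
        inliers = int((resid < tol).sum())
        if inliers > best_inliers:
            best_inliers, best_model = inliers, (a, b)
    return best_model  # (slope, intercept) in image coordinates
```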
For example, as shown in
When the car rail is set as the reference straight line, assuming that the setting position of the 3D data input unit A at the measurement time is substantially the shaft center, the scaling coefficient α and the length Lz of the projection area in the Z-axis direction are determined beforehand. Lx is determined based on an inter-car rail distal end distance (BG) from a drawing at a new installation time. For example, Lx is a value obtained by adding a margin width to BG. Ly is determined based on the length in the Y direction of a bounding box of a point group from which noise is removed. For example, Ly is a value obtained by subtracting the margin width from the length of the bounding box in the Y direction. Cx and Cz in the center of the area are set to 0. Cy is set to the center Y coordinate of the bounding box of the point group from which noise is removed.
The reference straight line extracting unit 2 generates a projection image by projecting a point group in the projection range onto a projection surface parallel to the XY plane. For example, the reference straight line extracting unit 2 generates a projection image with a projection direction set in a plus direction of the Z axis. For example, the reference straight line extracting unit 2 generates a projection image with the projection direction set in a minus direction of the Z axis.
Note that the reference straight line extracting unit 2 may determine Cx, Cy, and Cz by urging the user to optionally designate point group data presented through the point group display device.
As shown in
For example, the reference straight line extracting unit 2 applies an edge detector in the vertical direction to the projection image to detect edges. For example, the reference straight line extracting unit 2 extracts an edge equivalent to each of the distal ends of the left and right car rails by filtering that leaves only the edge nearest to the projection image X coordinate center on each of the plus side and the minus side. At this time, the reference straight line extracting unit 2 applies a linear model to the edge image with RANSAC or the like to extract a straight line.
At this time, the reference straight line extracting unit 2 only has to extract at least one of straight lines equivalent to the left and right car rails. The reference straight line extracting unit 2 may extract both of the straight lines as a set of parallel straight lines in a pairwise manner.
Note that the reference straight line extracting unit 2 may perform preprocessing such as noise removal processing by a median filter, expansion processing, contraction processing, or the like or edge coupling processing before linear model fitting. The reference straight line extracting unit 2 may extract a straight line with Hough transform. When a plurality of types are present in the projection image, the reference straight line extracting unit 2 may extract a straight line according to a plurality of kinds of edge information. For example, when a projection image having an RGB value of a point group as a pixel value and a projection image having a Z coordinate value as a pixel value are present, the reference straight line extracting unit 2 may detect edges from both the projection images and extract straight lines according to both pieces of edge information.
The first aligning unit 3 converts a coordinate such that the reference straight line becomes parallel to any axis of an XYZ coordinate system with respect to the point group data.
For example, when a landing sill upper end line is extracted as the reference straight line, the first aligning unit 3 calculates an angle α formed by the landing sill upper end line detected from a projection image and the image horizontal line and rotates the original point group data in the opposite direction by α around the Z axis.
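For instance, the rotation by −α about the Z axis can be sketched as follows (NumPy; the helper name and the use of the fitted slope are assumptions):

```python
import numpy as np

def rotate_about_z(points, angle_rad):
    """Rotate an (N, 3) point group about the Z axis by angle_rad."""
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    R = np.array([[c, -s, 0.0],
                  [s,  c, 0.0],
                  [0.0, 0.0, 1.0]])
    return points @ R.T

# slope of the detected sill upper end line -> angle alpha to the image
# horizontal; rotate the original point group in the opposite direction.
# alpha = np.arctan(slope)
# aligned = rotate_about_z(points, -alpha)
```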
For example, as shown in
For example, when left and right car rail distal end straight lines are separately extracted on the left and the right, the first aligning unit 3 calculates an average of angles α1 and α2 formed by the respective car rails with the image vertical line and rotates the original point group data in the opposite direction by an average angle of α1 and α2 around the Z axis.
Subsequently, the reference plane extracting unit 4 is explained with reference to
The reference plane extracting unit 4 extracts a plane for aligning the 3D data of the shaft with respect to the remaining two axes that are not set as the reference of alignment by the reference straight line. For example, the reference plane extracting unit 4 extracts a plane connecting the distal ends of the left and right car rails as a reference plane.
As shown in
As shown in
A standard of the car rails and a distance between the car rails are obtained from prior information. Therefore, the reference plane extracting unit 4 creates pairwise model images based on ideal views of the respective left and right car rails appearing in the projection images.
The reference plane extracting unit 4 performs, on the respective projection images, template matching using the model images of the left and right car rails as templates. At this time, the reference plane extracting unit 4 not only shifts the position of the template but also rotates the template within a preset range to perform the template matching. For example, the reference plane extracting unit 4 scans the unrotated reference template to perform a coarse search and then performs a dense search using the rotated template near the matching position of the coarse search.
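The coarse-to-dense search could look like the following sketch, assuming OpenCV's matchTemplate; the window size around the coarse hit and the angle set are illustrative, and the coarse position is assumed not to lie at the image border.

```python
import cv2
import numpy as np

def match_with_rotation(img, template, angles_deg):
    """Coarse search with the unrotated template, then a dense search
    with rotated templates near the coarse matching position."""
    res = cv2.matchTemplate(img, template, cv2.TM_CCOEFF_NORMED)
    _, coarse_score, _, coarse_pos = cv2.minMaxLoc(res)

    h, w = template.shape[:2]
    best = (coarse_score, coarse_pos, 0.0)
    x, y = coarse_pos
    for deg in angles_deg:
        M = cv2.getRotationMatrix2D((w / 2, h / 2), deg, 1.0)
        rot = cv2.warpAffine(template, M, (w, h))
        # Restrict the dense search to a window around the coarse hit.
        x0, y0 = max(0, x - w), max(0, y - h)
        roi = img[y0:y + 2 * h, x0:x + 2 * w]
        r = cv2.matchTemplate(roi, rot, cv2.TM_CCOEFF_NORMED)
        _, score, _, pos = cv2.minMaxLoc(r)
        if score > best[0]:
            best = (score, (pos[0] + x0, pos[1] + y0), deg)
    return best  # (score, top-left position, rotation angle)
```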
Note that the reference plane extracting unit 4 may create model images separately for the left and the right.
As shown in
The reference plane extracting unit 4 calculates the coordinates of the distal ends of the left and right car rails in the 3D coordinate system. At this time, the reference plane extracting unit 4 determines the Y coordinate values of the distal ends of the left and right car rails based on the height of the original projection area. For example, the reference plane extracting unit 4 adopts the intermediate Y coordinate value of the original projection area as the Y coordinate values of the distal ends of the left and right car rails.
For example, as shown in
Subsequently, the second aligning unit 5 is explained with reference to
The second aligning unit 5 aligns the point group data by rotating it, about a rotation axis not used by the first aligning unit 3, such that the normal line of the reference plane becomes parallel to one of the X axis, the Y axis, and the Z axis.
For example, when the first aligning unit 3 has performed rotation based on the upper end line of the landing sill and the plane between the distal ends of the left and right car rails is set as the reference plane, as shown in
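A sketch of this second rotation, assuming the inter-rail plane normal is to be made parallel to the Z axis by a rotation about the Y axis (NumPy; the helper name is hypothetical):

```python
import numpy as np

def align_normal_to_z_about_y(points, normal):
    """Rotate the point group about the Y axis so that the reference
    plane normal (seen in the ZX plane) becomes parallel to the Z axis."""
    nx, _, nz = normal / np.linalg.norm(normal)
    angle = np.arctan2(nx, nz)        # angle from the Z axis in the ZX plane
    c, s = np.cos(-angle), np.sin(-angle)
    R = np.array([[ c,  0.0, s],
                  [0.0, 1.0, 0.0],
                  [-s,  0.0, c]])
    return points @ R.T
```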
Note that the extraction of the reference straight line by the reference straight line extracting unit 2 to the alignment of the point group data by the second aligning unit 5 may be repeated.
The reference plane extracting unit 4 may extract the reference plane first. In this case, the first aligning unit 3 only has to align the 3D data of the shaft according to the reference plane extracted by the reference plane extracting unit 4. Thereafter, the reference straight line extracting unit 2 only has to extract the reference straight line of the shaft from the 3D data aligned by the first aligning unit 3. Thereafter, the second aligning unit 5 only has to align the 3D data aligned by the first aligning unit 3 according to the reference straight line extracted by the reference straight line extracting unit 2.
At this time, the reference plane extracting unit 4 may extract a reference plane having a normal line parallel to any one of the X axis, the Y axis, and the Z axis of the XYZ coordinate system. The reference straight line extracting unit 2 may extract a reference straight line orthogonal to the normal line of the plane extracted by the reference plane extracting unit 4.
For example, the reference plane extracting unit 4 may extract, as a reference plane, a plane parallel to a plane connecting linear distal ends of a pair of car rails. For example, the reference straight line extracting unit 2 may extract, as a reference straight line, a straight line parallel to the boundary line between the landing sill and the hatch door.
Subsequently, the projection image generating unit 6 is explained with reference to
As shown in
For example, the size of a projection image is determined from a bounding box surrounding the entire point group data. For example, the size of the projection image is determined by a value decided beforehand.
For example, the projection image generating unit 6 calculates a bounding box for the point group data to which down-sampling and noise removal processing are applied. When a size in the X-axis direction is Lx [m], a size in the Y-axis direction is Ly [m], and a size in the Z-axis direction is Lz [m] in the bounding box, the projection image generating unit 6 sets width of a projection image having a hall direction as a projection direction to αLx [pix] and sets height of the projection image to αLy [pix]. Here, α is a scaling coefficient at a projection time.
For example, a range of point group data for performing projection is determined based on a coordinate value in an axial direction. In projection to the hall side wall surface, only points whose Z-axis coordinate values are negative are set as projection targets. If the range is further narrowed, for example, only points whose Z coordinate values are in a range of Za−β2 [mm] to Za+β1 [mm], among the points whose Z-axis coordinate values are negative, are set as projection targets.
For example, Za is decided beforehand. For example, Za is decided by an intermediate value or an average value of the Z coordinate values. For example, β1 and β2 are decided beforehand. For example, β1 and β2 are dynamically decided from a statistical amount, such as the dispersion of the Z coordinate values, of the points whose Z coordinate values are negative. For example, the projection targets are also narrowed down in the X and Y coordinate values based on a preset threshold.
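A possible narrowing step, shown as a sketch (NumPy; the median as the "intermediate value" is one of the options named above, and consistent units are assumed):

```python
import numpy as np

def narrow_projection_targets(points, beta1, beta2, za=None):
    """Keep hall-side points (negative Z) within the slab around Za."""
    neg = points[points[:, 2] < 0.0]
    if za is None:
        za = np.median(neg[:, 2])     # intermediate value of Z coordinates
    keep = (neg[:, 2] >= za - beta2) & (neg[:, 2] <= za + beta1)
    return neg[keep]
```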
A projection image at this time is the same projection image as that of
As shown in
Subsequently, the projection image feature extracting unit 7 is explained with reference to
The projection image feature extracting unit 7 extracts, from a projection image, features of targets serving as start points of the dimensions to be calculated.
For example, the projection image feature extracting unit 7 extracts an upper end line of the landing sill from a projection image to the wall surface on the hall side. The projection image feature extracting unit 7 extracts the upper end line of the landing sill with the same method as the method of the reference straight line extracting unit 2.
When the upper end line of the landing sill is already extracted, the projection image feature extracting unit 7 may omit the extraction processing or may perform the extraction processing again.
When the original point group data is formed from point group data of a single floor, the projection image feature extracting unit 7 sets a processing target range in order to specify an approximate position of the landing sill and to prevent erroneous detection.
When the configurations, the sizes, and the like of parts constituting the landing sill are known from an item table or the like, as shown in
As shown in
Note that the projection image feature extracting unit 7 may perform preprocessing such as noise removal processing by a median filter, expansion processing, contraction processing, or the like or edge coupling processing before linear model fitting. The projection image feature extracting unit 7 may extract a straight line with Hough transform. When a plurality of types are present in the projection image, the projection image feature extracting unit 7 may extract a straight line according to a plurality of kinds of edge information. For example, when a projection image having an RGB value of a point group as a pixel value and a projection image having a Z coordinate value as a pixel value are present, the projection image feature extracting unit 7 may detect edges from both the projection images and extract straight lines according to both pieces of edge information.
When a projection image is formed from projection images of a plurality of floors, as shown in
The projection image feature extracting unit 7 creates the same initial template as the initial template in the case of the single floor. The projection image feature extracting unit 7 sets a processing target area around a landing sill of a certain floor by matching the initial template with the projection image. At this time, the scanning range of the initial template may be the entire projection image, or the projection image feature extracting unit 7 may narrow down ranges obtained by equally dividing the projection image in the longitudinal direction and set any one of the ranges as the scanning range.
As in the case of the single floor, the projection image feature extracting unit 7 extracts an upper end line of the landing sill of a certain floor by performing the linear model fitting or the like in the horizontal direction on the processing target area around the landing sill of the floor.
As shown in
For example, when the number of floors is K, the projection image feature extracting unit 7 scans the template on areas other than the area extracted as the template and determines the K−1 areas having the highest matching scores as processing target areas of the remaining floors. At this time, the projection image feature extracting unit 7 considers the spatial proximity of matching positions rather than simply selecting the top K−1 areas. For example, when there are L matching positions among the top K−1 areas that have excessively short spatial distances to a certain matching position, the projection image feature extracting unit 7 adopts only the position having the highest matching score as a result and excludes the remaining L−1 areas from selection.
The projection image feature extracting unit 7 then adopts, anew, the matching positions ranked from K-th to (K−1+(L−1))-th in score. The projection image feature extracting unit 7 finally determines K−1 matching positions by repeating this processing until no matching position has an excessively short spatial distance to any other matching position.
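This selection with spatial-proximity suppression amounts to a greedy non-maximum suppression, sketched below (NumPy; the function name and the distance metric are assumptions):

```python
import numpy as np

def select_floor_positions(positions, scores, k, min_dist):
    """Pick K-1 matching positions, skipping candidates spatially too
    close to an already accepted, higher-scoring position."""
    order = np.argsort(scores)[::-1]          # best score first
    accepted = []
    for i in order:
        if all(np.linalg.norm(positions[i] - positions[j]) >= min_dist
               for j in accepted):
            accepted.append(i)
        if len(accepted) == k - 1:
            break
    return [positions[i] for i in accepted]
```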
When an inter-floor distance of the target shaft is roughly known from a drawing or the like at a new installation time, the projection image feature extracting unit 7 utilizes this prior information to reduce the range of the template matching, thereby reducing calculation time, suppressing mismatches of the template, and improving the robustness of the result. Specifically, as shown in
After setting the processing target areas for the other floors, the projection image feature extracting unit 7 performs, on each processing target area, the same linear model fitting based on edge detection in the lateral direction as in the case of the single floor, and extracts the upper end lines of the landing sills for the respective floors.
Note that the projection image feature extracting unit 7 may apply a general linear model (ax+by+c=0) and extract an upper end line of a landing sill as a straight line having a gradient.
The projection image feature extracting unit 7 extracts the distal end positions of the left and right car rails from the projection image in the bottom surface direction, like the reference plane extracting unit 4. When the reference plane extracting unit 4 has already extracted the distal end position of the left guide rail, the projection image feature extracting unit 7 may omit the extraction processing or may perform the extraction processing again.
The projection image feature extracting unit 7 extracts left and right end lines of vertical columns from the projection image to the left and right wall surfaces or the rear wall surface.
In aligned point group data, the vertical columns are parallel to the Y axis. On the projection image, the left and right end lines of the vertical columns appear as vertical lines.
The projection image feature extracting unit 7 detects a relatively long straight line among straight lines in the longitudinal direction extracted from the projection image.
On the other hand, on the shaft left and right wall surfaces, cables and the like are present along the car rails and the walls, so a large number of vertical edges caused by these other structures appear as noise on the projection image. The projection image feature extracting unit 7 suppresses the influence of this noise and extracts the left and right end lines of the vertical columns.
Note that, even if the vertical columns are H steel, the vertical columns can be treated the same by this method.
As shown in
The margin only has to be set beforehand and may be set based on a distance measurement error characteristic of the 3D data input unit A. Once the distal end positions of the car rails are extracted, the rear surface X coordinates of the car rails are estimated from standard information of the car rails. The X coordinates of the planes equivalent to the wall surfaces are calculated from an intermediate value, an average value, or the like of all X coordinate values of the point groups present farther out than the car rail rear surface X coordinates. Plane fitting may be performed on these partial point groups.
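As a sketch of this wall-plane estimation (assuming the left wall lies in the negative X direction and consistent units; the helper name and sign handling are assumptions):

```python
import numpy as np

def estimate_left_wall_x(points, rail_tip_x, rail_height):
    """Estimate the X coordinate of the plane equivalent to the left wall.

    rail_tip_x:  extracted distal end X coordinate of the left car rail
    rail_height: rail height taken from standard rail information
    """
    rear_x = rail_tip_x - rail_height          # rail rear surface X
    beyond = points[points[:, 0] < rear_x]     # points farther than the rail
    return float(np.median(beyond[:, 0]))      # intermediate value; mean also works
```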
As shown in
The projection image feature extracting unit 7 decides a range for calculating a Z coordinate of a plane equivalent to the rear wall surface from circumscribed rectangle information or the like of a point group and calculates the Z coordinate of the plane equivalent to the rear wall surface from an intermediate value, an average value, and the like of Z coordinates concerning point groups in the range.
Note that the constant α may be determined based on the width in the case in which the vertical columns are set farthest from the wall surface among the vertical column setting positions estimated from a general construction standard or the like, or a value estimated from the drawing at the new installation time may be set as the constant α.
As shown in
Note that the projection image feature extracting unit 7 may extract a straight line as a model of a set of vertical parallel lines.
Note that the projection image feature extracting unit 7 may perform preprocessing such as noise removal processing by a median filter, expansion processing, contraction processing, or the like or edge coupling processing before linear model fitting. The projection image feature extracting unit 7 may extract a straight line with Hough transform. When a plurality of types are present in the projection image, the projection image feature extracting unit 7 may extract a straight line according to a plurality of kinds of edge information. For example, when a projection image having an RGB value of a point group as a pixel value and a projection image having a Z coordinate value as a pixel value are present, the projection image feature extracting unit 7 may detect edges from both the projection images and extract straight lines according to both pieces of edge information.
As shown in
As shown in
Note that, concerning a structure in which the left and right ends of the vertical columns, the upper and lower ends of the beam, and the like are detected as straight lines in the vertical direction or the horizontal direction on a projection image, all vertical lines and horizontal lines satisfying fixed conditions such as length and a gradient may be extracted from the projection image, as candidates. In this case, a group of these straight lines only has to be converted into a plane group in a 3D coordinate system and then presented to the user through a point group data display device. At this time, the user only has to select necessary straight lines through the point group data display device.
The reference position specifying unit 8 specifies positions serving as start points for calculating dimensions, such as the upper end of the landing sill, the plane between the car rails, and the left and right ends of the vertical columns. For example, the reference position specifying unit 8 utilizes an extraction result of the projection image feature extracting unit 7 substantially as it is.
For example, the reference position specifying unit 8 converts the landing sill upper end lines of the floors on the projection image in the hall side wall surface direction into the XYZ coordinate system. The reference position specifying unit 8 calculates, as upper end planes of the landing sills, planes that pass through the straight lines and whose normal lines are orthogonal to the Z axis in the XYZ coordinate system. The reference position specifying unit 8 sets the upper end planes of the landing sills of the floors as start points of dimension calculation.
For example, the reference position specifying unit 8 extracts the distal end positions of the left and right car rails from each of a plurality of projection images. The reference position specifying unit 8 converts the distal end positions of the left and right car rails into the XYZ coordinate system. The reference position specifying unit 8 sets, as an inter-car rail plane, the result obtained by applying 3D plane fitting to the plurality of left and right car rail distal end positions in the 3D space. Note that, when an inter-car rail plane has already been extracted, the extracted inter-car rail plane may be used directly.
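The 3D plane fitting can be sketched with a least-squares fit via SVD (NumPy; the helper name is hypothetical):

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through (N, 3) points.
    Returns (a, b, c, d) with unit normal (a, b, c) and ax+by+cz+d=0."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]                    # direction of least variance
    d = -float(normal @ centroid)
    return (*normal, d)
```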
Subsequently, the reference position specifying unit 8 is explained without referring to the drawings.
The reference position specifying unit 8 calculates average positions for the left and right distal end positions in the XYZ coordinate system and sets the respective average positions as a left car rail distal end position and a right car rail distal end position. The reference position specifying unit 8 sets a plane passing through the left car rail distal end position and orthogonal to the inter-car rail plane as a left car rail distal end plane, and a plane passing through the right car rail distal end position and orthogonal to the inter-car rail plane as a right car rail distal end plane. For example, the reference position specifying unit 8 sets these planes and positions as start points of dimension calculation.
For example, the reference position specifying unit 8 calculates, as vertical column left and right end planes located in front of the left and right wall surfaces, planes that pass through the straight lines and whose normal lines are orthogonal to the X axis in the XYZ coordinate system. For example, the reference position specifying unit 8 calculates, as vertical column rear end planes located in front of the rear wall surface, planes that pass through the straight lines and whose normal lines are orthogonal to the Z axis in the XYZ coordinate system. For example, the reference position specifying unit 8 sets the left and right end planes of the vertical columns as start points of dimension calculation.
For example, the reference position specifying unit 8 calculates, as upper and lower end planes of beams located on the left and right wall surfaces or the rear wall surface, planes that pass through the straight lines and whose normal lines are orthogonal to the X axis in the XYZ coordinate system. For example, the reference position specifying unit 8 sets the upper and lower end planes of the beams as start points of dimension calculation.
Subsequently, the dimension calculating unit 9 is explained with reference to
The dimension calculating unit 9 calculates dimensions based on the planes or the positions calculated as the start points of dimension calculation.
For example, as shown in
For example, the dimension calculating unit 9 calculates a distance between an inter-rail plane and a point group belonging to the rear wall surface. For example, the dimension calculating unit 9 calculates a distance between the inter-rail plane and a point group belonging to the landing sill. For example, the dimension calculating unit 9 calculates depth BH of the shaft by adding up the distance between the inter-rail plane and the point group belonging to the rear wall surface and the distance between the inter-rail plane and the point group belonging to the landing sill.
For example, the dimension calculating unit 9 calculates distances between the left and right car rail distal end planes and the point groups belonging to the respective left and right wall surfaces. For example, the dimension calculating unit 9 calculates a distance between the left and right car rail distal end positions. For example, the dimension calculating unit 9 adds up these distances. For example, the dimension calculating unit 9 calculates the lateral width AH of the shaft by adding the left and right rail heights, obtained as standard information of the rails, to the summed distances.
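The distance terms used above reduce to point-to-plane distances; a sketch assuming the plane is given as (a, b, c, d) with ax+by+cz+d=0, and with hypothetical variable names:

```python
import numpy as np

def mean_plane_distance(plane, points):
    """Mean distance from the plane (a, b, c, d) to an (N, 3) point group."""
    n = np.asarray(plane[:3], dtype=float)
    dist = np.abs(points @ n + plane[3]) / np.linalg.norm(n)
    return float(dist.mean())

# Shaft depth BH, per the description above:
# bh = mean_plane_distance(inter_rail_plane, rear_wall_points) \
#    + mean_plane_distance(inter_rail_plane, sill_points)
```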
Note that, in the bottom floor, a wall surface below a floor level is sometimes waterproofed by mortar or the like. In this case, it is desirable to divide dimension calculation on upper and lower sides of the floor level. For example, 3D data of the shaft only has to be divided on the upper and lower sides across the upper end plane of the landing sill to separately perform measurement for the upper and lower sides.
For example, as shown in
According to the first embodiment explained above, the processing device 1 extracts a reference straight line and a reference plane of the shaft from 3D data of the shaft and aligns the 3D data of the shaft. Therefore, it is possible to align the 3D data of the shaft without requiring a special marker.
The processing device 1 extracts a reference straight line parallel to any one of the X axis, the Y axis, and the Z axis of the XYZ coordinate system. The processing device 1 extracts a reference plane having a normal line orthogonal to the extracted reference straight line. Therefore, it is possible to more accurately align the 3D data of the shaft.
The processing device 1 extracts, as the reference straight line, a straight line parallel to the boundary line between the landing sill and the hatch door. The processing device 1 extracts, as the reference plane, a plane parallel to the plane connecting the linear distal ends of the pair of car rails. Therefore, it is possible to more accurately align the 3D data of the shaft.
The processing device 1 generates a 2D projection image from the 3D data of the shaft. The processing device 1 extracts features from the 2D projection image. The processing device 1 specifies a reference position for processing of the 3D data of the shaft from the features of the 2D projection image. Therefore, it is possible to reduce a calculation load on the processing device 1. As a result, it is possible to reduce processing cost by the processing device 1.
The processing device 1 determines a pixel value of the projection image based on one of information concerning a color, information concerning reflection intensity, and information concerning a coordinate value of a projection point. Therefore, it is possible to more accurately extract the features of the 2D projection image.
The processing device 1 specifies a reference position in calculating dimensions of the shaft as the reference position for the processing of the 3D data. Therefore, it is possible to more accurately calculate dimensions of the structures in the shaft.
The processing device 1 generates a 2D projection image from the 3D data with the side surface or the floor surface of the shaft set as a projection direction. For example, the processing device 1 specifies the reference position based on a plane passing the upper end portion of the landing sill. For example, the processing device 1 specifies the reference position based on an inter-rail plane connecting the distal end portions of the pair of car rails. For example, the processing device 1 specifies the reference position based on a plane orthogonal to the inter-rail plane. For example, the processing device 1 specifies the reference position based on a plane passing the left and right end portions of the vertical column. For example, the processing device 1 specifies the reference position based on a plane passing the upper and lower end portions of the beam. Therefore, it is possible to more accurately calculate dimensions of various structures in the shaft.
Subsequently, an example of the processing device 1 is explained with reference to
Functions of the processing device 1 can be realized by a processing circuit. For example, the processing circuit includes at least one processor 100a and at least one memory 100b. For example, the processing circuit includes at least one dedicated hardware 200.
When the processing circuit includes the at least one processor 100a and the at least one memory 100b, the functions of the processing device 1 are realized by software, firmware, or a combination of software and firmware. At least one of the software and the firmware is described as a program. At least one of the software and the firmware is stored in the at least one memory 100b. The at least one processor 100a reads out and executes the program stored in the at least one memory 100b to thereby realize the functions of the processing device 1. The at least one processor 100a is also referred to as a central processing unit, an arithmetic processing device, a microprocessor, a microcomputer, or a DSP. For example, the at least one memory 100b is a nonvolatile or volatile semiconductor memory such as a RAM, a ROM, a flash memory, an EPROM, or an EEPROM, or a magnetic disk, a flexible disk, an optical disc, a compact disc, a minidisc, a DVD, or the like.
When the processing circuit includes the at least one dedicated hardware 200, the processing circuit is realized by, for example, a single circuit, a composite circuit, a programmed processor, a parallel-programmed processor, an ASIC, an FPGA, or a combination of the foregoing. For example, each of the functions of the processing device 1 is realized by the processing circuit. For example, the functions of the processing device 1 are collectively realized by the processing circuit.
A part of the functions of the processing device 1 may be realized by the dedicated hardware 200 and the other part may be realized by the software or the firmware. For example, the functions of the projection image feature extracting unit 7 may be realized by the processing circuit functioning as the dedicated hardware 200. The functions other than the functions of the projection image feature extracting unit 7 may be realized by the at least one processor 100a reading out and executing the program stored in the at least one memory 100b.
In this way, the processing circuit realizes the functions of the processing device 1 with the hardware 200, the software, the firmware, or a combination of the foregoing.
In the second embodiment, the processing device 1 extracts, from a generated projection image, features for dividing the shaft data for each of the floors. For example, the processing device 1 detects one representative position of a landing sill peripheral pattern and extracts patterns similar to the pattern at the representative position to specify the other hall positions.
Although not illustrated, the projection image feature extracting unit 7 uses an initial template modeled on a sill rail and an apron. Note that an initial template in a wider range including a door structure and the like may be used.
In template matching, a scanning range of a template may be an entire projection image or may be narrowed down.
For example, the projection image feature extracting unit 7 may convert the maximum size [mm] conceivable as an inter-floor distance of a shaft into a size [pix] on the image and perform the template matching using, as a search range, a rectangular area centered on the image center and having that size in the longitudinal direction. Alternatively, the projection image feature extracting unit 7 may cause a user to select a point around a landing sill through a point group data display device and determine a search range centered on the selected point.
For example, when prior information such as an item table is absent, structural unevenness is present at the boundary between the landing sill and the hatch door or the boundary between the landing sill and the apron. Therefore, when a pixel value of the projection image is based on color information or a Z coordinate value of the point group, a clear lateral edge appears on the projection image.
In this case, as shown in
The projection image feature extracting unit 7 sets the representative position as a reference and calculates image features of a peripheral area of the reference in the projection image. For example, the projection image feature extracting unit 7 extracts features in a rectangular area having a size set in advance centering on the representative position.
For example, as shown in
At this time, the reference position specifying unit 8 applies an edge detector in the horizontal direction to a processing target image to detect an edge in the lateral direction and applies a horizontal line model using RANSAC or the like to an edge image to extract a horizontal line.
On the projection image, after determining the position of the upper end line of the landing sill concerning each of the floors, the reference position specifying unit 8 converts the position into information in the XYZ coordinate system and determines a division reference position. For example, the upper end line of the landing sill is a line parallel to the ZX plane in the XYZ coordinate system, and its Y coordinate value is fixed. At this time, as shown in
As shown in
For example, when a Y coordinate of a certain division reference position is Ys [mm] and a preset margin value is γ [mm], the dimension calculating unit 9 only has to divide the point group data by a Y coordinate value of Ys+γ.
When the processing target point group is narrowed down as much as possible, the dimension calculating unit 9 only has to segment and divide, as point group data of the relevant floor, the point group data whose Y coordinate values are present in a range of Ys+γ1 [mm] to Ys+γ2 [mm] for a certain division reference position.
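A sketch of the per-floor segmentation (NumPy); the reading of the range as Ys+γ1 to Ys+γ2 follows the text above, and the helper name is hypothetical:

```python
import numpy as np

def split_by_floor(points, ys_list, gamma1, gamma2):
    """Segment the point group into per-floor slabs around each
    division reference Y coordinate Ys (all values in mm)."""
    floors = []
    for ys in ys_list:
        mask = (points[:, 1] >= ys + gamma1) & (points[:, 1] <= ys + gamma2)
        floors.append(points[mask])
    return floors
```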
Note that, when a marker such as a monochrome checker pattern is arranged, a correlation value with the same template image may be set as a feature that should be calculated and a result position calculated by the template matching may be set as the division reference position.
According to the second embodiment explained above, the processing device 1 specifies a reference position in dividing the shaft as a reference position for 3D data processing. Therefore, it is possible to accurately divide the shaft.
The processing device 1 sets the side surface of the shaft as a projection direction and generates a 2D projection image from the 3D data. Therefore, it is possible to easily specify the reference position in dividing the shaft.
The processing device 1 extracts a texture pattern around the landing sill as a feature of the projection image. Therefore, it is possible to accurately divide the shaft.
As shown in
Subsequently, the plane extracting unit 10 is explained with reference to
As shown in
At this time, the plane extracting unit 10 does not always need to perform processing targeting all point groups. For example, the plane extracting unit 10 may extract planes targeting partial point groups. For example, the plane extracting unit 10 may set, as a processing target, a point group surrounded by a bounding box having a preset size from a point group center. For example, the plane extracting unit 10 may calculate a bounding box for a point group obtained by removing noise in a measurement point group and narrow down the processing target point group based on a size of the bounding box and a direction of a main axis. For example, the plane extracting unit 10 may perform sub-sampling at a preset interval and set a curtailed point group as a processing target.
The plane extracting unit 10 determines, from among the extracted plurality of planes, the pair of planes closest to an orthogonal relation.
At this time, the plane extracting unit 10 may try all pairs in a round robin manner and determine the pair of planes closest to an orthogonal relation. The plane extracting unit 10 may determine the most mutually orthogonal pair from a combination of three planes whose angles are within a fixed range of 90 degrees.
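A sketch of the plane extraction and pair selection, assuming the Open3D library (segment_plane, select_by_index); the thresholds and counts are illustrative:

```python
import numpy as np
import open3d as o3d

def most_orthogonal_pair(pcd, n_planes=4):
    """Extract planes by RANSAC and return the pair of normals whose
    dot product is closest to zero (i.e. closest to orthogonal)."""
    normals, rest = [], pcd
    for _ in range(n_planes):
        model, inliers = rest.segment_plane(distance_threshold=0.02,
                                            ransac_n=3, num_iterations=1000)
        normals.append(np.asarray(model[:3]))
        rest = rest.select_by_index(inliers, invert=True)
        if len(rest.points) < 100:   # stop when too few points remain
            break
    best, best_dot = None, np.inf
    for i in range(len(normals)):
        for j in range(i + 1, len(normals)):
            dot = abs(float(np.dot(normals[i], normals[j])))
            if dot < best_dot:
                best_dot, best = dot, (normals[i], normals[j])
    return best
```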
Subsequently, the initial aligning unit 11 is explained with reference to
The initial aligning unit 11 rotates the point group data, based on the pair of planes extracted by the plane extracting unit 10, so that the surfaces of the shaft become substantially orthogonal to the axial directions of the XYZ coordinate system. For example, the aimed posture of the point group data is shown in
For example, as shown in
For example, as shown in
Thereafter, as shown in
For example, as shown in
For example, the initial aligning unit 11 performs, concerning Na, conversion for causing Na to coincide with a matched axial direction and, thereafter, performs the conversion concerning Nb.
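A sketch of this two-step conversion (NumPy; the Rodrigues construction and the axis choices are assumptions, and Na ⊥ Nb is assumed so the second rotation preserves the first):

```python
import numpy as np

def rotation_between(v_from, v_to):
    """Rotation matrix taking unit vector v_from to v_to (Rodrigues)."""
    a = v_from / np.linalg.norm(v_from)
    b = v_to / np.linalg.norm(v_to)
    v, c = np.cross(a, b), float(np.dot(a, b))
    if np.isclose(c, -1.0):            # opposite vectors: 180 degree turn
        axis = np.eye(3)[np.argmin(np.abs(a))]
        v = np.cross(a, axis)
        v /= np.linalg.norm(v)
        K = np.array([[0, -v[2], v[1]],
                      [v[2], 0, -v[0]],
                      [-v[1], v[0], 0]])
        return np.eye(3) + 2.0 * K @ K
    K = np.array([[0, -v[2], v[1]],
                  [v[2], 0, -v[0]],
                  [-v[1], v[0], 0]])
    return np.eye(3) + K + K @ K / (1.0 + c)

# First make Na coincide with its matched axis, then convert Nb:
# R1 = rotation_between(Na, np.array([0.0, 0.0, 1.0]))
# R2 = rotation_between(R1 @ Nb, np.array([1.0, 0.0, 0.0]))
# points = points @ (R2 @ R1).T
```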
Note that the axes to be made to coincide may be fixed irrespective of the method of selecting the pair of planes.
For example, when the pair of planes is equivalent to the wall surface or the bottom surface of the shaft or a structure equivalent to the wall surface or the bottom surface, as shown in
In this case, as shown in
For example, as shown in
For example, a calculation result of the circumscribed rectangle is shown below.
X coordinate maximum value=1.4 [m]
X coordinate minimum value=−1.3 [m]
Y coordinate maximum value=1.2 [m]
Y coordinate minimum value=−1.1 [m]
Z coordinate maximum value=2.0 [m]
Z coordinate minimum value=−1.5 [m]
In this case, when a projection image in the X-axis plus direction is generated, the projection area is given by the following ranges, where α1, α2, and α3 are preset constant values.
Range of the X coordinate: 1.4−α1 < x < 1.4
Range of the Y coordinate: −1.1+α2 < y < 1.2−α2
Range of the Z coordinate: −1.5+α3 < z < 2.0−α3
As a result, as shown in
For example, the labeling may be performed in a framework of image recognition based on image features obtained from the projection images. In particular, characteristic texture patterns such as the hatch door and the landing sill are present in the projection image of the front surface, so the projection image of the front surface is desirable as a target of the labeling. In this case, if edge features and position-invariant and scale-invariant local features are learned from several patterns as learning data of the types, sizes, and layouts of the hatch door and the landing sill, the recognition processing is easily performed.
Actually, uncertainty of directions of the projection images needs to be considered. Therefore, as shown in
The initial aligning unit 11 may exclude a projection image having an excessively small number of projection points among the projection images from processing targets.
In point group data in which the bottom surface and the ceiling surface of the shaft are originally hardly measured, the number of points projected onto a projection image is small. In this case, the initial aligning unit 11 may consider that the projection image is unlikely to be the “front surface” and exclude the projection surface from the processing targets in advance.
The initial aligning unit 11 may analytically perform the identification based on extraction of a rectangle like the hatch door, extraction of a horizontal line like the upper end of the landing sill, and the like.
Thereafter, as shown in
As a result, as shown in
Note that, in the third embodiment, the reference straight line extracting unit 2 extracts a reference straight line from the point group data aligned by the initial aligning unit 11.
According to the third embodiment explained above, the processing device 1 extracts a pair of planes orthogonal to each other from the 3D data of the shaft. The processing device 1 aligns the 3D data of the shaft according to the pair of planes. Therefore, it is possible to align the 3D data of the shaft irrespective of a posture at a measurement time of a 3D input unit.
Note that, when the reference plane extracting unit 4 extracts a reference plane first, the reference plane extracting unit 4 only has to extract a reference plane of the shaft from the 3D data aligned by the initial aligning unit 11.
As explained above, the elevator 3D data processing device of the present disclosure can be used in a system that processes data.
1 Processing device, 2 Reference straight line extracting unit, 3 First aligning unit, 4 Reference plane extracting unit, 5 Second aligning unit, 6 Projection image generating unit, 7 Projection image feature extracting unit, 8 Reference position specifying unit, 9 Dimension calculating unit, 10 Plane extracting unit, 11 Initial aligning unit, 100a Processor, 100b Memory, 200 Hardware
Filing Document: PCT/JP2020/017983 | Filing Date: 4/27/2020 | Country: WO