The present disclosure claims priority to Chinese Patent Application No. 202111133068.3, submitted to the Chinese Patent Office on Sep. 27, 2021 and entitled "Method, Apparatus and Device for Photogrammetry, and Storage Medium", which is incorporated herein by reference in its entirety.
The present disclosure relates to the technical field of photogrammetry, and in particular to a method, apparatus and device for photogrammetry, and a storage medium.
In the related art, photogrammetry implemented with monocular cameras generally requires manually arranging a large number of encoded points on the surface of a measured object. Moreover, the arrangement requires some experience, such as knowledge of the arrangement density and the spatial position relationships. After the photogrammetry is completed, these encoded points also need to be recovered, and the processes of arrangement and recovery are very time-consuming. Furthermore, if the positions of the encoded points are moved during photographing, the measurement may fail or the measurement accuracy may decrease. Therefore, there is an urgent need for a new photogrammetry method to solve the above problems.
A first aspect according to the embodiments of the present disclosure provides a method for photogrammetry. The method includes the following steps:
A second aspect according to the embodiments of the present disclosure provides an apparatus for photogrammetry, including:
A third aspect according to the embodiments of the present disclosure provides a device for photogrammetry. The photogrammetry device includes a memory and a processor, where the memory stores a computer program, and the computer program, when executed by the processor, may implement the method described in the first aspect.
A fourth aspect according to the embodiments of the present disclosure provides a computer-readable storage medium. The storage medium stores a computer program, and the computer program, when executed by a processor, may implement the method described in the first aspect.
The accompanying drawings herein are incorporated in and constitute a part of the specification, illustrating embodiments consistent with the present disclosure, and explaining the principles of the present disclosure together with the specification.
In order to explain the embodiments of the present disclosure or the technical solutions in the related art more clearly, the accompanying drawings required for the embodiments or the description of the related art are briefly introduced below. Evidently, those of ordinary skill in the art may also obtain other drawings based on these accompanying drawings without any creative effort.
In order to make the above objectives, features and advantages of the present disclosure easier to understand, the technical solutions of the present disclosure are further described below. It should be noted that the embodiments of the present disclosure and the features in the embodiments can be combined with each other in the case of no conflict.
Many specific details are set forth in the following description in order to facilitate a full understanding of the present disclosure, but the present disclosure may also be implemented in other ways different from those described herein. Obviously, the embodiments described in this specification are merely a part, rather than all, of the embodiments of the present disclosure.
Step 101: Multiple groups of synchronous images of an object to be measured continuously photographed by a multi-view camera are obtained, where each group of synchronous images includes multiple images photographed by multiple cameras in the multi-view camera at a same moment, and multiple mark points are arranged on a surface of the object to be measured.
The multi-view camera referred to in the embodiments of the present disclosure can be understood as a camera combination including two or more cameras. Before photographing, the multi-view camera can first be calibrated based on a calibration method provided in the related art to obtain the internal parameters of each camera in the multi-view camera and the relative external parameters between the cameras. For example, in a feasible calibration mode, the multi-view camera can be calibrated by the following method:
Given the three-dimensional coordinates and actual numbers of the mark points on a calibration plate, multiple groups of images are collected at different positions and angles. Using a bundle adjustment algorithm, iterative optimization is performed to minimize the error between the coordinates of the image points of the mark points on the images and the coordinates of the projection points of the three-dimensional coordinates of the mark points on the images. This yields the internal parameters of each camera in the multi-view camera as well as the exterior parameters (namely, external parameters) of each camera relative to the calibration plate at each photographing position. The relative external parameters between the cameras are then determined according to the external parameters of each camera relative to the calibration plate at each photographing position, completing the camera calibration.
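For illustration only, a minimal sketch of how such a calibration could be set up with OpenCV for a two-camera rig (the helper name, data layout and flags are assumptions, not the disclosure's exact procedure):

```python
import numpy as np
import cv2

def calibrate_two_camera_rig(plate_points, img_points_a, img_points_b, image_size):
    """Calibrate two cameras of a multi-view rig from mark points on a calibration
    plate observed in several synchronous image groups.

    plate_points: list of (N, 3) float32 arrays, plate mark points in plate coordinates
    img_points_a: list of (N, 1, 2) float32 arrays, the points as seen by camera A
    img_points_b: list of (N, 1, 2) float32 arrays, the same points as seen by camera B
    image_size:   (width, height) in pixels
    """
    # Intrinsics of each camera (reprojection error is minimized internally).
    _, K_a, dist_a, _, _ = cv2.calibrateCamera(plate_points, img_points_a, image_size, None, None)
    _, K_b, dist_b, _, _ = cv2.calibrateCamera(plate_points, img_points_b, image_size, None, None)

    # Relative external parameters (R, T) of camera B with respect to camera A,
    # jointly refined from all plate positions.
    _, K_a, dist_a, K_b, dist_b, R, T, _, _ = cv2.stereoCalibrate(
        plate_points, img_points_a, img_points_b,
        K_a, dist_a, K_b, dist_b, image_size,
        flags=cv2.CALIB_USE_INTRINSIC_GUESS)
    return (K_a, dist_a), (K_b, dist_b), (R, T)
```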
The mark points referred to in the embodiments of the present disclosure refer to patterns which are made of retroreflective materials and have marking functions (such as dots having specific sizes, but not limited to dots). In the embodiments of the present disclosure, different mark points may have different features, such as diameters and colors, but are not limited to diameters and colors.
Each group of synchronous images referred to in the embodiments of the present disclosure includes the multiple images photographed by the multiple cameras in the multi-view camera at the same moment, which can be understood as n images photographed by n cameras at the same moment under a same trigger signal, where each camera corresponds to one image, and n is a positive integer greater than or equal to 2. Taking a binocular camera as an example, the binocular camera obtains a first group of synchronous images at a first time, and the first group of synchronous images includes: a first image 11 obtained by a first camera at the first time and a second image 12 obtained by a second camera at the first time; and the binocular camera obtains a second group of synchronous images at a second time, and the second group of synchronous images includes: a first image 21 obtained by the first camera at the second time and a second image 22 obtained by the second camera at the second time. Certainly, the binocular camera is only used here as an illustrative example and does not limit the embodiments of the present disclosure.
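Purely as an illustration (the class and field names below are assumptions, not part of the disclosure), one possible in-memory representation of a group of synchronous images:

```python
from dataclasses import dataclass
from typing import List
import numpy as np

@dataclass
class SynchronousGroup:
    """One group of synchronous images: n images taken by the n cameras of the
    multi-view camera under the same trigger signal."""
    trigger_time: float
    images: List[np.ndarray]  # images[i] was photographed by camera i at trigger_time

# Continuous photographing yields a sequence of such groups: for a binocular camera,
# groups[0].images holds images 11 and 12, groups[1].images holds images 21 and 22, etc.
```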
In an implementation of the embodiments of the present disclosure, the multiple mark points can be arranged on a surface of the object to be measured in advance, and a multi-view industrial camera, such as a charge coupled device (CCD) camera, can be used for photographing. The multi-view industrial camera is a key component in a machine vision system, and the most essential function of the multi-view industrial camera is to transform optical signals into ordered electrical signals. The object to be measured can be continuously photographed by moving the multi-view industrial camera to obtain the multiple groups of synchronous images of the object to be measured.
Step 102: Coordinates of image points corresponding to the mark points in the synchronous images are extracted for each group of synchronous images, and first three-dimensional coordinates of the mark points corresponding to the image points are reconstructed according to calibration data of the multi-view camera and the extracted coordinates of the image points to obtain multiple groups of three-dimensional mark points.
The “first three-dimensional coordinates” referred to in the embodiments of the present disclosure are only used for distinguishing three-dimensional coordinates obtained based on a three-dimensional reconstruction method and three-dimensional coordinates obtained based on other methods, and do not have any other meaning.
The first three-dimensional coordinates of the mark points can be understood as the coordinates of the mark points in a world coordinate system, where the world coordinate system is a coordinate system established in three-dimensional space to describe the position relationship between a camera and the object to be measured. This coordinate system can be denoted OwXwYwZw, where Ow represents the origin of the coordinate system, Xw the x-axis component, Yw the y-axis component, and Zw the z-axis component. The origin of the world coordinate system may be set according to actual needs. The coordinates of a mark point in three-dimensional space may be written as (Xw, Yw, Zw).
A scene in space is projected through the pinhole model, imaged on the CCD, and then collected and stored as an image. For convenience of describing the image, it is necessary to define an image coordinate system. The image coordinate system is a two-dimensional coordinate system whose origin is set at the upper left corner of the image and whose x axis and y axis are coplanar with the synchronous image. The image point coordinates of the mark points in the image refer to the two-dimensional coordinates, in the image coordinate system, of the image points corresponding to the mark points. In the image coordinate system, the coordinates of the image points may be measured in pixels, and each pixel stores a gray value of the image.
In photogrammetry, a camera coordinate system and an image plane coordinate system are also needed. The camera coordinate system is mainly configured to complete the transformation from the world coordinate system to the image plane coordinate system, that is, complete the projection from three-dimensional coordinates to two-dimensional images. The image plane coordinate system can record two-dimensional information obtained from the projection of mark points to complete the transformation from the two-dimensional information to a synchronous image coordinate system.
The camera coordinate system is denoted OcXcYcZc: the optical center of the camera is taken as the origin, the optical axis of the camera is selected as the Zc axis, and the plane formed by the Xc axis and the Yc axis of the camera coordinate system is parallel to the surface of the image plane (namely, the synchronous image).
The plane where the image plane coordinate system is located is coplanar with the image plane, and the image plane coordinate system is denoted oxy. The origin o of the image plane coordinate system is selected at the intersection of the optical axis and the image plane. The effective focal length f of the camera is the distance from the optical center to the image plane, and the x-axis and y-axis directions of the image plane coordinate system are consistent with the pixel directions of the camera imaging device.
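These coordinate systems are linked by the standard pinhole projection. In one common formulation (given here as a general reference, not necessarily the exact notation of the original disclosure):

$$\begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix} = R \begin{bmatrix} X_w \\ Y_w \\ Z_w \end{bmatrix} + t, \qquad x = f\,\frac{X_c}{Z_c}, \quad y = f\,\frac{Y_c}{Z_c},$$

where R and t are the external parameters (rotation and translation from the world coordinate system to the camera coordinate system), f is the effective focal length, and (x, y) are the image plane coordinates of the projected mark point. Converting (x, y) to pixel coordinates in the image coordinate system then uses the internal orientation elements described later.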
In this embodiment, the coordinates of the image points corresponding to the mark points are extracted from the synchronous images, and the first three-dimensional coordinates of the mark points in the world coordinate system are reconstructed based on a polar line (epipolar) matching method according to the pre-obtained calibration data of the multi-view camera (including the internal parameters of each camera and the relative external parameters between the cameras) and the extracted coordinates of the image points of the mark points in the synchronous images. An example of this matching and reconstruction is sketched below.
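A minimal sketch, assuming OpenCV and a two-camera group, of epipolar matching followed by triangulation (the function name, the matching threshold and the brute-force nearest-line search are illustrative assumptions):

```python
import numpy as np
import cv2

def reconstruct_group(pts_a, pts_b, K_a, K_b, R, T, max_epipolar_dist=1.5):
    """Match mark-point image points between the two cameras of one synchronous
    group by epipolar distance and triangulate the matched pairs.

    pts_a, pts_b: (N, 2) and (M, 2) arrays of undistorted image point coordinates (pixels)
    K_a, K_b:     3x3 intrinsic matrices
    R, T:         rotation and translation of camera B relative to camera A (T of length 3)
    Returns an array of first three-dimensional coordinates, one per matched pair.
    """
    # Fundamental matrix from the calibration data: F = K_b^-T [T]x R K_a^-1.
    Tx = np.array([[0.0, -T[2], T[1]], [T[2], 0.0, -T[0]], [-T[1], T[0], 0.0]])
    F = np.linalg.inv(K_b).T @ Tx @ R @ np.linalg.inv(K_a)

    P_a = K_a @ np.hstack([np.eye(3), np.zeros((3, 1))])   # projection matrix of camera A
    P_b = K_b @ np.hstack([R, np.reshape(T, (3, 1))])      # projection matrix of camera B

    points_3d = []
    for p in np.asarray(pts_a, float):
        # Epipolar line of p in image B and distance of every candidate point to it.
        l = F @ np.array([p[0], p[1], 1.0])
        d = np.abs(np.asarray(pts_b, float) @ l[:2] + l[2]) / np.hypot(l[0], l[1])
        j = int(np.argmin(d))
        if d[j] < max_epipolar_dist:                        # accept the closest candidate
            X = cv2.triangulatePoints(P_a, P_b, p.reshape(2, 1),
                                      np.asarray(pts_b[j], float).reshape(2, 1))
            points_3d.append((X[:3] / X[3]).ravel())        # homogeneous -> Euclidean
    return np.array(points_3d)
```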
It should be noted that in an implementation of the embodiments of the present disclosure, the coordinates of the image points of the mark points in the synchronous images may be extracted by an edge extraction method. For example, edge extraction processing may first be performed on the synchronous images to obtain the image points of the mark points, and then the coordinates of the image points in the coordinate system of the synchronous images are determined based on the positions of these image points. Certainly, this is only one method for extracting the coordinates of the image points of the mark points, not the only one.
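For illustration, a simple blob-centroid variant of this extraction could look as follows (a hypothetical sketch using OpenCV; it approximates rather than reproduces the edge-extraction method mentioned above):

```python
import numpy as np
import cv2

def extract_mark_point_coords(image, min_area=10, max_area=2000):
    """Extract image point coordinates of retroreflective mark points from one
    synchronous image by thresholding and taking blob centroids."""
    gray = image if image.ndim == 2 else cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    # Retroreflective mark points appear as bright blobs under the camera flash.
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    coords = []
    for c in contours:
        if min_area <= cv2.contourArea(c) <= max_area:      # reject noise and large regions
            m = cv2.moments(c)
            if m["m00"] > 0:
                # The blob centroid is taken as the image point of the mark point.
                coords.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    return np.array(coords)
```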
Multiple groups of three-dimensional mark points referred to in the embodiments of the present disclosure refer to three-dimensional mark points obtained by processing each group of synchronous images in multiple groups of synchronous images.
In the embodiments of the present disclosure, a corresponding three-dimensional mark point can be computed from each image point in each group of synchronous images, and three-dimensional mark points computed from image points in different groups of synchronous images may correspond to the same physical mark point.
Step 103: A mark point global framework corresponding to the mark points on the surface of the object to be measured is constructed based on the multiple groups of three-dimensional mark points.
In the embodiments of the present disclosure, the multiple groups of three-dimensional mark points can be stitched in a certain order based on the multiple groups of obtained three-dimensional mark points, so as to construct the mark point global framework corresponding to the mark points on the surface of the object to be measured.
In the embodiments of the present disclosure, the multiple groups of synchronous images obtained by continuously photographing, with the multi-view camera, the object to be measured provided with the multiple mark points on its surface are obtained, where each group of synchronous images includes the multiple images photographed by the multiple cameras in the multi-view camera at the same moment. The coordinates of the image points corresponding to the mark points in each group of synchronous images are extracted, and the first three-dimensional coordinates of the mark points corresponding to the image points are reconstructed according to the calibration data of the multi-view camera and the coordinates of the image points to obtain the multiple groups of three-dimensional mark points. The mark point global framework corresponding to the mark points on the surface of the object to be measured is then constructed based on the multiple groups of three-dimensional mark points. Matching of the same mark points can thus be achieved with no encoded points or with only a small number of encoded points, so photogrammetry is achieved with no or few encoded points, the workload of measuring personnel in arranging encoded points is reduced, and measurement efficiency is improved. Furthermore, because photogrammetry of the object to be measured can be achieved without encoded points, inaccurate measurement due to movement of encoded points is avoided and measurement accuracy is improved.
In some embodiments of the present disclosure, constructing the mark point global framework corresponding to the mark points on the surface of the object to be measured based on the multiple groups of three-dimensional mark points may include: performing tracking and stitching processing and inter-group deduplication processing on the multiple groups of three-dimensional mark points to obtain the mark point global framework corresponding to the mark points on the surface of the object to be measured.
In some embodiments, tracking and stitching processing and inter-group deduplication processing are performed on the multiple groups of three-dimensional mark points to obtain the mark point global framework corresponding to the mark points on the surface of the object to be measured. A flowchart of such a photogrammetry method is provided in the accompanying drawings, and the method includes the following steps:
Step 301: Multiple groups of synchronous images of an object to be measured continuously photographed by a multi-view camera are obtained, where each group of synchronous images includes multiple images photographed by multiple cameras in the multi-view camera at the same moment, and multiple mark points are arranged on a surface of the object to be measured.
Step 302: Coordinates of image points corresponding to the mark points in the synchronous images are extracted for each group of synchronous images, and first three-dimensional coordinates of the mark points corresponding to the image points are reconstructed according to calibration data of the multi-view camera and the coordinates of the image points to obtain multiple groups of three-dimensional mark points.
Step 303: Tracking and stitching processing is performed on the multiple groups of three-dimensional mark points to obtain a mark point original framework corresponding to the mark points on the surface of the object to be measured and numbers of the three-dimensional mark points in each group in the original framework.
In the embodiments of the present disclosure, spatial triangles can be established based on the three-dimensional mark points in each group of synchronous images, congruent triangles are then matched between groups, and the three-dimensional mark points forming two congruent triangles are determined to be the same three-dimensional mark points; two groups of synchronous images are thus stitched based on the corresponding relationship between the inter-group congruent triangles. Specifically, when a corresponding relationship cannot be found by inter-group matching, the correspondence for this group is searched for in the global framework. In this way, tracking and stitching of the multiple groups of three-dimensional mark points can be achieved to obtain the mark point original framework.
During tracking of the three-dimensional mark points, the features of each group of three-dimensional mark points, such as the triangles formed between them, can first be extracted based on the characteristic that different three-dimensional mark points have different features. Then, based on these features, three-dimensional mark points whose feature similarity is higher than or equal to a preset threshold are determined to be the same three-dimensional mark points, and three-dimensional mark points whose feature similarity is less than the preset threshold are determined to be different three-dimensional mark points. The same three-dimensional mark points in each group are therefore given the same numbers, and different three-dimensional mark points are given different numbers, so as to obtain the numbers of the three-dimensional mark points of each group in the mark point original framework. For example, a first group of synchronous images are the images photographed first during continuous photographing, and a second group of synchronous images are the images photographed second. The first group contains 5 three-dimensional mark points with different features, numbered 1, 2, 3, 4 and 5 respectively. The second group contains 6 three-dimensional mark points with different features, of which 5 are the same as the 5 three-dimensional mark points in the first group, so these 5 three-dimensional mark points in the second group are correspondingly numbered 1, 2, 3, 4 and 5; that is, the same three-dimensional mark points are given the same numbers. The remaining three-dimensional mark point is numbered 6, so that different three-dimensional mark points are given different numbers.
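A minimal sketch of such feature-based matching, using sorted triangle side lengths as the feature and a brute-force search (the tolerance value and the exhaustive enumeration are illustrative assumptions; a practical implementation would prune the search):

```python
import numpy as np
from itertools import combinations

def triangle_signature(p1, p2, p3):
    """Feature of a spatial triangle formed by three 3D mark points: sorted side lengths."""
    return np.sort([np.linalg.norm(p1 - p2),
                    np.linalg.norm(p2 - p3),
                    np.linalg.norm(p3 - p1)])

def match_congruent_triangles(group_a, group_b, tol=0.05):
    """Find pairs of approximately congruent triangles between two groups of
    three-dimensional mark points (arrays of shape (N, 3) and (M, 3)); congruent
    triangles indicate the same physical mark points and drive the numbering."""
    tris_b = list(combinations(range(len(group_b)), 3))
    sigs_b = [triangle_signature(*group_b[list(t)]) for t in tris_b]

    matches = []
    for tri_a in combinations(range(len(group_a)), 3):
        sig_a = triangle_signature(*group_a[list(tri_a)])
        for tri_b, sig_b in zip(tris_b, sigs_b):
            if np.all(np.abs(sig_a - sig_b) < tol):   # side lengths agree within tolerance
                matches.append((tri_a, tri_b))
    return matches
```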
Step 304: Inter-group deduplication processing is performed on the numbers of the three-dimensional mark points in each group in the original framework to obtain a mark point global framework corresponding to the mark points on the surface of the object to be measured and unique numbers of the three-dimensional mark points in each group in the global framework.
In the embodiments of the present disclosure, each group of synchronous images is located in a different coordinate system before stitching. During stitching, each group of synchronous images must be transformed from its own coordinate system to the same coordinate system, and this transformation between coordinate systems introduces systematic errors, so the stitching process may accumulate error. Therefore, in the embodiments of the present disclosure, inter-group deduplication processing is performed according to the number of each three-dimensional mark point in the original framework, the three-dimensional coordinates of each three-dimensional mark point, and the position of the image point of each three-dimensional mark point in each synchronous image, so as to obtain the mark point global framework corresponding to the mark points on the surface of the object to be measured and the unique numbers of the three-dimensional mark points of each group in the global framework. This ensures the uniqueness and consistency of the number of each three-dimensional mark point in the global framework, eliminates or reduces the cumulative error in the coordinate-system transformations of the groups of synchronous images during stitching, and ensures the accuracy of stitching the multiple groups of synchronous images. The inter-group deduplication processing may also be referred to as inter-frame deduplication processing. The method for performing inter-group deduplication processing in the embodiments of the present disclosure is similar to that in the related art and will not be described here.
Compared with the related art, the technical solutions provided in the embodiments of the present disclosure have the following advantages:
In the embodiments of the present disclosure, multiple groups of synchronous images obtained by continuously photographing, with a multi-view camera, an object to be measured provided with multiple mark points on its surface are obtained, where each group of synchronous images includes multiple images photographed by multiple cameras in the multi-view camera at a same moment. Coordinates of image points corresponding to the mark points in each group of synchronous images are extracted, and first three-dimensional coordinates of the mark points corresponding to the image points are reconstructed according to calibration data of the multi-view camera and the coordinates of the image points to obtain multiple groups of three-dimensional mark points. Tracking and stitching processing is performed on the multiple groups of three-dimensional mark points to obtain a mark point original framework corresponding to the mark points on the surface of the object to be measured and numbers of the three-dimensional mark points of each group in the original framework. Inter-group deduplication processing is then performed on these numbers to obtain a global framework and unique numbers of the three-dimensional mark points of each group in the global framework. According to the technical solutions provided in the embodiments of the present disclosure, matching of the same mark points can be achieved with no encoded points or with only a small number of encoded points by combining a multi-view measurement technology with a tracking and stitching technology and an inter-group deduplication technology, so photogrammetry is achieved with no or few encoded points, the workload of measuring personnel in arranging encoded points is reduced, and measurement efficiency is improved. Furthermore, because photogrammetry of an object to be measured can be achieved without encoded points, inaccurate measurement due to movement of encoded points is avoided and measurement accuracy is improved.
Step 401: Multiple groups of synchronous images of an object to be measured continuously photographed by a multi-view camera are obtained, where each group of synchronous images includes multiple images photographed by multiple cameras in the multi-view camera at a same moment, and multiple mark points are arranged on a surface of the object to be measured.
Step 402: Coordinates of image points corresponding to the mark points in the synchronous images are extracted for each group of synchronous images, and first three-dimensional coordinates of the mark points corresponding to the image points are reconstructed according to calibration data of the multi-view camera and the extracted coordinates of the image points to obtain multiple groups of three-dimensional mark points.
Step 403: Tracking and stitching processing is performed on the multiple groups of three-dimensional mark points to obtain a mark point original framework corresponding to the mark points on the surface of the object to be measured and numbers of the three-dimensional mark points in each group in the original framework.
Step 404: Inter-group deduplication processing is performed on the numbers of the three-dimensional mark points in each group in the original framework to obtain a global framework and unique numbers of the three-dimensional mark points in each group in the global framework.
Step 405: Bundle adjustment processing is performed on the first three-dimensional coordinates of each numbered three-dimensional mark point based on the coordinates of the image points and the first three-dimensional coordinates of each numbered three-dimensional mark point in the global framework on each group of synchronous images as well as internal parameters and external parameters of the multi-view camera, so as to obtain second three-dimensional coordinates corresponding to each numbered three-dimensional mark point.
Specifically, after the above steps of reconstructing the first three-dimensional coordinates of the mark points and performing tracking and stitching processing and inter-group deduplication processing are completed, all three-dimensional mark points in the obtained framework have determined unique numbers. Simultaneously, there is a one-to-one correspondence relationship between the mark points included in each synchronous image and the three-dimensional mark points in the global framework. Furthermore, the internal parameters and external parameters of the multi-view camera, the coordinates of the image points of each numbered three-dimensional mark point on each group of synchronous images, and the first three-dimensional coordinates of each numbered three-dimensional mark point are obtained in advance. Thus, all conditions required for bundle adjustment are prepared.
In some embodiments, the process of performing bundle adjustment processing on the first three-dimensional coordinates of each numbered three-dimensional mark point based on the coordinates of the image points and the first three-dimensional coordinates of each numbered three-dimensional mark point in the global framework on each group of synchronous images as well as internal parameters and external parameters of the multi-view camera, so as to obtain second three-dimensional coordinates corresponding to each numbered three-dimensional mark point may include steps 40501 to 40502:
Step 40501: Internal orientation elements and external orientation elements corresponding to each group of synchronous images are respectively inputted into collinear equations to obtain collinear equations to be solved corresponding to each group of synchronous images.
In practice, internal parameters of a camera include internal orientation elements, distortion coefficients, pixel sizes in horizontal and vertical directions, and a ratio of the pixel sizes in the horizontal and vertical directions. The internal orientation elements include a translation distance x0 from an origin o of an image plane coordinate system (an intersection of an optical axis of the camera and an image plane) to a center of a synchronous image in a horizontal direction, a translation distance y0 from the origin o of the image plane coordinate system to the center of the synchronous image in a vertical direction, and a distance from an optical center of the camera to the image plane, namely an effective focal length f of the camera. That is to say, the internal orientation elements express the spatial position of the optical center of the camera relative to the center of the synchronous image. The internal parameters of the camera can be obtained through camera calibration. It can be considered that the internal orientation elements are determined in the camera calibration mentioned above.
In the embodiments of the present disclosure, the parameters of the spatial position and posture of a photographic beam at a moment of photography can be determined according to the internal parameters and external parameters of the multi-view camera. The parameters of the spatial position and posture of the photographic beam at the moment of photography are referred to as external orientation elements for representing the spatial position of the photographic beam at the moment of photography. The external orientation elements include six parameters, where three elements are line elements for describing spatial coordinate values of a photographic center, and the other three are angle elements for describing the spatial posture of the image.
Specifically, a collinear equation is a mathematical relationship expressing that an object point, an image point and a projection center (usually a lens center for an image) are located on a straight line.
In some embodiments of the present disclosure, internal orientation elements and external orientation elements corresponding to each group of synchronous images can be inputted into collinear equations to obtain collinear equations to be solved corresponding to each group of synchronous images. The collinear equations are represented by the following formulae:
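In the standard photogrammetric form (given here as a general reference formulation; the notation may differ from the original):

$$x - x_0 = -f\,\frac{a_1(X - X_s) + b_1(Y - Y_s) + c_1(Z - Z_s)}{a_3(X - X_s) + b_3(Y - Y_s) + c_3(Z - Z_s)}, \qquad y - y_0 = -f\,\frac{a_2(X - X_s) + b_2(Y - Y_s) + c_2(Z - Z_s)}{a_3(X - X_s) + b_3(Y - Y_s) + c_3(Z - Z_s)},$$

where (x, y) are the image plane coordinates of the image point, (x0, y0, f) are the internal orientation elements, (X, Y, Z) are the three-dimensional coordinates of the mark point, (Xs, Ys, Zs) are the line elements of the external orientation elements, and a_i, b_i, c_i (i = 1, 2, 3) are the entries of the rotation matrix determined by the three angle elements.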
Step 40502: Iterative computations are performed on all collinear equations to be solved in sequence by taking the coordinates of the image points of each numbered three-dimensional mark point in the global framework on each group of synchronous images and the first three-dimensional coordinates of each numbered three-dimensional mark point as initial values based on a bundle adjustment algorithm, so as to obtain the second three-dimensional coordinates corresponding to each numbered three-dimensional mark point.
Specifically, the coordinates of the image points of each numbered three-dimensional mark point in the global framework on each group of synchronous images and the first three-dimensional coordinates of each numbered three-dimensional mark point are first taken as initial values and inputted into the collinear equations to be solved corresponding to each group of synchronous images for iterative computation. The three-dimensional mark points and the internal and external parameters of the multi-view camera are optimized jointly, and the optimal coordinates of the three-dimensional mark points, namely the second three-dimensional coordinates, are obtained by minimizing the residuals. The process of the bundle adjustment algorithm involved in the embodiments of the present disclosure is similar to that in the related art; for details, reference may be made to the bundle adjustment algorithm provided in the related art, which will not be described here.
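A compressed sketch of such a joint refinement, assuming SciPy and OpenCV (the data layout and function names are illustrative; the intrinsics are held fixed here for brevity, whereas the text also refines the camera parameters):

```python
import numpy as np
import cv2
from scipy.optimize import least_squares

def bundle_adjust(points_3d, observations, poses, K, dist):
    """Jointly refine the first three-dimensional coordinates of the numbered mark
    points and the camera poses by minimizing reprojection residuals.

    points_3d:    (P, 3) initial first three-dimensional coordinates
    observations: list of (pose_index, point_index, u, v) image point observations
    poses:        (C, 6) initial poses, each row = 3 rotation-vector + 3 translation values
    K, dist:      intrinsic matrix and distortion coefficients (held fixed here)
    Returns the second three-dimensional coordinates and the refined poses.
    """
    n_cams, n_pts = len(poses), len(points_3d)

    def residuals(x):
        cams = x[:n_cams * 6].reshape(n_cams, 6)
        pts = x[n_cams * 6:].reshape(n_pts, 3)
        res = []
        for ci, pi, u, v in observations:
            proj, _ = cv2.projectPoints(pts[pi:pi + 1], cams[ci, :3], cams[ci, 3:], K, dist)
            res.extend(proj.ravel() - (u, v))        # 2 reprojection residuals per observation
        return np.asarray(res)

    x0 = np.hstack([np.ravel(poses), np.ravel(points_3d)])
    sol = least_squares(residuals, x0)               # iterative minimization of the residuals
    second_3d = sol.x[n_cams * 6:].reshape(n_pts, 3)
    return second_3d, sol.x[:n_cams * 6].reshape(n_cams, 6)
```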
In this embodiment, tracking and stitching processing is performed on the multiple groups of three-dimensional mark points, and the features of each group of three-dimensional mark points are extracted to number the three-dimensional mark points. Inter-group deduplication processing is performed on the numbers of the three-dimensional mark points of each group in the framework to ensure the uniqueness and consistency of these numbers. Bundle adjustment processing is then performed on the first three-dimensional coordinates of each numbered three-dimensional mark point, based on the coordinates of the image points and the first three-dimensional coordinates of each numbered three-dimensional mark point in the framework on each group of synchronous images as well as the internal parameters and external parameters of the multi-view camera, so as to obtain the second three-dimensional coordinates, namely optimized three-dimensional coordinates, corresponding to each numbered three-dimensional mark point. As a result, the three-dimensional image obtained by multi-view measurement is more accurate, adverse impacts on the measurement result caused by environmental factors such as temperature are avoided, and the reconstruction result can be viewed in real time. Moreover, the image point coordinates of the mark points are processed directly to determine the three-dimensional coordinates of the mark points, so the amount of computation is small and measurement efficiency can be improved.
In some embodiments of the present disclosure, a ruler may be arranged on the surface of the object to be measured or on its periphery. The multiple groups of synchronous images of the object to be measured continuously photographed by the multi-view camera then include both the object to be measured and the ruler, that is, the ruler and the object to be measured are photographed simultaneously by the multi-view camera. After the unique numbers of the three-dimensional mark points of each group in the global framework are obtained, the photogrammetry device may further execute a photogrammetry method whose flowchart is provided in the accompanying drawings and which includes the following steps:
Step 501: A measurement size of at least one ruler corresponding to the object to be measured and a physical size corresponding to the measurement size are obtained.
The ruler in the embodiments of the present disclosure can be understood as a ruler with a known physical size, which can be used as a reference for photogrammetry. The physical size can be understood as an actual size, for example, the physical size may include an actual length of the ruler, and the like. Each ruler may be composed of at least two encoded mark points arranged on the surface of the object to be measured or periphery of the object to be measured, the encoded mark points can be understood as special mark points with known encoded information, and the physical size of the ruler may be obtained according to the distance between the encoded mark points. At least one ruler corresponds to the object to be measured. In some embodiments, the ruler may include a ruler carrier, and encoded mark points may be arranged on the ruler carrier.
In the embodiments of the present disclosure, a measurement size of at least one ruler corresponding to the object to be measured and a physical size corresponding to the measurement size can be obtained.
In some embodiments, the process of obtaining a measurement size of at least one ruler corresponding to the object to be measured and a physical size corresponding to the measurement size may include steps 50101 to 50103:
Step 50101: Second three-dimensional coordinates of encoded mark points corresponding to the encoded information are obtained based on the encoded information on any two encoded mark points in the ruler.
In the embodiments of the present disclosure, the first three-dimensional coordinates of the mark points on the surface of the object to be measured and of the encoded mark points on the ruler corresponding to the object to be measured can be reconstructed based on the multiple groups of synchronous images, so as to obtain the multiple groups of three-dimensional mark points. Tracking and stitching processing, inter-group deduplication processing and bundle adjustment processing are performed on the multiple groups of three-dimensional mark points to obtain the second three-dimensional coordinates of each three-dimensional mark point; the second three-dimensional coordinates include those of the mark points on the surface of the object to be measured and those of the encoded mark points on the ruler corresponding to the object to be measured. The second three-dimensional coordinates of the encoded mark points corresponding to the encoded information are then obtained based on the encoded information on any two encoded mark points in the ruler.
Step 50102: The distance between any two encoded mark points is computed based on the second three-dimensional coordinates of the encoded mark points corresponding to the encoded information, so as to obtain a measurement size corresponding to the ruler.
In the embodiments of the present disclosure, the distance between any two encoded mark points can be computed based on the second three-dimensional coordinates of the encoded mark points corresponding to the encoded information, and the distance is determined as the measurement size corresponding to the ruler.
Step 50103: A physical size corresponding to the measurement size is determined based on the encoded information on the encoded mark points corresponding to the measurement size.
In the embodiments of the present disclosure, identification information of the ruler and the physical size determined by any two encoded mark points in the ruler corresponding to the identification information of the ruler may be stored in advance, and the identification information of the ruler includes the encoded information of each encoded mark point in the ruler.
In the embodiments of the present disclosure, the identification information corresponding to the encoded information may be determined based on the encoded information on the encoded mark point corresponding to the measurement size, and the physical size corresponding to the measurement size is obtained in the physical size of the ruler corresponding to the identification information.
Step 502: A ratio of the measurement size to the physical size of the ruler is computed.
In the embodiments of the present disclosure, after the measurement size and the physical size of the ruler are obtained, a ratio of the measurement size to the physical size of each ruler can be computed.
Step 503: The second three-dimensional coordinates of each three-dimensional mark point in the global framework are adjusted based on the ratio to obtain an adjusted global framework.
In the embodiments of the present disclosure, after the ratio of the ruler corresponding to the object to be measured is obtained, the ratio of one ruler can be selected, and the second three-dimensional coordinates of each three-dimensional mark point in the global framework are multiplied by the ratio to adjust the second three-dimensional coordinates of each three-dimensional mark point in the global framework, so as to obtain an adjusted global framework.
In some embodiments, the process of adjusting the second three-dimensional coordinates of each three-dimensional mark point in the global framework based on the ratio to obtain an adjusted global framework may include steps 50301 to 50302:
Step 50301: An average ratio of the rulers corresponding to the object to be measured is computed based on the number of the rulers and the ratio.
In the embodiments of the present disclosure, the ratios of the measurement size to the physical size of the rulers may be summed and the sum divided by the number of rulers, so as to obtain the average ratio of the rulers corresponding to the object to be measured.
Step 50302: The second three-dimensional coordinates of each three-dimensional mark point in the global framework are adjusted based on the average ratio to obtain an adjusted framework.
In the embodiments of the present disclosure, after the average ratio of the rulers corresponding to the object to be measured is obtained, the second three-dimensional coordinates of each three-dimensional mark point in the global framework may be multiplied by the average ratio to adjust the second three-dimensional coordinates of each three-dimensional mark point in the global framework, so as to obtain an adjusted global framework.
For example, three rulers a, b and c are arranged on the surface of the object to be measured; a physical length of the ruler a is 100.1 mm, a physical length of the ruler b is 100.2 mm, and a physical length of the ruler c is 100.3 mm; for each ruler, based on the second three-dimensional coordinates corresponding to the encoded mark points in the ruler, it can be computed that the measurement size of the ruler a is 99.9 mm, the measurement size of the ruler b is 100.0 mm, and the measurement size of the ruler c is 100.1 mm; an average ratio (100.1/99.9+100.2/100.0+100.3/100.1)/3 of the three rulers is computed; and then, the second three-dimensional coordinates of each three-dimensional mark point in the global framework are multiplied by the average ratio to adjust the second three-dimensional coordinates of the three-dimensional mark points in the global framework, so as to obtain an adjusted global framework.
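Following the worked example above, a small sketch of how the average ratio and the adjusted global framework could be computed (the data layout is an assumption; the scale factor follows the example, i.e. physical length divided by measured length):

```python
import numpy as np

def adjust_global_framework(framework_points, rulers):
    """Scale the second three-dimensional coordinates of the global framework using
    the rulers, as in the worked example above.

    framework_points: (N, 3) array of second three-dimensional coordinates
    rulers: list of dicts, e.g. {"p1": (x, y, z), "p2": (x, y, z), "physical_mm": 100.1},
            where p1 and p2 are the reconstructed coordinates of the two encoded mark points
    """
    ratios = []
    for r in rulers:
        measured = np.linalg.norm(np.subtract(r["p1"], r["p2"]))  # measurement size of the ruler
        ratios.append(r["physical_mm"] / measured)                # physical / measured, per the example
    average_ratio = sum(ratios) / len(ratios)                     # mean ratio over all rulers
    return framework_points * average_ratio                       # adjusted global framework

# With the numbers from the text: physical 100.1/100.2/100.3 mm and measured
# 99.9/100.0/100.1 mm, average_ratio = (100.1/99.9 + 100.2/100.0 + 100.3/100.1) / 3.
```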
Therefore, the second three-dimensional coordinates of each three-dimensional mark point in the global framework can be adjusted according to the ruler corresponding to the object to be measured to optimize the global framework, so as to improve accuracy of photogrammetry.
Step 601: Multiple groups of synchronous images of an object to be measured continuously photographed by a multi-view camera are obtained, where each group of synchronous images includes multiple images photographed by multiple cameras in the multi-view camera at a same moment, and multiple mark points are arranged on a surface of the object to be measured.
Step 602: Coordinates of image points corresponding to the mark points in the synchronous images are extracted for each group of synchronous images, and first three-dimensional coordinates of the mark points corresponding to the image points are reconstructed according to calibration data of the multi-view camera and the coordinates of the image points to obtain multiple groups of three-dimensional mark points.
Step 603: Tracking and stitching processing is performed on the multiple groups of three-dimensional mark points to obtain a mark point original framework corresponding to the mark points on the surface of the object to be measured and numbers of the three-dimensional mark points in each group in the original framework.
Step 604: Inter-group deduplication processing is performed on the numbers of the three-dimensional mark points in each group in the original framework to obtain a global framework and unique numbers of the three-dimensional mark points in each group in the global framework.
Step 605: An image corresponding to the monocular camera in each group of synchronous images is extracted for any monocular camera in the multi-view camera.
In the embodiments of the present disclosure, the multi-view camera includes at least two monocular cameras. After the mark point global framework corresponding to the mark points on the surface of the object to be measured and the unique numbers of the three-dimensional mark points in each group in the global framework are obtained, an image corresponding to the monocular camera in each group of synchronous images can be extracted for any monocular camera in the multi-view camera.
Step 606: The coordinates of the image points of each numbered three-dimensional mark point in the global framework on the image corresponding to the monocular camera are determined as first image point coordinates.
In the embodiments of the present disclosure, the coordinates of the image points of each numbered three-dimensional mark point in the global framework on the image corresponding to the monocular camera are determined, and these coordinates are taken as the first image point coordinates.
Step 607: Bundle adjustment processing is performed on the first three-dimensional coordinates of each numbered three-dimensional mark point based on the first image point coordinates of each numbered three-dimensional mark point in the global framework on the image corresponding to the monocular camera, the first three-dimensional coordinates of each numbered three-dimensional mark point as well as internal parameters and external orientation elements of the monocular camera, so as to obtain third three-dimensional coordinates corresponding to each numbered three-dimensional mark point.
In the embodiments of the present disclosure, the internal parameters of the monocular camera can be obtained through calibration. The parameters of the spatial position and posture of a photographic beam at a moment of photography can be determined according to the internal parameters of the monocular camera. The parameters of the spatial position and posture of the photographic beam at the moment of photography are referred to as external orientation elements for representing the spatial position of the photographic beam at the moment of photography. The external orientation elements include six parameters, where three elements are line elements for describing spatial coordinate values of a photographic center, and the other three are angle elements for describing the spatial posture of the image.
In the embodiments of the present disclosure, bundle adjustment processing can be performed on the first three-dimensional coordinates of each numbered three-dimensional mark point based on the first image point coordinates of each numbered three-dimensional mark point in the global framework on the image corresponding to the monocular camera, the first three-dimensional coordinates of each numbered three-dimensional mark point as well as the internal parameters and external orientation elements of the monocular camera, so as to obtain the third three-dimensional coordinates corresponding to each numbered three-dimensional mark point.
The “third three-dimensional coordinates” referred to in the embodiments of the present disclosure are only used for distinguishing three-dimensional coordinates obtained based on a three-dimensional reconstruction method and three-dimensional coordinates obtained based on other methods, and do not have any other meaning.
In some embodiments, the process of performing bundle adjustment processing on the first three-dimensional coordinates of each numbered three-dimensional mark point based on the first image point coordinates of each numbered three-dimensional mark point in the global framework on the image corresponding to the monocular camera, the first three-dimensional coordinates of each numbered three-dimensional mark point as well as internal parameters and external orientation elements of the monocular camera, so as to obtain third three-dimensional coordinates corresponding to each numbered three-dimensional mark point may include steps 60701 to 60703:
Step 60701: Projection transformation is performed on the first three-dimensional coordinates of each numbered three-dimensional mark point to obtain second image point coordinates of the image points of the first three-dimensional coordinates on the image corresponding to the monocular camera.
In the embodiments of the present disclosure, projection transformation can be performed on the first three-dimensional coordinates of each numbered three-dimensional mark point to obtain coordinates of the image points of the first three-dimensional coordinates on the image corresponding to the monocular camera, namely second image point coordinates.
Step 60702: Residual equations corresponding to each numbered three-dimensional mark point are established based on the first image point coordinates and second image point coordinates corresponding to each numbered three-dimensional mark point.
In the embodiments of the present disclosure, after the first image point coordinates and second image point coordinates corresponding to the first three-dimensional coordinates of each numbered three-dimensional mark point are obtained, residual equations corresponding to each numbered three-dimensional mark point can be established based on the first image point coordinates and second image point coordinates corresponding to each numbered three-dimensional mark point.
Step 60703: Iterative computations are performed on all residual equations in sequence by taking the first three-dimensional coordinates of each numbered three-dimensional mark point in the global framework as well as internal parameters and external orientation elements of the monocular camera as initial values based on a bundle adjustment algorithm, so as to obtain third three-dimensional coordinates corresponding to each numbered three-dimensional mark point.
In the embodiments of the present disclosure, iterative computations can be performed on all residual equations corresponding to each numbered three-dimensional mark point in sequence by taking the first three-dimensional coordinates of each numbered three-dimensional mark point in the global framework as well as the internal parameters and external orientation elements of the monocular camera as the initial values, so as to obtain the third three-dimensional coordinates corresponding to each numbered three-dimensional mark point. The process of the bundle adjustment algorithm involved in the embodiments of the present disclosure is similar to that in the related art; for details, reference may be made to the bundle adjustment algorithm provided in the related art, which will not be described here.
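A minimal sketch of steps 60701 to 60703 in compressed form, assuming SciPy and OpenCV (the names and the parameterization are illustrative assumptions):

```python
import numpy as np
import cv2
from scipy.optimize import least_squares

def monocular_bundle_adjust(points_3d, first_img_coords, rvec, tvec, K, dist):
    """Refine the numbered mark points against one monocular image.

    points_3d:        (N, 3) first three-dimensional coordinates (initial values)
    first_img_coords: (N, 2) first image point coordinates observed on this image
    rvec, tvec:       external orientation elements of the monocular camera (initial values)
    K, dist:          internal parameters of the monocular camera
    Returns the third three-dimensional coordinates.
    """
    n = len(points_3d)

    def residuals(x):
        rv, tv, pts = x[:3], x[3:6], x[6:].reshape(n, 3)
        # Step 60701: projection transformation -> second image point coordinates.
        second, _ = cv2.projectPoints(pts, rv, tv, K, dist)
        # Step 60702: residual = first image point coords - second image point coords.
        return (first_img_coords - second.reshape(n, 2)).ravel()

    # Step 60703: iterative computation, taking the first three-dimensional coordinates
    # and the orientation elements as initial values.
    x0 = np.hstack([np.ravel(rvec), np.ravel(tvec), np.ravel(points_3d)])
    sol = least_squares(residuals, x0)
    return sol.x[6:].reshape(n, 3)   # third three-dimensional coordinates
```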
Therefore, bundle adjustment processing can be performed on the first three-dimensional coordinates of each numbered three-dimensional mark point based on the first image point coordinates of each numbered three-dimensional mark point in the global framework on the image corresponding to the monocular camera, the first three-dimensional coordinates of each numbered three-dimensional mark point as well as the internal parameters and external orientation elements of the monocular camera, so as to obtain the third three-dimensional coordinates corresponding to each numbered three-dimensional mark point. As a result, adverse impacts on the measurement result of the multi-view camera caused by possible structural instability factors of the multi-view camera can be avoided to obtain optimized three-dimensional coordinates, so a three-dimensional image obtained by multi-view measurement is more accurate.
Optionally, the above constructing component 703 includes:
Optionally, the above processing sub-component includes:
Optionally, the processing component 702 includes:
Optionally, the above tracking and stitching unit includes:
Optionally, the above processing sub-component further includes:
Optionally, the above first bundle adjustment unit includes:
Optionally, the above processing sub-component further includes:
Optionally, the above obtaining unit includes:
Optionally, the above adjusting unit includes:
Optionally, the above processing sub-component further includes:
Optionally, the above second bundle adjustment unit includes:
The photogrammetry apparatus provided in the embodiment of the present disclosure may implement the method according to any one of the above embodiments. Execution modes and beneficial effects are similar and will not be described here.
An embodiment of the present disclosure provides a photogrammetry device, including:
An embodiment of the present disclosure provides a computer-readable storage medium:
The storage medium stores a computer program, and the computer program, when executed by a processor, may implement the photogrammetry method described above. Execution modes and beneficial effects are similar and will not be described here.
The above computer-readable storage medium may use any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium may include, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus or device, or any combination of the above. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connector having one or more wires, a portable disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any appropriate combination of the above.
Program codes for executing the operations of the embodiments of the present disclosure may be written in any combination of one or more programming languages. The programming languages include object-oriented programming languages such as Java and C++, and also include conventional procedural programming languages such as the "C" language or similar programming languages. The program codes may be executed completely on a user computer device, partially on user equipment, as a stand-alone software package, partially on a user computer device and partially on a remote computer device, or completely on a remote computer device or a server.
It should be noted that, herein, relational terms such as "first" and "second" are only used to distinguish one entity or operation from another, and do not necessarily require or imply any actual relationship or order between these entities or operations. Moreover, the terms "comprise", "comprising" or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article or device comprising a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article or device. Without further limitation, an element defined by the statement "comprising a . . ." does not exclude the existence of other identical elements in the process, method, article or device comprising that element.
The above is only the specific implementation of the present disclosure, which enables those skilled in the art to understand or implement the present disclosure. Various modifications to these embodiments will be obvious to those skilled in the art, and the general principle defined in this description may be implemented in other embodiments without departing from the spirit or scope of the present disclosure. Therefore, the present disclosure is not limited to the embodiments described herein, but shall be accorded the widest scope consistent with the principles and novel features disclosed herein.
According to the photogrammetry method provided by the present disclosure, matching of the same mark points can be achieved with no encoded points or with only a small number of encoded points by a multi-view measurement technology, so photogrammetry is achieved with no or few encoded points, the workload of measuring personnel in arranging encoded points is reduced, and measurement efficiency is improved. Furthermore, because photogrammetry of an object to be measured can be achieved without encoded points, inaccurate measurement due to movement of encoded points is avoided and measurement accuracy is improved, so the method has strong industrial applicability.
Number | Date | Country | Kind
---|---|---|---
202111133068.3 | Sep. 27, 2021 | CN | national

Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/CN2022/121930 | 9/27/2022 | WO |