This application relates to the field of computer technologies, including a data processing method and apparatus, a computer device, a storage medium, and a program product.
Currently, during calibration of an intrinsic component parameter (that is, an intrinsic camera parameter) of a camera component, a calibration board (that is, a shot object) needs to be captured from a plurality of angles by using the camera component, and then the intrinsic component parameter of the camera component is generated based on a plurality of images captured from the plurality of angles. Alternatively, video shooting needs to be performed on the calibration board by using the camera component, and then the intrinsic component parameter of the camera component is generated based on a plurality of video frames extracted from the shot video obtained through video shooting. However, the manner of generating the intrinsic component parameter through the plurality of captured images or the plurality of captured video frames requires time to process the plurality of images. Therefore, the speed of calibrating the intrinsic component parameter is reduced.
In addition, in the related art, a hardware device (for example, a focus follower) may further be installed in the camera component, and the intrinsic component parameter of the camera component may be directly read by using the hardware device. However, the hardware device is very expensive, and installation and deployment are very troublesome, which increases the costs of calibrating the intrinsic component parameter.
Embodiments of this disclosure provide a data processing method and apparatus, a computer device, a non-transitory computer-readable storage medium, and a program product, which helps improve efficiency of determining one or more intrinsic camera parameters.
Some aspects of the disclosure provide a method of data processing. The method includes obtaining an image of a spatial object in a space, the spatial object is captured in the image by a camera component, the image includes one or more captured planar regions corresponding to one or more planes of the spatial object, a first captured planar region of the one or more captured planar regions includes an array of first captured identification codes that are individually identifiable and includes first captured straight lines, the first captured straight lines are associated with the first captured identification codes according to a first mapping relationship, the first captured straight lines in the image are associated with a first vanishing point. The method further includes identifying the first captured identification codes from the image, identifying the first captured straight lines in the image based on the first mapping relationship, determining first equations of the first captured straight lines in the image based on coordinates of captured points on the first captured straight lines in the image, determining, based on the first equations of the first captured straight lines, coordinates of the first vanishing point, and determining one or more intrinsic parameters of the camera component based on at least the first vanishing point. Apparatus and non-transitory computer-readable storage medium counterpart embodiments are also contemplated.
An aspect of the embodiments of this disclosure provides a non-transitory computer-readable storage medium, the computer-readable storage medium storing instructions which, when executed by a processor, cause the processor to perform the method provided in the embodiments of this disclosure.
An aspect of the embodiments of this disclosure provides a computer program product or a computer program, the computer program product including a computer program, the computer program being stored in a computer-readable storage medium. A processor of a computer device reads the computer program from the computer-readable storage medium, and executes the computer program, causing the computer device to perform the method provided in the embodiments of this disclosure.
To describe the technical solutions in embodiments of this disclosure more clearly, the following briefly describes the accompanying drawings required for describing the embodiments.
The technical solutions in embodiments of this disclosure are described below with reference to the accompanying drawings in the embodiments of this disclosure. The described embodiments are merely some rather than all of the embodiments of this disclosure. Other embodiments are within the scope of the present disclosure.
Each terminal device in the terminal device cluster may include an intelligent terminal having a data processing function, such as a smartphone, a tablet computer, a notebook computer, a desktop computer, a smart home appliance, a wearable device, an onboard terminal, an intelligent voice interaction device, or a camera. For ease of understanding, in this embodiment of this disclosure, a terminal device may be selected as a target terminal device from the plurality of terminal devices shown in
The server 2000 may be an independent physical server, or may be a server cluster formed by a plurality of physical servers or a distributed system, and may further be a cloud server that provides basic cloud computing services such as cloud service, a cloud database, cloud computing, a cloud function, cloud storage, a network service, cloud communication, a middleware service, a domain name service, a security service, a CDN, and a big data and artificial intelligence platform.
It is to be understood that the target terminal device may be integrated with a camera component for capturing a target image associated with a spatial object. The camera component herein may be a camera component for capturing a photo or a video on the target terminal device, for example, a camera. A plurality of camera components may be integrated and installed on a target terminal device. The spatial object may be a two-dimensional code green screen, and the two-dimensional code green screen represents a green screen printed with a two-dimensional code. In some embodiments, the spatial object may further be a checkerboard green screen, and the checkerboard green screen represents a green screen printed with a rectangular box in a solid color (for example, black). In addition, the spatial object may further include a to-be-shot subject (for example, a lion). It is to be understood that this embodiment of this disclosure is described by using an example in which the spatial object is the two-dimensional code green screen.
The two-dimensional code green screen may consist of three surfaces: a left wall, a right wall, and the ground. In some embodiments, the two-dimensional code green screen may alternatively be any one of the left wall, the right wall, and the ground, or any two of the left wall, the right wall, and the ground. All two-dimensional codes in the two-dimensional code green screen have unique patterns and serial numbers, may be detected in the target image by using an identification code detection algorithm (for example, a two-dimensional code detection algorithm), and the coordinates of the corners of the two-dimensional codes in the target image can be accurately obtained. For a single two-dimensional code, the four vertices formed by the frame of the two-dimensional code (that is, the bounding rectangle of the two-dimensional code) may be referred to as corners of the two-dimensional code, and the four edges of the quadrilateral defined by the four corners are an upper edge, a lower edge, a left edge, and a right edge.
It is to be understood that in this disclosure, a two-dimensional code that can be correctly identified by using the two-dimensional code detection algorithm may be referred to as an observable two-dimensional code. When a two-dimensional code is blocked, is unclear, or partially exceeds a picture boundary of the target image, the two-dimensional code detection algorithm cannot detect it. In this case, the two-dimensional code is not regarded as an observable two-dimensional code.
For ease of understanding, in this disclosure, the two-dimensional code in the two-dimensional code green screen may be referred to as an identification code. In an embodiment, the upper edge, the lower edge, the left edge, and the right edge of the two-dimensional code may be collectively referred to as corresponding spatial line segments of the identification code in this disclosure. The two-dimensional code corner of the two-dimensional code may be referred to as a space corner in this disclosure.
It is to be understood that the foregoing network architecture may be applied to the field of virtual-real fusion, for example, virtual-real fusion in video production (virtual production), live streaming, and post-video special effects. Virtual-real fusion means that a real to-be-shot subject is incorporated into a virtual scene. Compared with a conventional method that involves entirely real shooting, virtual-real fusion allows for easy scene replacement, greatly reduces the costs of setting up scenes (only a green screen is required), and can provide striking environmental effects. In addition, virtual-real fusion is highly consistent with concepts such as virtual reality (VR), the metaverse, and the Complete Reality of Internet, and provides a basic capability for incorporating a real person into a virtual scene.
It may be understood that the target terminal device may shoot a real scene through the camera component (that is, a real lens), obtain the virtual scene from the server 2000, and fuse the virtual scene with the real scene to obtain a fusion scene. The virtual scene may be a scene synthesized directly by the server 2000, or may be a scene obtained by the server 2000 from another terminal device other than the target terminal device. Another terminal device other than the target terminal device may shoot the virtual scene through the camera component (that is, a virtual lens).
According to the virtual-real fusion method in this disclosure, the camera component needs to be calibrated before the shooting to ensure correct visual perception of the subsequently synthesized picture (a correct perspective relationship). To ensure the correct perspective relationship between the virtual scene and the real scene, the target terminal device needs to ensure that intrinsic component parameters respectively corresponding to the virtual scene and the real scene (that is, intrinsic camera parameters) match. Therefore, the intrinsic component parameter of the camera component in the target terminal device for the target image may be obtained by identifying the target image captured by the target terminal device, and then the intrinsic component parameter corresponding to the camera component may be adjusted. The to-be-shot object may be shot based on the camera component with the adjusted intrinsic component parameter, and finally the fusion scene having the correct perspective relationship is obtained. The intrinsic component parameter of the camera component is an intrinsic camera parameter, and the intrinsic camera parameter may include but is not limited to an optical center and a focal length.
For ease of understanding, further,
As shown in
The planar region of the spatial object corresponds to at least two coordinate axes, every two of the at least two coordinate axes are used to form a spatial plane (that is, a plane where the left wall 21a, the right wall 21b, and the ground region 21c are located), and every two coordinate axes are perpendicular to each other. As shown in
As shown in
Further, the terminal device 20b may assign straight line identifiers (which may alternatively be referred to as identifiers of straight lines) to the N spatial virtual straight lines, and the straight line identifier of a spatial virtual straight line is used as the line segment identifier of its spatial line segments. For example, the straight line identifier assigned by the terminal device 20b to the spatial virtual straight line S2 may be a straight line identifier K. In this way, when the spatial line segments on the spatial virtual straight line S2 are a spatial line segment X1, a spatial line segment X2, . . . , and a spatial line segment XM, the terminal device 20b uses the straight line identifier K as the line segment identifier of each of the spatial line segment X1, the spatial line segment X2, . . . , and the spatial line segment XM. To be specific, the line segment identifier of the spatial line segment X1, the spatial line segment X2, . . . , and the spatial line segment XM is the straight line identifier K (that is, a line segment identifier K).
As shown in
As shown in
Further, the terminal device 20b may generate, based on the vanishing point identifier and the straight line equation, vanishing point coordinates of the vanishing point indicated by the vanishing point identifier. Specifically, the terminal device 20b may generate, based on the vanishing point identifier and the straight line equation of the spatial virtual straight line mapped by the vanishing point identifier, the vanishing point coordinates of the vanishing point indicated by the vanishing point identifier. For example, the terminal device 20b may generate, based on the straight line equation of the spatial virtual straight line (the spatial virtual straight line mapped by the vanishing point identifier B1 includes the spatial virtual straight line S1 and the spatial virtual straight line S2) mapped by the vanishing point identifier B1, the vanishing point coordinates of the vanishing point indicated by the vanishing point identifier B1. The vanishing point coordinates of the vanishing point indicated by the vanishing point identifier B1 may be vanishing point coordinates Z1. Similarly, the vanishing point coordinates of the vanishing point indicated by the vanishing point identifier B2 may be vanishing point coordinates Z2, and the vanishing point coordinates of the vanishing point indicated by the vanishing point identifier B3 may be vanishing point coordinates Z3.
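As a hedged illustration of this step, the vanishing point shared by a group of nominally parallel lines can be computed as the least-squares intersection of their straight line equations. The following Python sketch (the function name and line representation are assumptions for illustration, not taken from this disclosure) accepts each line as coefficients (a, b, c) of a*x + b*y + c = 0:

```python
import numpy as np

def vanishing_point(lines):
    """Estimate the vanishing point shared by a group of nominally
    parallel lines, each given as coefficients (a, b, c) of the
    equation a*x + b*y + c = 0, via least squares."""
    A = np.array([[a, b] for a, b, c in lines], dtype=float)
    rhs = np.array([-c for a, b, c in lines], dtype=float)
    # Solve A @ [x, y] ~= rhs in the least-squares sense; with exactly
    # two lines this reduces to their ordinary intersection.
    (x, y), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return x, y
```

For example, the lines x − 2 = 0, y − 3 = 0, and x − y + 1 = 0 all pass through (2, 3), which the least-squares solution recovers.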
As shown in
As shown in
It may be seen that in this embodiment of this disclosure, a single target image captured by the camera component may be processed, spatial virtual straight lines parallel to the x-axis, the y-axis, and the z-axis in the target image are obtained in real time, the vanishing point of each group of parallel lines is accurately calculated, and then the intrinsic component parameter of the camera component is calibrated based on the vanishing point coordinates of the vanishing point formed by the spatial virtual straight line. In this embodiment of this disclosure, the intrinsic component parameter of the camera component may be determined by using the single image without processing a plurality of images and without using a hardware device to calibrate the intrinsic component parameter, which may significantly reduce the costs of calibrating the intrinsic component parameter and improve efficiency of calibration.
The virtual-real fusion requires calibration of the camera component. In this embodiment of this disclosure, in collaboration with a shooting technique in which the spatial object can support real-time optical zoom (for example, Hitchcock zoom), a video with a striking picture effect can be produced, thereby improving the viewing experience of the virtual-real fusion and attracting more users. In addition, according to this disclosure, the hardware costs of supporting optical zoom may be greatly reduced while clarity is ensured, and the hardware threshold can be lowered: a mobile phone, an ordinary camera, or a professional camera may all be used. Installation, deployment, and operation are simple, which lowers the threshold for users and attracts more video production users. Meanwhile, the spatial object can further assist in image matting and camera movement.
Further,
Step S101: Obtain a target image associated with a spatial object.
The target image is obtained by capturing the spatial object by the camera component. The spatial object includes an array composed of identification codes. A bounding rectangle of an identification code may be regarded as an outline of the identification code, including 4 edges. In short, an identification code includes 4 edges, that is, 4 spatial line segments. Accordingly, the target image may include at least part of the identification codes in the array, and an identification code in the target image that can be detected by using an identification code detection algorithm is an observable identification code (for example, an observable two-dimensional code).
Step S102: Obtain, from the target image, a spatial virtual straight line composed of spatial line segments, use a straight line identifier of the spatial virtual straight line as the line segment identifier of the spatial line segment, and determine a vanishing point identifier mapped by the spatial virtual straight line.
It may be understood that the terminal device may use the identification code detection algorithm to identify the identification codes in the target image, and then connect spatial line segments that are in the same row and on the same side of the array (for example, the spatial line segments on the upper side of each identification code in a row, that is, the upper edges of the identification codes in the same row), and obtain a spatial virtual straight line by extending the connected spatial line segments. For another example, the spatial line segments that are in the same column and on the same side of the array (for example, the left edge of each identification code in a column) are connected, and a spatial virtual straight line is obtained by extending the connected spatial line segments. In addition, when an identification code in the target image is identified, the terminal device may generate the corner coordinates of the space corners of the identification code in the target image.
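The grouping described above can be sketched as follows. This is an illustration under two assumptions not fixed by this disclosure: unit code identifiers are assigned row by row across the array, and each detection returns the four corners in the order top-left, top-right, bottom-right, bottom-left (the order used by common detectors):

```python
def group_upper_edges(detections, codes_per_row):
    """Collect, per array row, the corner points that lie on the
    spatial virtual straight line through the upper edges of that
    row's identification codes.

    detections: hypothetical mapping of unit code identifier to four
    corner points [top-left, top-right, bottom-right, bottom-left].
    """
    lines = {}
    for code_id, corners in detections.items():
        row = code_id // codes_per_row
        # The upper edge of a code is spanned by its two top corners.
        lines.setdefault(row, []).extend([corners[0], corners[1]])
    return lines
```

Fitting a straight line through each collected point set (and extending it) then yields one spatial virtual straight line per row and side.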
It is to be understood that the identification code detection algorithm may be any open-source algorithm, for example, the ArUco (Augmented Reality University of Cordoba) identification code detection algorithm in OpenCV (a cross-platform computer vision and machine learning software library released under the Apache 2.0 open-source license). The execution process of the ArUco identification code detection algorithm includes candidate box detection, quadrilateral identification, target filtering, and corner correction. After detection by using the identification code detection algorithm, the identifiers of all observable identification codes (that is, unit code identifiers) and the two-dimensional coordinates of the four space corners of each observable identification code may be obtained.
It may be understood that the terminal device may assign the unit code identifier to the identification code, and store, in a first table (that is, a table T1), the unit code identifier in association with the line segment identifier of the spatial line segment included in the identification code. Therefore, the table T1 may be used to query for the line segment identifier (that is, the straight line identifier) of the spatial line segment that forms the identification code through the unit code identifier. A unit code identifier may be used to find the four line segment identifiers respectively corresponding to the straight line where the upper edge is located, the straight line where a lower edge is located, the straight line where a left edge is located, and the straight line where a right edge is located. To be specific, a unit code identifier may be used to find the straight line identifiers of the spatial virtual straight lines to which the straight line where the upper edge is located, the straight line where the lower edge is located, the straight line where the left edge is located, and the straight line where the right edge is located respectively belong.
It may be understood that the terminal device may store, in a second table (that is, a table T2), the straight line identifier of the spatial virtual straight line in association with the vanishing point identifier mapped by the spatial virtual straight line. Therefore, the table T2 may be used to query for the vanishing point identifier by using the straight line identifier, and one vanishing point identifier may be found by using one straight line identifier. The terminal device may divide the spatial virtual straight line into three groups of spatial virtual straight lines perpendicular to each other based on the x-axis, y-axis, and z-axis. Each group of spatial virtual straight lines correspond to a vanishing point identifier.
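A minimal sketch of the two lookup tables follows, restricted for illustration to a single plane with a columns and b rows of identification codes, and using hypothetical identifier labels (the disclosure notes that any labeling method may be used). Table T1 maps a unit code identifier to the four line segment identifiers of its edges; table T2 maps each straight line identifier to the vanishing point identifier of its parallel group:

```python
def build_tables(a, b):
    """Build hypothetical tables T1 and T2 for one plane of a columns
    by b rows of identification codes, assigned row by row."""
    T1, T2 = {}, {}
    for row in range(b):
        for col in range(a):
            code_id = row * a + col
            # Lines parallel to the x-axis: one through the upper edges
            # and one through the lower edges of each row of codes.
            top, bottom = f"x{2 * row}", f"x{2 * row + 1}"
            # Lines parallel to the y-axis: one through the left edges
            # and one through the right edges of each column of codes.
            left, right = f"y{2 * col}", f"y{2 * col + 1}"
            T1[code_id] = [top, bottom, left, right]
    for line_id in {l for edges in T1.values() for l in edges}:
        # All x-parallel lines share one vanishing point, all y-parallel
        # lines share another ("B1" and "B2" are placeholder labels).
        T2[line_id] = "B1" if line_id.startswith("x") else "B2"
    return T1, T2
```

Querying T1 with a unit code identifier yields the four straight line identifiers, and querying T2 with any of those yields the corresponding vanishing point identifier, matching the two lookups described in the text.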
For ease of understanding,
Step S103: Generate a straight line equation of a spatial virtual straight line based on a line segment identifier and corner coordinates of a space corner in a spatial line segment.
Specifically, the terminal device may determine, based on the line segment identifier, the spatial virtual straight line to which the spatial line segment belongs, and use the corner coordinates of the space corner in the spatial line segment as key point coordinates on the spatial virtual straight line. Further, the terminal device may generate the straight line equation of the spatial virtual straight line based on the key point coordinates.
It may be understood that the terminal device may obtain one or more spatial planes composed of spatial coordinate axes corresponding to a target image, determine a maximum quantity of identification codes in the target image based on the one or more spatial planes, and determine a maximum quantity of key points corresponding to the spatial virtual straight line based on the maximum quantity of identification codes.
Further, the terminal device may generate a straight line fitting matrix based on the maximum quantity of key points and a straight line quantity of spatial virtual straight lines, and store, in the straight line fitting matrix, the straight line identifier of the spatial virtual straight line in association with the key point coordinates on the spatial virtual straight line. The straight line fitting matrix may be expressed as Dline, the straight line fitting matrix Dline is a two-dimensional matrix, a height of the matrix is a quantity of all spatial virtual straight lines, that is, Nmax=4*(a+b+c), a width is N, and each element in the straight line fitting matrix Dline is a pair of real number coordinates. A row represents two-dimensional coordinates of space corners on a spatial virtual straight line. The straight line fitting matrix Dline may be used to perform the step of generating the straight line equation of the spatial virtual straight line based on the key point coordinates in step S103.
The terminal device needs to initialize each element in the straight line fitting matrix Dline before obtaining the key point coordinates on the spatial virtual straight line. For example, in this embodiment of this disclosure, each element in the straight line fitting matrix Dline may be initialized to [−1, −1]. It is to be understood that an initialized value of each element in the straight line fitting matrix Dline is not limited in this embodiment of this disclosure.
It may be understood that the terminal device may generate a straight line equation storage matrix based on the straight line quantity of spatial virtual straight lines and a quantity of straight line parameters in the straight line equation, and store, in the straight line equation storage matrix, the straight line identifier of the spatial virtual straight line in association with the straight line parameters corresponding to the spatial virtual straight lines. The straight line equation storage matrix may be expressed as Dpoint, the straight line equation storage matrix Dpoint is a two-dimensional matrix, a height of the matrix is a quantity of all spatial virtual straight lines, that is, Nmax=4*(a+b+c), a width is 3, and each element in the straight line equation storage matrix Dpoint is a real number. A row represents straight line parameters in the straight line equation of a spatial virtual straight line, and a straight line equation of a spatial virtual straight line may be determined by using three straight line parameters. The straight line equation storage matrix Dpoint may be used to perform the step of generating, based on the vanishing point identifier and the straight line equation, the vanishing point coordinates of the vanishing point indicated by the vanishing point identifier in step S104.
The terminal device needs to initialize each element in the straight line equation storage matrix Dpoint before obtaining the straight line parameter in the straight line equation. For example, in this embodiment of this disclosure, each element in the straight line equation storage matrix Dpoint may be initialized to −1. It is to be understood that an initialized value of each element in the straight line equation storage matrix Dpoint is not limited in this embodiment of this disclosure.
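The fitting step that consumes the straight line fitting matrix Dline and fills the straight line equation storage matrix Dpoint can be sketched in Python with NumPy as follows. The total-least-squares fit via SVD is an assumption for illustration; the disclosure does not prescribe a particular fitting method. Entries still equal to the initialization placeholder [−1, −1] are skipped:

```python
import numpy as np

def fit_line_rows(D_line):
    """Fit a*x + b*y + c = 0 for each row of the line fitting matrix.

    D_line has shape (n_lines, N, 2); slots still holding the
    placeholder [-1, -1] are treated as unfilled.  Returns a
    (n_lines, 3) matrix of line parameters, initialized to -1 as in
    the text, with rows left untouched when fewer than two corner
    points were collected for that line.
    """
    n_lines = D_line.shape[0]
    D_point = np.full((n_lines, 3), -1.0)
    for i in range(n_lines):
        pts = D_line[i]
        pts = pts[~np.all(pts == -1, axis=1)]  # drop unfilled slots
        if len(pts) < 2:
            continue
        centroid = pts.mean(axis=0)
        # Total least squares: the line normal (a, b) is the right
        # singular vector of the centered points with the smallest
        # singular value.
        _, _, vt = np.linalg.svd(pts - centroid)
        a, b = vt[-1]
        c = -(a * centroid[0] + b * centroid[1])
        D_point[i] = [a, b, c]
    return D_point
```

Because both matrices are preallocated once at their maximum sizes, the per-frame work reduces to overwriting rows, consistent with the one-time memory allocation described below.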
In this embodiment of this disclosure, a plane (a right wall) perpendicular to an x-axis may be referred to as a plane x, a plane (a left wall) perpendicular to a y-axis may be referred to as a plane y, and a plane (the ground) perpendicular to a z-axis may be referred to as a plane z. c (a z-axis direction) times b (a y-axis direction) identification codes exist on the plane x, c (the z-axis direction) times a (an x-axis direction) identification codes exist on the plane y, and a (the x-axis direction) times b (the y-axis direction) identification codes exist on the plane z. The maximum quantity of identification codes may be expressed as max (a, b, c), and the maximum quantity of key points (that is, N) may be expressed as N=2*max (a, b, c). It may be understood that the maximum quantity of key points may represent a maximum quantity of space corners on the spatial virtual straight line, or may represent a maximum quantity of spatial virtual straight lines that may be used for a single vanishing point.
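For example, with hypothetical grid sizes a = 8, b = 6, and c = 4 (these numbers are placeholders, not values from this disclosure), the two quantities work out as follows:

```python
# Hypothetical grid sizes: a codes along the x-axis, b along the
# y-axis, c along the z-axis.
a, b, c = 8, 6, 4

n_max_lines = 4 * (a + b + c)        # Nmax, quantity of all spatial virtual straight lines
n_max_key_points = 2 * max(a, b, c)  # N, maximum key points on one line

print(n_max_lines, n_max_key_points)  # 72 16
```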
For ease of understanding, an example in which quantities of identification codes for the plane x and the plane y in the z-axis direction are both c is used for description in this embodiment of this disclosure, an example in which quantities of identification codes for the plane z and the plane x in the y-axis direction are both b is used for description in this embodiment of this disclosure, and an example in which quantities of identification codes for the plane z and the plane y in the x-axis direction are both a is used for description in this embodiment of this disclosure.
In some embodiments, the quantities of identification codes for the plane x and the plane y in the z-axis direction may be different, the quantities of identification codes for the plane z and the plane x in the y-axis direction may be different, and the quantities of identification codes for the plane z and the plane y in the x-axis direction may be different. In this case, c represents a larger value of the quantities of identification codes for the plane x and the plane y in the z-axis direction, b represents a larger value of the quantities of identification codes for the plane z and the plane x in the y-axis direction, and a represents a larger value of the quantities of identification codes for the plane z and the plane y in the x-axis direction.
It may be understood that in this disclosure, calculation of the vanishing point coordinates may be accelerated based on table lookup, and the tables involved in this disclosure may include the table T1, the table T2, the straight line fitting matrix Dline, and the straight line equation storage matrix Dpoint. All of the identifiers involved in table creation, such as the unit code identifier, the straight line identifier, and the vanishing point identifier do not necessarily have to be labeled as described in this disclosure, and may also be labeled by using another labeling method.
Therefore, the initialization method in this embodiment of this disclosure may accelerate the fitting of the spatial virtual straight lines, avoid repeatedly scanning for the spatial virtual straight line to which a two-dimensional code corner belongs, and avoid repeated allocation and release of internal memory. The maximum quantity N of points on a spatial virtual straight line (that is, the maximum quantity of key points) may be used to initialize the internal memory space for fitting the straight lines, allocating the maximum possible memory at one time.
For ease of understanding,
As shown in
As shown in
As shown in
Step S104: Generate, based on the vanishing point identifier and the straight line equation, vanishing point coordinates of the vanishing point indicated by the vanishing point identifier, and determine an intrinsic component parameter of a camera component for a target image based on the vanishing point coordinates.
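One standard way to carry out the second half of this step, stated here as a hedged sketch rather than the exact procedure of this disclosure, uses the fact that for a camera with square pixels and zero skew, the optical center is the orthocenter of the triangle formed by the vanishing points of three mutually perpendicular directions, and the focal length satisfies f² = −(v1 − p)·(v2 − p):

```python
import numpy as np

def intrinsics_from_vanishing_points(v1, v2, v3):
    """Recover the optical center p and focal length f from the
    vanishing points of three mutually perpendicular directions,
    assuming square pixels and zero skew."""
    v1, v2, v3 = (np.asarray(v, dtype=float) for v in (v1, v2, v3))
    # The orthocenter satisfies (p - v1) . (v2 - v3) = 0 and
    # (p - v2) . (v1 - v3) = 0, a 2x2 linear system in p.
    A = np.array([v2 - v3, v1 - v3])
    rhs = np.array([np.dot(v1, v2 - v3), np.dot(v2, v1 - v3)])
    p = np.linalg.solve(A, rhs)
    # For orthogonal directions, (v1 - p) . (v2 - p) = -f**2.
    f = np.sqrt(-np.dot(v1 - p, v2 - p))
    return p, f
```

As a sanity check, projecting the three coordinate axes of a synthetic rotated camera with a known focal length and optical center yields three vanishing points from which this function recovers the same intrinsics.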
For ease of understanding,
As shown in
As shown in
As shown in
As shown in
As shown in
It may be understood that in this embodiment of this disclosure, a value obtained with the high-precision Zhang Zhengyou calibration method may be used as a ground truth, to obtain a relative error of the intrinsic component parameter generated in this embodiment of this disclosure. The results are shown in Table 1.
As shown in Table 1, an optical center may include an optical center abscissa ux and an optical center ordinate uy, and a focal length may include an x-direction focal length fx and a y-direction focal length fy. For the four parameters shown in Table 1, the relative errors between this embodiment of this disclosure and Zhang Zhengyou's calibration method are all within 2%. The x-direction focal length fx and the y-direction focal length fy in this embodiment of this disclosure are the same.
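The relative error reported in Table 1 can be reproduced as |estimated − truth| / |truth| per parameter. The numbers below are hypothetical placeholders, not the values from Table 1:

```python
# Hypothetical ground truth from Zhang Zhengyou's calibration method
# and hypothetical estimates from the vanishing-point method.
truth = {"fx": 1185.0, "fy": 1185.0, "ux": 642.1, "uy": 359.4}
est = {"fx": 1201.3, "fy": 1201.3, "ux": 640.0, "uy": 360.0}

# Relative error per intrinsic parameter.
rel_err = {k: abs(est[k] - truth[k]) / abs(truth[k]) for k in truth}
```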
On a single-core central processing unit (CPU), the overall time consumed to obtain the spatial virtual straight lines, calculate the vanishing points, and calculate the intrinsic component parameter in this disclosure is less than 0.25 milliseconds, occupying few hardware resources. When this disclosure is applied to virtual-real fusion, only a small quantity of machine resources are occupied, and other virtual-real fusion related algorithms are not stalled.
It may be learned that in this embodiment of this disclosure, a single target image obtained by the camera component shooting a spatial object may be obtained, parallel lines (that is, the spatial virtual straight lines) are detected in real time in the target image, vanishing point coordinates of the vanishing points mapped by the parallel lines may be calculated, and then the intrinsic component parameter of the camera component is generated based on the intrinsic component parameter calibration method of the vanishing point. In this way, the intrinsic component parameter of the camera component may be determined by using a single image without processing a plurality of images and without using a hardware device to calibrate the intrinsic component parameter, which may significantly reduce the costs of calibrating the intrinsic component parameter and improve efficiency of calibration.
Further,
Step S1021: Obtain, from the target image, the spatial virtual straight line composed of the spatial line segments.
The spatial virtual straight line composed of the spatial line segments is the spatial virtual straight line where the spatial line segments are located. For a specific process of obtaining the spatial virtual straight line composed of spatial line segments by the terminal device, reference may be made to the descriptions of step S102 in the embodiment corresponding to
Step S1022: Assign a straight line identifier to the spatial virtual straight line based on a positional relationship between the spatial virtual straight line and a spatial coordinate axis corresponding to the target image.
Specifically, the terminal device may obtain a target space plane formed by the spatial coordinate axis corresponding to the target image. The spatial coordinate axis forming the target space plane includes a first coordinate axis and a second coordinate axis, and the target space plane may be any one of a plane x, a plane y, and a plane z. Further, the terminal device may traverse an identification code in the target space plane to obtain the spatial virtual straight line associated with the identification code in the target space plane, and determine, as a target spatial virtual straight line, the spatial virtual straight line associated with the identification code in the target space plane. Further, the terminal device may assign a first straight line identifier to the target spatial virtual straight line parallel to the first coordinate axis, and assign a second straight line identifier to the target spatial virtual straight line parallel to the second coordinate axis. The first straight line identifier is sorted based on the second coordinate axis, and the second straight line identifier is sorted based on the first coordinate axis. The straight line identifier includes a first straight line identifier and a second straight line identifier.
It may be understood that for the identification codes on a left wall and a right wall, top, bottom, left, and right indicate the top, bottom, left, and right of the identification code as seen by a person who stands on the ground and faces the identification code. For the ground, top, bottom, left, and right indicate the top, bottom, left, and right of the identification code as seen by a person who stands on the right wall and faces the identification code. In some embodiments, for the ground, top, bottom, left, and right may alternatively indicate the top, bottom, left, and right of the identification code as seen by a person who stands on the left wall and faces the identification code.
For the spatial virtual straight line of the plane x (that is, the right wall), an index matrix Mx having a height of c and a width of b is constructed based on the arrangement mode of the identification codes in the plane x, and an element in an ith row and a jth column of the matrix is a unit code identifier of the identification code in an ith row and a jth column on the right wall. In this way, the terminal device may assign the straight line identifier to four points included in each identification code while traversing the index matrix Mx in a column-first manner (or in a row-first manner). The assignment manner is: first assigning subscripts of 0 to (c−1) to upper straight lines of all the identification codes in an order from the highest to the lowest; assigning subscripts of c to (2c−1) to lower straight lines of all the identification codes in an order from the highest to the lowest; assigning subscripts of 2c to (2c+b−1) to left straight lines of all the identification codes in an order from the leftmost to the rightmost; and then assigning subscripts of (2c+b) to (2c+2b−1) to right straight lines of all the identification codes in an order from the leftmost to the rightmost.
For the spatial virtual straight line of the plane y (that is, the left wall), an index matrix My having a height of c and a width of a is constructed based on the arrangement mode of the identification codes in the plane y, and an element in an ith row and a jth column of the matrix is a unit code identifier of the identification code in an ith row and a jth column on the left wall. In this way, the terminal device may assign the straight line identifier to four points included in each identification code while traversing the index matrix My in a column-first manner (or in a row-first manner). The assignment manner is: first assigning subscripts of (2c+2b) to (3c+2b−1) to upper straight lines of all the identification codes in an order from the highest to the lowest; assigning subscripts of (3c+2b) to (4c+2b−1) to lower straight lines of all the identification codes in an order from the highest to the lowest; assigning subscripts of (4c+2b) to (4c+2b+a−1) to left straight lines of all the identification codes in an order from the leftmost to the rightmost; and then assigning subscripts of (4c+2b+a) to (4c+2b+2a−1) to right straight lines of all the identification codes in an order from the leftmost to the rightmost.
For the spatial virtual straight line of the plane z (that is, the ground), an index matrix Mz having a height of a and a width of b is constructed based on the arrangement mode of the identification codes in the plane z, and an element in the ith row and the jth column of the matrix is a unit code identifier of the identification code in an ith row and a jth column on the ground. In this way, the terminal device may assign the straight line identifier to four points included in each identification code while traversing the index matrix Mz in a column-first manner (or in a row-first manner). The assignment manner is: first assigning subscripts of (4c+2b+2a) to (4c+2b+3a−1) to upper straight lines of all the identification codes in an order from the highest to the lowest; assigning subscripts of (4c+2b+3a) to (4c+2b+4a−1) to lower straight lines of all the identification codes in an order from the highest to the lowest; assigning subscripts of (4c+2b+4a) to (4c+3b+4a−1) to left straight lines of all the identification codes in an order from the leftmost to the rightmost; and then assigning subscripts of (4c+3b+4a) to (4c+4b+4a−1) to right straight lines of all the identification codes in an order from the leftmost to the rightmost.
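For ease of understanding, the subscript assignment described above for the plane x, the plane y, and the plane z may be sketched as follows. This is a non-limiting illustration: the function name and the returned data layout are assumptions, and a, b, and c denote the quantities of identification codes along the corresponding directions as in the text.

```python
# Illustrative sketch (not the exact implementation) of the subscript ranges
# assigned to the upper/lower/left/right straight lines of the three planes.
def line_id_ranges(a, b, c):
    """Return {(plane, edge_group): (first_subscript, last_subscript)}."""
    ranges = {}
    start = 0
    # Plane x is c rows by b columns, plane y is c by a, plane z is a by b.
    for plane, (rows, cols) in (("x", (c, b)), ("y", (c, a)), ("z", (a, b))):
        ranges[(plane, "upper")] = (start, start + rows - 1); start += rows
        ranges[(plane, "lower")] = (start, start + rows - 1); start += rows
        ranges[(plane, "left")] = (start, start + cols - 1); start += cols
        ranges[(plane, "right")] = (start, start + cols - 1); start += cols
    return ranges
```

For example, with a=2, b=3, c=4, the upper straight lines of the plane x receive subscripts 0 to 3 and the right straight lines of the plane z receive subscripts (4c+3b+4a)=33 to (4c+4b+4a−1)=35, matching the ranges in the text.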
It may be understood that the manner of assigning the straight line identifier to the spatial virtual straight line is not limited in the embodiments of this disclosure. In step S1025, reference may be made to the straight line identifiers assigned in step S1022 to assign different vanishing point identifiers to the spatial virtual straight lines. In some embodiments, the plane x is used as an example for description. The terminal device may first assign subscripts of 0 to (2c−1) to the upper straight lines and the lower straight lines of all the identification codes in an order from the highest to the lowest, and then assign subscripts of 2c to (2c+2b−1) to the left straight lines and the right straight lines of all the identification codes in an order from the leftmost to the rightmost.
Step S1023: Use the straight line identifier of the spatial virtual straight line as a line segment identifier of the spatial line segment that forms the spatial virtual straight line.
For example, the spatial virtual straight line S2 is composed of a spatial line segment X1 and a spatial line segment X2. If the straight line identifier of the spatial virtual straight line S2 is a straight line identifier K, the terminal device may use the straight line identifier K as the line segment identifier of the spatial line segment X1 and the spatial line segment X2.
For ease of understanding,
As shown in
As shown in
As shown in
Step S1024: Use a quantity of coordinate axes in a spatial coordinate axis corresponding to a target image as a quantity of vanishing points.
The quantity of vanishing points is at least two. As shown in
In some embodiments, in a case that the quantity of coordinate axes in the spatial coordinate axis corresponding to the target image is three, if identification codes exist in only one of the plane x, the plane y, and the plane z (that is, no identification code exists in the other two planes), the quantity of vanishing points is two.
Step S1025: Determine, from at least two vanishing point identifiers based on a positional relationship between the spatial virtual straight line and the spatial coordinate axis, a vanishing point identifier mapped by the spatial virtual straight line.
A vanishing point identifier corresponds to a vanishing point. The positional relationship between the spatial virtual straight line and the spatial coordinate axis is determined by step S1022.
For the spatial virtual straight line in the plane x, the terminal device may assign the spatial virtual straight line having the straight line identifiers of 0 to (c−1) to a y-axis vanishing point 1, that is, a vanishing point ly; assign the spatial virtual straight line having the straight line identifiers of c to (2c−1) to the y-axis vanishing point 1, that is, the vanishing point ly; assign the spatial virtual straight line having the straight line identifiers of 2c to (2c+b−1) to a z-axis vanishing point 2, that is, a vanishing point lz; and assign the spatial virtual straight line having the straight line identifiers of (2c+b) to (2c+2b−1) to the z-axis vanishing point 2, that is, the vanishing point lz.
For the spatial virtual straight line in the plane y, the terminal device may assign the spatial virtual straight line having the straight line identifiers of (2c+2b) to (3c+2b−1) to an x-axis vanishing point 0, that is, a vanishing point lx; assign the spatial virtual straight line having the straight line identifiers of (3c+2b) to (4c+2b−1) to the x-axis vanishing point 0, that is, the vanishing point lx; assign the spatial virtual straight line having the straight line identifiers of (4c+2b) to (4c+2b+a−1) to the z-axis vanishing point 2, that is, the vanishing point lz; and assign the spatial virtual straight line having the straight line identifiers of (4c+2b+a) to (4c+2b+2a−1) to the z-axis vanishing point 2, that is, the vanishing point lz.
For the spatial virtual straight line in the plane z, the terminal device may assign the spatial virtual straight line having the straight line identifiers of (4c+2b+2a) to (4c+2b+3a−1) to the y-axis vanishing point 1, that is, the vanishing point ly; assign the spatial virtual straight line having the straight line identifiers of (4c+2b+3a) to (4c+2b+4a−1) to the y-axis vanishing point 1, that is, the vanishing point ly; assign the spatial virtual straight line having the straight line identifiers of (4c+2b+4a) to (4c+3b+4a−1) to the x-axis vanishing point 0, that is, the vanishing point lx; and assign the spatial virtual straight line having the straight line identifiers of (4c+3b+4a) to (4c+4b+4a−1) to the x-axis vanishing point 0, that is, the vanishing point lx.
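The mapping from straight line identifiers to vanishing point identifiers described above may be sketched as follows, assuming the subscript ranges of step S1022 and using 0, 1, and 2 for the x-axis, y-axis, and z-axis vanishing point identifiers; the function name is illustrative.

```python
# Illustrative sketch mapping a straight line identifier to its vanishing
# point identifier (0 = x-axis, 1 = y-axis, 2 = z-axis). a, b, c are the
# assumed code counts along the corresponding directions.
def vanishing_point_id(line_id, a, b, c):
    if line_id < 2 * c:                   # plane x, upper/lower -> y-axis
        return 1
    if line_id < 2 * c + 2 * b:           # plane x, left/right -> z-axis
        return 2
    if line_id < 4 * c + 2 * b:           # plane y, upper/lower -> x-axis
        return 0
    if line_id < 4 * c + 2 * b + 2 * a:   # plane y, left/right -> z-axis
        return 2
    if line_id < 4 * c + 2 * b + 4 * a:   # plane z, upper/lower -> y-axis
        return 1
    return 0                              # plane z, left/right -> x-axis
```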
For a specific process of determining the vanishing point identifier mapped by the spatial virtual straight line in the plane x, the plane y, and the plane z, reference may be made to
It may be understood that the terminal device may map spatial virtual straight lines parallel to the same coordinate axis to the same vanishing point identifier. As shown in
It may be learned that in this embodiment of this disclosure, the spatial virtual straight line composed of the spatial line segment may be obtained from the target image, the straight line identifier is assigned to the spatial virtual straight line based on a positional relationship between the spatial virtual straight line and the spatial coordinate axis, and then the straight line identifier of the spatial virtual straight line is used as the line segment identifier of the spatial line segment that constitutes the spatial virtual straight line. It may be understood that the vanishing point identifier mapped by the spatial virtual straight line may be determined from at least two vanishing point identifiers based on the positional relationship between the spatial virtual straight line and the spatial coordinate axis. The line segment identifier may be stored in the first table, the vanishing point identifier may be stored in the second table, and a speed of calibrating an intrinsic component parameter in subsequent steps may be increased by using the first table and the second table.
Further,
Step S1031: Determine, based on the line segment identifier, the spatial virtual straight line to which the spatial line segment belongs, and use the corner coordinates of the space corner in the spatial line segment as key point coordinates on the spatial virtual straight line.
Specifically, the terminal device may obtain, from the first table based on the unit code identifier of the identification code, the line segment identifier of the spatial line segment forming the identification code. The spatial virtual straight line includes a spatial virtual straight line Si, where i may be a positive integer, and i is less than or equal to a straight line quantity of spatial virtual straight lines. Further, if the line segment identifier obtained from the first table is a straight line identifier of the spatial virtual straight line Si, the terminal device may use the spatial virtual straight line Si as the spatial virtual straight line to which the spatial line segment belongs. Further, the terminal device may obtain corner coordinates of a space corner in the spatial line segment. The space corner includes a first corner and a second corner, and the first corner and the second corner are two endpoints of the spatial line segment. Further, the terminal device may use the corner coordinates of the first corner and the corner coordinates of the second corner as key point coordinates on the spatial virtual straight line Si to which the spatial line segment belongs.
It may be understood that the terminal device may fill the data for fitting the straight lines (that is, a straight line fitting matrix Dline) based on the key point coordinates. The terminal device may initialize the actual quantities of points (that is, the quantities of key point coordinates on the spatial virtual straight lines) of all spatial virtual straight lines to 0. The actual quantity of points of a jth spatial virtual straight line is denoted as Nj (that is, an initial value of Nj is 0), and then the detected identification codes are processed in sequence as follows. The unit code identifier (a serial number) of a current identification code is i, and a table T1 is queried for the line segment identifiers corresponding to four edges of the identification code having the unit code identifier of i. For the four edges of the identification code, that is, an upper edge, a lower edge, a left edge, and a right edge, the following processing is performed in sequence. The straight line identifier of the spatial virtual straight line where the current edge is located is recorded as j. The actual quantity of points Nj of the spatial virtual straight line j is extracted. Two-dimensional coordinates of an endpoint 1 of the edge are extracted, and the jth row and the Njth column of the straight line fitting matrix Dline are filled with the two-dimensional coordinates. Nj is increased by 1. To be specific, the quantity of key point coordinates on the spatial virtual straight line having the straight line identifier of j is increased by 1. Two-dimensional coordinates of an endpoint 2 of the edge are extracted, and the jth row and the Njth column of the straight line fitting matrix Dline are filled with the two-dimensional coordinates. Nj is increased by 1 again.
The endpoint 1 is a first endpoint, and the endpoint 2 is a second endpoint. For a vertical spatial virtual straight line, the first endpoint may be located above the second endpoint. For a horizontal spatial virtual straight line, the first endpoint may be located to the left of the second endpoint. In some embodiments, for the vertical spatial virtual straight line, the first endpoint may be located below the second endpoint. For the horizontal spatial virtual straight line, the first endpoint may be located to the right of the second endpoint.
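A minimal sketch of filling the straight line fitting data from the detected identification codes may look as follows. Here Dline is represented as a mapping from a straight line identifier to a list of key point coordinates rather than a fixed-size matrix, and the input layouts (detected_codes, T1) are assumptions for illustration.

```python
# Illustrative sketch of filling the straight line fitting data.
# detected_codes: {unit_code_id: {edge_name: (endpoint1, endpoint2)}}
# T1: {unit_code_id: {edge_name: straight_line_id}}
def fill_key_points(detected_codes, T1):
    Dline = {}  # straight line id -> list of (x, y) key point coordinates
    for code_id, edges in detected_codes.items():
        for edge, (p1, p2) in edges.items():
            j = T1[code_id][edge]          # line identifier of this edge
            Dline.setdefault(j, []).append(p1)  # endpoint 1
            Dline[j].append(p2)                 # endpoint 2
    return Dline
```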
Step S1032: Generate a straight line equation of the spatial virtual straight line based on the key point coordinates.
Specifically, the terminal device may obtain the key point coordinates on the spatial virtual straight line Si from the straight line fitting matrix, average key point parameters in the key point coordinates on the spatial virtual straight line Si to obtain an average key point parameter corresponding to the spatial virtual straight line Si, and generate a parameter matrix corresponding to the spatial virtual straight line Si based on the average key point parameter corresponding to the spatial virtual straight line Si and the key point parameter corresponding to the spatial virtual straight line Si. Further, the terminal device may perform singular value decomposition (SVD) on the parameter matrix corresponding to the spatial virtual straight line Si to obtain a dominant eigenvector matrix corresponding to the spatial virtual straight line Si. Further, the terminal device may obtain a parametric equation corresponding to the spatial virtual straight line Si, determine the straight line parameter in the parametric equation corresponding to the spatial virtual straight line Si based on the matrix parameter in the dominant eigenvector matrix corresponding to the spatial virtual straight line Si, and use, as the straight line equation of the spatial virtual straight line Si, the parametric equation that determines the straight line parameter.
It may be understood that if the quantity of key point coordinates on the spatial virtual straight line is not 0, the terminal device may extract all of the key point coordinates of the spatial virtual straight line from the straight line fitting matrix Dline, and fit the straight line equation parameters of the spatial virtual straight line (that is, the straight line parameters) by using the obtained key point coordinates. A current straight line label is denoted as i. A parametric equation of a straight line labeled as i is denoted as aix+biy+ci=0. An element in an ith row and an lth column of the straight line fitting matrix Dline is denoted as two-dimensional coordinates [dxi,l, dyi,l]. A matrix Mi (that is, the parameter matrix) is constructed, a height of the matrix Mi is Ni (that is, a quantity of key point coordinates on the spatial virtual straight line labeled as i), and a width is 2. For a specific form of the matrix Mi, reference may be made to Formula (1):
ai=V1,0 (4)
bi=V1,1 (5)
ci=−(ai·x̄i+bi·ȳi) (6)
V1,0 and V1,1 are matrix parameters in the dominant eigenvector matrix corresponding to the spatial virtual straight line, and x̄i and ȳi are the average key point parameters (that is, the averages of the abscissas and the ordinates of the key point coordinates on the spatial virtual straight line).
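The fit behind Formulas (4) to (6) may be sketched as follows for a single spatial virtual straight line: the normal (ai, bi) is the direction of least variance of the centered key points, which is equivalent to taking the dominant eigenvector from the SVD described above, and ci is chosen so that the line passes through the average key point. A closed-form 2×2 eigenvalue computation is used here instead of a general SVD library; this is an illustrative sketch, not the exact implementation.

```python
import math

# Fit a*x + b*y + c = 0 to 2-D key points by total least squares.
def fit_line(points):
    n = len(points)
    xm = sum(p[0] for p in points) / n   # average abscissa
    ym = sum(p[1] for p in points) / n   # average ordinate
    sxx = sum((p[0] - xm) ** 2 for p in points)
    syy = sum((p[1] - ym) ** 2 for p in points)
    sxy = sum((p[0] - xm) * (p[1] - ym) for p in points)
    # Smallest eigenvalue of the 2x2 scatter matrix [[sxx, sxy], [sxy, syy]];
    # its eigenvector is the line normal (direction of least variance).
    lam = 0.5 * ((sxx + syy) - math.hypot(sxx - syy, 2 * sxy))
    a, b = sxy, lam - sxx
    if abs(a) < 1e-12 and abs(b) < 1e-12:   # axis-aligned degenerate case
        a, b = lam - syy, sxy
    norm = math.hypot(a, b)
    a, b = a / norm, b / norm
    return a, b, -(a * xm + b * ym)          # c from the average key point
```

For example, fitting the key points (0, 1), (1, 3), and (2, 5) yields a line on which every key point satisfies ai·x+bi·y+ci=0.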
For ease of understanding,
As shown in
As shown in
It may be seen that in this embodiment of this disclosure, the spatial virtual straight line to which the spatial line segment belongs may be determined based on the line segment identifier, corner coordinates of a space corner in the spatial line segment are used as the key point coordinates on the spatial virtual straight line, and then the straight line equation of the spatial virtual straight line is generated based on the key point coordinates on the spatial virtual straight line. The key point coordinates may be stored in the straight line fitting matrix. The straight line parameter of the straight line equation may be stored in a straight line equation storage matrix. The straight line fitting matrix and the straight line equation storage matrix may increase a speed of calibrating an intrinsic component parameter in subsequent steps.
Further,
Step S1041: Obtain, from a second table, vanishing point identifiers mapped by spatial virtual straight lines, and obtain straight line parameters corresponding to the spatial virtual straight lines from a straight line equation storage matrix.
Step S1042: Divide the straight line parameters corresponding to the spatial virtual straight lines based on the vanishing point identifiers, and obtain a space division matrix corresponding to the vanishing point identifiers.
Specifically, the terminal device may initialize a quantity of candidate straight lines of the vanishing point identifier, and initialize a first auxiliary matrix and a second auxiliary matrix based on a maximum quantity of key points. The straight line parameters corresponding to the spatial virtual straight lines include a first straight line parameter, a second straight line parameter, and a third straight line parameter. Further, the terminal device may traverse the spatial virtual straight lines, fill the first auxiliary matrix with the first straight line parameter and the second straight line parameter in the traversed spatial virtual straight lines based on the vanishing point identifiers, and fill the second auxiliary matrix with the third straight line parameter in the traversed spatial virtual straight lines based on the vanishing point identifiers. Positions of the first straight line parameter and the second straight line parameter in the first auxiliary matrix are determined by a quantity of candidate straight lines. A position of the third straight line parameter in the second auxiliary matrix is determined by the quantity of candidate straight lines. Further, the terminal device may accumulate the quantities of candidate straight lines, and obtain a quantity of target straight lines after traversing the spatial virtual straight lines. Further, the terminal device may use, as a new first auxiliary matrix, a straight line parameter obtained from the first auxiliary matrix having a quantity of rows being the quantity of target straight lines, use, as a new second auxiliary matrix, a straight line parameter obtained from the second auxiliary matrix having the quantity of rows being the quantity of target straight lines, and use the new first auxiliary matrix and the new second auxiliary matrix as the space division matrix corresponding to the vanishing point identifiers.
It may be understood that the terminal device may prepare to fill a matrix Dx, a matrix Dy, a matrix Dz, a vector Bx, a vector By, and a vector Bz with the straight line equation storage matrix Dpoint, and prepare to fit the data of the vanishing point. A quantity Nx of straight lines available for x-axis vanishing points (that is, the quantity of candidate straight lines corresponding to an x-axis) is initialized to zero, a quantity Ny of straight lines available for y-axis vanishing points (that is, the quantity of candidate straight lines corresponding to a y-axis) is initialized to zero, and a quantity Nz of straight lines available for z-axis vanishing points (that is, the quantity of candidate straight lines corresponding to a z-axis) is initialized to zero. The matrix Dx, the matrix Dy, and the matrix Dz are initialized to a real matrix having N (that is, a possible maximum quantity of spatial virtual straight lines at each vanishing point) rows and 2 columns, and the vector Bx, the vector By, and the vector Bz are N rows of vectors.
It may be understood that an initialized value of each element in the matrix Dx, the matrix Dy, the matrix Dz, the vector Bx, the vector By, and the vector Bz is not limited in this embodiment of this disclosure. In this embodiment of this disclosure, each element in the matrix Dx, the matrix Dy, the matrix Dz, the vector Bx, the vector By, and the vector Bz may be initialized to −1. The matrix Dx, the matrix Dy, the matrix Dz may be collectively referred to as the first auxiliary matrix, and the vector Bx, the vector By, and the vector Bz may be collectively referred to as the second auxiliary matrix. The matrix Dx is the first auxiliary matrix corresponding to the x-axis, the matrix Dy is the first auxiliary matrix corresponding to the y-axis, and the matrix Dz is the first auxiliary matrix corresponding to the z-axis. The vector Bx is the second auxiliary matrix corresponding to the x-axis, the vector By is the second auxiliary matrix corresponding to the y-axis, and the vector Bz is the second auxiliary matrix corresponding to the z-axis. The second auxiliary matrix may also be referred to as a second auxiliary vector.
Further, the terminal device may traverse each spatial virtual straight line. A straight line identifier of a current spatial virtual straight line is denoted as i, and parameters of the straight line equation are a parameter ai (that is, the first straight line parameter), a parameter bi (that is, the second straight line parameter), and a parameter ci (that is, the third straight line parameter). Further, the terminal device may extract, from a table T2 based on the straight line identifier i of the spatial virtual straight line, the vanishing point identifier to which the straight line identifier i belongs, and then fill the matrix Dx and the vector Bx, or the matrix Dy and the vector By, or the matrix Dz and the vector Bz with the parameter ai, the parameter bi, and the parameter ci based on a type of the vanishing point identifier. The specific method is as follows. If the vanishing point identifier is equal to 0, an Nxth row and a 0th column of Dx are filled with ai, an Nxth row and a 1st column of Dx are filled with bi, and an Nxth row of Bx is filled with −ci. Then Nx=Nx+1, where the vanishing point identifier 0 is the vanishing point identifier corresponding to the x-axis. If the vanishing point identifier is equal to 1, an Nyth row and a 0th column of Dy are filled with ai, an Nyth row and a 1st column of Dy are filled with bi, and an Nyth row of By is filled with −ci. Then Ny=Ny+1, where the vanishing point identifier 1 is the vanishing point identifier corresponding to the y-axis. If the vanishing point identifier is equal to 2, an Nzth row and a 0th column of Dz are filled with ai, an Nzth row and a 1st column of Dz are filled with bi, and an Nzth row of Bz is filled with −ci. Then Nz=Nz+1, where the vanishing point identifier 2 is the vanishing point identifier corresponding to the z-axis.
In some embodiments, if the actual quantity Ni of points of the straight line is equal to zero, no operation is performed, and a next straight line is directly processed.
It may be understood that after all of the spatial virtual straight lines are traversed, the quantity of candidate straight lines may be referred to as the quantity of target straight lines. The quantity of target straight lines may represent the quantity of spatial virtual straight lines corresponding to the vanishing points.
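Step S1042 may be sketched as follows, with the first auxiliary matrices and second auxiliary vectors represented as per-identifier lists instead of preallocated N-row matrices; the names lines and T2 follow the text, and the data layout is an assumption for illustration.

```python
# Illustrative sketch of dividing straight line parameters by vanishing
# point identifier (0 = x-axis, 1 = y-axis, 2 = z-axis).
# lines: {straight_line_id: (a, b, c)}; T2: {straight_line_id: vp_id}
def divide_by_vanishing_point(lines, T2):
    D = {0: [], 1: [], 2: []}   # rows (ai, bi) per vanishing point
    B = {0: [], 1: [], 2: []}   # right-hand sides -ci per vanishing point
    for i, (a, b, c) in lines.items():
        v = T2[i]
        D[v].append((a, b))
        B[v].append(-c)
    return D, B
```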
Step S1043: Perform least square fitting on space division straight lines based on the space division matrix to generate a straight line intersection point of the space division straight lines, and use the straight line intersection point of the space division straight lines as vanishing point coordinates of the vanishing points corresponding to the vanishing point identifiers.
The space division straight lines are the spatial virtual straight lines corresponding to the space division matrix. Different space division matrices correspond to different spatial virtual straight lines, and different space division matrices may be used to generate different vanishing point coordinates.
It may be understood that the terminal device may respectively perform the following operations on the matrix Dx, the vector Bx, the matrix Dy, the vector By, the matrix Dz, and the vector Bz, and calculate the vanishing points corresponding to the x-axis, the y-axis, and the z-axis. It may be understood that if the quantity Nx of target straight lines is greater than or equal to 2, the x-axis vanishing point is calculated; otherwise, it is considered that the x-axis vanishing point does not exist. Therefore, the terminal device may construct a matrix Px and a vector Qx. The matrix Px is the first Nx rows of the matrix Dx, and the vector Qx is the first Nx rows of the vector Bx. The matrix Px may be referred to as the new first auxiliary matrix, the vector Qx may be referred to as the new second auxiliary matrix, and the matrix Px and the vector Qx may be collectively referred to as the space division matrix corresponding to the x-axis. In this way, for the calculation method of vanishing point coordinates lx of the x-axis vanishing point generated by the terminal device based on the space division matrix corresponding to the x-axis, reference may be made to Formula (7):
lx=(PxT·Px)−1·(PxT·Qx) (7)
It may be understood that if the quantity Ny of target straight lines is greater than or equal to 2, the y-axis vanishing point is calculated; otherwise, it is considered that the y-axis vanishing point does not exist. Therefore, the terminal device may construct a matrix Py and a vector Qy. The matrix Py is the first Ny rows of the matrix Dy, and the vector Qy is the first Ny rows of the vector By. The matrix Py may be referred to as the new first auxiliary matrix, the vector Qy may be referred to as the new second auxiliary matrix, and the matrix Py and the vector Qy may be collectively referred to as the space division matrix corresponding to the y-axis. In this way, for the calculation method of vanishing point coordinates ly of the y-axis vanishing point generated by the terminal device based on the space division matrix corresponding to the y-axis, reference may be made to Formula (8):
ly=(PyT·Py)−1·(PyT·Qy) (8)
It may be understood that if the quantity Nz of target straight lines is greater than or equal to 2, the z-axis vanishing point is calculated; otherwise, it is considered that the z-axis vanishing point does not exist. Therefore, the terminal device may construct a matrix Pz and a vector Qz. The matrix Pz is the first Nz rows of the matrix Dz, and the vector Qz is the first Nz rows of the vector Bz. The matrix Pz may be referred to as the new first auxiliary matrix, the vector Qz may be referred to as the new second auxiliary matrix, and the matrix Pz and the vector Qz may be collectively referred to as the space division matrix corresponding to the z-axis. In this way, for the calculation method of vanishing point coordinates lz of the z-axis vanishing point generated by the terminal device based on the space division matrix corresponding to the z-axis, reference may be made to Formula (9):
lz=(PzT·Pz)−1·(PzT·Qz) (9)
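Formulas (7) to (9) are ordinary least squares solutions of the stacked line equations ai·x+bi·y=−ci. A minimal sketch solving the resulting 2×2 normal equations directly (rather than forming an explicit matrix inverse) may look as follows; here P corresponds to the space division matrix rows (ai, bi) and Q to the right-hand sides −ci, and the function name is illustrative.

```python
# Illustrative least squares vanishing point: solve (P^T P) l = P^T Q
# for l = (x, y), where each row of P is (ai, bi) and Q holds -ci.
def vanishing_point(P, Q):
    s00 = sum(a * a for a, b in P)             # entries of P^T P
    s01 = sum(a * b for a, b in P)
    s11 = sum(b * b for a, b in P)
    t0 = sum(a * q for (a, b), q in zip(P, Q))  # entries of P^T Q
    t1 = sum(b * q for (a, b), q in zip(P, Q))
    det = s00 * s11 - s01 * s01                 # assumes >= 2 distinct lines
    return ((s11 * t0 - s01 * t1) / det,
            (s00 * t1 - s01 * t0) / det)
```

For example, the lines x=2 and y=3 (rows (1, 0) and (0, 1) with right-hand sides 2 and 3) intersect at the point (2, 3).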
Step S1044: Determine an intrinsic component parameter of a camera component for a target image based on the vanishing point coordinates.
For a specific process of determining the intrinsic component parameter of the camera component for the target image by the terminal device based on the vanishing point coordinates, reference may be made to the description of step S1052 to step S1053 in the embodiment corresponding to
For ease of understanding,
Further, as shown in
It may be seen that in this embodiment of this disclosure, the vanishing point identifiers mapped by the spatial virtual straight lines may be obtained from the second table, the straight line parameters corresponding to the spatial virtual straight lines are obtained from the straight line equation storage matrix, and the straight line parameters corresponding to the spatial virtual straight lines are divided based on the vanishing point identifiers, to obtain the space division matrix corresponding to each vanishing point identifier. Further, least square fitting is performed on the space division straight lines based on the space division matrix, so as to generate the vanishing point coordinates of the vanishing points corresponding to the space division straight lines, and then the intrinsic component parameter of the camera component is determined based on the vanishing point coordinates. The manner of determining the intrinsic component parameter based on the vanishing point coordinates provided in this embodiment of this disclosure may reduce the costs of calibrating the intrinsic component parameter and increase a calibration speed.
Further,
Step S1051: Generate, based on a vanishing point identifier and a straight line equation, vanishing point coordinates of a vanishing point indicated by the vanishing point identifier.
For a specific process of generating the vanishing point coordinates by a terminal device based on the vanishing point identifier and the straight line equation, reference may be made to the description of step S1041 to step S1043 in the embodiment corresponding to
Step S1052: Determine angles between every two spatial virtual straight lines in space division straight lines, obtain a maximum angle from the angles between every two spatial virtual straight lines, and determine that the space division straight lines satisfy a vanishing point qualification condition if the maximum angle is greater than or equal to an included angle threshold.
It may be understood that the terminal device may automatically detect, based on the detected vanishing point, whether the vanishing point is available. For each group of space division straight lines, if the group of space division straight lines includes only two spatial virtual straight lines, the terminal device may directly calculate an included angle α (that is, the maximum angle) between the two spatial virtual straight lines. In some embodiments, if the group of space division straight lines includes more than two spatial virtual straight lines, the terminal device may calculate the included angle between every two spatial virtual straight lines, and use the maximum one of the included angles between every two spatial virtual straight lines as the included angle α (that is, the maximum angle).
In some embodiments, if the maximum angle is less than the included angle threshold, it is determined that the vanishing point corresponding to the group of space division straight lines is not available. To be specific, it is determined that the space division straight lines do not satisfy the vanishing point qualification condition. The vanishing point qualification condition is a condition that the maximum included angle between every two spatial virtual straight lines in the space division straight lines is greater than or equal to the included angle threshold. In other words, if the spatial virtual straight lines in the space division straight lines are approximately parallel in the target image, it may be determined that the group of space division straight lines is not available, and the vanishing point coordinates determined by using unavailable space division straight lines are inaccurate. It is to be understood that a specific value of the included angle threshold is not limited in this embodiment of this disclosure.
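The availability check in step S1052 can be sketched as follows. This is an illustrative sketch only: the function names, the representation of lines as 2D direction vectors, and the 5-degree default threshold are assumptions, not taken from the source.

```python
import math

def max_pairwise_angle(directions):
    """Return the largest included angle (in degrees) between any two lines.

    `directions` is a list of (dx, dy) direction vectors, one per spatial
    virtual straight line in a group of space division straight lines.
    Line directions are unsigned, so angles are folded into [0, 90] degrees.
    """
    best = 0.0
    for i in range(len(directions)):
        for j in range(i + 1, len(directions)):
            ax, ay = directions[i]
            bx, by = directions[j]
            cos_t = abs(ax * bx + ay * by) / (math.hypot(ax, ay) * math.hypot(bx, by))
            angle = math.degrees(math.acos(min(1.0, cos_t)))
            best = max(best, angle)
    return best

def satisfies_vanishing_point_condition(directions, threshold_deg=5.0):
    # A group is usable only if its lines are not all nearly parallel
    # in the image, i.e. the maximum included angle reaches the threshold.
    return max_pairwise_angle(directions) >= threshold_deg
```

A nearly parallel pair such as `(1, 0)` and `(1, 0.001)` fails the check, while clearly converging lines pass it.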
Step S1053: Generate the intrinsic component parameter of the camera component for the target image based on the vanishing point coordinates corresponding to the space division straight lines that satisfy the vanishing point qualification condition.
It is to be understood that the terminal device may generate the intrinsic component parameter of the camera component for the target image when a quantity of groups of space division straight lines that satisfy the vanishing point qualification condition is 2 or 3. A group of space division straight lines corresponds to a vanishing point, the spatial virtual straight lines in each group of space division straight lines are parallel to each other, and different groups of space division straight lines are perpendicular to each other.
When the quantity of groups of space division straight lines satisfying the vanishing point qualification condition is less than or equal to 1, that is, when a quantity of available vanishing points is less than or equal to 1, the terminal device does not calibrate the intrinsic component parameter. When the quantity of groups of space division straight lines satisfying the vanishing point qualification condition is equal to 2, that is, when the quantity of available vanishing points is equal to 2, the terminal device may call a calibration algorithm of 2 vanishing points. When the quantity of groups of space division straight lines satisfying the vanishing point qualification condition is equal to 3, that is, when the quantity of available vanishing points is equal to 3, the terminal device may call a calibration algorithm of 3 vanishing points.
It is to be understood that when the space division straight lines satisfying the vanishing point qualification condition are equal to 2 groups (that is, when the quantity of vanishing points is 2), the vanishing point coordinates corresponding to the space division straight lines that satisfy the vanishing point qualification condition include first vanishing point coordinates and second vanishing point coordinates. It is to be understood that the terminal device may determine an optical center abscissa and an optical center ordinate of the camera component in the target image based on an image height and an image width of the target image. The optical center abscissa and the optical center ordinate are used to represent optical center coordinates of an (component) optical center of the camera component. Further, the terminal device may determine a first vector from the (component) optical center of the camera component to the first vanishing point coordinates and a second vector from the (component) optical center of the camera component to the second vanishing point coordinates. Further, the terminal device may determine a vertical relationship between the first vector and the second vector based on a vertical relationship between the space division straight line corresponding to the first vanishing point coordinates and the space division straight line corresponding to the second vanishing point coordinates, and establish, based on the vertical relationship between the first vector and the second vector, a constraint equation associated with the first vector and the second vector. Further, the terminal device may determine a component focal length of the camera component based on the first vanishing point coordinates, the second vanishing point coordinates, and the constraint equation. 
Further, the terminal device may use the optical center coordinates and the component focal length as the intrinsic component parameters of the camera component for the target image.
The terminal device may obtain optical center coordinates (ux, uy). A width (that is, the image width) of the target image is w, and a height (that is, the image height) is h. It is assumed that the optical center of the camera component is at the center of a picture formed by the target image. To be specific, ux=w/2 (that is, the optical center abscissa), and uy=h/2 (that is, the optical center ordinate).
The terminal device may calculate a focal length f of the camera component (that is, the component focal length). In a two-dimensional xy coordinate system of an image plane (that is, a plane perpendicular to an optical axis), a right-handed rectangular coordinate system is established by using a direction along the focal point toward the optical center as a z-axis. The vanishing point and the optical center are located on an imaging plane, and the imaging plane is located at an origin of the z-axis. In the coordinate system, coordinates of the focal point cf are (ux, uy, −f), coordinates of the optical center c are (ux, uy, 0), coordinates p of the vanishing point 1 are (px, py, −f) (that is, the first vanishing point coordinates), coordinates q of the vanishing point 2 are (qx, qy, −f) (that is, the second vanishing point coordinates), and a distance between the focal point cf and the optical center c is the focal length f. The vanishing point 1 and the vanishing point 2 may be vanishing points corresponding to any two coordinate axes in the x-axis, the y-axis, and the z-axis of the spatial coordinate axis corresponding to the target image. To be specific, the first vanishing point coordinates and the second vanishing point coordinates are any two vanishing point coordinates among the vanishing point coordinates lx, the vanishing point coordinates ly, and the vanishing point coordinates lz. Lines connecting the optical center c to the vanishing point 1 and the vanishing point 2 coincide with coordinate axes in the right-handed rectangular coordinate system.
Because two groups of parallel lines in a three-dimensional space (that is, the space division straight line corresponding to the vanishing point 1 and the space division straight line corresponding to the vanishing point 2) are perpendicular to each other, a vector {right arrow over (v)}1 from the optical center c to the vanishing point 1 (that is, the first vector, which is parallel to a group of parallel lines) is perpendicular to a vector {right arrow over (v)}2 from the optical center c to the vanishing point 2 (that is, the second vector, which is parallel to another group of parallel lines), that is, {right arrow over (v)}1·{right arrow over (v)}2=0. For a constraint equation associated with the first vector and the second vector that is obtained by expansion, reference may be made to Formula (10):
(px−ux)(qx−ux)+(py−uy)(qy−uy)+f²=0 (10)
According to the constraint equation shown in Formula (10), Formula (11) of the focal length f may be determined:
f=√{square root over (−(px−ux)(qx−ux)−(py−uy)(qy−uy))} (11)
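The 2-vanishing-point calibration above can be sketched in a few lines. The function name is illustrative; returning None when the constraint yields a non-positive value for f² is an added safeguard, not part of the source.

```python
import math

def focal_length_from_two_vanishing_points(p, q, u):
    """Focal length f from two orthogonal vanishing points, per Formula (11).

    p, q: (x, y) image coordinates of the two vanishing points.
    u: (ux, uy) optical center, assumed to be (w / 2, h / 2).
    Returns None if f**2 comes out non-positive, which signals an
    unusable vanishing-point pair rather than a valid focal length.
    """
    f_sq = -(p[0] - u[0]) * (q[0] - u[0]) - (p[1] - u[1]) * (q[1] - u[1])
    if f_sq <= 0:
        return None
    return math.sqrt(f_sq)
```

For example, with the optical center at the origin, the vanishing points (100, 0) and (−100, 0) satisfy the constraint equation exactly for f=100.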
In some embodiments, it is to be understood that when 3 groups of space division straight lines satisfy the vanishing point qualification condition (that is, when the quantity of vanishing points is 3), the vanishing point coordinates corresponding to the space division straight lines that satisfy the vanishing point qualification condition include first vanishing point coordinates, second vanishing point coordinates, and third vanishing point coordinates. It is to be understood that the terminal device may determine a first vector from the (component) optical center of the camera component to the first vanishing point coordinates, a second vector from the (component) optical center of the camera component to the second vanishing point coordinates, and a third vector from the (component) optical center of the camera component to the third vanishing point coordinates. Further, the terminal device may determine a vertical relationship among the first vector, the second vector, and the third vector based on a vertical relationship among the space division straight line corresponding to the first vanishing point coordinates, the space division straight line corresponding to the second vanishing point coordinates, and the space division straight line corresponding to the third vanishing point coordinates, establish a constraint equation associated with the first vector and the second vector based on the vertical relationship between the first vector and the second vector, establish a constraint equation associated with the first vector and the third vector based on the vertical relationship between the first vector and the third vector, and establish a constraint equation associated with the second vector and the third vector based on the vertical relationship between the second vector and the third vector. 
Further, the terminal device may determine the component focal length of the camera component and the optical center abscissa and the optical center ordinate of the camera component in the target image based on the first vanishing point coordinates, the second vanishing point coordinates, the third vanishing point coordinates, the constraint equation associated with the first vector and the second vector, the constraint equation associated with the first vector and the third vector, and the constraint equation associated with the second vector and the third vector. The optical center abscissa and the optical center ordinate are used to represent optical center coordinates of the (component) optical center of the camera component. Further, the terminal device may use the optical center coordinates and the component focal length as the intrinsic component parameters of the camera component for the target image.
The 3 vanishing points indicate that 3 groups of parallel lines perpendicular to each other exist in a spatial object. Therefore, a quantity of constraint equations formed by the vertical relationships is three. In this way, ux, uy, and f may be solved by using the three constraint equations, without considering by default that ux=w/2 and uy=h/2. Specifically, the processing process is as follows. In the two-dimensional xy coordinate system of the image plane, the right-handed rectangular coordinate system is established by using the direction along the focal point toward the optical center as the z-axis. In the coordinate system, coordinates of the focal point cf are (ux, uy, −f), coordinates of the optical center c are (ux, uy, 0), coordinates p of the vanishing point 1 are (px, py, 0) (that is, the first vanishing point coordinates), coordinates q of the vanishing point 2 are (qx, qy, 0) (that is, the second vanishing point coordinates), and coordinates r of the vanishing point 3 are (rx, ry, 0) (that is, the third vanishing point coordinates). The vanishing point 1, the vanishing point 2, and the vanishing point 3 may be vanishing points respectively corresponding to the x-axis, the y-axis, and the z-axis of the spatial coordinate axis corresponding to the target image. To be specific, the first vanishing point coordinates, the second vanishing point coordinates, and the third vanishing point coordinates are the vanishing point coordinates lx, the vanishing point coordinates ly, and the vanishing point coordinates lz. Lines connecting the optical center to the vanishing point 1, the vanishing point 2, and the vanishing point 3 coincide with the coordinate axes in the right-handed rectangular coordinate system.
Every two groups of parallel lines (that is, the space division straight line corresponding to the vanishing point 1 and the space division straight line corresponding to the vanishing point 2, the space division straight line corresponding to the vanishing point 1 and the space division straight line corresponding to the vanishing point 3, and the space division straight line corresponding to the vanishing point 2 and the space division straight line corresponding to the vanishing point 3) are perpendicular to each other in the three-dimensional space. Therefore, the vector {right arrow over (v)}1 from the optical center c to the vanishing point 1 (the vector is parallel to a group of parallel lines) is perpendicular to the vector {right arrow over (v)}2 from the optical center c to the vanishing point 2 (the vector is parallel to another group of parallel lines), the vector {right arrow over (v)}1 from the optical center c to the vanishing point 1 is perpendicular to the vector {right arrow over (v)}3 from the optical center c to the vanishing point 3, and the vector {right arrow over (v)}2 from the optical center c to the vanishing point 2 is perpendicular to the vector {right arrow over (v)}3 from the optical center c to the vanishing point 3. To be specific, {right arrow over (v)}1·{right arrow over (v)}2=0, {right arrow over (v)}1·{right arrow over (v)}3=0, and {right arrow over (v)}2·{right arrow over (v)}3=0. Based on the above, for the constraint equations obtained by expansion, reference may be made to Formula (12), Formula (13), and Formula (14):
(px−ux)(qx−ux)+(py−uy)(qy−uy)+f²=0 (12)
(px−ux)(rx−ux)+(py−uy)(ry−uy)+f²=0 (13)
(qx−ux)(rx−ux)+(qy−uy)(ry−uy)+f²=0 (14)
According to the constraint equations shown in Formula (12), Formula (13), and Formula (14), Formula (15) may be obtained after simplification:
After matrix transformation is performed on Formula (15), Formula (16) of [ux,uy]T may be obtained:
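The bodies of Formula (15) and Formula (16) are not reproduced in the text above. The following is a reconstruction consistent with Formulas (12) to (14), offered as a sketch of the missing formulas rather than their original form: subtracting Formula (13) from Formula (12), and Formula (14) from Formula (13), eliminates f² and leaves a linear system in (ux, uy), which can then be written in matrix form.

```latex
% Eliminating f^2 (cf. Formula (15)):
\begin{aligned}
(p_x - u_x)(q_x - r_x) + (p_y - u_y)(q_y - r_y) &= 0 \\
(r_x - u_x)(p_x - q_x) + (r_y - u_y)(p_y - q_y) &= 0
\end{aligned}

% Matrix form solved for [u_x, u_y]^T (cf. Formula (16)):
\begin{bmatrix} q_x - r_x & q_y - r_y \\ p_x - q_x & p_y - q_y \end{bmatrix}
\begin{bmatrix} u_x \\ u_y \end{bmatrix}
=
\begin{bmatrix} p_x(q_x - r_x) + p_y(q_y - r_y) \\ r_x(p_x - q_x) + r_y(p_y - q_y) \end{bmatrix}
```

Inverting the 2x2 matrix gives (ux, uy), after which any one of Formulas (12) to (14) yields f.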
After the terminal device obtains ux (that is, the optical center abscissa) and uy (that is, the optical center ordinate), ux and uy may be substituted into the foregoing Formula (12), and Formula (17) for calculating the focal length f may be obtained:
f=√{square root over (−(px−ux)(qx−ux)−(py−uy)(qy−uy))} (17)
In some embodiments, after the terminal device obtains ux and uy, ux and uy may also be substituted into the foregoing Formula (13) or Formula (14), and the focal length f is calculated by using Formula (13) or Formula (14). For a specific process of calculating the focal length f by using Formula (13) or Formula (14), reference may be made to the description of calculating the focal length f by using Formula (12). Details are not described herein again.
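The 3-vanishing-point calibration can be sketched end to end as follows. This is a sketch under stated assumptions: the function name is illustrative, and the linear system it solves is the reconstruction obtained by subtracting Formulas (12) to (14) pairwise to eliminate f², since the bodies of Formula (15) and Formula (16) are not reproduced above.

```python
import math

def calibrate_from_three_vanishing_points(p, q, r):
    """Solve ux, uy, f from three mutually orthogonal vanishing points.

    p, q, r: (x, y) image coordinates of the three vanishing points.
    Returns (ux, uy, f), or None if the system is degenerate or f**2
    is non-positive (unusable vanishing points).
    """
    (px, py), (qx, qy), (rx, ry) = p, q, r
    # Linear system A [ux, uy]^T = b, from (12)-(13) and (13)-(14),
    # which cancels the common f**2 term.
    a11, a12 = qx - rx, qy - ry
    a21, a22 = px - qx, py - qy
    b1 = px * (qx - rx) + py * (qy - ry)
    b2 = rx * (px - qx) + ry * (py - qy)
    det = a11 * a22 - a12 * a21
    if abs(det) < 1e-12:
        return None
    ux = (b1 * a22 - b2 * a12) / det
    uy = (a11 * b2 - a21 * b1) / det
    # Substitute back into Formula (12) to recover the focal length.
    f_sq = -(px - ux) * (qx - ux) - (py - uy) * (qy - uy)
    if f_sq <= 0:
        return None
    return ux, uy, math.sqrt(f_sq)
```

As a synthetic check, the vanishing points (5, 6), (1, 4), and (5, 0) were generated from an optical center (3, 4) and focal length 2, and the solver recovers exactly those values.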
It may be seen that in this embodiment of this disclosure, the vanishing point coordinates of the vanishing point indicated by the vanishing point identifier may be generated based on the vanishing point identifier and the straight line equation, then the space division straight lines are screened based on the included angles between every two spatial virtual straight lines in the space division straight lines, so as to obtain the space division straight lines that satisfy the vanishing point qualification condition, and then the intrinsic component parameter of the camera component is generated based on the vanishing point coordinates corresponding to the space division straight lines that satisfy the vanishing point qualification condition. The manner of determining the intrinsic component parameter based on the vanishing point coordinates provided in this embodiment of this disclosure may reduce the costs of calibrating the intrinsic component parameter and increase the calibration speed.
As shown in
Step S1502: Identify an identifier and a corner of the identification code from the image. The identified corner of the identification code is an identified corner on each edge of the identification code. In step S1502 herein, an identification code detection algorithm may be used to detect an identifiable identification code in the image. Each edge of a rectangular outline (or a bounding rectangle) of the identification code may be considered as each edge of the identification code.
Step S1503: Obtain a first mapping relationship between the identification code in the array and a straight line where each edge of the identification code in the array is located. In an embodiment, each edge of the identification code in the array may be parallel to a coordinate axis in a first three-dimensional rectangular coordinate system. In the first three-dimensional rectangular coordinate system, a first coordinate axis and a second coordinate axis are in an image plane, and a third coordinate axis is perpendicular to the image plane. A two-dimensional coordinate system composed of the first coordinate axis and the second coordinate axis may be, for example, used as a pixel coordinate system of the image.
Step S1504: Fit, based on the first mapping relationship and the identified corner of the identification code, a straight line equation of the straight line where each edge of the identified identification code is located.
Step S1505: Obtain a second mapping relationship between the straight line where each edge of the identification code in the array is located and a vanishing point. The vanishing point herein represents a visual intersection point of parallel lines in the real world in the image. A group of straight lines parallel to each other in the straight lines where the edges of the identification codes in the array are located correspond to the same vanishing point.
Step S1506: Determine, based on the second mapping relationship and the straight line equation, the vanishing point corresponding to the straight line where each edge of the identified identification code is located.
Step S1507: Calibrate an intrinsic parameter of the camera component based on the determined vanishing point. A quantity of the determined vanishing points herein is, for example, 2 or 3.
Based on the above, according to the solution of the embodiments of this disclosure, the calibration of the intrinsic camera parameter may be implemented by using a single image captured of the spatial object including the identifiable identification code. In particular, because the identification code is identifiable, according to the solutions of the embodiments of this disclosure, the identification codes at a plurality of angles to the camera (for example, identification codes corresponding to two planar regions or three planar regions perpendicular to each other) may be obtained from the single image, and distribution information of the identified identification code (a corner of each edge) is used to determine the corner of each edge of the identification code on the straight line in the image, so that the straight line where each edge in the image is located can be determined (for example, the straight line is represented by using the straight line equation). Further, the determined straight line may be used to determine the vanishing point, so that the vanishing point may be used to calibrate the intrinsic parameter of the camera. In this way, according to the solutions of the embodiments of this disclosure, the trouble that checkerboard images need to be captured from a plurality of angles to calibrate the intrinsic camera parameter in the conventional calibration scheme (for example, a manner of calibration by using a checkerboard) may be avoided, thereby reducing capturing requirements for image data and improving efficiency and convenience of calibration of the intrinsic camera parameter.
In addition, because the identification code is identifiable, in this embodiment of this disclosure, the first mapping relationship (the first mapping relationship may also be actually considered to represent the mapping relationship between the corner of each edge of the identification code and the straight line) between the identifier of the identification code and the straight line where each edge of the identification code is located, and the second mapping relationship between the straight line where each edge is located and the vanishing point may be established before the method 1500 is performed. On this basis, in this embodiment of this disclosure, the first mapping relationship and the second mapping relationship do not need to be obtained by using the image during performing of the method 1500, and the first mapping relationship and the second mapping relationship may be predetermined, thereby further improving data processing efficiency of the computer device during calibration of the intrinsic camera parameter.
In an embodiment, the identification code in the array is a two-dimensional code. In step S1502, the identification code in the image may be detected to identify the identifier of the identification code and coordinates of the identified corner of each edge of the identification code. Each edge of the identification code is each edge of a bounding rectangle of the identification code.
In an embodiment, in step S1503, the first table for representing the first mapping relationship may be obtained. The first table is used for representing a correspondence between the identifier of the identification code in the array and the identifier of the straight line where each edge of the identification code in the array is located. The first table herein is, for example, the table T1 above.
In an embodiment, the first table is created before the image is obtained, and a manner of creating the first table includes:
storing, in the first table based on a distribution of the identification codes in the array of each planar region in the spatial object, the identifier of the identification code in the array in association with the identifier of the straight line where each edge of the identification code in the array is located. Herein, the first mapping relationship is obtained from the pre-established first table, so that the efficiency of data processing during the calibration of the intrinsic camera parameter may be improved in this embodiment of this disclosure. The straight line where each edge is located may be, for example, the spatial virtual straight line above.
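Construction of the first table can be sketched as follows. The grid layout assumed here (codes in the same row sharing two horizontal edge lines, codes in the same column sharing two vertical edge lines) and all identifier strings are illustrative assumptions; the source only specifies that the table associates each identification code identifier with the identifiers of the straight lines where its edges are located.

```python
def build_first_table(rows, cols, plane_id=0):
    """Build an illustrative first table T1 for one planar region.

    Assumption (not from the source): identification codes are laid out in
    a `rows` x `cols` grid; codes in the same grid row share the two
    horizontal edge lines of that row, and codes in the same grid column
    share the two vertical edge lines of that column.
    Returns {code_id: {"top"/"bottom"/"left"/"right": line_id}}.
    """
    table = {}
    for i in range(rows):
        for j in range(cols):
            code_id = f"plane{plane_id}_code{i * cols + j}"
            table[code_id] = {
                "top": f"plane{plane_id}_h{2 * i}",
                "bottom": f"plane{plane_id}_h{2 * i + 1}",
                "left": f"plane{plane_id}_v{2 * j}",
                "right": f"plane{plane_id}_v{2 * j + 1}",
            }
    return table
```

Because the table is fixed by the physical layout of the printed array, it can be created once, before any image is obtained, and then queried by identifier at calibration time.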
In an embodiment, the obtaining a second mapping relationship between the straight line where each edge of the identification code in the array is located and a vanishing point includes: obtaining a second table for representing the second mapping relationship, the second table being used for representing a correspondence between the identifier of the straight line where each edge of the identification code in the array is located and an identifier of the corresponding vanishing point.
In an embodiment, the second table corresponds to two vanishing points or three vanishing points.
In a case that the second table corresponds to two vanishing points, the two vanishing points include a first vanishing point and a second vanishing point. A straight line corresponding to the first vanishing point is parallel to a first coordinate axis in a first three-dimensional rectangular coordinate system, and a straight line corresponding to the second vanishing point is parallel to a second coordinate axis in the first three-dimensional rectangular coordinate system.
In a case that the second table corresponds to three vanishing points, the three vanishing points include a first vanishing point, a second vanishing point, and a third vanishing point. The straight line corresponding to the first vanishing point is parallel to the first coordinate axis in the first three-dimensional rectangular coordinate system, the straight line corresponding to the second vanishing point is parallel to the second coordinate axis in the first three-dimensional rectangular coordinate system, and a straight line corresponding to the third vanishing point is parallel to a third coordinate axis in the first three-dimensional rectangular coordinate system. The first coordinate axis and the second coordinate axis in the first three-dimensional rectangular coordinate system are in an image plane, and the third coordinate axis is perpendicular to the image plane.
In an embodiment, the second table is created before the image is obtained, and a manner of creating the second table includes: storing, in the second table based on the distribution of the identification codes in the array of each planar region in the spatial object, the identifier of the straight line where each edge of the identification code in the array is located in association with the identifier of the corresponding vanishing point.
In an embodiment, S1504 may be implemented as the following steps:
S1: Query, based on the first mapping relationship, for the straight line where each edge of the identified identification code is located. For example, in S1, the identifier of the straight line corresponding to each edge of the identification code may be found.
S2: Assign the corner of each edge of the identified identification code to the found straight line where each edge is located. For example, for an edge of the identification code, in S2, a corner on the edge may be assigned to the straight line where the edge is located.
S3: Fit, for each straight line corresponding to the identifier of the found straight line, a straight line equation of the straight line by using the corner assigned to the straight line. In other words, for each straight line, in S3, the corner on the straight line may be used to fit the straight line equation of the straight line.
Based on the above, in S1504, the first mapping relationship may be used to assign the corner to the found straight line. A corner assigned to a straight line is the corner on the straight line, so that a plurality of corners on the straight line may be used to fit the straight line equation of the straight line.
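The fitting in S3 can be sketched as a total-least-squares line fit through the corners assigned to a straight line. The function name and the (a, b, c) line parameterization are illustrative; the closed-form principal-axis solution for the 2x2 covariance matrix is one common way to realize the least square fitting named in the source.

```python
import math

def fit_line(points):
    """Total-least-squares fit of a line a*x + b*y + c = 0 through 2D points.

    Uses the principal direction of the point cloud (closed form for the
    2x2 covariance matrix), so vertical lines are handled without special
    cases. Returns (a, b, c) with the normal (a, b) of unit length.
    """
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    sxx = sum((x - mx) ** 2 for x, _ in points)
    syy = sum((y - my) ** 2 for _, y in points)
    sxy = sum((x - mx) * (y - my) for x, y in points)
    # Direction of largest variance: eigenvector of [[sxx, sxy], [sxy, syy]].
    theta = 0.5 * math.atan2(2 * sxy, sxx - syy)
    dx, dy = math.cos(theta), math.sin(theta)
    a, b = -dy, dx              # normal is perpendicular to the direction
    c = -(a * mx + b * my)      # the fitted line passes through the centroid
    return a, b, c
```

Each corner assigned to a straight line contributes one point, so a line shared by several identification codes is fitted from all of their corners at once.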
In an embodiment, S1506 may be implemented by: determining the identifier of the vanishing point corresponding to each straight line equation based on the second mapping relationship; and determining, for the determined identifier of each vanishing point, coordinates of the corresponding vanishing point in the image plane based on the straight line equation corresponding to the identifier of the vanishing point. Herein, in S1506, an intersection point of the straight lines represented by a plurality of straight line equations in the image may be used as the vanishing point.
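Determining the vanishing point coordinates from several straight line equations can be sketched as a least-squares intersection. The function name is illustrative, and minimizing the algebraic residual of the (a, b, c) line equations is one reasonable realization of "using the intersection point of the straight lines" from the source, not the only one.

```python
def vanishing_point(lines):
    """Least-squares intersection of lines given as (a, b, c): a*x + b*y + c = 0.

    Minimizes the sum of squared algebraic residuals over all lines mapped
    to the same vanishing point identifier. Needs at least two non-parallel
    lines; returns None if the 2x2 normal system is degenerate.
    """
    saa = sum(a * a for a, _, _ in lines)
    sbb = sum(b * b for _, b, _ in lines)
    sab = sum(a * b for a, b, _ in lines)
    sac = sum(a * c for a, _, c in lines)
    sbc = sum(b * c for _, b, c in lines)
    det = saa * sbb - sab * sab
    if abs(det) < 1e-12:
        return None  # all lines (nearly) parallel: no finite intersection
    x = (-sac * sbb + sbc * sab) / det
    y = (-saa * sbc + sab * sac) / det
    return x, y
```

The degenerate return value also matches the screening described next: groups of nearly parallel lines do not yield a usable vanishing point.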
In an embodiment, before the determining, for the determined identifier of each vanishing point, coordinates of the corresponding vanishing point in the image plane based on the straight line equation corresponding to the identifier of the vanishing point, the method 1500 further includes: determining, for the determined identifier of each vanishing point, a maximum included angle between the straight lines represented by the straight line equation corresponding to the identifier of the vanishing point; and deleting an identifier of a vanishing point from the determined identifiers of the vanishing points corresponding to the straight line equations in a case that the maximum included angle corresponding to the straight line equation corresponding to the identifier of the vanishing point is less than a first threshold. Herein, the first threshold may be set as required, for example, 5 degrees, but is not limited thereto. In this way, the identifier of the vanishing point is deleted to implement selection of the vanishing point, to avoid using the unqualified vanishing point (that is, the vanishing point corresponding to a case where the maximum included angle is less than the first threshold) to calibrate the intrinsic camera parameter, thereby improving accuracy of the calibration of the intrinsic parameter.
In an embodiment, before the determining, for the determined identifier of each vanishing point, coordinates of the corresponding vanishing point in the image plane based on the straight line equation corresponding to the identifier of the vanishing point, the method 1500 may further include: deleting an identifier of a vanishing point from the determined identifiers of the vanishing points in a case that a quantity of straight line equations corresponding to the identifier of the vanishing point is less than a second threshold.
The second threshold herein is, for example, 2. When the quantity of straight line equations corresponding to one identifier does not reach the second threshold, the coordinates of the corresponding vanishing point actually cannot be calculated. Therefore, the identifier of the vanishing point is deleted from the determined identifiers of the vanishing points in this disclosure, to avoid invalid calculation, thereby improving data processing efficiency.
In an embodiment, S1507 may be implemented as the following steps:
S11: Determine coordinates of an optical center of the camera component based on a height and a width of the image. For example, assuming that the coordinates of the optical center are (ux, uy), and that the width and the height of the image are denoted as w and h, the optical center coordinates of the camera are: ux=w/2, and uy=h/2.
S12: Determine a vector of each vanishing point, the vector of each vanishing point being a vector between each vanishing point and the optical center of the camera component, the vectors of different vanishing points being perpendicular to each other.
S13: Determine a focal length of the camera component based on the vector of each vanishing point.
For example, in a two-dimensional xy coordinate system of a focal plane of the camera (that is, the image plane), a right-handed rectangular coordinate system is established by using a direction along a camera focus toward the optical center as a z-axis, that is, the first three-dimensional rectangular coordinate system above.
In the coordinate system, coordinates of the camera focus cf are denoted as (ux, uy, −f), and coordinates of an optical center c are denoted as (ux, uy, 0). A total of two vanishing points exist in S13, where coordinates p of a vanishing point 1 are (px, py, −f), and coordinates q of a vanishing point 2 are (qx, qy, −f).
Because two groups of parallel lines in a 3D space are perpendicular to each other, a vector {right arrow over (v1)} (the vector is parallel to a group of parallel lines) from the optical center c of the camera to the vanishing point 1 is perpendicular to a vector {right arrow over (v2)} (the vector is parallel to another group of parallel lines) from the optical center c to the vanishing point 2, that is, {right arrow over (v1)}·{right arrow over (v2)}=0. A constraint equation is obtained by expansion:
(px−ux)(qx−ux)+(py−uy)(qy−uy)+f²=0
Based on the foregoing equation, a value of a focal length f may be calculated:
f=√{square root over (−(px−ux)(qx−ux)−(py−uy)(qy−uy))}
In an embodiment, a total of three vanishing points exist in S13. The coordinates p of the vanishing point 1 are (px, py, 0), the coordinates q of the vanishing point 2 are (qx, qy, 0), and coordinates r of a vanishing point 3 are (rx, ry, 0).
In this embodiment of this disclosure, the focal length f may be calculated based on the following formula:
f=√{square root over (−(px−ux)(qx−ux)−(py−uy)(qy−uy))}
Further,
The image obtaining module 1601 is configured to obtain an image obtained by shooting a spatial object by a camera component, the spatial object including two planar regions or three planar regions perpendicular to each other, each planar region including an array composed of a plurality of identification codes, each of the identification codes carrying information with an identifiable unique identifier.
The identification unit 1602 is configured to identify an identifier and a corner of the identification code from the image, the identified corner of the identification code being an identified corner on each edge of the identification code.
The straight line fitting unit 1603 is configured to: obtain a first mapping relationship between the identification code in the array and a straight line where each edge of the identification code in the array is located, and fit, based on the first mapping relationship and the identified corner of the identification code, a straight line equation of the straight line where each edge of the identified identification code is located.
The vanishing point determination unit 1604 is configured to: obtain a second mapping relationship between the straight line where each edge of the identification code in the array is located and a vanishing point; and determine, based on the second mapping relationship and the straight line equation, the vanishing point corresponding to the straight line where each edge of the identified identification code is located.
The calibration unit 1605 is configured to calibrate an intrinsic parameter of the camera component based on the determined vanishing point.
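The operations of the straight line fitting unit 1603 and the vanishing point determination unit 1604 can be sketched numerically: fit a straight line equation through the identified corner points on each edge, then intersect the fitted lines to obtain a vanishing point. The following is a minimal illustration, not the embodiment's actual implementation; both function names are hypothetical:

```python
import numpy as np

def fit_line(points):
    """Fit a homogeneous line (a, b, c), with a*x + b*y + c = 0, through
    2D corner points via total least squares on centered coordinates."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    # The line's normal is the direction of least spread, i.e. the
    # right-singular vector of the smallest singular value.
    _, _, vt = np.linalg.svd(pts - centroid)
    a, b = vt[-1]
    c = -(a * centroid[0] + b * centroid[1])
    return np.array([a, b, c])

def vanishing_point(lines):
    """Least-squares intersection of several homogeneous lines: the
    point (x, y) minimizing sum of (a*x + b*y + c)^2 over all lines."""
    L = np.asarray(lines, dtype=float)
    A, rhs = L[:, :2], -L[:, 2]
    xy, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return xy
```

In practice, each group of parallel edges in the scene yields one such intersection point in the image, and those points serve as the vanishing points passed to the calibration unit 1605.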
Based on the above, according to the solution of the embodiments of this disclosure, the calibration of the intrinsic camera parameter may be implemented by using a single image captured of the spatial object including the identifiable identification codes. In particular, because the identification codes are identifiable, identification codes at a plurality of angles to the camera (for example, identification codes corresponding to two planar regions or three planar regions perpendicular to each other) may be obtained from the single image, and distribution information of each identified identification code (the corners of each edge) is used to determine which corners lie on the same straight line, so that the straight line where each edge is located can be determined (for example, the straight line is represented by using a straight line equation). Further, the determined straight lines may be used to determine the vanishing points, so that the vanishing points may be used to calibrate the intrinsic parameter of the camera. In this way, according to the solutions of the embodiments of this disclosure, the trouble that checkerboard images need to be captured from a plurality of angles to calibrate the intrinsic camera parameter in the conventional calibration scheme (for example, a manner of calibration by using a checkerboard) may be avoided, thereby reducing capturing requirements for image data and improving efficiency and convenience of calibration of the intrinsic camera parameter.
Further,
In the computer device 1000 shown in
It is to be understood that the computer device 1000 described in this embodiment of this disclosure may perform the description of the data processing method in the embodiments corresponding to
Moreover, an embodiment of this disclosure further provides a computer-readable storage medium, such as a non-transitory computer-readable storage medium. The computer-readable storage medium stores the computer program executed by the data processing apparatus 1 mentioned above, for example. When the processor executes the computer program, the description of the data processing method in the foregoing embodiments corresponding to
In addition, an embodiment of this disclosure further provides a computer program product. The computer program product may include a computer program, and the computer program may be stored in a computer-readable storage medium. A processor of a computer device reads the computer program from the computer-readable storage medium, and the processor may execute the computer program, so that the computer device performs the description of the data processing method in the foregoing embodiments corresponding to
It is noted that all or some of the processes of the method in the foregoing embodiments may be implemented by using a computer program instructing relevant hardware. The computer program may be stored in a computer-readable storage medium. When the program is executed, the processes of the foregoing method embodiments may be performed. The storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), a random access memory (RAM), or the like.
One or more modules, submodules, and/or units of the apparatus can be implemented by processing circuitry, software, or a combination thereof, for example. The term module (and other similar terms such as unit, submodule, etc.) in this disclosure may refer to a software module, a hardware module, or a combination thereof. A software module (e.g., computer program) may be developed using a computer programming language and stored in memory or non-transitory computer-readable medium. The software module stored in the memory or medium is executable by a processor to thereby cause the processor to perform the operations of the module. A hardware module may be implemented using processing circuitry, including at least one processor and/or memory. Each hardware module can be implemented using one or more processors (or processors and memory). Likewise, a processor (or processors and memory) can be used to implement one or more hardware modules. Moreover, each module can be part of an overall module that includes the functionalities of the module. Modules can be combined, integrated, separated, and/or duplicated to support various applications. Also, a function being performed at a particular module can be performed at one or more other modules and/or by one or more other devices instead of or in addition to the function performed at the particular module. Further, modules can be implemented across multiple devices and/or other components local or remote to one another. Additionally, modules can be moved from one device and added to another device, and/or can be included in both devices.
What is disclosed above is merely exemplary embodiments of this disclosure, and is not intended to limit the scope of the claims of this disclosure. Therefore, equivalent variations made in accordance with the claims of this disclosure still fall within the scope of this disclosure.
Number | Date | Country | Kind |
---|---|---|---|
202210500935.0 | May 2022 | CN | national |
The present application is a continuation of International Application No. PCT/CN2023/092217, filed on May 5, 2023 and entitled “DATA PROCESSING METHOD AND APPARATUS, COMPUTER DEVICE, STORAGE MEDIUM, AND PROGRAM PRODUCT”, which claims priority to Chinese Patent Application No. 202210500935.0, filed on May 10, 2022 and entitled “DATA PROCESSING METHOD AND APPARATUS, COMPUTER DEVICE, STORAGE MEDIUM, AND PROGRAM PRODUCT”. The entire disclosures of the prior applications are hereby incorporated by reference.
Number | Date | Country | |
---|---|---|---|
Parent | PCT/CN2023/092217 | May 2023 | WO |
Child | 18584684 | US |