Aspects described herein generally relate to the field of computer and communication technologies, and specifically, to a distortion coefficient calibration method and apparatus for an extended reality device and a storage medium.
With the rise of extended reality technologies, extended reality devices have emerged. Extended reality devices are also referred to as XR devices, and include virtual reality devices (also referred to as VR devices), augmented reality devices (AR devices), and mixed reality devices (MR devices). These devices can implement system simulation of three-dimensional dynamic scenes and entity behaviors.
When a user uses an extended reality device, light emitted by a display in the extended reality device enters human eyes through an optical lens. However, since the light may be distorted after passing through the optical lens in the extended reality device, an image viewed by the user through the extended reality device may be a distorted image. To enable the user to view a normal image, before the extended reality device is used, distortion correction needs to be performed based on a distortion coefficient of the extended reality device. However, there is currently no unified, standard, and automatic distortion coefficient calibration method.
According to aspects described herein, a distortion coefficient calibration method and apparatus for an extended reality device, a computer device, a storage medium, and/or a computer program product may be provided.
A distortion coefficient calibration method for an extended reality device may be provided, including:
A distortion coefficient calibration apparatus for an extended reality device may be provided, including:
A non-transitory computer-readable storage medium may be provided, having a computer program stored therein, the computer program being executed by a processor to implement the operations in any distortion coefficient calibration method for an extended reality device provided in the embodiments of this application.
A computer program product or a computer program may be provided, including computer instructions stored in a computer-readable storage medium. A processor of a computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device executes the distortion coefficient calibration method for an extended reality device provided in the above optional embodiments.
Details of one or more aspects described herein are provided in the accompanying drawings and descriptions below. Other features and advantages of this application become apparent with reference to the specification, the accompanying drawings, and the claims.
To describe the technical solutions or the traditional technology more clearly, the following briefly describes the accompanying drawings required for describing the aspects. Apparently, the accompanying drawings in the following descriptions show merely illustrative aspects, and a person of ordinary skill in the art may still derive other accompanying drawings from the disclosed accompanying drawings without creative efforts.
The technical solutions will be described below clearly and comprehensively in conjunction with accompanying drawings of the embodiments of this application.
“Multiple” mentioned in the specification means two or more. “And/or” describes an association relationship between associated objects, and represents that there may be three relationships. For example, A and/or B may represent three cases: only A exists, both A and B exist, and only B exists. The character “/” generally indicates an “or” relationship between the associated objects.
An instruction sequence of a computer program may include various branch instructions, such as conditional jump instructions. The branch instruction may be an instruction in a computer program and allows a computer to execute different instruction sequences, thereby deviating from its default behavior of executing instructions in sequence.
A distortion coefficient calibration method for an extended reality device may be applied to an application environment shown in
The terminal 102 may be but is not limited to various desktop computers, laptops, smartphones, tablets, IoT devices, and portable wearable devices. The IoT devices may include smart speakers, smart televisions, smart air conditioners, smart in-vehicle devices, or the like. The portable wearable devices may be extended reality devices, smart watches, smart bracelets, finger rings, handles, head-mounted devices, and the like. The server 104 may be implemented by using an independent server or a server cluster that includes multiple servers.
Aspects described herein relate to XR technologies. For example, aspects described herein relate to calibration of a distortion coefficient in an XR device. The XR technology, that is, the extended reality technology, refers to the integration of reality and virtuality through a computer technology and a wearable device to create a virtual environment for human-computer interaction. It includes the technical characteristics of virtual reality (VR), augmented reality (AR), and mixed reality (MR), and brings experiencers an immersive feeling of seamless transition between the virtual world and the real world. The VR technology refers to the use of devices such as computers to generate a vivid virtual world with multiple sensory experiences such as three-dimensional vision, touch, and smell, so that people may have an immersive feeling in the virtual world. It is mostly used in game and entertainment scenes, in products such as VR glasses, VR displays, and VR all-in-one machines. The AR technology is a technology that superimposes virtual information onto the real world or even transcends reality, and to some extent may be regarded as an extension of the VR technology. In comparison, AR device products have the characteristics of small size, light weight, and portability. The MR technology is a further development of the VR and AR technologies, and builds a closed loop of communication between users by presenting virtual scenes in real scenes, greatly enhancing user experience.
The XR technology may include the characteristics of the above three technologies and have bright application prospects, and may be used in remote teaching scenes of scientific and experimental courses in education and training, or immersive entertainment scenes in film and television entertainment, such as immersive movie viewing and games, or exhibition event scenes such as concerts, plays, and museums, or 3D home decoration and architectural design scenes in industrial modeling and design, or new consumption scenes, such as cloud shopping and cloud fitting.
The “first”, the “second”, and similar terms used in this application do not indicate any order, number or significance, but may be used to only distinguish different components. Unless the context clearly indicates otherwise, singular forms such as “one”, “a”, or “the” do not indicate a number limit, but rather indicate the existence of at least one. Numbers such as “multiple” or “multiple copies” mentioned in various aspects of this application all refer to the number of “at least two”. For example, “multiple” refers to “at least two”, and “multiple copies” refers to “at least two copies”.
In an example, as shown in
Operation 202: Obtain a standard calibration image and a distortion calibration image, the distortion calibration image being formed by acquiring an image of a screen (e.g., a screenshot) through an optical lens of an extended reality device when a display of the extended reality device displays the standard calibration image as the screen.
The standard calibration image refers to a standard image used for calibration. For example, since corner points of a chessboard grid image are evenly distributed and can capture more distortion trends, the standard calibration image may specifically be a standard chessboard grid image, that is, a chessboard grid image with evenly distributed corner points. A corner point is an extreme point; for example, the point at which two straight lines meet to form an angle may be referred to as a corner point. The distortion calibration image refers to an image obtained after distortion of the standard calibration image. Distortion refers to an abnormal change, for example, a morphological change. For example, referring to
Specifically, when a distortion coefficient of an extended reality device needs to be determined, a computer device may obtain a standard calibration image and a distortion calibration image. Extended reality (XR) is the collective name for various new immersive technologies such as virtual reality (VR), augmented reality (AR), and mixed reality (MR), and an extended reality device is the collective name for a virtual reality device, an augmented reality device, and a mixed reality device. Theoretically, an extended reality technology may be regarded as a computer system that can create a virtual world or combine a real world with a virtual world. By creating a virtual world or combining a real world with a virtual world, users can experience highly immersive content. The distortion coefficient is a coefficient that characterizes the distortion produced by the optical lens in an extended reality device.
In an aspect, the distortion calibration image may be acquired by an image acquisition device; and in a process in which the image acquisition device acquires the distortion calibration image, the optical lens of the extended reality device may be located between the image acquisition device and the display of the extended reality device, and the optical center of the optical lens may be aligned with the center of the display.
Specifically, the distortion calibration image may be acquired by an image acquisition device. In a process of acquiring the distortion calibration image, the standard calibration image may be inputted to the extended reality device, and the standard calibration image may be displayed on an extended reality display, so that the image acquisition device may acquire the standard calibration image through the optical lens of the extended reality device, to obtain the distortion calibration image.
For example, referring to
In an aspect, the optical lens in the extended reality device may specifically be an XR optical lens with an ultra-short focal length and a complex folded optical path. This optical lens may be also referred to as an optical pancake lens. This optical lens may be an ultra-thin XR lens that folds an optical path through an optical element. By equipping the extended reality device with an optical pancake lens, the thickness of the extended reality device may be greatly reduced.
In an aspect, the pancake lens may be also referred to as a foldable light path lens, and adopt a “foldable” light path structure, which ensures the amplification of virtual images while shortening a straight-line distance from a screen to human eyes. The pancake lens may include a semi-reflective and semi-transparent lens, a phase retarder, and a reflective polarizer. In the pancake lens, after light emitted by the display in the XR device enters the semi-reflective and semi-transparent lens, the light may refract multiple times between the lens, the phase retarder, and the reflective polarizer, and may be finally emitted from the reflective polarizer. Through this solution, a volume of an optical part can be reduced, thereby reducing a volume of the entire XR device and improving wearing comfort.
In an aspect, the distance between the image acquisition device and the optical lens may be set with reference to the distance between human eyes and the optical lens when a user uses the extended reality device. For example, the distance between the image acquisition device and the optical lens may be the same as the distance between human eyes and the optical lens.
In an aspect, the center of the image acquisition device, the optical center of the optical lens, and the center of the display may be aligned. For example, the center of the image acquisition device, the optical center of the optical lens, and the center of the display may be located on a horizontal line.
In the above aspect, simply by displaying the standard calibration image on the display, the image acquisition device can conveniently and quickly acquire the distortion calibration image through the optical lens, thereby improving the acquisition efficiency of the distortion calibration image and improving the distortion coefficient calibration efficiency.
Operation 204: Perform calibration point detection based on the standard calibration image and the distortion calibration image to obtain multiple calibration point pairs, each of the calibration point pairs including a first calibration point belonging to the standard calibration image and a second calibration point belonging to the distortion calibration image, and the second calibration point being a corresponding calibration point obtained by acquiring the first calibration point through the optical lens of the extended reality device in a process of obtaining the distortion calibration image.
A calibration point may be a position point with a calibration characteristic in a calibration image. When the standard calibration image and the distortion calibration image are chessboard grid images, the calibration points may be corner points in the chessboard grid.
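As a concrete illustration of such a calibration image, a synthetic chessboard grid can be generated in a few lines. The sketch below is an illustrative assumption (the function name, grid dimensions, and square size are not part of the described method):

```python
def make_checkerboard(rows, cols, square, white=255, black=0):
    """Generate a chessboard grid image as a 2-D list of grayscale
    values.  Corner points (calibration points) lie where four squares
    meet, evenly spaced every `square` pixels."""
    height, width = rows * square, cols * square
    return [[white if ((x // square + y // square) % 2 == 0) else black
             for x in range(width)] for y in range(height)]

# A 4x5 board of 10-pixel squares has (4-1)*(5-1) = 12 interior corner points.
board = make_checkerboard(rows=4, cols=5, square=10)
```

The even spacing of the interior corner points is what makes this pattern suitable for sampling distortion trends across the whole field of view.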
Specifically, when the standard calibration image and the distortion calibration image are obtained, the computer device may perform calibration point detection on the standard calibration image and the distortion calibration image, to obtain multiple first calibration points in the standard calibration image and second calibration points corresponding to the first calibration points in the distortion calibration image, thereby obtaining multiple calibration point pairs. For example, referring to
In an aspect, when the calibration point is a corner point, the computer device may perform calibration point detection on the standard calibration image and the distortion calibration image according to an OpenCV (a cross-platform computer vision and machine learning software library) corner point detection method, to obtain multiple calibration point pairs.
In an aspect, for each pixel block in the standard calibration image, the computer device may determine whether the pixel block includes a calibration point characteristic. If the pixel block includes a calibration point characteristic, the pixel block may be determined as the first calibration point. The computer device may determine whether the pixel block includes a calibration point characteristic through a pre-trained machine learning model. Correspondingly, the computer device may also determine the second calibration point through the above method. The pixel block may include one or more pixels.
In an aspect, a coordinate system may be established with the center of the standard calibration image as the origin, thereby determining coordinates of the first calibration point in the standard calibration image in the coordinate system, to obtain the first calibration point coordinates. Correspondingly, a coordinate system may be established with the center of the distortion calibration image as the origin, thereby determining coordinates of the second calibration point in the distortion calibration image in the coordinate system, to obtain the second calibration point coordinates.
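The center-origin coordinate convention above can be sketched as follows; the helper name and the y-up orientation are illustrative assumptions, not part of the described method:

```python
def to_center_coords(px, py, width, height):
    """Convert pixel coordinates (origin at the top-left corner) into
    coordinates whose origin is the image center, with y pointing up."""
    cx, cy = width / 2.0, height / 2.0
    return px - cx, cy - py

# The exact center of a 640x480 image maps to the origin.
center = to_center_coords(320, 240, 640, 480)
```

The same helper applies to both images: first calibration point coordinates are taken relative to the center of the standard calibration image, and second calibration point coordinates relative to the center of the distortion calibration image.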
In an aspect, before performing calibration point detection on the standard calibration image and the distortion calibration image, sizes of the standard calibration image and the distortion calibration image may be adjusted so that the sizes of the standard calibration image and the distortion calibration image are unified.
Operation 206: Obtain a to-be-fitted distortion relationship, the to-be-fitted distortion relationship including a to-be-determined distortion coefficient.
Specifically, the computer device may obtain a preset to-be-fitted distortion relationship, the to-be-fitted distortion relationship may be configured for obtaining a distortion relationship through fitting, and the distortion relationship may be configured for representing a conversion relationship between calibration points in the standard calibration image and the distortion calibration image. The distortion coefficient in the to-be-fitted distortion relationship may be to be determined.
In an aspect, taking the classic Brown model as an example, the to-be-fitted distortion relationship may be determined by the following formula:
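One common radial form of the Brown model, stated here as an illustrative assumption consistent with the symbol definitions below, is:

ru = rd(1 + c1·rd^2 + c2·rd^4)
xu = xd(1 + c1·rd^2 + c2·rd^4)
yu = yd(1 + c1·rd^2 + c2·rd^4)

where rd = sqrt(xd^2 + yd^2).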
ru indicates an undistorted distance; and rd indicates a distorted distance. The undistorted distance may be a distance between the first calibration point coordinates and the center point of the standard calibration image. The distorted distance may be a distance between the second calibration point coordinates and the center point of the distortion calibration image. (xu, yu) indicates the first calibration point coordinates. The first calibration point coordinates may be coordinates determined by establishing a coordinate system with the center point of the standard calibration image as the origin. (xd, yd) indicates the second calibration point coordinates. The second calibration point coordinates may be coordinates determined by establishing a coordinate system with the center point of the distortion calibration image as the origin. c1 and c2 are to-be-determined distortion coefficients. The first calibration point coordinates may be coordinates of the first calibration point in the standard calibration image. The second calibration point coordinates may be coordinates of the second calibration point in the distortion calibration image.
Operation 208: Perform numerical fitting on the to-be-fitted distortion relationship according to the multiple calibration point pairs, to determine a value of the distortion coefficient in the to-be-fitted distortion relationship to obtain a distortion relationship, the distortion relationship being configured for representing a conversion relationship between calibration points in the standard calibration image and the distortion calibration image.
Numerical fitting, also referred to as curve fitting, refers to a process of obtaining a continuous function (that is, a curve) based on several pieces of discrete data. The obtained continuous function is consistent with the inputted pieces of discrete data. Based on the conversion relationship between the calibration points in the standard calibration image and the distortion calibration image, calibration points in one of the images may be converted into the corresponding calibration points in the other. For example, the distortion relationship may specifically be a function. After coordinates of a calibration point A in the distortion calibration image are inputted to the function, the function may output coordinates of a calibration point B in the standard calibration image corresponding to the calibration point A.
Specifically, when multiple calibration point pairs are obtained, the computer device may determine coordinate pairs respectively corresponding to the calibration point pairs. Each coordinate pair includes coordinates of a first calibration point and coordinates of a second calibration point that belong to the same calibration point pair. For example, when the calibration point pair A includes a first calibration point a and a second calibration point b, a coordinate pair A corresponding to the calibration point pair A may include coordinates of the first calibration point a and coordinates of the second calibration point b. According to the coordinate pairs corresponding to the multiple calibration point pairs, the value of the to-be-determined distortion coefficient in the to-be-fitted distortion relationship may be determined. For example, values of c1 and c2 in the above formula may be determined. The fitted value of the distortion coefficient is the distortion coefficient value used when the optical lens of the extended reality device produces distortion. At this point, the distortion coefficient used when the extended reality device produces distortion has been calibrated. After the distortion coefficient is calibrated, subsequent distortion correction, depth estimation, spatial positioning, and the like may be performed based on the calibrated distortion coefficient.
In an aspect, the computer device may perform numerical fitting on the to-be-fitted distortion relationship through a least squares method, a gradient descent learning method, a trust region algorithm, a Gauss-Newton iteration method, or the like based on multiple calibration point pairs, to obtain the value of the distortion coefficient in the to-be-fitted distortion relationship.
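Because a Brown-style radial model is linear in c1 and c2, the least-squares fit reduces to a small linear system. The following pure-Python sketch is an illustrative assumption (normal equations on (rd, ru) distance pairs), not the exact routine used by the computer device:

```python
def fit_distortion_coeffs(pairs):
    """Fit c1 and c2 in ru = rd * (1 + c1*rd**2 + c2*rd**4) by linear
    least squares.  `pairs` holds (rd, ru) distances for the calibration
    point pairs.  Since the model is linear in c1 and c2, the 2x2
    normal equations can be solved directly."""
    s33 = s35 = s55 = b3 = b5 = 0.0
    for rd, ru in pairs:
        a1, a2 = rd ** 3, rd ** 5   # basis terms multiplying c1 and c2
        y = ru - rd                 # part of ru explained by distortion
        s33 += a1 * a1
        s35 += a1 * a2
        s55 += a2 * a2
        b3 += a1 * y
        b5 += a2 * y
    det = s33 * s55 - s35 * s35
    c1 = (b3 * s55 - b5 * s35) / det
    c2 = (s33 * b5 - s35 * b3) / det
    return c1, c2

# Synthetic check: build distance pairs from known coefficients and recover them.
true_c1, true_c2 = 0.12, -0.03
sample = [(rd, rd * (1 + true_c1 * rd ** 2 + true_c2 * rd ** 4))
          for rd in (0.2, 0.4, 0.6, 0.8, 1.0)]
c1, c2 = fit_distortion_coeffs(sample)
```

For models that are nonlinear in the coefficients, the iterative methods mentioned above (gradient descent, trust region, Gauss-Newton) apply instead of a direct linear solve.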
In the above distortion coefficient calibration method for an extended reality device, by obtaining the standard calibration image and the distortion calibration image after distortion of the standard calibration image, calibration point detection may be performed on the standard calibration image and the distortion calibration image, thereby obtaining a calibration point pair including the first calibration point before distortion and the second calibration point after distortion. In this way, based on multiple calibration point pairs, numerical fitting may be performed on the to-be-fitted distortion relationship to obtain the value of the distortion coefficient of the to-be-fitted distortion relationship, that is, obtain the distortion relationship that represents the conversion relationship between the calibration points in the standard calibration image and the distortion calibration image, thereby achieving the purpose of automatic calibration of the distortion coefficient. Since the method described herein only requires one standard calibration image to achieve distortion coefficient calibration of the extended reality device, the process of distortion coefficient calibration may be greatly simplified and the distortion coefficient calibration efficiency may be improved.
In an aspect, the performing calibration point detection based on the standard calibration image and the distortion calibration image to obtain multiple calibration point pairs includes: performing calibration point detection on the standard calibration image to obtain multiple first calibration points, and determining coordinates of each of the first calibration points in the standard calibration image, to obtain first calibration point coordinates respectively corresponding to the multiple first calibration points; performing calibration point detection on the distortion calibration image to obtain multiple second calibration points, and determining coordinates of each of the second calibration points in the distortion calibration image, to obtain second calibration point coordinates respectively corresponding to the multiple second calibration points; determining a positional relationship between the first calibration points according to the first calibration point coordinates respectively corresponding to the multiple first calibration points; determining a positional relationship between the second calibration points according to the second calibration point coordinates respectively corresponding to the multiple second calibration points; and performing matching between the multiple first calibration points and the multiple second calibration points according to the positional relationship between the first calibration points and the positional relationship between the second calibration points, to obtain the multiple calibration point pairs.
Specifically, the computer device may perform calibration point detection on the standard calibration image to identify each first calibration point in the standard calibration image, and output first calibration point coordinates corresponding to each first calibration point. The first calibration point may refer to a pixel or a pixel block in the standard calibration image, and correspondingly, the first calibration point coordinates may refer to position coordinates of the pixel or the pixel block in the standard calibration image. For example, referring to
Further, the computer device determines the positional relationship between the first calibration points according to the abscissa and ordinate values of the first calibration point coordinates. Since there is a one-to-one correspondence between the first calibration points and the first calibration point coordinates, the positional relationship between the first calibration points is also the positional relationship between the first calibration point coordinates. For example, when first calibration point coordinates of a first calibration point A are (1, 1), first calibration point coordinates of a first calibration point B are (1, 0), and first calibration point coordinates of a first calibration point C are (0, 1), it may be considered that the first calibration point A is located on the right of the first calibration point C and above the first calibration point B. Further, the computer device sorts the first calibration points according to the positional relationship between the first calibration points to obtain a first calibration point matrix. For example, in the first calibration point matrix, the mark A of the first calibration point A is located on the right of the mark C of the first calibration point C and above the mark B of the first calibration point B. Correspondingly, the computer device may determine the positional relationship between the second calibration points in the above manner. Since there is a one-to-one correspondence between the second calibration points and the second calibration point coordinates, the positional relationship between the second calibration points is also the positional relationship between the second calibration point coordinates, and the second calibration points are sorted according to the positional relationship between the second calibration points to obtain a second calibration point matrix.
Further, the computer device uses the first calibration point and the second calibration point located at the same position in the first calibration point matrix and the second calibration point matrix as a calibration point pair, and uses first calibration point coordinates of the first calibration point and second calibration point coordinates of the second calibration point in a calibration point pair as a coordinate pair. For example, when it is determined that the first calibration point A is located at the intersection of the first row and the first column in the first calibration point matrix, and the second calibration point D is located at the intersection of the first row and the first column in the second calibration point matrix, it may be determined that first calibration point coordinates corresponding to the first calibration point A and second calibration point coordinates corresponding to the second calibration point D are a coordinate pair of the calibration points.
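The matrix sorting and position-based pairing described above can be sketched as follows; the grid dimensions, the (x, y) tuple format, and the row-splitting strategy are illustrative assumptions:

```python
def sort_into_matrix(points, rows, cols):
    """Arrange detected calibration points into a rows x cols matrix by
    position: sort by y to split into rows, then sort each row by x.
    Relies on distortion not reordering the points, as noted above."""
    pts = sorted(points, key=lambda p: p[1])          # top to bottom
    return [sorted(pts[r * cols:(r + 1) * cols])      # left to right
            for r in range(rows)]

def match_pairs(standard_pts, distorted_pts, rows, cols):
    """Pair up the calibration points occupying the same matrix cell."""
    m1 = sort_into_matrix(standard_pts, rows, cols)
    m2 = sort_into_matrix(distorted_pts, rows, cols)
    return [(m1[r][c], m2[r][c]) for r in range(rows) for c in range(cols)]

# 2x2 grid example: the distorted points are shifted but keep their ordering.
std = [(0, 0), (1, 0), (0, 1), (1, 1)]
dst = [(0.1, 0.1), (0.9, 0.1), (0.1, 0.9), (0.9, 0.9)]
pairs = match_pairs(std, dst, rows=2, cols=2)
```

Each resulting pair holds the first calibration point coordinates and the matching second calibration point coordinates, which is exactly the coordinate-pair form needed for the fitting operation.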
In this aspect, by performing calibration point detection, a calibration point pair that reflects the change before and after calibration point distortion may be obtained, so that the distortion coefficient of the extended reality device may subsequently be obtained based on the calibration point pair. In addition, since the positional relationship between the points in the image does not change after the image is distorted, by determining the positional relationship between the first calibration points and the positional relationship between the second calibration points, calibration points that correspond to each other before and after distortion can be accurately obtained based on the determined positional relationships.
In an aspect, the performing calibration point detection on the standard calibration image to obtain multiple first calibration points may include: obtaining a slide window and triggering the slide window to slide on the standard calibration image according to a preset moving step size, to obtain a standard partial image selected by the slide window; determining a first overall grayscale value of the standard partial image; triggering the slide window to move in any of multiple directions to obtain multiple standard partial images after movement corresponding to the current standard partial image; determining a second overall grayscale value of each of the standard partial images after movement; and extracting the first calibration point from the standard partial image in response to a difference between each second overall grayscale value and the first overall grayscale value being greater than or equal to a preset difference threshold.
Specifically, when the first calibration point needs to be identified, the computer device may generate a slide window and trigger the slide window to slide on the standard calibration image according to a preset moving step size. For example, the computer device may trigger the slide window to slide on the standard calibration image from left to right and from top to bottom. The image selected by the slide window may be referred to as a standard partial image. For each standard partial image selected by the slide window, the computer device may determine whether the standard partial image includes a calibration point characteristic, and if yes, determine that there is a calibration point in the slide window and use the calibration point as the first calibration point.
In an aspect, when the calibration point is a corner point, the corner point is an extreme point on an edge, and the characteristic of an edge is that the gradient suddenly changes in one direction. Therefore, a corner point indicates that an area has gradient information of edge changes in two or more directions. If a slide window is used to observe a corner point area, as the slide window slides in multiple directions, a strong change (a gradient) of pixel density is perceived, that is, a change in the overall grayscale value is perceived.
Based on the above principle, the standard partial image currently selected by the slide window may be referred to as the current standard partial image. When the current standard partial image is selected by the slide window, referring to
In an aspect, the computer device may determine grayscale values of pixels in the current standard partial image, and use an average of the grayscale values of the pixels in the current standard partial image as the first overall grayscale value. Correspondingly, the computer device may determine grayscale values of pixels in the standard partial image after movement, and use an average of the grayscale values of the pixels in the standard partial image after movement as the second overall grayscale value.
In the above aspect, the slide window may be triggered to slide in the standard calibration image according to the moving step size, so that the standard calibration image may be divided into multiple standard partial images. Therefore, the first calibration point in the standard calibration image may be accurately identified based on the multiple standard partial images obtained by the division. When the current standard partial image is selected, by triggering the slide window to slide in any direction, the current standard partial image after sliding may be obtained. In this way, based on the overall grayscale value of the current standard partial image before sliding and the overall grayscale value of the current standard partial image after sliding, it can be quickly and accurately determined whether the current standard partial image includes a corner point characteristic, and then the corresponding first calibration point may be determined based on the corner point characteristic.
In an aspect, the extracting the first calibration point from the standard partial image according to a difference between each second overall grayscale value and the first overall grayscale value includes: performing subtraction on each second overall grayscale value and the first overall grayscale value to obtain a grayscale difference corresponding to each second overall grayscale value; obtaining absolute values of the grayscale differences, and selecting absolute values greater than or equal to a preset difference threshold from the absolute values of the grayscale differences; and determining a number of the selected absolute values, and when the number is greater than or equal to a preset number threshold, using the center of the standard partial image as the first calibration point.
Specifically, when obtaining the second overall grayscale value of each standard partial image after movement, the computer device may perform subtraction on each second overall grayscale value and the first overall grayscale value, to obtain a grayscale difference corresponding to each second overall grayscale value. For example, the computer device may subtract the first overall grayscale value from a second overall grayscale value A to obtain a grayscale difference corresponding to the second overall grayscale value A, and subtract the first overall grayscale value from a second overall grayscale value B to obtain a grayscale difference corresponding to the second overall grayscale value B. Further, the computer device may obtain the absolute values of the grayscale differences to obtain multiple absolute values, determine target absolute values greater than or equal to the preset difference threshold, and count a number of the selected target absolute values. When the number of the selected target absolute values is greater than or equal to a preset number threshold, the center of the current standard partial image may be used as the first calibration point. For example, when the difference between each second overall grayscale value and the first overall grayscale value is greater than or equal to the preset difference threshold, the center of the current standard partial image may be used as the first calibration point.
In the above aspect, the second overall grayscale value is the overall grayscale value of the standard partial image after movement, and the standard partial image after movement is obtained by moving the standard partial image. Therefore, when the difference between each second overall grayscale value and the first overall grayscale value is greater than or equal to the preset difference threshold, it may be considered that the standard partial image contains a target area whose overall grayscale value is larger than the overall grayscale values of the areas around it. Because the grayscale value of a corner point area is greater than the grayscale value of the area around the corner point, such a target area may be considered to be an area in which a corner point is located, and the target area may therefore be used as the first calibration point. In this way, the determination of the first calibration point may be achieved.
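The sliding-window corner detection described above can be sketched as follows. This is an illustrative sketch only: the window size, step size, shift distance, and the two thresholds are arbitrary example values, and `detect_corners` is a hypothetical helper name, not a function from this application.

```python
import numpy as np

def detect_corners(image, win=8, step=4, shift=2,
                   diff_threshold=10.0, count_threshold=6):
    """Sliding-window corner detection sketch.

    For each window position, the mean grayscale value of the window
    (the first overall grayscale value) is compared with the mean
    grayscale values of the window shifted in eight directions (the
    second overall grayscale values).  When enough absolute grayscale
    differences reach the threshold, the window center is taken as a
    calibration point.
    """
    h, w = image.shape
    directions = [(-shift, 0), (shift, 0), (0, -shift), (0, shift),
                  (-shift, -shift), (-shift, shift),
                  (shift, -shift), (shift, shift)]
    points = []
    for y in range(shift, h - win - shift, step):
        for x in range(shift, w - win - shift, step):
            # first overall grayscale value of the current window
            first = image[y:y + win, x:x + win].mean()
            count = 0
            for dy, dx in directions:
                # second overall grayscale value after moving the window
                second = image[y + dy:y + dy + win,
                               x + dx:x + dx + win].mean()
                if abs(second - first) >= diff_threshold:
                    count += 1
            if count >= count_threshold:
                points.append((x + win // 2, y + win // 2))
    return points
```

A uniform image yields no detections, while an isolated bright patch (a corner-like area whose surroundings are darker) is reported near its center.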
In an aspect, the performing calibration point detection on the distortion calibration image to obtain multiple second calibration points may include: obtaining a slide window and triggering the slide window to slide on the distortion calibration image according to a preset moving step size, to obtain a distortion partial image selected by the slide window; determining a third overall grayscale value of the distortion partial image; triggering the slide window to move towards any direction multiple times to obtain multiple distortion partial images after movement corresponding to the distortion partial image; determining a fourth overall grayscale value of each of the distortion partial image after movement; and extracting the second calibration point from the distortion partial image according to a difference between each fourth overall grayscale value and the third overall grayscale value.
Specifically, when the second calibration point needs to be identified, the computer device may generate a slide window and trigger the slide window to slide on the distortion calibration image according to a preset moving step size, for example, from top to bottom and from left to right. For each distortion partial image selected by the slide window, the computer device may determine whether the distortion partial image includes a calibration point characteristic, and if yes, determine that there is a calibration point in the slide window and use the calibration point as the second calibration point. For determining whether the distortion partial image selected by the slide window includes a calibration point characteristic, refer to the above aspect; details are not described herein again. In an aspect, a size of the slide window generated for the distortion calibration image may be consistent with a size of the slide window generated for the standard calibration image, and a moving step size of the slide window for the distortion calibration image may be consistent with a moving step size of the slide window for the standard calibration image.
In an aspect, the computer device may also identify only some calibration points in the standard calibration image and the distortion calibration image, without identifying all calibration points. For example, only calibration points near center areas in the standard calibration image and the distortion calibration image may be identified.
In the above aspect, the slide window may be triggered to slide in the distortion calibration image according to the moving step size, so that the distortion calibration image may be divided into multiple distortion partial images. Therefore, the second calibration point in the distortion calibration image may be accurately identified based on the multiple distortion partial images obtained by the division.
In an aspect, the performing numerical fitting on the to-be-fitted distortion relationship according to the multiple calibration point pairs, to determine a target value of the distortion coefficient in the to-be-fitted distortion relationship includes: obtaining a preset distortion coefficient value increment model; where the distortion coefficient value increment model may be generated based on the distortion coefficient in the initial distortion relationship; determining a predicted value of the distortion coefficient of a current round; obtaining a predicted value increment of the distortion coefficient of the current round through the distortion coefficient value increment model according to the multiple calibration point pairs and the predicted value of the distortion coefficient of the current round; when the predicted value increment of the distortion coefficient does not meet the numerical convergence condition, adding the predicted value increment of the distortion coefficient of the current round and the predicted value of the distortion coefficient of the current round, to obtain an updated predicted value of the distortion coefficient; using a next round as the current round, using the updated predicted value as a predicted value of the distortion coefficient of the current round, and returning to continue to perform the operation of obtaining a predicted value increment of the distortion coefficient of the current round through the distortion coefficient value increment model according to the multiple calibration point pairs and the predicted value of the distortion coefficient of the current round, until the predicted value increment of the distortion coefficient meets the numerical convergence condition; and using a predicted value of the distortion coefficient of the last round as the value of the distortion coefficient in the to-be-fitted distortion relationship.
Specifically, the computer device may obtain a preset distortion coefficient value increment model, where the distortion coefficient value increment model may be configured for determining a change amount of the predicted value of the distortion coefficient in two consecutive iterations. The distortion coefficient value increment model may specifically be a function, or may be a change amount function model for predicting the change in the predicted value of the distortion coefficient in two consecutive iterations. The computer device may determine the predicted value of the distortion coefficient of the current round, input the predicted value of the distortion coefficient of the current round and the multiple calibration point pairs into the distortion coefficient value increment model, and output the predicted value increment of the distortion coefficient of the current round through the distortion coefficient value increment model. The computer device may determine whether the predicted value increment of the distortion coefficient of the current round meets a preset numerical convergence condition. If the preset numerical convergence condition is not met, the computer device may add the predicted value increment of the distortion coefficient and the predicted value of the distortion coefficient of the current round, to obtain an updated predicted value of the distortion coefficient.
Then, a next round of iteration may be entered, the obtained updated predicted value of the distortion coefficient may be used as the predicted value of the distortion coefficient of the current round, and the distortion coefficient value incremental model may continue to be triggered to output the predicted value increment of the distortion coefficient of the current round based on multiple calibration point pairs and the predicted value of the distortion coefficient of the current round, until the outputted predicted value increment of the distortion coefficient of the current round meets the numerical convergence condition.
When the predicted value increment of the distortion coefficient of the current round meets the numerical convergence condition, the computer device may use the predicted value of the distortion coefficient of the current round as the value of the distortion coefficient in the distortion relationship. The value of the distortion coefficient may be the value of the distortion coefficient used when the extended reality device produces distortion.
In an aspect, the computer device may obtain a preset convergence value. For example, the convergence value may be 1e−8, and the predicted value increment of the distortion coefficient of the current round may be compared with the convergence value. If the predicted value increment of the distortion coefficient of the current round is less than or equal to the convergence value, it may be determined that the predicted value increment of the distortion coefficient of the current round meets the numerical convergence condition; otherwise, it may be determined that the numerical convergence condition is not met.
In an aspect, the preset value may be used as the predicted value of the distortion coefficient of the first round.
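The iterative update loop described above can be sketched as follows. This is an illustrative sketch: `increment_fn` stands in for the distortion coefficient value increment model (whose concrete form the text develops separately), the function name is hypothetical, and the convergence value follows the 1e−8 example given above.

```python
import numpy as np

def fit_distortion_coefficients(pairs, increment_fn, c0=(0.0, 0.0),
                                eps=1e-8, max_rounds=100):
    """Iterative fitting loop for the distortion coefficient.

    Each round, the increment model produces a predicted value
    increment for the current predicted value; the increment is added
    to the predicted value until the increment meets the numerical
    convergence condition (its norm is at most `eps`).
    """
    c = np.asarray(c0, dtype=float)   # predicted value of the first round
    for _ in range(max_rounds):
        delta = np.asarray(increment_fn(pairs, c), dtype=float)
        if np.linalg.norm(delta) <= eps:
            break                     # convergence condition met
        c = c + delta                 # updated predicted value
    return c
```

With a toy increment model that moves the estimate halfway toward a fixed target each round, the loop converges to that target.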
In an aspect, the distortion coefficient value increment model may be determined based on a residual model; the residual model may be determined based on the to-be-fitted distortion relationship; the residual model may represent a residual between a first coordinate change and a second coordinate change; the first coordinate change may be a change between a coordinate before distortion and a coordinate after distortion determined based on the predicted value of the distortion coefficient; and the second coordinate change may be a change between a coordinate before distortion and a coordinate after distortion determined based on an actual value of the distortion coefficient.
Specifically, the corresponding residual model may be determined first based on the to-be-fitted distortion relationship. The residual model may be configured for representing residual between the change between the coordinate before distortion and the coordinate after distortion determined based on the predicted value of the distortion coefficient and the change between the coordinate before distortion and the coordinate after distortion determined based on the actual value of the distortion coefficient. Then, the difference between the predicted value and the actual value of the distortion coefficient in the distortion relationship may be determined based on the residual between the change between the coordinate before distortion and the coordinate after distortion determined based on the predicted value of the distortion coefficient and the change between the coordinate before distortion and the coordinate after distortion determined based on the actual value of the distortion coefficient. The predicted value of the distortion coefficient may be a fitted value obtained through numerical fitting. The actual value of the distortion coefficient may be a target of numerical fitting.
Further, to achieve a minimum value of least squares optimization for the residual model, Taylor expansion may be performed on the residual model to obtain the distortion coefficient value increment model. In an aspect, to achieve a minimum value of least squares optimization for the residual model, a partial derivative of the residual model in the direction of the distortion coefficient may be determined first to obtain the Jacobian matrix model. Then, through Taylor expansion, a Gauss-Newton condition, and a first-order derivative being zero, the distortion coefficient value increment model may be obtained.
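The steps just described correspond to a standard Gauss-Newton step. A compact sketch of the derivation, using the symbols J, r, c1, and c2 already introduced in the text (the concrete form of the residual model is not reproduced here), is:

```latex
\begin{aligned}
&\min_{c}\ \tfrac{1}{2}\,\lVert r(c)\rVert^{2},
  \qquad c = (c_1, c_2)^{\mathsf T}\\
&r(c + \Delta c) \approx r(c) + J(c)\,\Delta c
  \qquad \text{(first-order Taylor expansion)}\\
&\frac{\partial}{\partial (\Delta c)}\,
  \tfrac{1}{2}\,\lVert r(c) + J(c)\,\Delta c \rVert^{2}
  = J(c)^{\mathsf T}\bigl(r(c) + J(c)\,\Delta c\bigr) = 0\\
&\Longrightarrow\quad
  J(c)^{\mathsf T} J(c)\,\Delta c = -\,J(c)^{\mathsf T} r(c)
\end{aligned}
```

Setting the first-order derivative to zero yields the normal equations, whose left-hand factor J^T J plays the role of a Hessian matrix and whose right-hand side −J^T r drives the iteration.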
In an aspect, the distortion coefficient value increment model may be determined by the following formula:
In the above aspect, when the predicted value increment of the distortion coefficient converges close to zero, even if the iteration continues, the fitted predicted value of the distortion coefficient remains almost unchanged. At the same time, the coordinate residual of each iteration in this system is generated by the distortion coefficient of that iteration. Therefore, in this system, the residual model may be used as a relational expression (F and G) to optimize a partial derivative in the direction of the distortion coefficient (c1 and c2). When the iterative residual of the distortion coefficient converges to a small range, it means that the residual model may have reached the optimized target minimum value, that is, that the distortion coefficient value corresponding to the distortion relationship has been obtained through fitting.
In an aspect, obtaining a predicted value increment of the distortion coefficient of the current round through the distortion coefficient value increment model according to the multiple calibration point pairs and the predicted value of the distortion coefficient of the current round may include: obtaining a Hessian matrix of the current round through a Hessian matrix model in the distortion coefficient value increment model according to the multiple calibration point pairs; obtaining an iteration matrix of the current round through an iteration matrix model in the distortion coefficient value increment model according to the multiple calibration point pairs and the predicted value of the distortion coefficient of the current round; and fusing the Hessian matrix of the current round and the iteration matrix of the current round to obtain the predicted value increment of the distortion coefficient of the current round.
Specifically, the distortion coefficient value increment model includes a Hessian matrix model and an iteration matrix model. The Hessian matrix model may be a model for outputting a Hessian matrix, for example, a function model. The iteration matrix model may be a model for outputting an iteration matrix, for example, a function model. In the current round, the computer device may obtain the Hessian matrix of the current round based on the multiple calibration point pairs and the Hessian matrix model, and output the iteration matrix of the current round based on the multiple calibration point pairs, the predicted value of the distortion coefficient of the current round, and the iteration matrix model. The computer device may fuse the iteration matrix of the current round and the Hessian matrix, for example, multiply the negative first power of the Hessian matrix with the iteration matrix, to obtain the predicted value increment of the distortion coefficient of the current round.
In an aspect, the Hessian matrix describes a local curvature of a function. When the distortion coefficient value increment model is
may be the Hessian matrix model, and (−J(c1,c2)Tr(c1,c2)) may be the iteration matrix model. Therefore, coordinate pairs corresponding to the multiple calibration point pairs may be inputted into (J(c1,c2)TJ(c1,c2)) to obtain a specific Hessian matrix, the coordinate pairs corresponding to the multiple calibration point pairs and the predicted value of the distortion coefficient of the current round may be inputted into the formula (−J(c1,c2)Tr(c1,c2)) to obtain a specific iteration matrix, and the negative first power of the Hessian matrix may be multiplied with the iteration matrix to obtain the predicted value increment of the distortion coefficient in the current round.
In the above aspect, by determining the Hessian matrix and the iteration matrix, a change amount between predicted values of the distortion coefficient in two consecutive iterations may be predicted based on the Hessian matrix and the iteration matrix, so that the value of the distortion coefficient may be subsequently determined based on the change amount between the predicted values of the distortion coefficient in two consecutive iterations.
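The fusion described above can be sketched numerically as follows. This is an illustrative sketch assuming numpy, with the per-round Jacobian J and residual r already stacked over all calibration point pairs; a linear solve replaces the explicit negative first power of the Hessian matrix, which computes the same increment more stably. The function name is hypothetical.

```python
import numpy as np

def increment_from_model(J, r):
    """Predicted value increment of one round.

    The Hessian matrix model gives H = J^T J, the iteration matrix
    model gives g = -J^T r, and fusing the two (H^{-1} g, here via a
    linear solve) yields the increment.
    """
    H = J.T @ J              # Hessian matrix of the current round
    g = -J.T @ r             # iteration matrix of the current round
    return np.linalg.solve(H, g)
```

For a diagonal Jacobian this reduces to dividing each component of −J^T r by the corresponding diagonal entry of J^T J.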
In an aspect, obtaining a Hessian matrix of the current round through a Hessian matrix model in the distortion coefficient value increment model according to the multiple calibration point pairs may include: for each target calibration point pair of the multiple calibration point pairs, obtaining, through a Jacobian matrix model in the Hessian matrix model based on a coordinate pair corresponding to the target calibration point pair, a Jacobian matrix corresponding to the target calibration point pair; fusing the Jacobian matrix and the transpose of the Jacobian matrix to obtain a fused Jacobian matrix corresponding to the target calibration point pair; and superimposing fused Jacobian matrices respectively corresponding to the multiple calibration point pairs, to obtain the Hessian matrix of the current round.
Specifically, the Hessian matrix model may include the Jacobian matrix model. When the Hessian matrix of the current round needs to be determined, the computer device may input the coordinate pair corresponding to each calibration point pair into the Jacobian matrix model in the Hessian matrix model, to obtain a Jacobian matrix corresponding to each calibration point pair. Further, for each of the multiple calibration point pairs, the computer device may determine the transpose of the Jacobian matrix corresponding to the target calibration point pair, and fuse the Jacobian matrix and the transpose of the Jacobian matrix corresponding to the same calibration point pair, to obtain the fused Jacobian matrix corresponding to each calibration point pair. For example, the Jacobian matrix corresponding to the calibration point pair A may be multiplied with the transpose of that Jacobian matrix to obtain the fused Jacobian matrix corresponding to the calibration point pair A. When the fused Jacobian matrices corresponding to the calibration point pairs are obtained, the computer device may superimpose the fused Jacobian matrices to obtain the Hessian matrix of the current round.
In an aspect, the computer device may set an initial value of the Hessian matrix and traverse calibration point pairs. For the first traversed calibration point pair, the computer device may determine a Jacobian matrix of the first traversed calibration point pair, and multiply the Jacobian matrix with the transpose of the Jacobian matrix to obtain a fused Jacobian matrix of the first traversed calibration point pair. The computer device may superimpose the initial value of the Hessian matrix and the fused Jacobian matrix to obtain a superimposed Hessian matrix of the first traversed calibration point pair. The computer device may continue to determine a fused Jacobian matrix of a next traversed calibration point pair, and superimpose the fused Jacobian matrix of the next traversed calibration point pair and the superimposed Hessian matrix of the first traversed calibration point pair, to obtain a superimposed Hessian matrix of the next traversed calibration point pair. Iteration may be performed in this way until the last calibration point pair is traversed, and a superimposed Hessian matrix of the last traversed calibration point pair may be used as the Hessian matrix of the current round.
In an aspect, in the process of calculating the Jacobian matrix corresponding to the ith calibration point pair, assuming that the coordinate pair corresponding to the ith calibration point pair is [(xui, yui), (xdi, ydi)], the computer device may determine an undistorted distance rui corresponding to the first calibration point coordinates (xui, yui), determine a distorted distance rdi corresponding to the second calibration point coordinates (xdi, ydi), and input (xui, yui), rui, (xdi, ydi), and rdi into the Jacobian matrix model
In the above aspect, by determining the fused Jacobian matrices of all the calibration point pairs, the Hessian matrix of the current round can be quickly obtained based on the fused Jacobian matrices of all the calibration point pairs.
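The traverse-and-superimpose procedure above can be sketched as follows. The helper name is illustrative, and `jacobians` is assumed to hold the per-pair Jacobian matrices already produced by the Jacobian matrix model.

```python
import numpy as np

def hessian_of_round(jacobians):
    """Accumulate the Hessian matrix of the current round.

    Starting from an initial (zero) Hessian, the fused Jacobian
    matrix J_i^T J_i of each traversed calibration point pair is
    superimposed onto the running sum.
    """
    n = jacobians[0].shape[1]
    H = np.zeros((n, n))          # initial value of the Hessian matrix
    for J_i in jacobians:
        H += J_i.T @ J_i          # fused Jacobian matrix of this pair
    return H
```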
In an aspect, the obtaining an iteration matrix of the current round through an iteration matrix model in the distortion coefficient value increment model according to the multiple calibration point pairs and the predicted value of the distortion coefficient of the current round may include: for each target calibration point pair of the multiple calibration point pairs, determining the Jacobian matrix corresponding to the target calibration point pair; obtaining, through a residual model in the iteration matrix model according to the target calibration point pair and the predicted value of the distortion coefficient of the current round, a residual matrix corresponding to the target calibration point pair; fusing the transpose of the Jacobian matrix corresponding to the target calibration point pair and the residual matrix corresponding to the target calibration point pair, to obtain a fused iteration matrix corresponding to the target calibration point pair; and superimposing fused iteration matrices respectively corresponding to the multiple calibration point pairs, to obtain the iteration matrix of the current round.
Specifically, the iteration matrix model may include a Jacobian matrix model and a residual model. When the iteration matrix of the current round needs to be determined, the computer device may input the coordinate pair of each calibration point pair and the predicted value of the distortion coefficient of the current round into the Jacobian matrix model in the iteration matrix model, to obtain a Jacobian matrix corresponding to each calibration point pair. The computer device may also reuse the Jacobian matrix corresponding to each calibration point pair that was generated when calculating the Hessian matrix.
Further, for each of the multiple calibration point pairs, the target calibration point pair may be referred to as the current calibration point pair, and the computer device may input a coordinate pair corresponding to the current calibration point pair and the predicted value of the distortion coefficient of the current round into the residual model in the iteration matrix model, to obtain the residual matrix corresponding to the current calibration point pair outputted by the residual model. The computer device may determine the transpose of the Jacobian matrix corresponding to the current calibration point pair, and multiply the transpose of the Jacobian matrix corresponding to the current calibration point pair with the residual matrix corresponding to the current calibration point pair, to obtain a fused iteration matrix corresponding to the current calibration point pair. When obtaining the fused iteration matrices corresponding to the calibration point pairs, the computer device may superimpose the fused iteration matrices corresponding to the calibration point pairs, to obtain the iteration matrix of the current round.
In an aspect, the computer device may set an initial value of the iteration matrix and traverse calibration point pairs. For the first traversed calibration point pair, the computer device may determine the Jacobian matrix and the residual matrix of the first traversed calibration point pair, and fuse the Jacobian matrix and the residual matrix of the first traversed calibration point pair, to obtain the fused iteration matrix of the first traversed calibration point pair. The computer device may superimpose the initial value of the iteration matrix and the fused iteration matrix to obtain a superimposed iteration matrix of the first traversed calibration point pair. The computer device may continue to determine a fused iteration matrix of a next traversed calibration point pair, and superimpose the fused iteration matrix of the next traversed calibration point pair and the superimposed iteration matrix of the first traversed calibration point pair, to obtain a superimposed iteration matrix of the next traversed calibration point pair. Iteration may be performed in this way until the last calibration point pair is traversed, and a superimposed iteration matrix of the last traversed calibration point pair may be used as the iteration matrix of the current round.
In an aspect, in the process of calculating the residual matrix corresponding to the ith coordinate pair, assuming that the ith coordinate pair is [(xui, yui), (xdi, ydi)], the computer device may determine an undistorted distance rui corresponding to the first calibration point coordinates (xui, yui), determine a distorted distance rdi corresponding to the second calibration point coordinates (xdi, ydi), and input (xui, yui), rui, (xdi, ydi), and rdi and the predicted value (c1j, c2j) of the distortion coefficient of the current round into the residual model
In the above aspect, by determining the fused iteration matrices of all the calibration point pairs, the iteration matrix of the current round can be quickly obtained based on the fused iteration matrices of all the calibration point pairs.
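Analogously, the accumulation of the iteration matrix above can be sketched as follows. The minus sign follows the iteration matrix model −J^T r given earlier; the helper name is illustrative, and `jacobians` and `residuals` are assumed to hold the per-pair Jacobian matrices and residual matrices.

```python
import numpy as np

def iteration_matrix_of_round(jacobians, residuals):
    """Accumulate the iteration matrix of the current round.

    For each calibration point pair, the transpose of its Jacobian
    matrix is fused with its residual matrix (-J_i^T r_i), and the
    per-pair fused iteration matrices are superimposed.
    """
    n = jacobians[0].shape[1]
    g = np.zeros(n)                # initial value of the iteration matrix
    for J_i, r_i in zip(jacobians, residuals):
        g += -J_i.T @ r_i          # fused iteration matrix of this pair
    return g
```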
In an aspect, the Hessian matrix model may be generated based on the Jacobian matrix model, and the Jacobian matrix model may represent a partial derivative of the residual model in a direction of the distortion coefficient; the iteration matrix model may be generated based on the Jacobian matrix model and the residual model; and the residual model may represent a residual between the change between the coordinate before distortion and the coordinate after distortion determined based on the predicted value of the distortion coefficient and the change between the coordinate before distortion and the coordinate after distortion determined based on the actual value of the distortion coefficient.
Specifically, in the process of numerical fitting, the purpose of numerical fitting may be that the difference between the predicted value of the distortion coefficient obtained through fitting and the actual value of the distortion coefficient is as close to 0 as possible. Based on this purpose, the residual model representing a difference between the change between the coordinate before distortion and the coordinate after distortion determined based on the predicted value of the distortion coefficient and the change between the coordinate before distortion and the coordinate after distortion determined based on the actual value of the distortion coefficient may be generated. In the process of numerical fitting, when the residual outputted by the residual model is close to zero, it may be considered that the predicted value of the distortion coefficient obtained through fitting at this time is approximate to the actual value of the distortion coefficient, and the fitting purpose is achieved. Therefore, the purpose of numerical fitting in this application may be equivalent to achieving a minimum value of least squares optimization for the residual. In the process of solving the minimum value of least squares optimization for the residual, the predicted value increment of the distortion coefficient in each iteration needs to be determined based on the Hessian matrix and the Jacobian matrix generated based on the residual model, so that convergence determining may be completed based on the predicted value increment of the distortion coefficient. When it is determined that the predicted value increment of the distortion coefficient converges, it may be determined that the residual outputted by the residual model is close to zero. In this case, the predicted value of the distortion coefficient obtained through iteration may be the target value.
In an aspect, the method may further include: obtaining the distortion relationship based on both the value of the distortion coefficient and the to-be-fitted distortion relationship; obtaining a to-be-displayed image, and performing anti-distortion processing on pixels in the to-be-displayed image according to the distortion relationship, to determine distortion correction positions respectively corresponding to the pixels; respectively moving the pixels in the to-be-displayed image to corresponding distortion correction positions, to obtain an anti-distortion image; and triggering the extended reality device to display the anti-distortion image.
Specifically, when the distortion coefficient of the extended reality device is calibrated, that is, when the distortion relationship determined based on the distortion coefficient value is obtained, the computer device may obtain the to-be-displayed image, and perform anti-distortion processing on the to-be-displayed image according to the distortion relationship to obtain an anti-distortion image. The computer device may input the anti-distortion image into the extended reality device and trigger the extended reality device to display the anti-distortion image, so that human eyes can view a normal undistorted image through the extended reality device. The computer device may also trigger the extended reality device to perform anti-distortion processing on the to-be-displayed image according to the calibrated distortion relationship to obtain an anti-distortion image.
In an aspect, the distortion relationship may be specifically a function. For corresponding pixel coordinates of each pixel in the to-be-displayed image, the computer device may substitute the pixel coordinates of the current pixel into an inverse function of the distortion relationship to obtain a distortion correction position of the current pixel outputted by the inverse function. The computer device may combine corresponding distortion correction positions of pixels to obtain an anti-distortion image.
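The inverse-function step above can be sketched for a single pixel. The concrete distortion relationship of this application is not reproduced here, so a polynomial radial model x_d = x_u(1 + c1·r² + c2·r⁴) is assumed purely for illustration, and its inverse is approximated by fixed-point iteration; both function names are hypothetical.

```python
import numpy as np

def distort(p, c1, c2):
    """Assumed forward distortion: scale a normalized point by the
    radial factor 1 + c1*r^2 + c2*r^4."""
    r2 = p[0] ** 2 + p[1] ** 2
    return p * (1.0 + c1 * r2 + c2 * r2 ** 2)

def undistort(pd, c1, c2, iters=50):
    """Approximate the inverse of the distortion relationship by
    fixed-point iteration: repeatedly divide the distorted point by
    the radial factor evaluated at the current undistorted estimate."""
    pu = pd.copy()
    for _ in range(iters):
        r2 = pu[0] ** 2 + pu[1] ** 2
        pu = pd / (1.0 + c1 * r2 + c2 * r2 ** 2)
    return pu
```

Applying `undistort` to every pixel of the to-be-displayed image and moving each pixel to the resulting distortion correction position yields the anti-distortion image; for small coefficients the iteration converges quickly, recovering the original point from its distorted position.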
In an aspect, referring to
In the above aspect, by calibrating the distortion coefficient, the target distortion relationship may be determined based on the calibrated distortion coefficient, and a normally displayed undistorted image may be outputted based on the target distortion relationship. This greatly improves user experience.
In an aspect, referring to
In a specific aspect, referring to
At step S902, a computer device may obtain a standard calibration image and a distortion calibration image; the distortion calibration image being formed by acquiring a screen through an optical lens of an extended reality device when a display of the extended reality device displays the standard calibration image as the screen.
At step S904, the computer device may obtain a slide window and trigger the slide window to slide on the standard calibration image according to a preset moving step size, to obtain a current standard partial image selected by the slide window; and determine a first overall grayscale value of the current standard partial image.
At step S906, the computer device may trigger the slide window to move towards any direction multiple times to obtain multiple standard partial images after movement corresponding to the current standard partial image; and determine a second overall grayscale value of each of the standard partial images after movement.
At step S908, when a difference between each second overall grayscale value and the first overall grayscale value is greater than or equal to a preset difference threshold, the computer device may use the center of the current standard partial image as a first calibration point.
At step S910, the computer device may obtain a slide window and trigger the slide window to slide on the distortion calibration image according to a preset moving step size, to obtain a current distortion partial image selected by the slide window; and determine a third overall grayscale value of the current distortion partial image.
At step S912, the computer device may trigger the slide window to move towards any direction multiple times to obtain multiple distortion partial images after movement corresponding to the current distortion partial image; and determine a fourth overall grayscale value of each of the distortion partial images after movement.
At step S914, when a difference between each fourth overall grayscale value and the third overall grayscale value is greater than or equal to a preset difference threshold, the computer device may use the center of the current distortion partial image as a second calibration point.
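Steps S904 through S914 apply the same sliding-window detection to each of the two calibration images. A minimal sketch of that detector, with illustrative window size, step size, and thresholds (none of these values come from this application), might be:

```python
import numpy as np

def detect_calibration_points(image, win=8, step=4, offset=4,
                              diff_thresh=40.0, count_thresh=2):
    """Slide a win x win window over a grayscale image; treat the window
    center as a calibration point when the window's mean grayscale differs
    from enough of its shifted neighbors by at least diff_thresh. All
    parameter defaults are illustrative, not values from the application."""
    h, w = image.shape
    shifts = [(-offset, 0), (offset, 0), (0, -offset), (0, offset)]
    points = []
    for y in range(0, h - win + 1, step):
        for x in range(0, w - win + 1, step):
            g1 = image[y:y + win, x:x + win].mean()  # first overall grayscale
            n_large = 0
            for dy, dx in shifts:
                yy, xx = y + dy, x + dx
                if 0 <= yy <= h - win and 0 <= xx <= w - win:
                    g2 = image[yy:yy + win, xx:xx + win].mean()
                    if abs(g2 - g1) >= diff_thresh:
                        n_large += 1
            if n_large >= count_thresh:
                points.append((x + win // 2, y + win // 2))
    return points
```

Running the detector once on the standard calibration image collects the first calibration points, and running it once on the distortion calibration image collects the second calibration points.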
At step S916, the computer device may perform matching between multiple first calibration point coordinates and multiple second calibration point coordinates according to a positional relationship between the first calibration point coordinates and a positional relationship between the second calibration point coordinates, to obtain multiple calibration point pairs.
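The matching of step S916 can be sketched as ordering both point sets by their grid position, under the simplifying assumptions that both images contain the same complete rows-by-columns grid of points and that the distortion does not reorder rows or columns:

```python
def match_calibration_points(first_points, second_points, rows, cols):
    """Pair first calibration points (standard image) with second calibration
    points (distortion image) by grid position: sort by y into rows, then by
    x within each row. Assumes both lists hold the same complete rows x cols
    grid and that distortion preserves row/column ordering (a sketch
    assumption; a real pipeline may need a more robust association)."""
    def grid_order(points):
        by_y = sorted(points, key=lambda p: p[1])
        ordered = []
        for r in range(rows):
            ordered.extend(sorted(by_y[r * cols:(r + 1) * cols]))
        return ordered
    return list(zip(grid_order(first_points), grid_order(second_points)))
```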
At step S918, for each of the multiple calibration point pairs, the computer device may obtain, through a Jacobian matrix model in a Hessian matrix model based on the current calibration point pair, a current Jacobian matrix corresponding to the current calibration point pair.
At step S920, the computer device may fuse the current Jacobian matrix and the transpose of the current Jacobian matrix to obtain a fused Jacobian matrix corresponding to the current calibration point pair; and superimpose fused Jacobian matrices respectively corresponding to the multiple calibration point pairs, to obtain a Hessian matrix of a current round.
At step S922, for each of the multiple calibration point pairs, the computer device may obtain, through a Jacobian matrix model in an iteration matrix model based on the current calibration point pair, a current Jacobian matrix corresponding to the current calibration point pair.
At step S924, for each of the multiple calibration point pairs, the computer device may obtain, through a residual model in the iteration matrix model based on the current calibration point pair and a predicted value of a distortion coefficient of a current round, a current residual matrix corresponding to the current calibration point pair.
At step S926, the computer device may fuse the transpose of the current Jacobian matrix and the current residual matrix to obtain a fused iteration matrix corresponding to the current calibration point pair; and superimpose fused iteration matrices respectively corresponding to the multiple calibration point pairs, to obtain an iteration matrix of the current round.
At step S928, the computer device may fuse the Hessian matrix of the current round and the iteration matrix of the current round to obtain a predicted value increment of the distortion coefficient of the current round; and when the predicted value increment of the distortion coefficient does not meet a numerical convergence condition, the computer device may add the predicted value increment of the distortion coefficient of the current round and the predicted value of the distortion coefficient of the current round, to obtain an updated predicted value of the distortion coefficient.
At step S930, the computer device may use a next round as the current round, use an updated predicted value as a predicted value of the distortion coefficient of the current round, return to continue to perform operation S922 until the predicted value increment of the distortion coefficient meets the numerical convergence condition, and use a predicted value of a distortion coefficient of the last round as a target predicted value corresponding to the distortion coefficient in the initial distortion relationship.
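Steps S918 through S930 amount to a Gauss-Newton loop over the calibration point pairs. The following sketch assumes the two-coefficient Brown model rd = ru·(1 + c1·ru² + c2·ru⁴) applied to radial distances about a shared distortion center; the model form and all names are illustrative assumptions, not the application's own implementation:

```python
import numpy as np

def fit_distortion_coeffs(point_pairs, center=(0.0, 0.0),
                          tol=1e-8, max_rounds=100):
    """Gauss-Newton fit of assumed Brown-model coefficients (c1, c2).
    point_pairs: iterable of ((xu, yu), (xd, yd)) calibration point pairs,
    giving each point's coordinates before and after distortion."""
    cx, cy = center
    c = np.zeros(2)                      # predicted value of the current round
    for _ in range(max_rounds):
        H = np.zeros((2, 2))             # Hessian matrix of the current round
        g = np.zeros(2)                  # iteration matrix of the current round
        for (xu, yu), (xd, yd) in point_pairs:
            ru = np.hypot(xu - cx, yu - cy)
            rd = np.hypot(xd - cx, yd - cy)
            # residual: model-predicted distorted radius minus observed radius
            res = ru * (1.0 + c[0] * ru ** 2 + c[1] * ru ** 4) - rd
            J = np.array([ru ** 3, ru ** 5])  # partial derivatives w.r.t. c1, c2
            H += np.outer(J, J)               # superimpose fused Jacobian matrices
            g += J * res                      # superimpose fused residual terms
        delta = np.linalg.solve(H, -g)        # predicted value increment
        c = c + delta
        if np.max(np.abs(delta)) < tol:       # numerical convergence condition
            break
    return c
```

Each round accumulates the Hessian from the fused Jacobians, accumulates the iteration vector from the fused residual terms, solves for the predicted value increment, and stops once the increment meets the 1e−8 style convergence condition.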
Although the operations are displayed sequentially according to the arrows in the flowcharts, these operations are not necessarily performed in the sequence indicated by the arrows. Unless otherwise explicitly specified in this application, execution of the operations may not be strictly limited, and the operations may be performed in other sequences. Moreover, at least some of the operations in the flowcharts may include multiple sub-operations or multiple stages. These operations or stages are not necessarily performed at the same moment, but may be performed at different moments. Execution of these operations or stages may not necessarily be performed sequentially, but may be performed alternately with other operations or with at least some of the sub-operations or stages of other operations.
This application further provides an application scenario that applies the above distortion coefficient calibration method for an extended reality device. Specifically, the application of the distortion coefficient calibration method for an extended reality device in this application scenario may be as follows:
When a distortion coefficient of an XR device needs to be calibrated, a to-be-fitted distortion relationship may first need to be obtained. A classic Brown model may be used as an example. This model represents a distorted distance rd in terms of an undistorted distance ru and coefficients cn, which is specifically as follows:
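The formula referenced above is not reproduced in this text. For reference, the classic two-coefficient Brown radial model consistent with the surrounding description (its exact form here is a reconstruction, not the application's own equation) is commonly written as:

```latex
r_d = r_u \left( 1 + c_1 r_u^{2} + c_2 r_u^{4} \right)
```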
It can be seen that there are two unknown coefficients c1 and c2 above, and the purpose of calibration in this system may be to obtain specific values of the two coefficients through fitting. Specifically, during calibration, the system may first input a standard chessboard grid image, then directly input the image into an optical display, obtain a distorted chessboard grid image (with a white background) through a high-definition camera, and finally obtain chessboard grid corner point coordinate positions (also referred to as multiple calibration point pairs) before and after distortion through corner point detection (including but not limited to OpenCV corner point detection). At this point, the preparation work required for the calibration process has been completed. Then, the corresponding chessboard grid corner point coordinates before and after distortion may be used as an input of numerical fitting, to obtain the coefficients in the distortion relationship through fitting and thereby obtain a complete forward distortion relationship. The numerical fitting process may be specifically as follows:
During calibration, the purpose of numerical fitting may be to make the residual between the coordinate change caused by the fitted distortion coefficients (c1, c2) and the coordinate change caused by the actual distortion coefficients (c1′, c2′) as close to zero as possible (herein, the Brown model may be used as an example). Therefore, the resulting residual relationship may be as follows:
To achieve a minimum value (that is, close to 0) of the least squares optimization for the residual, partial derivatives in the c1 and c2 directions need to be solved to form a Jacobian matrix, which may be specifically as follows:
Then, through Taylor expansion, a Gauss-Newton condition, and the first-order derivative being set to zero, the increments in the two coefficient dimensions in each iteration are obtained as follows:
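The iteration formula referenced above is likewise not reproduced. Under the standard Gauss-Newton derivation for the assumed two-coefficient Brown model, with residual r_i for the i-th corner point pair, the per-iteration increment takes the familiar normal-equations form (a reconstruction, not the application's own equation):

```latex
\begin{aligned}
r_i(c_1, c_2) &= r_{u,i}\bigl(1 + c_1 r_{u,i}^{2} + c_2 r_{u,i}^{4}\bigr) - r_{d,i},\\
J_i &= \begin{pmatrix} \partial r_i / \partial c_1 & \partial r_i / \partial c_2 \end{pmatrix}
     = \begin{pmatrix} r_{u,i}^{3} & r_{u,i}^{5} \end{pmatrix},\\
\begin{pmatrix} \Delta c_1 \\ \Delta c_2 \end{pmatrix}
  &= -\Bigl(\textstyle\sum_i J_i^{\mathsf T} J_i\Bigr)^{-1} \sum_i J_i^{\mathsf T}\, r_i .
\end{aligned}
```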
At this point, increments Δc1 and Δc2 in the c1 and c2 dimensions in each iteration may be obtained. Δc1 and Δc2 are then calculated through continuous iteration, and it may be determined whether their values converge to a very small range (which may be set to 1e−8 in this system). If yes, coefficients fitted over all chessboard grid corner points before and after distortion may be obtained; that is, the calibration function may be completed. The obtained distortion coefficients may be directly used in subsequent distortion correction processing. Only after a distorted image is accurately corrected can related subsequent processing (including but not limited to image display and image quality enhancement) proceed smoothly. For the specific meaning of each parameter in the above formula, refer to the above aspect.
Aspects described herein further provide another application scenario that applies the above distortion coefficient calibration method for an extended reality device. Specifically, the application of the distortion coefficient calibration method for an extended reality device in this application scenario may be as follows:
A user may enter a virtual reality world through an extended reality device. For example, a user may play games for entertainment through an extended reality device (an XR device) to enter an interactive virtual reality game scene. Before the user plays games for entertainment through the XR device, a distortion coefficient may be calibrated in advance through the distortion coefficient calibration method proposed in this application, so that the XR device may adjust and display the to-be-displayed virtual reality game scene according to the calibrated distortion coefficient. In this way, the virtual reality game scene viewed by the user may be an undistorted scene. By displaying the undistorted virtual reality game scene, the user can experience an immersive game.
The above application scenarios are only exemplary illustrations. The application of the distortion coefficient calibration method for an extended reality device is not limited to the above scenarios. For example, before a user performs entertainment and navigation through an extended reality device, a distortion coefficient may also be calibrated in advance through the distortion coefficient calibration method. For example, before a user watches a VR movie (also referred to as a virtual reality movie) through an extended reality device, a distortion coefficient may be calibrated in advance, so that the extended reality device may display undistorted movie images to the user based on the calibrated distortion coefficient. For example, before a user performs navigation through an extended reality device, a distortion coefficient may also be calibrated in advance, so that the extended reality device may display undistorted road images with virtual scenes and real scenes to the user based on the calibrated distortion coefficient.
Based on the same inventive concept, aspects described herein further provide a distortion coefficient calibration apparatus for an extended reality device that may be configured to implement the distortion coefficient calibration method for an extended reality device. The implementation solution to the problem provided by the apparatus may be similar to the implementation solution described in the above method. Therefore, for specific definitions in one or more aspects of the distortion coefficient calibration apparatus for an extended reality device provided below, refer to the above definitions in the distortion coefficient calibration method for an extended reality device. Details are not repeated herein.
In an aspect, as shown in
The image obtaining module 1002 may be configured to obtain a standard calibration image and a distortion calibration image, the distortion calibration image being formed by acquiring a screen through an optical lens of an extended reality device when a display of the extended reality device displays the standard calibration image as the screen.
The calibration point pair determining module 1004 may be configured to perform calibration point detection based on the standard calibration image and the distortion calibration image to obtain multiple calibration point pairs, each of the calibration point pairs including a first calibration point belonging to the standard calibration image and a second calibration point belonging to the distortion calibration image, and the second calibration point being a corresponding calibration point obtained by acquiring the first calibration point through the optical lens of the extended reality device.
The numerical fitting module 1006 may be configured to obtain a to-be-fitted distortion relationship, and perform numerical fitting on the to-be-fitted distortion relationship according to the multiple calibration point pairs, to determine a value of the distortion coefficient in the to-be-fitted distortion relationship to obtain a distortion relationship, the distortion relationship being configured for representing a conversion relationship between calibration points in the standard calibration image and the distortion calibration image.
In an aspect, the distortion calibration image may be acquired by an image acquisition device; and in a process in which the image acquisition device acquires the distortion calibration image, the optical lens of the extended reality device may be located between the image acquisition device and the display of the extended reality device, and the optical center of the optical lens may be aligned with the center of the display.
In an aspect, referring to
In an aspect, the calibration point pair determining module 1004 further includes a first calibration point determining module 1041, configured to obtain a slide window and trigger the slide window to slide on the standard calibration image according to a preset moving step size, to obtain a standard partial image selected by the slide window; determine a first overall grayscale value of the standard partial image based on grayscale values of pixels in the standard partial image; trigger the slide window to move towards any direction multiple times to obtain multiple standard partial images after movement corresponding to the standard partial image; determine a second overall grayscale value of each of the standard partial images after movement based on grayscale values of pixels in the standard partial image after movement; and extract the first calibration point from the standard partial image according to a difference between each second overall grayscale value and the first overall grayscale value.
In an aspect, the first calibration point determining module 1041 may be further configured to perform subtraction on each second overall grayscale value and the first overall grayscale value to obtain a grayscale difference corresponding to each second overall grayscale value; obtain a preset difference threshold, and select absolute values greater than or equal to the preset difference threshold from absolute values of grayscale differences; determine a number of the selected absolute values; and obtain a preset number threshold, and when the number is greater than or equal to the preset number threshold, use the center of the standard partial image as the first calibration point.
In an aspect, the calibration point pair determining module 1004 may further include a second calibration point determining module 1042, configured to obtain a slide window and trigger the slide window to slide on the distortion calibration image according to a preset moving step size, to obtain a distortion partial image selected by the slide window; determine a third overall grayscale value of the distortion partial image based on grayscale values of pixels in the distortion partial image; trigger the slide window to move towards any direction multiple times to obtain multiple distortion partial images after movement corresponding to the distortion partial image; determine a fourth overall grayscale value of each of the distortion partial images after movement based on grayscale values of pixels in the distortion partial image after movement; and extract the second calibration point from the distortion partial image according to a difference between each fourth overall grayscale value and the third overall grayscale value.
In an aspect, the numerical fitting module 1006 may be further configured to obtain a preset distortion coefficient value increment model; where the distortion coefficient value increment model may be a model for determining a change amount of a predicted value of the distortion coefficient in two consecutive iterations; determine a predicted value of the distortion coefficient of a current round; obtain a predicted value increment of the distortion coefficient of the current round through the distortion coefficient value increment model according to the multiple calibration point pairs and the predicted value of the distortion coefficient of the current round; obtain a numerical convergence condition, and when the predicted value increment of the distortion coefficient does not meet the numerical convergence condition, add the predicted value increment of the distortion coefficient of the current round and the predicted value of the distortion coefficient of the current round, to obtain an updated predicted value of the distortion coefficient; use a next round as the current round, use the updated predicted value as a predicted value of the distortion coefficient of the current round, and return to continue to perform the operation of obtaining a predicted value increment of the distortion coefficient of the current round through the distortion coefficient value increment model according to the multiple calibration point pairs and the predicted value of the distortion coefficient of the current round, until the predicted value increment of the distortion coefficient meets the numerical convergence condition; and use a predicted value of the distortion coefficient of the last round as the value corresponding to the distortion coefficient in the to-be-fitted distortion relationship.
In an aspect, the distortion coefficient value increment model may be determined based on a residual model; the residual model may be determined based on the to-be-fitted distortion relationship; the residual model represents residual between a first coordinate change and a second coordinate change; the first coordinate change may be a change between a coordinate before distortion and a coordinate after distortion determined based on the predicted value of the distortion coefficient; and the second coordinate change may be a change between a coordinate before distortion and a coordinate after distortion determined based on an actual value of the distortion coefficient.
In an aspect, the numerical fitting module 1006 may be further configured to obtain a Hessian matrix of the current round through a Hessian matrix model in the distortion coefficient value increment model according to the multiple calibration point pairs; obtain an iteration matrix of the current round through an iteration matrix model in the distortion coefficient value increment model according to the multiple calibration point pairs and the predicted value of the distortion coefficient of the current round; and fuse the Hessian matrix of the current round and the iteration matrix of the current round to obtain the predicted value increment of the distortion coefficient of the current round.
In an aspect, the numerical fitting module 1006 may further include a Hessian matrix determining module 1061, which may be configured to: for each of the multiple calibration point pairs, determine coordinates of the first calibration point belonging to the standard calibration image in the target calibration point pair, and determine coordinates of the second calibration point belonging to the distortion calibration image in the target calibration point pair; according to the coordinates of the first calibration point belonging to the standard calibration image and the coordinates of the second calibration point belonging to the distortion calibration image in the target calibration point pair, determine a coordinate pair corresponding to the target calibration point pair; obtain, through a Jacobian matrix model in the Hessian matrix model according to the coordinate pair, a Jacobian matrix corresponding to the target calibration point pair; fuse the Jacobian matrix and the transpose of the Jacobian matrix to obtain a fused Jacobian matrix corresponding to the target calibration point pair; and superimpose fused Jacobian matrices respectively corresponding to the multiple calibration point pairs, to obtain the Hessian matrix of the current round.
In an aspect, the numerical fitting module 1006 may further include an iteration matrix determining module 1062, which may be configured to: according to the target calibration point pair and the predicted value of the distortion coefficient of the current round, obtain, through a residual model in the iteration matrix model, a residual matrix corresponding to the target calibration point pair; fuse the transpose of the Jacobian matrix corresponding to the target calibration point pair and the residual matrix corresponding to the target calibration point pair, to obtain a fused iteration matrix corresponding to the target calibration point pair; and superimpose fused iteration matrices respectively corresponding to the multiple calibration point pairs, to obtain the iteration matrix of the current round.
In an aspect, the Hessian matrix model may be generated based on the Jacobian matrix model, and the Jacobian matrix model may represent a partial derivative of the residual model in a direction of the distortion coefficient; the iteration matrix may be generated based on the Jacobian matrix model and the residual model; and the residual model may represent residual between the change between the coordinate before distortion and the coordinate after distortion determined based on the predicted value of the distortion coefficient and the change between the coordinate before distortion and the coordinate after distortion determined based on the actual value of the distortion coefficient.
In an aspect, the distortion coefficient calibration apparatus 1000 for an extended reality device may further include an anti-distortion module, and may be configured to obtain the distortion relationship based on both the value of the distortion coefficient and the to-be-fitted distortion relationship; obtain a to-be-displayed image, and perform anti-distortion processing on pixels in the to-be-displayed image according to the distortion relationship, to determine distortion correction positions respectively corresponding to the pixels in the to-be-displayed image; respectively move the pixels to corresponding distortion correction positions, to obtain an anti-distortion image; and trigger the extended reality device to display the anti-distortion image.
Each module in the distortion coefficient calibration apparatus for an extended reality device may be implemented entirely or partially through software, hardware, or a combination thereof. The above-mentioned modules may be embedded in the processor in the computer device in the form of hardware or independent of the processor in the computer device, and may also be stored in the memory of the computer device in the form of software, so that the processor may invoke and execute the corresponding operations of the above-mentioned modules.
In an aspect, a computer device may be provided and may be a server. An internal structure thereof may be shown in
In an aspect, a computer device may be provided and may be a terminal. An internal structure thereof may be shown in
A person skilled in the art may understand that, the structure shown in
In an aspect, a computer device may be further provided, including a memory and a processor, the memory storing a computer program, and the computer program being executed by the processor to perform the operations of the foregoing method aspects.
In an aspect, a computer-readable storage medium may be provided, having a computer program stored therein, the computer program being executed by a processor to perform the operations of the foregoing method aspects.
In an aspect, a computer program product or a computer program may be provided, including computer instructions stored in a computer-readable storage medium. A processor of a computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device performs the operations of the foregoing method aspects.
The user information (including but not limited to user equipment information, user personal information, or the like) and data (including but not limited to data used for analysis, stored data, displayed data, or the like) involved in this application are information and data that are authorized by the user or that have been fully authorized by all parties, and the collection, use and processing of relevant data need to comply with relevant laws, regulations and standards of relevant countries and regions.
A person of ordinary skill in the art may understand that all or some of the procedures of the methods in the foregoing aspects may be implemented by a computer program instructing relevant hardware. The computer program may be stored in a non-volatile computer-readable storage medium. When the computer program is executed, the procedures of the foregoing method aspects may be implemented. References to the memory, the database, or other medium used in the aspects provided in this application may all include at least one of a non-volatile or a volatile memory. The non-volatile memory may include a read-only memory (ROM), a magnetic tape, a floppy disk, a flash memory, an optical memory, a high-density embedded non-volatile memory, a resistive random access memory (ReRAM), a magnetoresistive random access memory (MRAM), a ferroelectric random access memory (FRAM), a phase change memory (PCM), a graphene memory, or the like. The volatile memory may be a random access memory (RAM) or an external cache. As an illustration and not a limitation, the RAM may be in various forms, such as a static random access memory (SRAM) or a dynamic random access memory (DRAM). The database involved in the various aspects provided in this application may include at least one of a relational database and a non-relational database. The non-relational database may include a blockchain-based distributed database or the like, but is not limited thereto. The processors involved in the various aspects provided in this application may be general-purpose processors, central processing units, graphics processors, digital signal processors, programmable logic devices, or data processing logic devices based on quantum computing, and are not limited thereto.
Technical features of the foregoing aspects may be randomly combined. To make description concise, not all possible combinations of the technical features in the foregoing aspects are described. However, the combinations of these technical features shall be considered as falling within the scope recorded by this specification provided that no conflict exists.
The foregoing aspects show only several implementations of this application and are described in detail, which, however, are not to be construed as a limitation to the patent scope of this application. For a person of ordinary skill in the art, several transformations and improvements may be made without departing from the idea of this application. These transformations and improvements belong to the protection scope of this application. Therefore, the protection scope of this application shall be subject to the appended claims.
| Number | Date | Country | Kind |
|---|---|---|---|
| 2022115939727 | Dec 2022 | CN | national |
This application is a continuation application of PCT Application PCT/CN2023/130102, filed Nov. 7, 2023, which claims priority to Chinese Patent Application No. 2022115939727 filed on Dec. 13, 2022, each entitled “DISTORTION COEFFICIENT CALIBRATION METHOD AND APPARATUS FOR EXTENDED REALITY DEVICE AND STORAGE MEDIUM”, and each which is incorporated herein by reference in its entirety.
| Number | Date | Country | |
|---|---|---|---|
| Parent | PCT/CN2023/130102 | Nov 2023 | WO |
| Child | 18894885 | US |