This application claims priority of Japanese Patent Application No. 2021-121955 filed in Japan on Jul. 26, 2021, the entire disclosure of which is incorporated herein by reference.
The present disclosure relates to an information processing device, an information processing method, an imaging device, and an information processing system.
Conventionally, systems for correcting depth information have been known (see, for example, Patent Literature 1).
An information processing device according to one embodiment of the present disclosure includes a control unit. The control unit acquires depth information in a predetermined space and distortion information of an imaging device that generates the depth information, and corrects the depth information based on the distortion information to generate corrected depth information.
An information processing method according to one embodiment of the present disclosure includes obtaining depth information in a predetermined space and distortion information of an imaging device that generates the depth information. The information processing method further includes correcting the depth information based on the distortion information to generate corrected depth information.
An imaging device according to one embodiment of the present disclosure includes at least two imaging elements that photograph images of a predetermined space, an optical system that forms images of the predetermined space on the imaging elements, a storage unit, and a control unit. The storage unit stores, as distortion information, information about enlargement, reduction or distortion of images photographed by each imaging element caused by at least one of characteristics of the optical system and errors in the placement of the imaging elements. The control unit generates depth information of the predetermined space based on images obtained by photographing the predetermined space with each imaging element, and outputs the depth information and the distortion information.
An information processing system according to one embodiment of the present disclosure includes an information processing device and an imaging device. The imaging device includes at least two imaging elements that photograph images of a predetermined space, an optical system that forms images of the predetermined space on the imaging elements, a storage unit, and a control unit. The storage unit stores, as distortion information, information about enlargement, reduction or distortion of images photographed by each imaging element caused by at least one of characteristics of the optical system and errors in the placement of the imaging elements. The control unit generates depth information of the predetermined space based on the images obtained by photographing the predetermined space with each imaging element, and outputs the depth information and the distortion information to the information processing device. The information processing device acquires the depth information and the distortion information from the imaging device and corrects the depth information based on the distortion information to generate corrected depth information.
Depth information needs to be corrected in a convenient manner. The information processing device, information processing method, imaging device, and information processing system of the present disclosure allow depth information to be corrected easily.
As illustrated in
The (X_RB, Y_RB, Z_RB) coordinate system may be set as the same coordinate system as the (X, Y, Z) coordinate system or as a different coordinate system from the (X, Y, Z) coordinate system. When the (X_RB, Y_RB, Z_RB) coordinate system is set as a coordinate system different from the (X, Y, Z) coordinate system, the robot controller 10 converts the depth information generated by the imaging device 20 in the (X, Y, Z) coordinate system to depth information in the (X_RB, Y_RB, Z_RB) coordinate system and uses the converted depth information.
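For illustration, the conversion from the (X, Y, Z) coordinate system of the imaging device 20 to the (X_RB, Y_RB, Z_RB) coordinate system of the robot 40 can be sketched as a rigid transformation. The following is a minimal sketch only; the rotation matrix, translation vector, function name, and numerical values are hypothetical assumptions and are not specified in the present disclosure.

```python
import numpy as np

def camera_to_robot(points_xyz, rotation, translation):
    """Convert points from the imaging device's (X, Y, Z) coordinate system
    to the robot's (X_RB, Y_RB, Z_RB) coordinate system.

    points_xyz : (N, 3) array of points in the camera frame
    rotation   : (3, 3) rotation matrix from the camera frame to the robot frame
    translation: (3,) position of the camera origin expressed in the robot frame
    """
    points_xyz = np.asarray(points_xyz, dtype=float)
    return points_xyz @ rotation.T + translation

# Example with assumed calibration values (purely illustrative):
R = np.eye(3)                      # camera axes assumed aligned with robot axes
t = np.array([0.10, 0.00, 0.50])   # camera origin at (0.10, 0.00, 0.50) m in the robot frame
depth_points_camera = np.array([[0.02, -0.01, 0.45]])
print(camera_to_robot(depth_points_camera, R, t))
```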
The number of the robots 40 and the robot controllers 10 is not limited to one, but may be two or more. The number of the imaging devices 20 may be one, two or more per work space. Each component is described in detail below.
The robot controller 10 includes a control unit 11 and a storage unit 12. The robot controller 10 is also referred to as an information processing device. Note that, in the present disclosure, the information processing device is not limited to the robot controller 10 and may be another component of the robot control system 1. The information processing device may be, for example, the imaging device 20.
The control unit 11 may include at least one processor to perform various functions of the robot controller 10. The processor may execute programs that implement the various functions of the robot controller 10. The processor may be realized as a single integrated circuit. The integrated circuit is also referred to as an IC (Integrated Circuit). The processor may also be realized as a plurality of communicatively connected integrated circuits and discrete circuits. The processor may include a CPU (Central Processing Unit). The processor may include a DSP (Digital Signal Processor) or a GPU (Graphics Processing Unit). The processor may be realized based on various other known technologies.
The robot controller 10 further includes the storage unit 12. The storage unit 12 may include an electromagnetic storage medium such as a magnetic disk, or may include a memory such as a semiconductor memory or magnetic memory. The storage unit 12 may be configured as an HDD (Hard Disk Drive) or an SSD (Solid State Drive). The storage unit 12 stores various information and programs executed by the control unit 11. The storage unit 12 may function as a work memory of the control unit 11. The control unit 11 may include at least a part of the storage unit 12.
The robot controller 10 may further include a communication device configured to be able to perform wired or wireless communication with the imaging device 20 and the robot 40. The communication device may be configured to be able to communicate using communication methods based on various communication standards. The communication device may be configured with a known communication technology. A detailed description of the hardware and the like of the communication device is omitted. The functions of the communication device may be realized by a single interface or by separate interfaces for each connection destination. The control unit 11 may be configured to communicate with the imaging device 20 and the robot 40. The control unit 11 may include the communication device.
As illustrated in
The end effector 44 may include, for example, a gripping hand capable of grasping the work target. The gripping hand may have a plurality of fingers. The number of fingers of the gripping hand may be two or more. Each finger of the gripping hand may have one or more joints. The end effector 44 may include a suction hand capable of sucking a work target. The end effector 44 may include a scooping hand capable of scooping up the work target. The end effector 44 may include a drill or other tool capable of performing various machining operations, such as drilling a hole in the work target. The end effector 44 is not limited to these examples and may be configured to perform various other operations.
The robot 40 can control the position of the end effector 44 by moving the arm 42. The end effector 44 may have an axis that serves as a reference for the direction of action with respect to the work target. If the end effector 44 has such an axis, the robot 40 can control the direction of the axis of the end effector 44 by moving the arm 42. The robot 40 controls the start and end of the operation of the end effector 44 acting on the work target. The robot 40 can move or process the work target by controlling the operation of the end effector 44 while controlling the position of the end effector 44 or the direction of the axis of the end effector 44.
The robot 40 may further include sensors that detect the status of each component of the robot 40. The sensors may detect information about the actual position or attitude of each component of the robot 40 or the speed or acceleration of each component of the robot 40. The sensors may also detect the force acting on each component of the robot 40. The sensors may also detect the current flowing in a motor that drives each component of the robot 40 or the torque of the motor. The sensors may detect information obtained as the results of the actual operation of the robot 40. By acquiring the detection results of the sensors, the robot controller 10 can grasp the results of the actual operation of the robot 40.
The robot 40 may further include a mark 46 attached to the end effector 44, although the mark 46 is not required. The robot controller 10 recognizes the position of the end effector 44 based on an image obtained by photographing the mark 46 with the imaging device 20. The robot controller 10 can perform a calibration of the robot 40 using the position of the end effector 44 determined from the detection results of the sensors and the position of the end effector 44 recognized from the mark 46.
As illustrated in
The control unit 22 may include at least one processor. The processor can execute programs that realize various functions of the imaging device 20. The storage unit 23 may include an electromagnetic storage medium such as a magnetic disk, or may include a memory such as a semiconductor memory or magnetic memory. The storage unit 23 may be configured as an HDD or SSD. The storage unit 23 stores various information and programs executed by the control unit 22. The storage unit 23 may function as a work memory of the control unit 22. The control unit 22 may include at least a part of the storage unit 23.
As illustrated in
The left photographed image 51L includes a left image 52L of the measurement point 52 obtained by photographing the measurement point 52. The right photographed image 51R includes a right image 52R of the measurement point 52 obtained by photographing the measurement point 52. A virtual measurement point image 52V is also depicted in the left photographed image 51L and the right photographed image 51R. The virtual measurement point image 52V represents the image that would be obtained by photographing, with the imaging element 21, a virtual measurement point 52 located at an infinite distance from the imaging device 20.
The virtual measurement point image 52V is located on the optical axis 25L of the left optical system 24L and the optical axis 25R of the right optical system 24R. The virtual measurement point image 52V is located at the center of the left photographed image 51L and the center of the right photographed image 51R. In
In other words, the images of the real measurement point 52 located at a finite distance from the imaging device 20, namely the left image 52L of the measurement point 52 in the left photographed image 51L and the right image 52R of the measurement point 52 in the right photographed image 51R, are formed at positions displaced from the virtual measurement point image 52V. The positions of the left image 52L of the measurement point 52 and the right image 52R of the measurement point 52 are determined as follows. First, an incident point 27L is assumed to be at the intersection of a dashed line connecting the optical center 26L of the left optical system 24L and the optical center 26R of the right optical system 24R and a dashed line connecting the measurement point 52 and the virtual measurement point image 52V of the left photographed image 51L. Further, an incident point 27R is assumed to be at the intersection of the dashed line connecting the optical center 26L of the left optical system 24L and the optical center 26R of the right optical system 24R and a dashed line connecting the measurement point 52 and the virtual measurement point image 52V of the right photographed image 51R. The left image 52L of the measurement point 52 of the left photographed image 51L is located at the intersection of a dashed line extending in the positive direction of the Z axis from the incident point 27L and a dashed line connecting the virtual measurement point images 52V of the left photographed image 51L and the right photographed image 51R. The right image 52R of the measurement point 52 of the right photographed image 51R is located at the intersection of a dashed line extending in the positive direction of the Z axis from the incident point 27R and the dashed line connecting the virtual measurement point images 52V of the left photographed image 51L and the right photographed image 51R.
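The construction described above can be illustrated numerically. The sketch below places the optical centers 26L and 26R on a common line, assumes the photographed images lie at the focal length F in the positive Z direction from that line, and computes the incident points 27L and 27R and the displacements of the images 52L and 52R for a measurement point 52 at a finite distance. All numerical values and variable names are illustrative assumptions, not values from the present disclosure.

```python
import numpy as np

# Assumed geometry (all lengths in millimeters, purely illustrative)
T = 50.0      # baseline: distance between the image centers (assumed equal to the
              # distance between the optical centers 26L and 26R)
F = 4.0       # focal length of the left and right optical systems
D = 400.0     # distance from the measurement point 52 to the line of the optical centers

# Left optical center at X = 0, right optical center at X = T, their connecting
# line at Z = 0, and the image plane (with the virtual images 52V) at Z = F.
x_m = 20.0                           # X coordinate of the measurement point 52
measurement_point = np.array([x_m, -D])

# Incident point 27L: intersection of the line from the measurement point to the
# left virtual measurement point image (at X = 0, Z = F) with the line Z = 0.
s = D / (D + F)
incident_27L_x = x_m * (1.0 - s)     # = x_m * F / (D + F)

# Incident point 27R: same construction toward the right virtual image (X = T, Z = F).
incident_27R_x = x_m + (T - x_m) * s

# The images 52L and 52R lie directly above the incident points on the image plane.
XL = incident_27L_x - 0.0            # displacement of 52L from the left virtual image
XR = T - incident_27R_x              # displacement of 52R from the right virtual image

print(XL, XR, XL + XR)               # XL + XR = T * F / (D + F) in this construction
```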
The control unit 22 of the imaging device 20 can calculate the distance from the imaging device 20 to the measurement point 52 based on the difference between the X coordinates of the left image 52L of the measurement point 52 in the left photographed image 51L and the right image 52R of the measurement point 52 in the right photographed image 51R and the X coordinate of the virtual measurement point image 52V. The control unit 22 may calculate the distance from the imaging device 20 to the measurement point 52 based further on the distance between the centers of the two imaging elements 21 and the focal length of the optical system that forms the image on each imaging element 21. In
Parameters used to calculate the depth are described below. The distance between the center of the left photographed image 51L and the center of the right photographed image 51R is represented by T. The difference between the X coordinate of the virtual measurement point image 52V of the left photographed image 51L and the X coordinate of the left image 52L of the measurement point 52 is represented by XL. The sign of XL is positive when the left image 52L of the measurement point 52 is located further in the positive direction of the X axis than the virtual measurement point image 52V of the left photographed image 51L. The difference between the X coordinate of the virtual measurement point image 52V of the right photographed image 51R and the X coordinate of the right image 52R of the measurement point 52 is represented by XR. The sign of XR is positive when the right image 52R of the measurement point 52 is located further in the negative direction of the X axis than the virtual measurement point image 52V of the right photographed image 51R. In other words, the signs of XL and XR are positive when the left image 52L and the right image 52R of the measurement point 52 are displaced from the virtual measurement point images 52V in directions that bring them closer to each other.
The focal lengths of the left optical system 24L and the right optical system 24R are represented by F. The left imaging element 21L and the right imaging element 21R are positioned so that the focal points of the left optical system 24L and the right optical system 24R are located on their imaging surfaces. The focal lengths (F) of the left optical system 24L and the right optical system 24R correspond to the distances from the left optical system 24L and the right optical system 24R to the imaging surfaces of the left imaging element 21L and the right imaging element 21R.
Specifically, the control unit 22 can calculate the depth by operating as follows.
The control unit 22 detects the X coordinate of the left image 52L of the measurement point 52 in the left photographed image 51L. The control unit 22 calculates the difference between the X coordinate of the left image 52L of the measurement point 52 and the X coordinate of the virtual measurement point image 52V as XL. The control unit 22 detects the X coordinate of the right image 52R of the measurement point 52 in the right photographed image 51R. The control unit 22 calculates the difference between the X coordinate of the right image 52R of the measurement point 52 and the X coordinate of the virtual measurement point image 52V as XR.
Here, in the XZ plane, a first triangle is considered to exist with the measurement point 52, the virtual measurement point image 52V of the left photographed image 51L and the virtual measurement point image 52V of the right photographed image 51R as vertices. In the XZ plane, a second triangle is considered to exist with the measurement point 52, the assumed incident point 27L in the left optical system 24L and the assumed incident point 27R in the right optical system 24R as vertices. The Z coordinate of the optical center 26L of the left optical system 24L is the same as the Z coordinate of the optical center 26R of the right optical system 24R. The Z coordinate of the virtual measurement point image 52V of the left photographed image 51L is the same as the Z coordinate of the virtual measurement point image 52V of the right photographed image 51R. Therefore, the first triangle and the second triangle are similar to each other.
The distance between the two virtual measurement point images 52V of the first triangle is T. The distance between the incident points 27L and 27R of the second triangle is T−(XL+XR). Since the first triangle and the second triangle are similar, the following Formula (1) is satisfied.
Based on Formula (1), Formula (2) for calculating D is derived as follows.
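Under the assumption that D denotes the distance from the measurement point 52 to the line connecting the optical centers 26L and 26R, the similar-triangle relation and the resulting expression for D can be sketched as follows. This is a reconstruction consistent with the geometry described above, not a reproduction of the original formulas, whose exact form may differ.

```latex
% Formula (1): similarity of the first triangle (base T, height D + F)
% and the second triangle (base T - (X_L + X_R), height D)
\frac{T - (X_L + X_R)}{T} = \frac{D}{D + F}

% Formula (2): solving Formula (1) for D
D = \frac{F \left( T - (X_L + X_R) \right)}{X_L + X_R}
```

Cross-multiplying Formula (1) gives F T = (XL + XR)(D + F), from which Formula (2) follows; the limiting behavior matches the description below, since D grows without bound as XL + XR approaches zero.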
In Formula (2), the larger XL+XR is, the smaller D is calculated. On the other hand, if XL+XR=0, then D is infinite. For example, if the left image 52L of the left photographed image 51L coincides with the virtual measurement point image 52V and the right image 52R of the right photographed image 51R coincides with the virtual measurement point image 52V, D is calculated to be infinite. In fact, since the left image 52L and the right image 52R are defined to coincide with the virtual measurement point image 52V when the measurement point 52 is located at infinity, it can be said that D can be correctly calculated by Formula (2).
The control unit 22 can calculate the depth based on the two photographed images obtained by photographing the measurement point 52 with the two imaging elements 21 as described above. The control unit 22 calculates the distances to a plurality of measurement points 52 included in the photographed image of the work space of the robot 40, generates depth information representing the distance (depth) to each measurement point 52, and outputs the generated depth information to the robot controller 10. The depth information can be expressed as a function that takes the X coordinate and the Y coordinate in the (X, Y, Z) coordinate system of the imaging device 20 as arguments. The depth information can also be expressed as a two-dimensional map obtained by plotting depth values on the XY plane of the imaging device 20.
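As a rough illustration of how per-pixel displacements could be turned into depth information expressed as a two-dimensional map, the sketch below applies the depth formula element-wise with NumPy. It relies on the similar-triangle reconstruction sketched above; the function name, array shapes, and parameter values are assumptions for illustration only.

```python
import numpy as np

def depth_map_from_displacements(xl_map, xr_map, T, F):
    """Compute a depth map from per-pixel displacements XL and XR.

    xl_map, xr_map : 2D arrays of XL and XR for each measurement point
                     (same units as T and F)
    T              : distance between the centers of the two photographed images
    F              : focal length of the optical systems

    Uses D = F * (T - (XL + XR)) / (XL + XR), following the reconstruction of
    Formula (2) sketched above; points with XL + XR = 0 are treated as infinitely far.
    """
    disparity = xl_map + xr_map
    with np.errstate(divide="ignore", invalid="ignore"):
        depth = np.where(disparity > 0, F * (T - disparity) / disparity, np.inf)
    return depth

# Illustrative values (millimeters): T = 50, F = 4, displacement sum of about 0.5 mm
xl = np.full((4, 4), 0.25)
xr = np.full((4, 4), 0.25)
print(depth_map_from_displacements(xl, xr, T=50.0, F=4.0))  # about 396 mm everywhere
```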
As described above, the imaging device 20 generates depth information based on the positions of the images of the measurement point 52 in the two photographed images obtained by photographing the work space with the left imaging element 21L and the right imaging element 21R. Here, the photographed image may be photographed as an enlarged, reduced, or distorted image relative to the actual work space. The enlargement, reduction, or distortion of the photographed image relative to the actual work space is also referred to as distortion of the photographed image. If the distortion occurs in the photographed image, the position of the image of the measurement point 52 in the photographed image may be displaced. As a result, the distortion of the photographed image causes errors in the calculation result of the depth. The distortion of the photographed image also causes errors in the depth information, which represents the calculation result of the depth.
For example, as illustrated in
A measurement point image 52Q in the distorted image 51Q is closer to the virtual measurement point image 52V than a measurement point image 52P in the non-distorted image 51P. In other words, X_DIST, which represents the distance from the measurement point image 52Q in the distorted image 51Q to the virtual measurement point image 52V, is shorter than XL or XR, which represent the distance from the measurement point image 52P in the non-distorted image 51P to the virtual measurement point image 52V. Therefore, in the above Formula (2), which is used to calculate the depth (D), the value of XL+XR becomes smaller. As a result, the depth (D) calculated by the control unit 22 of the imaging device 20 based on the distorted image 51Q is larger than the calculation result based on the non-distorted image 51P. In other words, the distortion of the photographed image can cause errors in the calculation result of the depth (D).
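As a worked illustration of this effect, using the reconstruction of Formula (2) sketched above and purely illustrative numbers (F = 4 mm, T = 50 mm), a displacement sum of XL + XR = 0.50 mm in the non-distorted image corresponds to a depth of about 396 mm, whereas distortion that shrinks the measured sum to 0.45 mm yields about 440 mm:

```latex
D_\text{non-distorted} = \frac{4 \times (50 - 0.50)}{0.50} = 396 \text{ mm}, \qquad
D_\text{distorted} = \frac{4 \times (50 - 0.45)}{0.45} \approx 440 \text{ mm}
```

A displacement error of only 0.05 mm in the image thus translates, under these assumed values, into a depth error of several tens of millimeters.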
The control unit 11 of the robot controller 10 acquires the depth information from the imaging device 20. The control unit 11 further acquires information about the distortion of the imaging device 20. The information about the distortion of the imaging device 20 is also referred to as distortion information. The distortion information may be, for example, optical and geometric parameters obtained during the manufacturing inspection of the imaging device 20. The distortion information represents the distortion of the left photographed image 51L and the right photographed image 51R. As described above, the errors in the calculation result of the depth (D) are determined by the distortion information. Therefore, the control unit 11 can correct the errors in the depth (D) represented by the depth information acquired from the imaging device 20 based on the distortion information. Specific examples of correction methods are described below.
The control unit 11 acquires the distortion information of the imaging device 20. As the distortion information, the control unit 11 may acquire the distortion information of each of the left imaging element 21L and the right imaging element 21R. The control unit 11 may also acquire the distortion information of the imaging device 20 from an external device, such as a cloud storage. The control unit 11 may also acquire the distortion information from the imaging device 20. In such a case, the imaging device 20 may store the distortion information in the storage unit 23. The control unit 11 may acquire the distortion information from the storage unit 23 of the imaging device 20. The imaging device 20 may store address information in the storage unit 23, wherein the address information specifies the location where the distortion information of the imaging device 20 is stored. The address information may include, for example, an IP address or URL (Uniform Resource Locator) for accessing an external device, such as a cloud storage. The control unit 11 may acquire the distortion information by acquiring the address information from the imaging device 20 and accessing an external device or the like specified by the address information.
The distortion information may include the distortion of the left optical system 24L and right optical system 24R. The distortion of each optical system may include the distortion caused in the photographed image by the characteristics of each optical system. The distortion information may include the distortion of the imaging surfaces of the left imaging element 21L and the right imaging element 21R. The distortion information may include the distortion in the photographed image caused by errors in the placement of the left optical system 24L or the right optical system 24R, or errors in the placement of the left imaging element 21L or the right imaging element 21R.
The characteristics of each optical system may be, for example, the degree of curvature or the size of the curved lens. Errors in the placement of the imaging element 21 or the like may be, for example, errors in the in-plane position of the imaging element 21 when mounted, or manufacturing errors such as an inclination of the optical axis.
For example, as shown in
Here, the control unit 11 can estimate the error in the depth value at each X coordinate based on the distortion information. Specifically, for the measurement point 52 located at (X1, Y1), the control unit 11 estimates, based on the distortion information, the errors in the positions of the left image 52L and the right image 52R of the measurement point 52 in the photographed images caused by the distortion. The control unit 11 can calculate a correction value for the depth value of the measurement point 52 located at (X1, Y1) based on the estimated errors in the positions of the left image 52L and the right image 52R of the measurement point 52. The correction value for the depth value is represented by D_corr. For example, when XL and XR each become smaller by ΔXerr/2 due to mounting errors of the imaging element 21, the correction value for the depth value (D_corr) of the measurement point 52 at (X1, Y1) is expressed by the following Formula (3).
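The body of Formula (3) is not reproduced in this text. One expression consistent with the reconstruction of Formula (2) sketched above, assuming that D_corr denotes the corrected depth value obtained by restoring the lost displacement ΔXerr to the measured sum XL + XR, is the following sketch; the exact expression in the original disclosure may differ.

```latex
% Displacement sum implied by the uncorrected depth D (rearranging Formula (2)):
X_L + X_R = \frac{F T}{D + F}

% Formula (3): corrected depth obtained by restoring the lost displacement \Delta X_\text{err}
D_\text{corr} = \frac{F \left( T - (X_L + X_R + \Delta X_\text{err}) \right)}{X_L + X_R + \Delta X_\text{err}}
             = \frac{F T (D + F)}{F T + \Delta X_\text{err} (D + F)} - F
```

Setting ΔXerr = 0 returns D_corr = D, and a positive ΔXerr yields D_corr < D, which is consistent with the overestimation of depth caused by the distortion described above.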
In Formula (3), D is the depth value before correction. F and T are the focal length of the imaging device 20 and the distance between the centers of the two imaging elements 21, respectively, and these parameters are defined in the specifications of the imaging device 20. ΔXerr can be acquired, for example, as a value estimated based on the distortion information. ΔXerr can also be acquired, for example, as the distortion information itself.
The control unit 11 can calculate the correction value for the depth value (D_corr) by substituting, into Formula (3), the estimated errors in the positions of the left image 52L and the right image 52R of each measurement point 52 in the photographed images. The correction value for the depth value (D_corr) calculated by the control unit 11 can be represented, for example, as the graph shown in
As described above, the control unit 11 can estimate the correction value for the depth value (D_corr) based on the distortion information. The control unit 11 can estimate the correction value for each measurement point 52, correct the depth value of each measurement point 52, and bring the depth value closer to the true value of each measurement point 52. By correcting the depth value of each measurement point 52 of the depth information, the control unit 11 can generate corrected depth information that represents the corrected depth value. The control unit 11 may control the robot 40 based on the corrected depth information. For example, by correcting the depth value, the positioning accuracy of the robot 40 with respect to the object 50 located in the work space can be enhanced.
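A minimal sketch of how the control unit 11 might apply such a correction over an entire depth map is given below, using the reconstructed form of Formula (3) from the sketch above; the function name, array shapes, and numerical values are assumptions for illustration only.

```python
import numpy as np

def correct_depth_map(depth_map, delta_x_err, T, F):
    """Correct a depth map using the displacement error estimated from the distortion information.

    depth_map   : 2D array of uncorrected depth values D
    delta_x_err : 2D array (or scalar) of estimated displacement errors ΔXerr,
                  in the same units as T and F
    Implements D_corr = F*T*(D + F) / (F*T + ΔXerr*(D + F)) - F, the form of
    Formula (3) reconstructed above.
    """
    depth_map = np.asarray(depth_map, dtype=float)
    return F * T * (depth_map + F) / (F * T + delta_x_err * (depth_map + F)) - F

# Illustrative check: the distorted depth of about 440 mm from the earlier example,
# corrected with ΔXerr = 0.05 mm, comes back to roughly 396 mm.
uncorrected = np.full((4, 4), 440.44)
print(correct_depth_map(uncorrected, delta_x_err=0.05, T=50.0, F=4.0))
```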
The control unit 11 of the robot controller 10 may execute an information processing method including the flowchart procedure shown in
The control unit 11 acquires the depth information from the imaging device 20 (step S1). The control unit 11 acquires the distortion information of the imaging device 20 that generated the depth information (step S2). The control unit 11 corrects the depth information based on the distortion information and generates the corrected depth information (step S3). After executing the procedure of step S3, the control unit 11 terminates the execution of the procedure of the flowchart of
The control unit 11 may further acquire color information from the imaging device 20. The control unit 11 may acquire the color information as a photographed image obtained by photographing the work space with the imaging device 20. In other words, the photographed image contains the color information. The control unit 11 may detect, for example, the position of the object 50 or the like located in the work space based on the corrected depth information and the color information. As a result, the detection accuracy can be improved compared to the case of detecting the position of the object 50 or the like based on the uncorrected depth information. In such a case, for example, the control unit 11 may generate integrated information in which the corrected depth information and the color information are integrated. The correction may be performed based on the corrected depth information as described below.
When controlling the robot 40, the control unit 11 can also transform the work space expressed in the (X, Y, Z) coordinate system into a configuration coordinate system of the robot 40. The configuration coordinate system of the robot 40 refers, for example, to a coordinate system composed of the parameters that indicate the movement of the robot 40.
As described above, the robot controller 10 according to the present embodiment can correct the depth information acquired from the imaging device 20 based on the distortion information of the imaging device 20. If the robot controller 10 acquires a photographed image from the imaging device 20 and corrects the distortion of the photographed image itself, the amount of communication between the robot controller 10 and the imaging device 20 and the calculation load of the robot controller 10 will increase. The robot controller 10 according to the present embodiment can reduce the amount of communication between the robot controller 10 and the imaging device 20 by acquiring, from the imaging device 20, the depth information, which has a smaller data volume than an initial photographed image. By estimating the correction value for depth based on the distortion information, the robot controller 10 can reduce the calculation load compared to performing the correction of the distortion of the photographed image itself and the calculation of the depth based on the corrected photographed image. Thus, the robot controller 10 according to the present embodiment can easily correct the depth information. In addition, only the depth information can be easily corrected without changing the accuracy of the imaging device 20 itself.
By being able to correct the depth information, the robot controller 10 according to the present embodiment can improve the positioning accuracy of the robot 40 with respect to the object 50 located in the work space of the robot 40.
Other embodiments are described below.
<Application to Spaces Other than Work Space of Robot 40>
The depth information can be acquired in various spaces, not limited to the work space of the robot 40. The robot control system 1 and the robot controller 10 may be replaced by an information processing system and an information processing device, respectively, which process the depth information in various spaces. The various spaces whose depth information is acquired are also referred to as predetermined spaces.
Examples of the space whose depth information is acquired include a space in which an AGV (Automatic Guided Vehicle) equipped with a 3D stereo camera travels to perform an operation of pressing a door open/close switch. The depth information acquired in such a space is used to improve the measurement accuracy of the distance to be traveled from the current position.
Examples of the space whose depth information is acquired also include a space in which a VR (Virtual Reality) device or a 3D game device equipped with a 3D stereo camera is operated. In such a space, a measurement result of the distance to a controller, a marker, or the like held by the player of the VR or 3D game device is acquired as the depth information. The depth information acquired in such a space is used to improve the accuracy of the distance to the controller, the marker, or the like. By improving the accuracy of the distance, the accuracy of aligning the position of a real object (such as a punching ball) in the space where the player is located with the position of the hand of the player in the virtual space is improved.
The control unit 11 of the robot controller 10 may acquire the depth information from the imaging device 20 in the form of point group data that includes coordinate information of the measurement points in the measurement space. In other words, the depth information may have the form of point group data, or, put differently, the point group data may contain the depth information. The point group data is data that represents a point group (also called a measurement point group), which is a set of a plurality of measurement points in the measurement space. It can be said that point group data is data that represents an object in the measurement space with a plurality of points. The point group data also represents the surface shape of an object in the measurement space. The point group data contains coordinate information representing the locations of points on the surface of an object in the measurement space. The distance between two measurement points in a point group is, for example, the real distance in the measurement space. When the depth information has the form of point group data, its data density and data volume can be smaller than those of depth information based on the photographed image initially acquired by the imaging element 21. As a result, the load of calculation processing when correcting the depth information can be further reduced. The conversion of the depth information from the initial form to the point group data form, or the generation of the point group data containing the depth information, may be performed by the control unit 22 of the imaging device 20.
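As an illustration of how depth information might be expressed in point group form, the sketch below converts a depth map into a set of (X, Y, Z) points using a simple pinhole back-projection. The intrinsic parameters, function name, and numerical values are hypothetical assumptions and are not specified in the present disclosure.

```python
import numpy as np

def depth_map_to_point_group(depth_map, fx, fy, cx, cy):
    """Convert a depth map into point group data: an (N, 3) array of
    (X, Y, Z) coordinates in the imaging device's coordinate system.

    depth_map : 2D array of depth values Z per pixel
    fx, fy    : focal lengths in pixels (assumed intrinsic parameters)
    cx, cy    : principal point in pixels (assumed intrinsic parameters)
    """
    h, w = depth_map.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = np.asarray(depth_map, dtype=float)
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[np.isfinite(points[:, 2])]   # drop points with infinite depth

# Example with a small assumed depth map (values in millimeters):
depth = np.full((4, 4), 400.0)
print(depth_map_to_point_group(depth, fx=500.0, fy=500.0, cx=2.0, cy=2.0).shape)  # (16, 3)
```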
When the point group data containing the depth information is generated, the depth information and the color information may be corrected after integrating the color information into the point group data. Even in such a case, since the point group data has a smaller data volume than the initial photographed image, the calculation load can be reduced when correcting the depth information and the like, and the amount of communication between the robot controller 10 and the imaging device 20 can be reduced. In such a case, the integration of the depth information and the color information may be implemented in the control unit 22 of the imaging device 20.
In the embodiment described above, the imaging device 20 outputs the depth information of a predetermined space to an information processing device. The imaging device 20 may also output an image obtained by photographing the predetermined space, such as an RGB (Red, Green, Blue) image or a monochrome image, to an information processing device as it is. The information processing device may correct the image obtained by photographing the predetermined space based on the corrected depth information. For example, the information processing device may correct the color information of the image obtained by photographing the predetermined space based on the corrected depth information. The color information refers to, for example, hue, saturation, luminance, or lightness.
The information processing device may, for example, correct the brightness or lightness of the image based on the corrected depth information. The information processing device may, for example, correct the peripheral illumination of the image based on the corrected depth information. The peripheral illumination refers to the brightness of the light at the periphery of the lens of the imaging device 20. Since the brightness of the light at the periphery of the lens is reflected in the brightness of the periphery or corners of the photographed image, it can be said that the peripheral illumination refers to the brightness of the periphery or corners in the image, for example. When the imaging device 20 has a lens, the photographed image may be darker at the periphery than at the center due to the difference in light flux density between the center of the lens and the periphery of the lens caused, for example, by lens distortion of the imaging device 20. However, even if the peripheral illumination is small and the periphery or corners in the image are dark, the information processing device can correct the peripheral illumination or color information of the periphery in the image based on the corrected depth information, so that the image data does not cause problems in robot control.
The information processing device may correct the peripheral illumination or color information of the image based on the magnitude of the correction value for depth corresponding to each pixel of the image. For example, the larger the correction value for depth, the more the peripheral illumination or color information corresponding to that correction value may be corrected.
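A minimal sketch of such a correction, in which the gain applied to each pixel's brightness grows with the magnitude of the depth correction value for that pixel, is shown below. The linear gain model, coefficient, function name, and numerical values are illustrative assumptions; the present disclosure does not specify a particular correction formula.

```python
import numpy as np

def correct_peripheral_illumination(image, depth_correction, strength=0.002):
    """Brighten pixels in proportion to the magnitude of the depth correction value.

    image            : 2D (monochrome) or 3D (color) array of pixel values in [0, 255]
    depth_correction : 2D array of |D - D_corr| per pixel
    strength         : assumed coefficient converting correction magnitude into gain
    """
    gain = 1.0 + strength * np.abs(depth_correction)
    if image.ndim == 3:
        gain = gain[..., np.newaxis]        # broadcast the gain over the color channels
    return np.clip(image * gain, 0, 255).astype(np.uint8)

# Example: darker corners paired with larger depth corrections (illustrative values)
img = np.full((4, 4), 100, dtype=np.uint8)
corr = np.array([[40, 20, 20, 40],
                 [20,  0,  0, 20],
                 [20,  0,  0, 20],
                 [40, 20, 20, 40]], dtype=float)
print(correct_peripheral_illumination(img, corr))
```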
The information processing device may correct the peripheral illumination or color information of the image. The information processing device may also perform the correction when integrating the depth information with the color information of the RGB image or the like.
The information processing device may also generate corrected depth information obtained by correcting the depth information of the predetermined space based on the distortion information of the imaging device 20, and correct the peripheral illumination or color information of the image based on the generated corrected depth information.
Although the embodiments according to the present disclosure have been described based on the drawings and the examples, it is to be noted that various variations and changes may be made by those who are ordinarily skilled in the art based on the present disclosure. Thus, it is to be noted that such variations and changes are included in the scope of the present disclosure. For example, functions and the like included in each component or the like can be rearranged without logical inconsistency, and a plurality of components or the like can be combined into one or divided.
All the components described in the present disclosure and/or all the disclosed methods or all the processing steps may be combined based on any combination except for the combination where these features are exclusive with each other. Further, each of the features described in the present disclosure may be replaced with an alternative feature for achieving the same purpose, equivalent purpose, or similar purpose, unless explicitly denied. Therefore, each of the disclosed features is merely an example of a comprehensive series of identical or equal features, unless explicitly denied.
The embodiments according to the present disclosure are not limited to any of the specific configurations in the embodiments described above. The embodiments according to the present disclosure can be extended to all the novel features described in the present disclosure or a combination thereof, or to all the novel methods described in the present disclosure, the processing steps, or a combination thereof.
Number | Date | Country | Kind
---|---|---|---
2021-121955 | Jul. 26, 2021 | JP | national

Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/JP2022/028813 | Jul. 26, 2022 | WO |