WAVE FIELD RECONSTRUCTION METHOD BASED ON OPTICAL PERCEPTION

Information

  • Patent Application
  • Publication Number
    20250191284
  • Date Filed
    February 17, 2025
  • Date Published
    June 12, 2025
Abstract
This invention discloses a wave field reconstruction method via optical perception, including: (1) building a data platform for a virtual wave field; (2) pre-training a domain converter module using color image data of the virtual wave field and real water surface image data captured by a camera; (3) pre-training a depth estimation module using generated paired color image data and depth image data of the virtual wave field; (4) converting the style of a real water surface image into a virtual-water-surface-style image via the domain converter module; (5) outputting the virtual-water-surface-style image as a depth image with the same style and recording the distance between a wave surface sampling point and the camera's optical center; and (6) generating surface point cloud data of a water surface wave field with the camera's optical center as the coordinate origin. The method enables real-time, robust wave field depth reconstruction, ensuring reliability and authenticity.
Description
BACKGROUND
Technical Field

The present invention relates to the technical field of three-dimensional wave field reconstruction, and specifically relates to a wave field reconstruction method based on optical perception.


Description of Related Art

With the development and application of artificial intelligence in equipment, the intelligent systems of unmanned boats are also experiencing a wave of intelligence. Unlike unmanned vehicles and unmanned aerial vehicles, unmanned boats face complex and ever-changing water surface environments. Lacking perception of the surrounding water surface and sea conditions, traditional unmanned boat motion control is very limited in accuracy, and it is difficult to control unmanned boats to complete tasks with high motion-accuracy requirements.


In the related fields of fluid research, previous work on wave field reconstruction has mainly followed three approaches. The first applies special treatment to the water body: the water is dyed with a fluorescent substance and the surface is reconstructed by analyzing the fluorescence intensity of different areas in a captured image, or a known image texture is placed under a transparent water body and the relationship between texture distortion and water surface shape is analyzed based on the refraction principle; such methods are only suitable for laboratory use. The second captures images of the sea surface with a camera and performs numerical calculation in combination with physical equations; however, the computation is too heavy and too slow to achieve real-time reconstruction. The third reconstructs the wave field with a shape-from-shading (SFS) algorithm; however, SFS-based wave reconstruction retains the assumption of orthographic projection, so the problem of projection occlusion in the wave field cannot be adequately solved.


SUMMARY

In view of the above, the present invention provides a wave field reconstruction method based on optical perception. The method can intuitively represent wave information of a water surface wave field; it realizes real-time reconstruction of the depth information of the water surface wave field from water surface images captured by a camera, with a high calculation speed; it places no special requirements on the water body and is highly adaptable; and it takes the problem of projection occlusion of the wave field into account, ensuring the reliability and authenticity of wave field reconstruction.


A specific technical solution of the present invention is a wave field reconstruction method based on optical perception, which includes the following steps:

    • step 1, building a data generation platform for a virtual wave field, simulating optical features of a water surface through using composite illumination models, and generating paired color image data and depth image data of the virtual wave field;
    • step 2, constructing a domain converter module, and pre-training the constructed domain converter module through taking the color image data of the virtual wave field, and real water surface image data captured by a camera as a training set;
    • step 3: constructing a depth estimation module, and pre-training the constructed depth estimation module through taking the generated paired color image data and depth image data of the virtual wave field as a training set;
    • step 4: capturing a real water surface image, and converting a style of the real water surface image into an image with a virtual-water-surface style through using the domain converter module pre-trained in the step 2;
    • step 5, outputting the image with the virtual-water-surface style as a depth image with the virtual-water-surface style and recording a distance between a wave surface sampling point and an optical center of the camera through using the depth estimation module; and
    • step 6: carrying out coordinate mapping on the depth image with the virtual-water-surface style through adopting a point cloud mapping algorithm, and generating surface point cloud data of a water surface wave field that takes the optical center of the camera as a coordinate origin.


Further, the data generation platform for the virtual wave field in the step 1 is built through adopting a Gerstner wave model, and the Gerstner wave model is shown in a formula (I) below,









$$
\begin{cases}
x = x_0 + \displaystyle\sum_{i=1}^{n} Q_i \cos\theta_i \, A_i \sin\!\big[k_i\,(x_0\cos\theta_i + y_0\sin\theta_i) + \omega_i t + \varphi_i\big] \\[2ex]
y = y_0 + \displaystyle\sum_{i=1}^{n} Q_i \sin\theta_i \, A_i \sin\!\big[k_i\,(x_0\cos\theta_i + y_0\sin\theta_i) + \omega_i t + \varphi_i\big] \\[2ex]
z = z_0 + \displaystyle\sum_{i=1}^{n} A_i \cos\!\big[k_i\,(x_0\cos\theta_i + y_0\sin\theta_i) + \omega_i t + \varphi_i\big]
\end{cases}
\tag{I}
$$







wherein t is time, (x, y, z) is the position at the time t, (x0, y0, z0) is the position at rest, Qi is the sharpness of the wave peak of the ith superposed wave and has a value range of [0, 1/(ki Ai)], Ai is the amplitude of the ith superposed wave, ki is the wave number of the ith superposed wave, θi is the direction angle of the ith superposed wave, ωi is the angular frequency of the ith superposed wave, and φi is the initial phase of the ith superposed wave.
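As a purely illustrative aid, the following minimal Python (NumPy) sketch evaluates the Gerstner superposition of formula (I) over a grid of rest positions. The function name, the two-component parameter set, and the grid size are hypothetical choices for demonstration and are not taken from the present invention; the example values respect the constraint Qi ≤ 1/(ki Ai).

```python
import numpy as np

def gerstner_surface(x0, y0, z0, Q, A, k, theta, omega, phi, t):
    """Evaluate formula (I): displace rest positions (x0, y0, z0) by the sum of
    n Gerstner wave components at time t. All parameter arrays have length n."""
    # Common phase term: k_i (x0 cos(theta_i) + y0 sin(theta_i)) + omega_i t + phi_i
    phase = (k[:, None, None]
             * (x0[None] * np.cos(theta)[:, None, None]
                + y0[None] * np.sin(theta)[:, None, None])
             + omega[:, None, None] * t + phi[:, None, None])
    x = x0 + np.sum(Q[:, None, None] * np.cos(theta)[:, None, None]
                    * A[:, None, None] * np.sin(phase), axis=0)
    y = y0 + np.sum(Q[:, None, None] * np.sin(theta)[:, None, None]
                    * A[:, None, None] * np.sin(phase), axis=0)
    z = z0 + np.sum(A[:, None, None] * np.cos(phase), axis=0)
    return x, y, z

# Hypothetical example: two superposed components over a 4 m x 4 m patch of still water.
x0, y0 = np.meshgrid(np.linspace(0, 4, 64), np.linspace(0, 4, 64))
z0 = np.zeros_like(x0)
Q     = np.array([1.0, 1.0])     # peak sharpness, within [0, 1/(k_i A_i)]
A     = np.array([0.10, 0.05])   # amplitudes
k     = np.array([2.0, 4.0])     # wave numbers
theta = np.array([0.0, 1.2])     # direction angles
omega = np.array([3.1, 4.4])     # angular frequencies
phi   = np.array([0.0, 0.5])     # initial phases
x, y, z = gerstner_surface(x0, y0, z0, Q, A, k, theta, omega, phi, t=0.0)
```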


Further, the composite illumination models in the step 1 are a diffuse reflection model, a specular reflection model, and a sub-surface scattering model which are shown in a formula (II), a formula (III), and a formula (IV) below respectively,










$$
I_d = k_d \, I_{in} \cos\alpha
\tag{II}
$$







wherein Id is observed color brightness, kd is a diffuse reflectance coefficient, Iin is brightness of incident light, and α is an included angle between a plane normal and the incident light;










$$
I_m = k_m \, I_{in} \big(\cos(\phi - \theta)\big)^{n}
\tag{III}
$$







wherein Im is observed color brightness, km is a specular reflection coefficient, Iin is the brightness of the incident light, ϕ is an included angle between the plane normal and the incident light, θ is an included angle between the plane normal and an observation line of sight, and n is a highlight coefficient; and









$$
\begin{cases}
L_o(p_o, r_o) = \displaystyle\iint S(p_o, r_o, p_i, r_i)\, L_i(p_i, r_i)\, \lvert \cos\varphi_i \rvert \, \mathrm{d}r_i \, \mathrm{d}A \\[2ex]
S(p_o, r_o, p_i, r_i) = \dfrac{1}{\pi}\, F_t(\eta_o, r_o)\, R_d\big(\lVert p_i - p_o \rVert\big)\, F_t(\eta_i, r_i)
\end{cases}
\tag{IV}
$$







where Li is the radiance of the incident light, Lo is the radiance of the emergent light, pi is the contact point between the incident light and the object, po is the contact point between the emergent light and the object, ri is the direction vector of the incident light, ro is the direction vector of the emergent light, φi is the included angle between the incident light and the normal vector of the micro-plane at the contact point, S(po, ro, pi, ri) is the bidirectional sub-surface reflection distribution function, Ft is the Fresnel coefficient, Rd is the diffusion approximation function, and d is a distance.
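For illustration only, the sketch below evaluates the diffuse term of formula (II) and the specular term of formula (III) for a single surface point; the sub-surface scattering term of formula (IV) is omitted here because it requires integrating over the surface and is normally handled by the rendering engine. The coefficient values and the function name are assumptions chosen for demonstration.

```python
import numpy as np

def shade_point(normal, light_dir, view_dir, I_in, k_d=0.6, k_m=0.3, n_gloss=32):
    """Per-point brightness from the diffuse term (II) plus the specular term (III).
    normal, light_dir, view_dir are unit vectors; I_in is the incident brightness."""
    cos_alpha = max(np.dot(normal, light_dir), 0.0)               # angle between normal and incident light
    I_d = k_d * I_in * cos_alpha                                   # formula (II)
    phi = np.arccos(np.clip(np.dot(normal, light_dir), -1, 1))    # normal vs. incident light
    theta = np.arccos(np.clip(np.dot(normal, view_dir), -1, 1))   # normal vs. observation line of sight
    I_m = k_m * I_in * max(np.cos(phi - theta), 0.0) ** n_gloss   # formula (III)
    return I_d + I_m
```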


Further, a process of generating paired color image data and depth image data of the wave field in the step 1 is as follows:

    • step a, generating a two-dimensional image of the virtual wave field through adopting the data generation platform for the virtual wave field, and meanwhile writing the minimum depth value at each pixel into a depth cache;
    • step b, according to the depth values of the two-dimensional image of the virtual wave field, rendering each pixel with the color of the surface point closest to the camera, so as to generate the required color image of the virtual wave field; and
    • step c, carrying out inverse perspective transformation on the values in the depth cache, converting the image coordinate system into the world coordinate system to obtain real depth values, and generating the required depth image of the virtual wave field.
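As a rough sketch of steps a to c, the code below keeps the minimum depth per pixel in a software depth cache and renders each pixel with the color of the closest sample, then converts a normalized depth-buffer value back to a metric depth. The depth-encoding convention in linearize_depth is one common choice and is an assumption; in the described platform these operations are carried out inside the Unity rendering pipeline.

```python
import numpy as np

def render_pair(pixels_uv, colors, depths, width, height):
    """Minimal z-buffer sketch of steps a-b: for projected wave-surface samples
    (pixel coordinates, colors, camera-space depths), keep at each pixel the sample
    closest to the camera, producing a paired color image and depth cache."""
    depth_cache = np.full((height, width), np.inf)   # step a: per-pixel minimum depth
    color_image = np.zeros((height, width, 3))
    for (u, v), c, d in zip(pixels_uv, colors, depths):
        if 0 <= u < width and 0 <= v < height and d < depth_cache[v, u]:
            depth_cache[v, u] = d                     # write the minimum depth
            color_image[v, u] = c                     # step b: color of the closest point
    return color_image, depth_cache

def linearize_depth(z_buffer, near, far):
    """Step c sketch: convert a normalized non-linear depth-buffer value in [0, 1]
    back to a metric eye-space depth via inverse perspective (one common convention;
    the exact mapping depends on the engine's projection setup)."""
    return near * far / (far - z_buffer * (far - near))
```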


Further, a specific process of the point cloud mapping algorithm in the step 6 is as follows:

    • step A, translating the origin of the pixel coordinate system from the top left corner of the image to the center of the image, and then multiplying by conversion factors to convert the pixel coordinate system into the image coordinate system, as shown in a formula (V) below,









$$
\begin{cases}
x = r_x \cdot \left(u - \dfrac{w}{2}\right) \\[2ex]
y = r_y \cdot \left(v - \dfrac{h}{2}\right)
\end{cases}
\tag{V}
$$







wherein u-v is the pixel coordinate system, whose origin is the top left corner of the image; w and h are the width and the height of the image in pixels, respectively; o-xy is the image coordinate system, whose origin is the center point of the image; and rx and ry are conversion factors that represent the absolute distances covered by one transversal pixel and one longitudinal pixel of the image, respectively, and are calculated as shown in a formula (VI) below,









$$
\begin{cases}
r_x = \dfrac{2 f \tan\!\left(\dfrac{\beta}{2}\right)}{w} \\[3ex]
r_y = \dfrac{2 f \tan\!\left(\dfrac{\alpha}{2}\right)}{h}
\end{cases}
\tag{VI}
$$







wherein α is the opening angle of the camera's viewing frustum in the vertical direction, β is the opening angle of the camera's viewing frustum in the horizontal direction, and f is the focal length of the camera;

    • step B, normalizing a vector (x, y, f) pointing from the optical center of the camera to the pixel point according to a formula (VII) below to obtain (xe, ye, ze) which represents a direction pointing from the optical center of the camera to a point in the image, and the direction is also a direction pointing from the optical center of the camera to a real point in the world coordinate system:









$$
\begin{cases}
x_e = \dfrac{x}{\sqrt{x^2 + y^2 + f^2}} \\[2.5ex]
y_e = \dfrac{y}{\sqrt{x^2 + y^2 + f^2}} \\[2.5ex]
z_e = \dfrac{f}{\sqrt{x^2 + y^2 + f^2}}
\end{cases}
\tag{VII}
$$









    • step C, multiplying the vector (xe, ye, ze) by the distance, stored in the depth image, between the optical center of the camera and the corresponding point in the world coordinate system, so as to obtain the position of the point in the camera coordinate system with the optical center of the camera as the origin, as shown in a formula (VIII) below,












$$
\begin{cases}
x_C = d(u, v) \cdot x_e \\[1ex]
y_C = d(u, v) \cdot y_e \\[1ex]
z_C = d(u, v) \cdot z_e
\end{cases}
\tag{VIII}
$$







The advantages or beneficial effects of the present invention are that:

    • 1) by introducing virtual wave field data to obtain training data, and by using unsupervised learning to connect virtual data and real data, a technical framework for water surface wave field perception is proposed that is feasible and implementable with existing technological means;
    • 2) to address the difficulty of obtaining ground-truth values of water surface wave field data, a large amount of virtual wave field data can be generated for training: Gerstner wave superposition simulates the shape of the wave field, a composite of multiple illumination models simulates its optical features, and rendering pipelines generate the required color image data and depth image data, thereby providing the training dataset;
    • 3) the relationship between the color image and the depth image is regarded as an image translation problem, which avoids the "edge blur" caused by manually designed loss functions in traditional monocular depth estimation;
    • 4) the problem of projection occlusion of the wave field is considered when the built data generation platform generates the images of the virtual wave field, so that the reliability and authenticity of wave field reconstruction are ensured; and
    • 5) the method places no special requirements on the water body and is highly adaptable; moreover, wave field reconstruction runs at the millisecond level, so real-time reconstruction can be carried out in outdoor water areas.


To make the aforementioned more comprehensible, several embodiments accompanied with drawings are described in detail as follows.





BRIEF DESCRIPTION OF THE DRAWINGS

The accompanying drawings are included to provide a further understanding of the disclosure, and are incorporated in and constitute a part of this specification. The drawings illustrate exemplary embodiments of the disclosure and, together with the description, serve to explain the principles of the disclosure.



FIG. 1 is a schematic flow diagram of a wave field reconstruction method based on optical perception according to the present invention;



FIG. 2 is a schematic diagram of Gerstner wave superposition simulation under different wave conditions;



FIG. 3 is an effect diagram of a wave field added with a diffuse reflection model, a specular reflection model, and a sub-surface scattering model;



FIG. 4A and FIG. 4B are rendered images of the virtual wave field, where FIG. 4A is a color image of the virtual wave field, and FIG. 4B is a depth image of the virtual wave field;



FIG. 5 is a schematic diagram of coordinate mapping of a camera;



FIG. 6A and FIG. 6B are schematic diagrams of input and output of a domain converter module, where FIG. 6A is the input of the domain converter module, that is, a real water surface image, and FIG. 6B is the output of the domain converter module, that is, an image with a virtual-water-surface style;



FIG. 7A and FIG. 7B are schematic diagrams of input and output of a depth estimation module, where FIG. 7A is the input of the depth estimation module, that is, an image with a virtual-water-surface style, and FIG. 7B is the output of the depth estimation module, that is, a depth image with a virtual-water-surface style; and



FIG. 8A to FIG. 8C are schematic diagrams of input data and output data of a point cloud mapping algorithm, where FIG. 8A and FIG. 8B are the input of a point cloud mapping module, that is, an image with a virtual-water-surface style and a depth image with a virtual-water-surface style, and FIG. 8C is the output of the point cloud mapping module, that is, a point cloud data image of the wave field.





DESCRIPTION OF THE EMBODIMENTS

The technical solution of the present invention will be further described below in combination with the drawings of the specification.


As shown in FIG. 1, the specific steps of the wave field reconstruction method based on optical perception of the present invention are as follows:

    • step 1, building a data generation platform for a virtual wave field. A shape model and an optical model of the virtual wave field are built respectively by using the Unity development platform, and images of the virtual wave field are generated by means of the Unity engine. Unity is a real-time 3D interactive content creation and operation platform that provides a comprehensive suite of software solutions for fields including game development, art, architecture, automobile design, and film and television, and may be used for creating, operating, and monetizing any real-time interactive 2D and 3D content. The shape of the wave field is simulated by using Gerstner wave superposition to obtain the height and normal information of each point; the optical features of the water surface are simulated by using composite illumination models; the color brightness of each point is calculated from the Gerstner wave output information; and color image rendering is carried out according to the brightness information to generate paired color image data and depth image data of the virtual wave field. The specific steps are as follows:
    • value ranges of the model parameters are Qi ∈ [0, 1], ki ∈ [0, 512], θi ∈ [0, 2π], and ωi ∈ [0, 20]. As shown in FIG. 2, in the example, n is taken to be equal to 6, that is, six groups of waveforms are superposed, where the values of Qi are all 1, the values of t and φi are all 0, the values of θi are 0.187, 0.867, 1.590, 0.006, 2.625, and 1.328 respectively, the values of Ai are 0.1, 0.2, 0.5, 0.01, 0.01, and 0.01 respectively, and the values of ki are 0.125, 0.25, 0.5, 1, 2, and 4 respectively; different Gerstner wave superpositions are generated through permutation and combination to simulate the shape of the wave field, where the Gerstner wave formula is as follows:









$$
\begin{cases}
x = x_0 + \displaystyle\sum_{i=1}^{n} Q_i \cos\theta_i \, A_i \sin\!\big[k_i\,(x_0\cos\theta_i + y_0\sin\theta_i) + \omega_i t + \varphi_i\big] \\[2ex]
y = y_0 + \displaystyle\sum_{i=1}^{n} Q_i \sin\theta_i \, A_i \sin\!\big[k_i\,(x_0\cos\theta_i + y_0\sin\theta_i) + \omega_i t + \varphi_i\big] \\[2ex]
z = z_0 + \displaystyle\sum_{i=1}^{n} A_i \cos\!\big[k_i\,(x_0\cos\theta_i + y_0\sin\theta_i) + \omega_i t + \varphi_i\big]
\end{cases}
\tag{I}
$$







where t is time, (x, y, z) is the position at the time t, (x0, y0, z0) is the position at rest, Qi is the sharpness of the wave peak of the ith superposed wave and has a value range of [0, 1/(ki Ai)], Ai is the amplitude of the ith superposed wave, ki is the wave number of the ith superposed wave, θi is the direction angle of the ith superposed wave, ωi is the angular frequency of the ith superposed wave, and φi is the initial phase of the ith superposed wave;


as shown in FIG. 3, the composite illumination models are a diffuse reflection model, a specular reflection model, and a sub-surface scattering model which are shown in a formula (II), a formula (III), and a formula (IV) below respectively,










$$
I_d = k_d \, I_{in} \cos\alpha
\tag{II}
$$







wherein Id is observed color brightness, kd is a diffuse reflectance coefficient, Iin is brightness of incident light, and α is an included angle between a plane normal and the incident light;










$$
I_m = k_m \, I_{in} \big(\cos(\phi - \theta)\big)^{n}
\tag{III}
$$







where Im is observed color brightness, km is a specular reflection coefficient, Iin is the brightness of the incident light, ϕ is an included angle between the plane normal and the incident light, θ is an included angle between the plane normal and an observation line of sight, and n is a highlight coefficient; and









$$
\begin{cases}
L_o(p_o, r_o) = \displaystyle\iint S(p_o, r_o, p_i, r_i)\, L_i(p_i, r_i)\, \lvert \cos\varphi_i \rvert \, \mathrm{d}r_i \, \mathrm{d}A \\[2ex]
S(p_o, r_o, p_i, r_i) = \dfrac{1}{\pi}\, F_t(\eta_o, r_o)\, R_d\big(\lVert p_i - p_o \rVert\big)\, F_t(\eta_i, r_i)
\end{cases}
\tag{IV}
$$







where Li is the radiance of the incident light, Lo is the radiance of the emergent light, pi is the contact point between the incident light and the object, po is the contact point between the emergent light and the object, ri is the direction vector of the incident light, ro is the direction vector of the emergent light, φi is the included angle between the incident light and the normal vector of the micro-plane at the contact point, S(po, ro, pi, ri) is the bidirectional sub-surface reflection distribution function, Ft is the Fresnel coefficient, Rd is the diffusion approximation function, and d is a distance.


A final brightness value of the point is obtained through superposing the color brightness values calculated by the three groups of illumination models. A process of generating paired color image data and depth image data of the wave field is as follows:

    • step a, generating a two-dimensional image of the virtual wave field through adopting the data generation platform for the virtual wave field, and meanwhile writing the minimum depth value at each pixel into a depth cache;
    • step b, according to the depth values of the two-dimensional image of the virtual wave field, carrying out the rendering process by means of a Unity rendering function and, according to the calculation results of the composite illumination models, rendering each pixel with the color of the surface point closest to the camera, so as to generate the required color image of the virtual wave field, as shown in FIG. 4A; and
    • step c, carrying out inverse perspective transformation on the values in the depth cache, converting the image coordinate system into the world coordinate system to obtain real depth values, and generating the required depth image of the virtual wave field, as shown in FIG. 4B.
    • Step 2, constructing a domain converter module and carrying out pre-training. The domain converter module applies image style conversion, a technique that converts an image into a new image with the same content but a different style, and usually adopts DualGAN, CycleGAN, Pix2Pix, or SSIM-GAN models. The method of the present invention adopts an existing CycleGAN implementation in PyTorch. The real water surface images captured by the camera and the color images of the virtual wave field generated in the step 1 are adopted as the training set, and style transfer from the real water surface image to the virtual water surface image is learned. The network is composed of a generator I and a discriminator I. In the example, the generator I consists of an encoder composed of two convolutional layers, a converter composed of nine ResNet blocks, and a decoder composed of transposed convolutional layers and one convolutional layer; the discriminator I adopts a PatchGAN network and consists of a feature extractor I composed of four convolutional layers and a classifier I composed of one convolutional layer (a minimal architecture sketch is given after the step-by-step description below).
    • Step 3, constructing a depth estimation module and carrying out pre-training. Depth estimation is a computer vision task aimed at estimating depth from a 2D image: an RGB image is input and a depth image is output, where the depth image contains information about the distance from the viewpoint, usually the capturing camera, to the objects in the image. An existing Pix2Pix implementation in PyTorch is adopted. The paired color image data and depth image data of the virtual wave field generated in the step 1 are adopted as the training set, and the transformation from the color image of the wave field to the depth image is learned. The model is composed of a generator II and a discriminator II. In the example, the generator II adopts a U-Net model composed of four convolutional and transposed convolutional layers, with skip connections added between the symmetric layers; the discriminator II adopts a PatchGAN network and consists of a feature extractor II composed of four convolutional layers and a classifier II composed of one convolutional layer (see the sketch after the step-by-step description below).
    • Step 4, as shown in FIG. 6A and FIG. 6B, capturing a real water surface image, and converting the real water surface image into an image with a virtual-water-surface style through using the domain converter module.
    • Step 5, as shown in FIG. 7A and FIG. 7B, inputting the converted image with the virtual-water-surface style, outputting the depth image with the virtual-water-surface style and recording a distance between a wave surface sampling point and an optical center of the camera through using the depth estimation module.
    • Step 6, as shown in FIG. 8A to FIG. 8C, carrying out coordinate mapping on the depth image with the virtual-water-surface style through adopting a point cloud mapping algorithm module, where a schematic diagram of coordinate mapping of the camera is shown in FIG. 5, and generating surface point cloud data of the water surface wave field that takes the optical center of the camera as the coordinate origin, where specific steps are as follows:
    • step A, translating the origin of the pixel coordinate system from the top left corner of the image to the center of the image, and then multiplying by conversion factors to convert the pixel coordinate system into the image coordinate system, as shown in a formula (V) below,









$$
\begin{cases}
x = r_x \cdot \left(u - \dfrac{w}{2}\right) \\[2ex]
y = r_y \cdot \left(v - \dfrac{h}{2}\right)
\end{cases}
\tag{V}
$$







wherein u-v is the pixel coordinate system, whose origin is the top left corner of the image; w and h are the width and the height of the image in pixels, respectively; o-xy is the image coordinate system, whose origin is the center point of the image; and rx and ry are conversion factors that represent the absolute distances covered by one transversal pixel and one longitudinal pixel of the image, respectively, and are calculated as shown in a formula (VI) below,









$$
\begin{cases}
r_x = \dfrac{2 f \tan\!\left(\dfrac{\beta}{2}\right)}{w} \\[3ex]
r_y = \dfrac{2 f \tan\!\left(\dfrac{\alpha}{2}\right)}{h}
\end{cases}
\tag{VI}
$$







wherein α is the opening angle of the camera's viewing frustum in the vertical direction, β is the opening angle of the camera's viewing frustum in the horizontal direction, and f is the focal length of the camera;

    • step B, normalizing a vector (x, y, f) pointing from the optical center of the camera to the pixel point according to a formula (VII) below to obtain (xe, ye, ze) which represents a direction pointing from the optical center of the camera to a point in the image, and the direction is also a direction pointing from the optical center of the camera to a real point in the world coordinate system:









$$
\begin{cases}
x_e = \dfrac{x}{\sqrt{x^2 + y^2 + f^2}} \\[2.5ex]
y_e = \dfrac{y}{\sqrt{x^2 + y^2 + f^2}} \\[2.5ex]
z_e = \dfrac{f}{\sqrt{x^2 + y^2 + f^2}}
\end{cases}
\tag{VII}
$$









    • step C, multiplying the vector (xe, ye, ze) by the distance, stored in the depth image, between the optical center of the camera and the corresponding point in the world coordinate system, so as to obtain the position of the point in the camera coordinate system with the optical center of the camera as the origin, as shown in a formula (VIII) below,












$$
\begin{cases}
x_C = d(u, v) \cdot x_e \\[1ex]
y_C = d(u, v) \cdot y_e \\[1ex]
z_C = d(u, v) \cdot z_e
\end{cases}
\tag{VIII}
$$







Via the above steps, three-dimensional reconstruction of waves can be realized from the water surface images captured by the camera, and the point cloud data of the water surface wave field is obtained.
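To make the mapping of formulas (V) to (VIII) concrete, the following minimal Python (NumPy) sketch converts a depth image, assumed to store the metric distance d(u, v) from the camera optical center to each wave surface point, into a point cloud in the camera coordinate system. The function name and the example camera parameters are hypothetical and chosen only for illustration.

```python
import numpy as np

def depth_image_to_point_cloud(depth, f, alpha, beta):
    """Sketch of formulas (V)-(VIII): map a depth image d(u, v) to a point cloud in the
    camera coordinate system (optical center as origin). f is the focal length;
    alpha and beta are the vertical and horizontal frustum opening angles."""
    h, w = depth.shape
    # Formula (VI): physical distance covered by one horizontal / vertical pixel.
    r_x = 2.0 * f * np.tan(beta / 2.0) / w
    r_y = 2.0 * f * np.tan(alpha / 2.0) / h
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    # Formula (V): pixel coordinates -> image coordinates (origin at the image center).
    x = r_x * (u - w / 2.0)
    y = r_y * (v - h / 2.0)
    # Formula (VII): unit direction from the optical center through each pixel.
    norm = np.sqrt(x**2 + y**2 + f**2)
    x_e, y_e, z_e = x / norm, y / norm, f / norm
    # Formula (VIII): scale each direction by the stored distance d(u, v).
    points = np.stack([depth * x_e, depth * y_e, depth * z_e], axis=-1)
    return points.reshape(-1, 3)

# Hypothetical usage with a constant 480x640 depth image and assumed camera parameters.
cloud = depth_image_to_point_cloud(np.full((480, 640), 5.0),
                                   f=0.008, alpha=np.radians(45), beta=np.radians(60))
```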
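For reference, below is a minimal PyTorch sketch of the network structures described in the step 2 and the step 3 above: a generator with a two-layer convolutional encoder, nine ResNet blocks, and a transposed-convolution decoder ending in one convolutional layer, and a PatchGAN discriminator with a four-layer feature extractor and a one-layer classifier. Channel widths, kernel sizes, strides, and normalization layers are assumptions chosen for illustration only; the U-Net generator with skip connections used for the depth estimation module in the example is not reproduced here.

```python
import torch
import torch.nn as nn

class ResnetBlock(nn.Module):
    """One residual block of the converter."""
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.InstanceNorm2d(ch), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.InstanceNorm2d(ch))
    def forward(self, x):
        return x + self.body(x)

class Generator(nn.Module):
    """Generator skeleton: an encoder of two convolutional layers, a converter of nine
    ResNet blocks, and a decoder of transposed convolutions plus one convolutional layer."""
    def __init__(self, in_ch=3, out_ch=3, base=64, n_blocks=9):
        super().__init__()
        self.net = nn.Sequential(
            # encoder: two convolutional layers
            nn.Conv2d(in_ch, base, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(base, base * 2, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            # converter: nine ResNet blocks
            *[ResnetBlock(base * 2) for _ in range(n_blocks)],
            # decoder: transposed convolutions followed by one convolutional layer
            nn.ConvTranspose2d(base * 2, base, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(base, base, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(base, out_ch, 7, padding=3), nn.Tanh())
    def forward(self, x):
        return self.net(x)

class PatchDiscriminator(nn.Module):
    """PatchGAN discriminator: four convolutional layers as the feature extractor and
    one convolutional layer as the classifier, producing a per-patch real/fake score map."""
    def __init__(self, in_ch=3, base=64):
        super().__init__()
        layers, ch = [], in_ch
        for mult in (1, 2, 4, 8):                  # feature extractor: four convolutional layers
            layers += [nn.Conv2d(ch, base * mult, 4, stride=2, padding=1),
                       nn.LeakyReLU(0.2, inplace=True)]
            ch = base * mult
        layers += [nn.Conv2d(ch, 1, 4, padding=1)]  # classifier: one convolutional layer
        self.net = nn.Sequential(*layers)
    def forward(self, x):
        return self.net(x)
```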


Although the present invention has been disclosed above as the preferred examples, the examples are not intended to limit the present invention. Any equivalent variations or embellishments made without departing from the spirit and scope of the present invention are also within the protection scope of the present invention. Therefore, the protection scope of the present invention should be based on the contents defined by the claims of the present invention.

Claims
  • 1. A wave field reconstruction method based on optical perception, comprising the following steps: step 1, building a data generation platform for a virtual wave field, simulating optical features of a water surface through using composite illumination models, and generating paired color image data and depth image data of the virtual wave field; step 2, constructing a domain converter module, and pre-training the constructed domain converter module through taking the color image data of the virtual wave field, and real water surface image data captured by a camera as a training set; step 3: constructing a depth estimation module, and pre-training the constructed depth estimation module through taking the generated paired color image data and depth image data of the virtual wave field as a training set; step 4: capturing a real water surface image, and converting a style of the real water surface image into an image with a virtual-water-surface style through using the domain converter module pre-trained in the step 2; step 5, outputting the image with the virtual-water-surface style as a depth image with the virtual-water-surface style and recording a distance between a wave surface sampling point and an optical center of the camera through using the depth estimation module; and step 6: carrying out coordinate mapping on the depth image with the virtual-water-surface style through adopting a point cloud mapping algorithm, and generating surface point cloud data that takes the optical center of the camera as a coordinate origin, of a water surface wave field.
  • 2. The wave field reconstruction method based on the optical perception according to claim 1, wherein the data generation platform for the virtual wave field in the step 1 is built through adopting a Gerstner wave model, and the Gerstner wave model is shown in a formula (I) below,
  • 3. The wave field reconstruction method based on the optical perception according to claim 2, wherein the composite illumination models in the step 1 are a diffuse reflection model, a specular reflection model, and a sub-surface scattering model which are shown in a formula (II), a formula (III), and a formula (IV) below respectively,
  • 4. The wave field reconstruction method based on the optical perception according to claim 3, wherein a process of generating paired color image data and depth image data of the wave field in the step 1 is as follows: step a, generating a two-dimensional image of the virtual wave field through adopting the data generation platform for the virtual wave field, and meanwhile, writing the minimum value of a depth in the same pixel into a depth cache; step b, according to the depth value of the two-dimensional image of the virtual wave field, selecting a color closest to the camera to sequentially render the pixels, so as to generate a required color image of the virtual wave field; and step c, carrying out inverse perspective transformation on values in the depth cache, converting an image coordinate system into a world coordinate system to obtain a real depth value, and generating a required depth image of the virtual wave field.
  • 5. The wave field reconstruction method based on the optical perception according to claim 1, wherein a specific process of the point cloud mapping algorithm in the step 6 is as follows: step A, through translating a pixel coordinate system, translating the center point of the coordinate system from the top left corner of the image to the center of the image, and then multiplying by a conversion factor to convert the pixel coordinate system to the image coordinate system, as shown in a formula (V) below,
Priority Claims (1)
  • Number: 202310061919.0; Date: Jan 2023; Country: CN; Kind: national
CROSS-REFERENCE TO RELATED APPLICATION

This application is a national stage of International Application No. PCT/CN2023/126753, filed on Oct. 26, 2023, which claims priority to Chinese Patent Application No. 202310061919.0, filed on Jan. 19, 2023. Both of the aforementioned applications are hereby incorporated by reference in their entireties.

Continuations (1)
  • Parent: PCT/CN2023/126753, Oct 2023, WO
  • Child: 19055483, US