The present disclosure relates to a sensing device, and more particularly, to a sensing device for generating and fusing three-dimensional information.
Virtual reality (VR) and augmented reality (AR) technologies have developed rapidly and are widely used in daily life. Three-dimensional (3D) sensing of target objects is an indispensable core technology of VR and AR. In order to establish a more accurate 3D image and 3D model of the target object, so that the target object may be realistically presented and achieve better visual effects in VR and AR, it is necessary to obtain 3D information with high resolution (corresponding to a high sampling rate) and high depth precision. That is, the 3D information of the target object must have both high resolution on the projection plane and high precision in depth.
Various existing 3D sensing technologies have their own advantages. Some provide high resolution on the projection plane, while others provide high precision in depth. To achieve both high resolution and high depth precision, two or more 3D sensing technologies must be fused. However, existing fusion techniques can only generate fused information in units of an “object” (i.e., a target object), and the fineness of the fused results needs to be improved.
In view of the above-mentioned technical problems of the prior art, those skilled in the art are devoted to improving 3D sensing fusion technology, so that the fused information may have both high resolution and high depth precision, thereby achieving fineness in units of a “pixel” or a “point”.
According to an aspect of the present disclosure, a sensing device is provided. The sensing device includes a first sensor, a second sensor and a computing unit. The first sensor is for generating a plurality of first depth information, wherein the first depth information has a first sampling rate and a first precision. The second sensor is for generating a plurality of second depth information, wherein the second depth information has a second sampling rate and a second precision, the second sampling rate is greater than the first sampling rate, and the second precision is less than the first precision. The computing unit is configured to perform a fusion operation according to the first depth information and the second depth information to obtain a fused depth information, the fused depth information has the first precision and the second sampling rate.
In an example of the present disclosure, the first depth information corresponds to a plurality of first coordinate positions of a projection plane, and the second depth information corresponds to a plurality of second coordinate positions of the projection plane, the fusion operation performed by the computing unit comprises mapping an original coordinate position of the second coordinate positions to a mapped coordinate position, and the mapped coordinate position is located among the first coordinate positions, selecting a plurality of engaging coordinate positions from the first coordinate positions according to the mapped coordinate position, and performing a weighting operation according to the first depth information corresponding to each of the engaging coordinate positions and the second depth information corresponding to the original coordinate position to obtain the fused depth information, and the fused depth information corresponds to the mapped coordinate position.
In an example of the present disclosure, the engaging coordinate positions are adjacent to the mapped coordinate position, the first depth information corresponding to each of the engaging coordinate positions has a weight value, and the computing unit performs the weighting operation at least according to the first depth information corresponding to each of the engaging coordinate positions and the weight value.
In an example of the present disclosure, the weight value is an error weight, and the error weight is related to an absolute error value between the first depth information corresponding to each of the engaging coordinate positions and the second depth information corresponding to the original coordinate position.
In an example of the present disclosure, the weight value is a distance weight, and the distance weight is related to a length of a relative distance between each of the engaging coordinate positions and the mapped coordinate position.
In an example of the present disclosure, the weight value is an area weight, and the area weight is related to a size of a relative area between each of the engaging coordinate positions and the mapped coordinate position.
In an example of the present disclosure, the first depth information corresponding to the engaging coordinate positions has a first confidence weight, and the second depth information corresponding to the original coordinate position has a second confidence weight, the computing unit performs the weighting operation according to the first depth information corresponding to the engaging coordinate positions, the first confidence weight, the second depth information corresponding to the original coordinate position and the second confidence weight.
In an example of the present disclosure, the computing unit is further configured to calculate a basis value of the second depth information, calculate an offset value of each of the second depth information with respect to the basis value, and correct the fused depth information according to the offset values of the second depth information.
In an example of the present disclosure, the first sensor and the second sensor have a plurality of intrinsic parameters and a plurality of extrinsic parameters, and the computing unit maps the original coordinate position to the mapped coordinate position according to the intrinsic parameters and/or the extrinsic parameters.
In an example of the present disclosure, the first sensor is a radar sensor or a depth sensor using time-of-flight (ToF), and the second sensor is a color sensor, a Lidar sensor or a stereoscopic sensor.
According to another aspect of the present disclosure, a sensing device is provided. The sensing device includes a first sensor, a second sensor and a computing unit. The first sensor is for generating a plurality of first depth information, wherein the first depth information has a first sampling rate and a first precision. The second sensor is for generating a plurality of pixels and a plurality of image values, the image values respectively correspond to the pixels, wherein the pixels have a resolution, and a sampling rate corresponding to the resolution is greater than the first sampling rate. The computing unit is configured to perform a fusion operation according to the first depth information and the image values to obtain a fused depth information, the fused depth information has the first precision, and sampling rate of the fused depth information is substantially equal to the sampling rate corresponding to the resolution of the pixels.
In an example of the present disclosure, the pixels generated by the second sensor form an image, and the fusion operation performed by the computing unit comprises establishing a sampling coordinate position among the first coordinate positions, mapping the first coordinate positions to a plurality of main mapped coordinate positions, the main mapped coordinate positions are located in the image, mapping the sampling coordinate position to a sampling mapped coordinate position, the sampling mapped coordinate position is located in the image, and performing a weighting operation according to the first depth information corresponding to each of the first coordinate positions, the image value corresponding to each of the main mapped coordinate positions and the image value corresponding to the sampling mapped coordinate position to obtain the fused depth information, the fused depth information corresponds to the sampling coordinate position.
In an example of the present disclosure, the computing unit selects a plurality of adjacent pixels from the pixels, the adjacent pixels are adjacent to the sampling mapped coordinate position and the main mapped coordinate positions, and the computing unit performs an interpolation operation according to the image values corresponding to the adjacent pixels to obtain the image values corresponding to the sampling mapped coordinate positions and the main mapped coordinate positions.
In an example of the present disclosure, each of the first depth information has a weight value, and the computing unit performs the weighting operation at least according to each of the first depth information and the corresponding weight value.
In an example of the present disclosure, the weight value of each of the first depth information is an error weight, and the error weight is related to an image-value-error between the image value corresponding to each of the main mapped coordinate positions and the image value corresponding to the sampling mapped coordinate positions.
According to still another aspect of the present disclosure, a sensing device is provided. The sensing device includes a first sensor and a computing unit. The first sensor is for generating a plurality of first depth information, the first depth information is related to at least one first space, the at least one first space has a plurality of first coordinate positions, the first depth information respectively corresponds to the first coordinate positions. The computing unit is configured to convert the first depth information into a plurality of second depth information, the second depth information is related to a standard space, and the standard space has at least one second coordinate position. When the second depth information point to the same one of the at least one second coordinate position, the computing unit performs a fusion operation according to the second depth information to obtain a fused depth information of the second coordinate position.
In an example of the present disclosure, the first sensor generates the first depth information at different time points, and the standard space corresponds to a real-world coordinate system.
In an example of the present disclosure, the computing unit converts the first depth information into the second depth information according to a space conversion between the at least one first space and the standard space.
In an example of the present disclosure, each of the second depth information has a weight value, and the computing unit performs the fusion operation at least according to each of the second depth information and the corresponding weight value.
In an example of the present disclosure, the weight value of each of the second depth information is a confidence weight, and the confidence weight is related to a confidence level of each of the second depth information.
In the following detailed description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the disclosed embodiments. It will be apparent, however, that one or more embodiments may be practiced without these specific details. In other instances, well-known structures and devices are schematically illustrated in order to simplify the drawing.
The first sensor 100a generates a plurality of first depth information dA-dM related to the target 400. The first sensor 100a is, for example, a radar sensor (i.e., the corresponding second sensor 200a is a Lidar sensor), a depth sensor using time of flight (ToF) (i.e., the corresponding second sensor 200a is a color sensor or stereoscopic sensor, etc.), and so on. The depth sensor with ToF may utilize direct-ToF (dToF) or indirect-ToF (iToF). The first depth information dA-dM are, for example, absolute or relative depth information corresponding to a normal direction of a projection plane, such as depth map, disparity information, distance information, point cloud information, mesh information, etc. The first sensor 100a of this embodiment generates first depth information dA-dM in the normal direction. The first depth information dA-dM are projected on the first coordinate positions A-M of the projection plane, and the first depth information dA-dM respectively correspond to the first coordinate positions A-M.
The second sensor 200a generates a plurality of second depth information da-dn related to the target 400. The second sensor 200a is, for example, a Lidar sensor (i.e., the corresponding first sensor 100a is a radar sensor), a color sensor, a sensor using monocular-vision or stereoscopic, and so on. The second sensor 200a of this embodiment generates the second depth information da-dn. The second depth information da-dn are projected on the second coordinate positions a-n of the projection plane, and the second depth information da-dn respectively correspond to the second coordinate positions a-n. Regarding the depth precision and sampling rate, the second depth information da-dn generated by the second sensor 200a are different from the first depth information dA-dM generated by the first sensor 100a.
Each of the first depth information dA-dM has a first precision and a first sampling rate. The first precision is defined as the precision of the depth sensing performed by the first sensor 100a on the target 400, that is, the precision of the first depth information dA-dM in the normal direction. The first sampling rate is defined as the sampling rate of the first sensor 100a on the projection plane when sensing the target 400; that is, when the first depth information dA-dM are projected on the projection plane, the first sampling rate is the equivalent sampling rate of the first depth information dA-dM on the projection plane. The first sensor 100a has higher precision in depth sensing, but its sampling rate on the projection plane is lower; therefore, the first precision of the first depth information dA-dM is higher, but the first sampling rate is lower.
On the other hand, the second depth information da-dn generated by the second sensor 200a are related to the normal direction of the projection plane, and the second depth information da-dn are projected on the second coordinate positions a-n of the projection plane. The (x, y, z) 3D coordinates may be calculated according to the second depth information da-dn and the second coordinate positions a-n. Each of the second depth information da-dn has a second precision and a second sampling rate. The second precision is defined as: the precision of the second depth information da-dn in the normal direction. The second sampling rate is defined as: “the equivalent sampling rate” of the second depth information da-dn on the projection plane when the second depth information da-dn are projected on the projection plane. The sampling rate of the second sensor 200a on the projection plane is higher, but the precision of the depth sensing of the second sensor 200a is lower. Therefore, the second sampling rate of the second depth information da-dn is higher but the second precision is lower. The second sampling rate of the second depth information da-dn is greater (or higher) than the first sampling rate of the first depth information dA-dM, but the second precision of the second depth information da-dn is smaller (or lower) than the first precision of the first depth information dA-dM.
The computing unit 300 performs a fusion operation according to the first depth information dA-dM and the second depth information da-dn, which have different precisions and sampling rates, to obtain the fused depth information fD1. The fused depth information fD1 may have both the higher first precision and the higher second sampling rate. Compared with other fusion techniques, which perform the fusion operation in units of an object (e.g., the target 400), the computing unit 300 of the sensing device 1000a of the present disclosure performs the fusion operation in units of individual depth information; for example, one second depth information dp of the second depth information da-dn is used as a unit to perform the fusion operation. The fused depth information obtained by operating on each of the depth information may be integrated into an overall fused depth information fD1.
On the other hand, in addition to generating the second depth information da-dn in the normal direction, the second sensor 200b may also generate the second image IMG2 of the projection plane. The basis plane of the second image IMG2 is substantially the projection plane of the second depth information da-dn. The second image IMG2 includes a plurality of second pixels a″-n″. The positions of the second pixels a″-n″ on the projection plane substantially correspond to the second coordinate positions a-n of the embodiment described above.
The operation of the sensing device 1000b of this embodiment is similar to that of the sensing device 1000a described above.
In the following, the fusion operation is described taking the sensing device 1000a as an example. The computing unit 300 maps an original coordinate position p of the second coordinate positions a-n to a mapped coordinate position pm, and the mapped coordinate position pm is located among the first coordinate positions A-M. The mapping may be performed according to the intrinsic parameters and/or the extrinsic parameters of the first sensor 100a and the second sensor 200a.
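As a non-limiting illustration of this mapping step, the following Python sketch assumes a standard pinhole camera model with intrinsic matrices K1 and K2 and a rigid extrinsic transform (R, t) between the two sensors; the function names and numerical values are illustrative only and are not taken from the present disclosure.

```python
import numpy as np

def map_coordinate(p_uv, depth, K2, K1, R, t):
    """Map an original coordinate position p (pixel (u, v) of the second sensor)
    together with its depth value into the pixel grid of the first sensor.

    Assumes a pinhole model: K1, K2 are 3x3 intrinsic matrices, and (R, t) is the
    extrinsic rotation/translation from the second sensor frame to the first.
    """
    u, v = p_uv
    # Back-project the pixel into a 3D point in the second sensor's frame.
    xyz2 = depth * np.linalg.inv(K2) @ np.array([u, v, 1.0])
    # Transform into the first sensor's frame using the extrinsic parameters.
    xyz1 = R @ xyz2 + t
    # Project onto the first sensor's projection plane.
    uvw = K1 @ xyz1
    return uvw[:2] / uvw[2]   # mapped coordinate position pm (generally off-grid)

# Example: identical intrinsics, 5 cm baseline along x (hypothetical values).
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
pm = map_coordinate((100, 80), depth=2.0, K2=K, K1=K,
                    R=np.eye(3), t=np.array([0.05, 0, 0]))
print(pm)   # lands among the first coordinate positions at a sub-pixel location
```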
Then, the computing unit 300 selects a plurality of “engaging coordinate positions” from the first coordinate positions A-M according to the mapped coordinate position pm, and these engaging coordinate positions are adjacent to the mapped coordinate position pm. In one example, the coordinate values of the mapped coordinate position pm may be rounded down (i.e., truncated) to obtain the first coordinate position A, that is, the first coordinate position A has the shortest distance from the mapped coordinate position pm. Then, the first coordinate position E below the first coordinate position A, the first coordinate position B to the right of the first coordinate position A, and the first coordinate position F to the lower right of the first coordinate position A are selected. The selected first coordinate positions A, B, E and F are used as the engaging coordinate positions.
In another example, an engaging area R_pm may be defined according to the mapped coordinate position pm, and the first coordinate positions located in the engaging area R_pm are selected as the engaging coordinate positions. For example, with the mapped coordinate position pm as the center, a circular area with a specific radius is defined as the engaging area R_pm. Alternatively, taking the mapped coordinate position pm as the geometric center, a rectangular area with specific length and width is defined as the engaging area R_pm.
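The two selection strategies above may be sketched as follows, assuming the first coordinate positions form a regular integer grid on the projection plane; the 2x2 cell size and the circular radius are illustrative assumptions.

```python
import math

def engaging_by_cell(pm):
    """Select the 2x2 grid cell containing pm: round the coordinates down to get
    the upper-left position (e.g., A), then take its right, lower and lower-right
    neighbours (e.g., B, E, F)."""
    x0, y0 = math.floor(pm[0]), math.floor(pm[1])
    return [(x0, y0), (x0 + 1, y0), (x0, y0 + 1), (x0 + 1, y0 + 1)]

def engaging_by_area(pm, grid, radius=1.5):
    """Select every first coordinate position that falls inside a circular
    engaging area R_pm centred at pm (the radius is a hypothetical choice)."""
    return [g for g in grid
            if (g[0] - pm[0]) ** 2 + (g[1] - pm[1]) ** 2 <= radius ** 2]

grid = [(x, y) for x in range(4) for y in range(4)]   # first coordinate positions
print(engaging_by_cell((1.3, 2.7)))                   # [(1, 2), (2, 2), (1, 3), (2, 3)]
print(engaging_by_area((1.3, 2.7), grid))
```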
Next, the error weight weA of the first depth information dA corresponding to the engaging coordinate position A is calculated according to equation (1):
weA=we(diffA)=we(|dA−dp|) (1)
In equation (1), the absolute error value diffA is defined as the absolute value of the difference between the first depth information dA corresponding to the first coordinate position A and the second depth information dp corresponding to the original coordinate position p. The absolute error value diffA is inputted to the error weight function we( ), and the error weight function we( ) correspondingly outputs the error weight weA. The error weight function we( ) is, for example, a linear function conversion, a nonlinear function conversion, or a look-up-table conversion. When the inputted absolute error value diffA is small, the error weight function we( ) correspondingly outputs a larger error weight weA.
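The present disclosure does not fix the exact form of we( ); the sketch below uses a simple piecewise-linear conversion as one possible assumption (the two thresholds are illustrative and not taken from the disclosure), so that a small absolute error yields a weight near 1 and a very large absolute error yields 0.

```python
def we(diff, full_trust=10.0, cutoff=500.0):
    """Error weight function: small absolute error -> weight near 1,
    absolute error beyond `cutoff` -> weight 0, linear in between.
    Both thresholds are illustrative assumptions."""
    if diff <= full_trust:
        return 1.0
    if diff >= cutoff:
        return 0.0
    return 1.0 - (diff - full_trust) / (cutoff - full_trust)

print(round(we(abs(66 - 100)), 2))  # diffA = 34 -> a weight close to 1
print(we(10_000))                   # a very large error -> 0, the sample is ignored
```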
An example of the conversion relationship of the error weight function we( ) is illustrated in the accompanying drawings.
Similarly, the error weights weB, weE and weF are calculated according to equations (2) to (4):
weB=we(diffB)=we(|dB−dp|) (2)
weE=we(diffE)=we(|dE−dp|) (3)
weF=we(diffF)=we(|dF−dp|) (4)
Then, a weighting operation is performed according to the first depth information dA, dB, dE and dF and the corresponding error weights weA, weB, weE and weF to obtain a weighting operation result dpm1. The weighting operation result dpm1 of the first depth information dA, dB, dE and dF is the fused depth information dpm of the mapped coordinate position pm, as shown in equation (5-1):
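A non-limiting Python sketch of this weighting operation follows, under the assumption that equation (5-1) is a normalized weighted sum (i.e., a weighted average) of the engaging first depth information; the depth values other than dA are hypothetical.

```python
def weighted_fusion(depths, weights):
    """Fuse depth samples into one value, assuming a normalized weighted sum
    (weighted average); returns None if every weight is zero."""
    total = sum(weights)
    if total == 0:
        return None
    return sum(d * w for d, w in zip(depths, weights)) / total

# Hypothetical engaging depths dA, dB, dE, dF and their error weights weA..weF.
dpm1 = weighted_fusion([66.0, 65.0, 67.0, 900.0], [0.91, 0.91, 0.91, 0.0])
print(dpm1)   # ~66: dF is excluded because its error weight is 0
```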
Similar to the fusion operation for the mapped coordinate position pm corresponding to the original coordinate position p, the same fusion operation may be performed for the other coordinate positions among the second coordinate positions a-n. The fused depth information of the mapped coordinate positions (not shown in the figures) of each of the second coordinate positions a-n may be integrated into the overall fused depth information fD1.
Next, a numerical example of the fusion operation is described.
The engaging coordinate positions A, B, E and F adjacent to the mapped coordinate position pm correspond to the first depth information dA, dB, dE and dF. The absolute error value diffA between the first depth information dA (with the value “66”) and the second depth information dp (with the value “100”) at the original coordinate position p is “34”. The absolute error values diffB, diffE and diffF are obtained in the same manner, and the corresponding error weights are obtained through the error weight function we( ).
From the above, the error weights weA, weB, weE and weF of the first depth information dA, dB, dE and dF corresponding to the engaging coordinate positions A, B, E and F are “0.91”, “0.91”, “0.91” and “0”; therefore, the fused depth information dpm of the mapped coordinate position pm may be obtained through the weighting operation of equation (5-2):
According to the weighting operation of equation (5-2), the second depth information dp corresponding to the original coordinate position p (the value “100”) determines, through the absolute error values, how much each first depth information is trusted; the fused depth information dpm is dominated by the higher-precision first depth information dA, dB and dE, while the first depth information dF, whose error weight weF is “0”, does not contribute.
In another example, the weighting operation result dpm1 of the first depth information dA, dB, dE and dF at the engaging coordinate positions A, B, E and F may be further weighted with the second depth information dp of the original coordinate position p to obtain a weighting operation result dpm2, which is used as the final fused depth information dpm, as shown in equation (6):
In the weighting operation of equation (6), each of the first depth information dA, dB, dE and dF corresponding to the engaging coordinate positions A, B, E and F has a first confidence weight wc1, which is related to the confidence level CL1 (i.e., reliability level) of the first depth information dA, dB, dE and dF. In an example, when the first sensor 100 performs stereoscopic sensing, the confidence level CL1 of the first depth information may be calculated according to the cost value of the block matching related to the first sensor 100. The cost value related to the first sensor 100 is, for example, a minimum cost, a uniqueness, etc. In another example, the confidence level CL1 of the first sensor 100 may be calculated according to a left-right check or a noise level check. In yet another example, the first sensor 100 is a ToF sensor, and the confidence level CL1 of the ToF sensor may be calculated according to the reflected light intensity of the object (e.g., the target 400). The first confidence weight wc1 is obtained from the confidence level CL1 through the confidence weight function wc( ), as shown in equation (7-1):
wc1=wc(CL1) (7-1)
In equation (7-1), the confidence level CL1 is inputted to the confidence weight function wc( ), and the confidence weight function wc( ) correspondingly outputs the first confidence weight wc1. The confidence weight function wc( ) is, for example, a linear function conversion, a nonlinear function conversion, or a look-up-table conversion; when the inputted confidence level CL1 is larger, the outputted first confidence weight wc1 is larger.
Similarly, the second depth information dp corresponding to the original coordinate position p has a second confidence weight wc2, which is related to the confidence level CL2 of the second depth information dp, as shown in equation (7-2):
wc2=wc(CL2) (7-2)
From the above, a weighting operation is performed according to the first depth information dA, dB, dE and dF of the engaging coordinate positions A, B, E and F, the first confidence weight wc1, the second depth information dp of the original coordinate position p and the second confidence weight wc2, so as to obtain a weighting operation result dpm2, and the weighting operation result dpm2 is used as the final fused depth information dpm.
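A non-limiting sketch of the weighting operation of equation (6), assuming the error-weighted engaging first depth information (scaled by the first confidence weight wc1) is blended with the second depth information dp (scaled by the second confidence weight wc2) in a normalized sum; all numerical values are hypothetical.

```python
def fuse_with_confidence(d_first, w_err, wc1, dp, wc2):
    """Blend the engaging first depth information (error weights w_err, shared
    confidence weight wc1) with the second depth information dp (confidence
    weight wc2); the normalized-weighted-sum form is an assumption."""
    num = wc2 * dp + sum(wc1 * w * d for d, w in zip(d_first, w_err))
    den = wc2 + sum(wc1 * w for w in w_err)
    return num / den

# Hypothetical values: high confidence in the first sensor, lower in the second.
dpm2 = fuse_with_confidence([66.0, 65.0, 67.0, 900.0], [0.91, 0.91, 0.91, 0.0],
                            wc1=0.9, dp=100.0, wc2=0.2)
print(round(dpm2, 1))   # pulled strongly toward the high-precision first depth information
```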
In another example, the weight value is a distance weight. The distance weight wdi of each of the engaging coordinate positions is related to the relative distance Li between that engaging coordinate position and the mapped coordinate position pm, as shown in equation (8):
wdi=wd(Li), i=A, B, E, F (8)
In equation (8), the relative distance Li is the input value of the distance weight function wd( ), and the distance weight wdi is the output value of the distance weight function wd( ). The conversion relationship between the input value and the output value of the distance weight function wd( ) is similar to that of the error weight function we( ); when the relative distance Li is larger, the outputted distance weight wdi is smaller.
Then, a weighting operation is performed according to the first depth information dA, dB, dE and dF, the corresponding distance weights wdA, wdB, wdE and wdF, and the error weights weA, weB, weE and weF to obtain a weighting operation result dpm3. The weighting operation result dpm3 is used as the fused depth information dpm, as shown in equation (9):
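A sketch of equation (9) under the assumption that, for each engaging coordinate position, the error weight and the distance weight are multiplied and the products are used in a normalized sum; the linear fall-off of wd( ) and all numerical values are illustrative.

```python
import math

def wd(L, max_dist=2.0):
    """Distance weight function: a nearer engaging position gets a larger
    weight; the linear fall-off and max_dist are assumptions."""
    return max(0.0, 1.0 - L / max_dist)

def fuse_error_distance(depths, err_w, dists):
    """Weighting operation in the spirit of equation (9), assuming per-sample
    weights we_i * wd_i and a normalized sum."""
    w = [e * wd(L) for e, L in zip(err_w, dists)]
    return sum(d * wi for d, wi in zip(depths, w)) / sum(w)

# Hypothetical sub-pixel offsets of pm from the engaging positions A, B, E, F.
dists = [math.hypot(0.3, 0.4), math.hypot(0.7, 0.4),
         math.hypot(0.3, 0.6), math.hypot(0.7, 0.6)]
print(fuse_error_distance([66.0, 65.0, 67.0, 900.0],
                          [0.91, 0.91, 0.91, 0.0], dists))   # ~66
```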
In yet another example, the weight value is an area weight. The area weight wai of each of the engaging coordinate positions is related to the relative area AR_i between that engaging coordinate position and the mapped coordinate position pm, as shown in equation (10):
wai=wa(AR_i), i=A, B, E, F (10)
The conversion relationship of input value and output value of the area weight function wa( ) is similar to that of the distance weight function wd( ). When the engaging coordinate positions A, B, E and F are far away from the mapped coordinate position pm, the corresponding relative area AR_i is larger, and the area weight wai outputted by the area weight function wa( ) is smaller.
Then, a weighting operation is performed according to the first depth information dA, dB, dE and dF, the corresponding area weights waA, waB, waE and waF, and the error weights weA, weB, weE and weF to obtain a weighting operation result dpm4. The weighting operation result dpm4 is used as the fused depth information dpm, as shown in equation (11):
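A sketch of how the relative areas AR_i and the corresponding area weights may be realized for a 2x2 cell, assuming the bilinear rule in which each engaging position is weighted by the area of the sub-rectangle diagonally opposite to it; the disclosure only specifies that a larger relative area yields a smaller area weight, so this particular realization is an assumption.

```python
def relative_areas_and_weights(pm, x0, y0):
    """For the 2x2 cell with upper-left corner (x0, y0): AR_i is the area of the
    rectangle spanned by corner i and pm, and each corner's area weight is taken
    as the area of the diagonally opposite rectangle (the bilinear rule), so a
    farther corner (larger AR_i) gets a smaller weight."""
    fx, fy = pm[0] - x0, pm[1] - y0
    AR = {"A": fx * fy, "B": (1 - fx) * fy,
          "E": fx * (1 - fy), "F": (1 - fx) * (1 - fy)}
    wa = {"A": AR["F"], "B": AR["E"], "E": AR["B"], "F": AR["A"]}
    return AR, wa

AR, wa = relative_areas_and_weights((1.3, 2.4), 1, 2)
print(AR)   # F has the largest relative area ...
print(wa)   # ... and therefore the smallest area weight
```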
In another example, two or three of the aforementioned error weight function we( ), distance weight function wd( ) and area weight function wa( ) may also be integrated into a single function. For example, the error weight function we( ) and the distance weight function wd( ) are integrated into a hybrid weight function wm1( ). The absolute error value diffi and the relative distance Li (where i=A, B, E, F) are the input values of the hybrid weight function wm1( ), and the hybrid weight function wm1( ) correspondingly outputs the hybrid weight wm1i, as shown in equation (12). When the absolute error value diffi is larger or the relative distance Li is larger, the outputted hybrid weight wm1i is smaller.
wm1i=wm1(Li, diffi), i=A, B, E, F (12)
Alternatively, the error weight function we( ) and the area weight function wa( ) are integrated into a hybrid weight function wm2( ). The absolute error value diffi and the relative area AR_i (where i=A, B, E, F) are the input values of the hybrid weight function wm2( ), and the hybrid weight function wm2( ) correspondingly outputs the hybrid weight wm2i, as shown in equation (13):
wm2i=wm2(AR_i, diffi), i=A, B, E, F (13)
In another example, the weight selection function W(i1,i2, . . . , in) provides weight values wA, wB, wE and wF of the first depth information dA, dB, dE and dF. The weight selection function W(i1,i2, . . . , in) has multiple input values i1-in, and the input values i1-in are related rather than independent. The weight selection function W(i1,i2, . . . , in) may comprehensively judge according to the input values i1-in, and output the weight values wA, wB, wE and wF accordingly. For example, when the input values i1-in as a whole satisfy a specific condition, the outputted weight values wA, wB, wE and wF are “1”, “0”, “0” and “0”. In this case, only the weight value wA is “1”, hence only the first depth information dA is selected to calculate the fused depth information dpm. For another example, when the input values i1-in as a whole satisfy another specific condition, the outputted weight values wA, wB, wE and wF are “0”, “0”, “1” and “1”. In this case, the first depth information dE and the first depth information dF are selected to calculate the fused depth information dpm. From the above, in some examples, one or more of the first depth information dA, dB, dE and dF may be selected by the weight selection function W(i1,i2, . . . , in) to calculate the fused depth information dpm.
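The following sketch illustrates one possible weight selection function W(i1, …, in) whose inputs are judged jointly; the selection rule and the threshold are hypothetical and serve only to show how one or more of the first depth information may be selected.

```python
def weight_selection(diffs, threshold=50.0):
    """Weight selection function W(i1, ..., in): the absolute error values of
    all engaging positions are judged together, and one weight per position is
    outputted. Hypothetical rule: keep every position whose error is below the
    threshold; if none qualifies, keep all of them."""
    good = [d <= threshold for d in diffs]
    if not any(good):
        return [1.0] * len(diffs)
    return [1.0 if g else 0.0 for g in good]

print(weight_selection([34.0, 900.0, 820.0, 770.0]))  # [1.0, 0.0, 0.0, 0.0] -> only dA is selected
print(weight_selection([34.0, 30.0, 900.0, 28.0]))    # [1.0, 1.0, 0.0, 1.0] -> dA, dB and dF are selected
```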
The fusion operations described above are performed in units of individual depth information, with the mapped coordinate position pm as an example. Next, the correction of the fused depth information is described: the computing unit may calculate a basis value of the second depth information, calculate an offset value of each of the second depth information with respect to the basis value, and correct the fused depth information according to the offset values, so as to obtain corrected fused depth information (e.g., the corrected fused depth information dbm′ and dcm′).
There is an increasing trend in value between the corrected fused depth information dbm′ and the corrected fused depth information dcm′, which may reflect the increasing trend in value between the original second depth information db and the second depth information dc.
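A non-limiting sketch of this correction, assuming the basis value is the mean of the involved second depth information and that each offset value is simply added back to the corresponding fused depth information; both assumptions go beyond what is specified above.

```python
def correct_with_offsets(fused, second):
    """Correct the fused depth information so that it keeps the local trend of
    the second depth information: take a basis value (assumed here to be the
    mean of the second depth information), compute each sample's offset from
    that basis, and add the offset back to the corresponding fused value."""
    basis = sum(second) / len(second)
    offsets = [s - basis for s in second]
    return [f + o for f, o in zip(fused, offsets)]

# Hypothetical fused values and original second depth information (e.g., db, dc).
fused = [66.0, 66.0, 66.0]
second = [98.0, 100.0, 102.0]                 # increasing trend in the second sensor
print(correct_with_offsets(fused, second))    # [64.0, 66.0, 68.0] keeps that trend
```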
On the other hand, the second sensor 200c of this embodiment is different from the second sensor 200a described above: instead of depth information, the second sensor 200c generates a plurality of pixels and a plurality of image values (e.g., the image IMG2′ and the image values Ya, Yb, Ye and Yf).
The computing unit 300c performs a fusion operation according to the first depth information dA, dB, dE and dF and the image values Ya, Yb, Ye and Yf, so as to obtain the fused depth information corresponding to each pixel. Each fused depth information may be integrated into the overall fused depth information fD2, and the fused depth information fD2 may have both the first precision of the first depth information dA, dB, dE and dF and the sampling rate corresponding to the resolution of the image IMG2′.
A sampling coordinate position G is established among the first coordinate positions A-F. The first coordinate positions are mapped to a plurality of main mapped coordinate positions a′-f′ located in the image IMG2′, and the sampling coordinate position G is mapped to a sampling mapped coordinate position g′ located in the image IMG2′.
Since the main mapped coordinate positions a′, b′, e′ and f′ and the sampling mapped coordinate position g′ do not necessarily overlap with the original pixels a″, b″, e″, f″ and g″ of the image IMG2′, the image values Ya″, Yb″, Ye″, Yf″ and Yg″ corresponding to the original pixels a″, b″, e″, f″ and g″ may not be directly used. Interpolation must be performed to obtain the image values corresponding to the main mapped coordinate positions a′, b′, e′ and f′ and the sampling mapped coordinate position g′. For example, a plurality of adjacent pixels q, s, r and t, which are adjacent to the sampling mapped coordinate position g′, are selected from the original pixels of the image IMG2′. Then, an interpolation operation is performed according to the image values Yq, Ys, Yr and Yt corresponding to the adjacent pixels q, s, r and t, so as to obtain the image value Yg′ of the sampling mapped coordinate position g′.
Similarly, adjacent pixels (not shown in the figure) of the main mapped coordinate position a′ are also selected, and interpolation operations are performed according to the image values corresponding to the adjacent pixels to obtain the image value Ya′ corresponding to the main mapped coordinate position a′. The image values Yb′, Ye′ and Yf′ corresponding to the main mapped coordinate positions b′, e′ and f′ are obtained by a similar interpolation operation.
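A minimal sketch of this interpolation step, assuming bilinear interpolation over the four adjacent pixels; the disclosure does not fix the interpolation kernel, and the pixel layout and values below are illustrative.

```python
def bilinear_image_value(g, q, r, s, t, Yq, Yr, Ys, Yt):
    """Interpolate the image value at the sampling mapped coordinate position g'
    from the four adjacent pixels q (upper-left), r (upper-right),
    s (lower-left) and t (lower-right); bilinear weighting is an assumption."""
    fx = (g[0] - q[0]) / (r[0] - q[0])
    fy = (g[1] - q[1]) / (s[1] - q[1])
    top = (1 - fx) * Yq + fx * Yr
    bottom = (1 - fx) * Ys + fx * Yt
    return (1 - fy) * top + fy * bottom

# g' falls inside the pixel square q=(10,20), r=(11,20), s=(10,21), t=(11,21).
Yg = bilinear_image_value((10.25, 20.75), (10, 20), (11, 20), (10, 21), (11, 21),
                          Yq=120.0, Yr=124.0, Ys=110.0, Yt=112.0)
print(Yg)   # interpolated image value Yg' at the sampling mapped coordinate position
```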
Then, the error weight wej of each of the first depth information dA, dB, dE and dF is calculated according to the image-value difference between the image value Yi′ corresponding to the main mapped coordinate position and the image value Yg′ corresponding to the sampling mapped coordinate position g′, as shown in equation (14):
wej=we(|Yg′−Yi′|), j=A, B, E, F; i=a, b, e, f (14)
Then, according to the error weights weA, weB, weE and weF, the weighting operation of the first depth information dA, dB, dE and dF is performed to obtain the fused depth information dG of the sampling coordinate position G, as shown in equation (15):
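A sketch of equations (14) and (15) under the assumption that the fused depth information dG is a normalized weighted sum of the first depth information, with each weight obtained from the image-value difference between the corresponding main mapped coordinate position and the sampling mapped coordinate position; the error weight function and all values are illustrative.

```python
def we(diff, cutoff=100.0):
    """Illustrative error weight for image-value differences: linear fall-off."""
    return max(0.0, 1.0 - diff / cutoff)

def image_guided_fusion(first_depths, Y_main, Y_sample):
    """Fused depth information dG at the sampling coordinate position G: each
    first depth information is weighted by we(|Yg' - Yi'|), i.e. by how similar
    its main mapped position and the sampling mapped position look in the image;
    the normalized sum is an assumption."""
    w = [we(abs(Y_sample - y)) for y in Y_main]
    return sum(d * wi for d, wi in zip(first_depths, w)) / sum(w)

# Hypothetical values: the fourth position looks very different in the image.
dG = image_guided_fusion([66.0, 65.0, 67.0, 90.0],
                         Y_main=[121.0, 119.0, 123.0, 30.0], Y_sample=120.0)
print(round(dG, 2))   # close to 66; the dissimilar position contributes little
```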
Similarly, the same fusion operation is performed for each of the 11 sampling coordinate positions other than the sampling coordinate position G (shown as the shaded coordinate positions in the accompanying drawings), and the fused depth information of the sampling coordinate positions may be integrated into the overall fused depth information fD2.
In an example, when the first coordinate positions A-F are arranged in a rectangle, and the main mapped coordinate positions a′-f′ corresponding to the image IMG2′ are also arranged in a rectangle, the interpolation operation of the sampling mapped coordinate position g′ may be relatively simple. In another example (not shown), the first coordinate positions A-F or the main mapped coordinate positions a′-f′ may not be arranged in a rectangle, and a more general interpolation operation is performed.
In another example, the first depth information dAi, dBi and dCi sensed at different time points ti may be related to different coordinate spaces. For example, the first depth information dA1, dB1 and dC1 obtained at time point t1 are related to the first space SP1′, the first depth information dA2, dB2 and dC2 obtained at time point t2 are related to the second space SP2′, and the first depth information dA3, dB3 and dC3 obtained at time point t3 are related to the third space SP3′.
On the other hand, a standard space SP0 is a coordinate space corresponding to the real world, that is, the standard space SP0 is a “unified coordinate system” or a “world coordinate system”. The standard space SP0 and the first space SP1 (or the first space SP1′, the second space SP2′, the third space SP3′) have a correspondence or mapping relationship for space conversion. The computing unit 300c converts the first depth information dAi, dBi and dCi into the second depth information da, db and dc of the standard space SP0 according to the space conversion between the standard space SP0 and the first space SP1 (or the first space SP1′, the second space SP2′, the third space SP3′). The standard space SP0 has at least one second coordinate position (e.g., the second coordinate position e).
The computing unit 300c determines whether the plurality of second depth information da, db and dc point to the same one of the second coordinate positions. For example, when all the second depth information da, db and dc point to the same second coordinate position e, it means that all the second depth information da, db and dc point to the same physical position in the coordinate space of the real-world. Hence, a fusion operation is performed on the second depth information da, db and dc to obtain the fused depth information de at the second coordinate position e.
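A sketch of the space conversion and of the test for whether several converted depth information point to the same second coordinate position, assuming the conversion is a rigid (rotation plus translation) transform and that coincidence is decided by quantizing the standard space SP0 into small cells; both choices are assumptions not specified above.

```python
import numpy as np

def to_standard_space(points, R, t):
    """Convert coordinate positions of a first space into the standard space
    SP0 with a rigid transform (R, t); the rigid form is an assumption."""
    return [R @ np.asarray(p) + t for p in points]

def same_second_position(points, cell=0.01):
    """Decide whether the converted depth information all point to the same
    second coordinate position, here by quantizing SP0 into cells of `cell`
    meters (a hypothetical tolerance)."""
    keys = {tuple(np.round(np.asarray(p) / cell).astype(int)) for p in points}
    return len(keys) == 1

# Three measurements of the same physical point taken at different time points.
R, t = np.eye(3), np.array([0.0, 0.0, 0.0])
pts = to_standard_space([[1.002, 2.001, 3.0], [0.998, 2.0, 3.004], [1.0, 1.999, 2.997]], R, t)
print(same_second_position(pts))   # True -> fuse them at that second coordinate position
```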
The fusion operation performed by the computing unit 300c is, for example, a weighting operation performed according to the second depth information da, db and dc and the corresponding weight values. The corresponding weight values are, for example, the confidence weights wca, wcb and wcc. Furthermore, the confidence weights wca, wcb and wcc may be calculated by the confidence weight function wc( ) of equation (7-1). The second depth information da, db and dc have confidence levels CLa, CLb and CLc, which are related to the cost value of the block matching of the first sensor 100c or the reflected light intensity of the target 400. The confidence levels CLa, CLb and CLc are respectively inputted to the confidence weight function wc( ) so as to correspondingly output the confidence weights wca, wcb and wcc. The confidence weight function wc( ) is, for example, a linear function conversion, a nonlinear function conversion, or a look-up-table conversion. When the inputted confidence levels CLa, CLb and CLc are larger, the correspondingly outputted confidence weights wca, wcb and wcc are larger.
Then, according to the confidence weights wca, wcb and wcc, a weighting operation is performed on the second depth information da, db and dc to obtain the fused depth information de of the second coordinate position e, as shown in equation (16):
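A minimal sketch of equation (16), assuming it is a confidence-weighted average of the second depth information da, db and dc; the depth values, confidence levels, and the identity confidence weight function are hypothetical.

```python
def confidence_fusion(depths, confidences, wc=lambda cl: cl):
    """Fused depth information de at a second coordinate position: a weighted
    average of the second depth information, with weights wc(CL) obtained from
    each sample's confidence level (the identity weight function and the
    averaging form are assumptions)."""
    w = [wc(c) for c in confidences]
    return sum(d * wi for d, wi in zip(depths, w)) / sum(w)

# Hypothetical depths da, db, dc gathered at t1, t2, t3 with confidence levels.
print(round(confidence_fusion([2.00, 2.04, 1.96], [0.9, 0.5, 0.7]), 3))
```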
Based on the same fusion operation mechanism, the fused depth information of other coordinate positions in the standard space SP0 is calculated. The fused depth information of each coordinate position of the standard space SP0 is integrated into the overall fused depth information fD3.
In one example, the first depth information dA-1, dA-2 and dA-3 generated by the sensors 100-1, 100-2 and 100-3 are all related to the same coordinate space; for example, the first depth information dA-1, dA-2 and dA-3 are all related to the same first space SP1. The computing unit 300d converts the first depth information dA-1, dA-2 and dA-3 into the second depth information da-1, da-2 and da-3 of the standard space SP0.
In another example, the first depth information dA-1, dA-2 and dA-3 generated by the sensors 100-1, 100-2 and 100-3 are related to different coordinate spaces. For example, the first depth information dA-1 generated by the sensor 100-1 is related to the first space SP1′, the first depth information dA-2 generated by the sensor 100-2 is related to the second space SP2′, and the first depth information dA-3 generated by the sensor 100-3 is related to the third space SP3′. The computing unit 300d converts the first depth information dA-1 of the first space SP1′, the first depth information dA-2 of the second space SP2′ and the first depth information dA-3 of the third space SP3′ into the second depth information da-1, da-2 and da-3 of the standard space SP0.
When the second depth information da-1, da-2 and da-3 all point to the same second coordinate position e in the standard space SP0, the computing unit 300d performs a weighting operation using the second depth information da-1, da-2 and da-3 and the corresponding confidence weights wca1, wca2 and wca3, so as to obtain the fused depth information de of the second coordinate position e, as shown in equation (17):
Similarly, the fused depth information of each coordinate position of the standard space SP0 is integrated into the overall fused depth information fD4.
It will be apparent to those skilled in the art that various modifications and variations may be made to the disclosed embodiments. It is intended that the specification and examples be considered as exemplary only, with a true scope of the present disclosure being indicated by the following claims and their equivalents.
This application claims the benefit of U.S. provisional application Ser. No. 63/343,547, filed May 19, 2022 and Taiwan application Serial No. 111133877, filed Sep. 7, 2022, the subject matters of which are incorporated herein by references.