This application claims the benefit of priority to AU Application No. 2019240717, filed Oct. 4, 2019, the contents of which are hereby expressly incorporated by reference in their entirety.
The present invention relates generally to image processing and, in particular, to processing images to determine radiosity of an object.
A solar thermal receiver is a component of a solar thermal system that converts solar irradiation to high-temperature heat. The efficiency of the solar thermal receiver is reduced by energy losses, such as radiative reflection and thermal emission losses.
Measuring the radiative losses can provide an indication of the efficiency of the solar thermal receiver 110A, 110B, 110C. However, such measurements are challenging due to the directional and spatial variations of the radiative reflection and thermal emission losses. The measurements are made more difficult when the solar thermal receiver 110A, 110B, 110C is deployed in the field, due to the varying environmental conditions and the requirement that the measurements must not affect the operation of the solar thermal receiver 110A, 110B, 110C.
Conventional camera-based measurements enable direct observation of radiative reflection 12 and thermal emission 14 of a solar thermal receiver 110A, 110B. Cameras have been used to measure flux distributions on a flat billboard Lambertian target or on an external convex solar thermal receiver (e.g., the solar thermal receivers 110A, 110B) with the assumption that the solar thermal receiver 110A, 110B has a Lambertian surface, where the directional radiative distributions are disregarded. Such an assumption is of little consequence for the solar thermal receivers 110A, 110B (having a flat or convex surface) as the radiative reflection 12 and thermal emission 14 do not interact further with the solar thermal receivers 110A, 110B.
However, cavity-shaped solar thermal receivers (e.g., solar thermal receiver 110C) typically use surfaces whose reflected 12 and emitted 14 radiation is directional (unlike the non-directional Lambertian surface) to enable multiple reflections from the internal surface of the cavity shape, which in turn enables light-trapping effects. Therefore, assuming that the solar thermal receiver 110C has a Lambertian surface would yield inaccurate results.
It is an object of the present invention to substantially overcome, or at least ameliorate, one or more disadvantages of existing arrangements.
Disclosed are arrangements which seek to address the above problems by determining the directional and spatial distribution of radiosity (e.g., reflection 12, thermal emission 14) from the surface of an object (e.g., a solar thermal receiver 110C). Such determination is performed by acquiring images of the object and processing the acquired images using a method of the present disclosure.
The present disclosure uses a solar thermal receiver to describe the method. However, it should be understood that the method of determining radiosity can be used on other objects (e.g., an engine, an electronic component, a heatsink, a furnace, a luminaire, a building, a cityscape, etc.).
According to an aspect of the present disclosure, there is provided a method comprising: receiving images of an object, the images comprising first and second images; determining feature points of the object using the first images; determining a three-dimensional reconstruction of a scene having the object; aligning the three-dimensional reconstruction with a three-dimensional mesh model of the object; mapping pixel values of pixels of the second images onto the three-dimensional mesh model; determining directional radiosity of each mesh element of the three-dimensional mesh model; and determining hemispherical radiosity of the object based on the determined directional radiosity.
According to another aspect of the present disclosure, there is provided a non-transitory computer readable medium having a software application program for performing a method comprising: receiving images of an object, the images comprising first and second images; determining feature points of the object using the first images; determining a three-dimensional reconstruction of a scene having the object; aligning the three-dimensional reconstruction with a three-dimensional mesh model of the object; mapping pixel values of pixels of the second images onto the three-dimensional mesh model based on the alignment; determining directional radiosity of each mesh element of the three-dimensional mesh model; and determining hemispherical radiosity of the object based on the determined directional radiosity.
Other aspects are also disclosed.
The patent or application file contains at least one drawing executed in color. Copies of this patent or patent application publication with color drawing(s) will be provided by the Office upon request and payment of the necessary fee.
Some aspects of the prior art and at least one embodiment of the present invention will now be described with reference to the drawings and appendices, in which:
Where reference is made in any one or more of the accompanying drawings to steps and/or features, which have the same reference numerals, those steps and/or features have for the purposes of this description the same function(s) or operation(s), unless the contrary intention appears.
In one arrangement, the imaging devices 120 are located on drones to acquire images of the object 110. In another arrangement, each imaging device 120 includes multiple cameras (such as a combination of any of the cameras described in this disclosure).
The imaging devices 120 are located in an area 140, which is a spherical area surrounding the object 110. The imaging devices 120 are in communication with the computer system 130, such that images acquired by the imaging devices 120 are transmitted to the computer system 130 for processing. The transmission of the images from the imaging devices 120 to the computer system 130 can be in real-time or delayed. When the computer system 130 receives the images from the imaging devices 120, the computer system 130 performs method 500 (see
As seen in
The computer module 1301 typically includes at least one processor unit 1305, and a memory unit 1306. For example, the memory unit 1306 may have semiconductor random access memory (RAM) and semiconductor read only memory (ROM). The computer module 1301 also includes a number of input/output (I/O) interfaces including: an audio-video interface 1307 that couples to the video display 1314, loudspeakers 1317 and microphone 1380; an I/O interface 1313 that couples to the keyboard 1302, mouse 1303, scanner 1326, camera 1327 and optionally a joystick or other human interface device (not illustrated); and an interface 1308 for the external modem 1316 and printer 1315. In some implementations, the modem 1316 may be incorporated within the computer module 1301, for example within the interface 1308. The computer module 1301 also has a local network interface 1311, which permits coupling of the computer system 1300 via a connection 1323 to a local-area communications network 1322, known as a Local Area Network (LAN). As illustrated in
The I/O interfaces 1308 and 1313 may afford either or both of serial and parallel connectivity, the former typically being implemented according to the Universal Serial Bus (USB) standards and having corresponding USB connectors (not illustrated). Storage devices 1309 are provided and typically include a hard disk drive (HDD) 1310. Other storage devices such as a floppy disk drive and a magnetic tape drive (not illustrated) may also be used. An optical disk drive 1312 is typically provided to act as a non-volatile source of data. Portable memory devices, such as optical disks (e.g., CD-ROM, DVD, Blu-ray Disc™), USB-RAM, portable external hard drives, and floppy disks, for example, may be used as appropriate sources of data to the system 1300.
As shown in
The components 1305 to 1313 of the computer module 1301 typically communicate via an interconnected bus 1304 and in a manner that results in a conventional mode of operation of the computer system 1300 known to those in the relevant art. For example, the processor 1305 is coupled to the system bus 1304 using a connection 1318. Likewise, the memory 1306 and optical disk drive 1312 are coupled to the system bus 1304 by connections 1319. Examples of computers on which the described arrangements can be practised include IBM-PC's and compatibles, Sun Sparcstations, Apple Mac™ or like computer systems.
The method of determining radiosity of an object may be implemented using the computer system 130 wherein the processes of
The software may be stored in a computer readable medium, including the storage devices described below, for example. The software is loaded into the computer system 130 from the computer readable medium, and then executed by the computer system 130. A computer readable medium having such software or computer program recorded on the computer readable medium is a computer program product. The use of the computer program product in the computer system 130 preferably effects an advantageous apparatus for determining radiosity of an object.
The software 1333 is typically stored in the HDD 1310 or the memory 1306. The software is loaded into the computer system 130 from a computer readable medium, and executed by the computer system 130. Thus, for example, the software 1333 may be stored on an optically readable disk storage medium (e.g., CD-ROM) 1325 that is read by the optical disk drive 1312. A computer readable medium having such software or computer program recorded on it is a computer program product. The use of the computer program product in the computer system 130 preferably effects an apparatus for determining radiosity of an object.
In some instances, the application programs 1333 may be supplied to the user encoded on one or more CD-ROMs 1325 and read via the corresponding drive 1312, or alternatively may be read by the user from the networks 1320 or 1322. Still further, the software can also be loaded into the computer system 130 from other computer readable media. Computer readable storage media refers to any non-transitory tangible storage medium that provides recorded instructions and/or data to the computer system 130 for execution and/or processing. Examples of such storage media include floppy disks, magnetic tape, CD-ROM, DVD, Blu-ray™ Disc, a hard disk drive, a ROM or integrated circuit, USB memory, a magneto-optical disk, or a computer readable card such as a PCMCIA card and the like, whether or not such devices are internal or external of the computer module 1301. Examples of transitory or non-tangible computer readable transmission media that may also participate in the provision of software, application programs, instructions and/or data to the computer module 1301 include radio or infra-red transmission channels as well as a network connection to another computer or networked device, and the Internet or Intranets including e-mail transmissions and information recorded on Websites and the like.
The second part of the application programs 1333 and the corresponding code modules mentioned above may be executed to implement one or more graphical user interfaces (GUIs) to be rendered or otherwise represented upon the display 1314. Through manipulation of typically the keyboard 1302 and the mouse 1303, a user of the computer system 130 and the application may manipulate the interface in a functionally adaptable manner to provide controlling commands and/or input to the applications associated with the GUI(s). Other forms of functionally adaptable user interfaces may also be implemented, such as an audio interface utilizing speech prompts output via the loudspeakers 1317 and user voice commands input via the microphone 1380.
When the computer module 1301 is initially powered up, a power-on self-test (POST) program 1350 executes. The POST program 1350 is typically stored in a ROM 1349 of the semiconductor memory 1306 of
The operating system 1353 manages the memory 1334 (1309, 1306) to ensure that each process or application running on the computer module 1301 has sufficient memory in which to execute without colliding with memory allocated to another process. Furthermore, the different types of memory available in the system 130 of
As shown in
The application program 1333 includes a sequence of instructions 1331 that may include conditional branch and loop instructions. The program 1333 may also include data 1332 which is used in execution of the program 1333. The instructions 1331 and the data 1332 are stored in memory locations 1328, 1329, 1330 and 1335, 1336, 1337, respectively. Depending upon the relative size of the instructions 1331 and the memory locations 1328-1330, a particular instruction may be stored in a single memory location as depicted by the instruction shown in the memory location 1330. Alternately, an instruction may be segmented into a number of parts each of which is stored in a separate memory location, as depicted by the instruction segments shown in the memory locations 1328 and 1329.
In general, the processor 1305 is given a set of instructions which are executed therein. The processor 1305 waits for a subsequent input, to which the processor 1305 reacts by executing another set of instructions. Each input may be provided from one or more of a number of sources, including data generated by one or more of the input devices 1302, 1303, data received from an external source across one of the networks 1320, 1322, data retrieved from one of the storage devices 1306, 1309 or data retrieved from a storage medium 1325 inserted into the corresponding reader 1312, all depicted in
The disclosed radiosity determination arrangements use input variables 1354, which are stored in the memory 1334 in corresponding memory locations 1355, 1356, 1357. The radiosity determination arrangements produce output variables 1361, which are stored in the memory 1334 in corresponding memory locations 1362, 1363, 1364. Intermediate variables 1358 may be stored in memory locations 1359, 1360, 1366 and 1367.
Referring to the processor 1305 of
a fetch operation, which fetches or reads an instruction 1331 from a memory location 1328, 1329, 1330;
a decode operation in which the control unit 1339 determines which instruction has been fetched; and
an execute operation in which the control unit 1339 and/or the ALU 1340 execute the instruction.
Thereafter, a further fetch, decode, and execute cycle for the next instruction may be executed. Similarly, a store cycle may be performed by which the control unit 1339 stores or writes a value to a memory location 1332.
Each step or sub-process in the processes of
The method of determining radiosity of an object may alternatively be implemented in dedicated hardware such as one or more integrated circuits performing the functions or sub functions of
The rate of radiation leaving a specific location (at (x, y, z)) on a surface of the object 110 by reflection 12 and emission 14 (Q̇_{r+e}) at a wavelength λ and in the direction (θ, φ), per unit surface area (A), per unit solid angle (Ω) and per unit wavelength interval, is determined using the spectral directional radiosity equation:
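The equation itself is not reproduced in this text. A plausible form, assuming only the definitions given above (and not the exact expression of the original filing), is:

$$I_{r+e,\lambda}(x, y, z, \theta, \varphi) = \frac{\partial^{3} \dot{Q}_{r+e}}{\partial A \, \partial \Omega \, \partial \lambda}$$

that is, the spectral directional radiosity is the rate of radiation leaving the surface location per unit surface area, per unit solid angle and per unit wavelength interval.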
The method 500 commences at step 510 by receiving images of the object 110 from the imaging devices 120. An image of a solar thermal receiver (e.g., 110A, 110B, 110C) contains information on the radiosity from the surface of the receiver. Each of the imaging devices 120 captures the images in a specific spectral range and from a single specific direction with a specific camera angle. The spectrum in which images are captured depends on the type of the imaging devices 120. A CCD camera acquires radiosity in the visible range, which predominantly comprises reflected solar irradiation 12. An infra-red camera acquires the radiosity in the infra-red range, which predominantly captures thermal emission 14 from the surface of the receiver 110A, 110B, 110C. A hyperspectral camera captures images in different specific spectral ranges to obtain a breakdown of the radiative losses in each spectral range.
For simple-shaped receivers 110A, 110B, an imaging device 120 can acquire an image of the entire receiver 110A, 110B from a single camera position and orientation. However, for a complex-shaped cavity-like receiver 110C, it is not possible to capture all of the different surfaces of the receiver 110C in a single image. The difficulty in capturing all the surfaces in one image is shown in
Therefore, in step 510, images of the object 110 (e.g., a receiver 110C) are taken by the imaging devices 120. The images can be captured from many directions at arbitrary positions. A larger number of images assists the 3D reconstruction step (step 530 of method 500).
The receiver 110C can be modelled with finite surface elements, each surface element locally having a relative direction to the imaging devices 120. The imaging devices 120 should be directed to cover (as far as practicable) the hemispherical domain of each individual surface element. In practical terms, the imaging devices 120 capture images of the object 110 around the spherical area 140.
For example, for a receiver with an aperture facing one side (e.g., the cavity receiver 110C), the imaging devices 120 should capture images of the receiver in the part of the spherical area 140 in front of the receiver aperture. For a receiver with an aperture facing the surrounding area (e.g., the solar thermal receiver 110A or 110B), the imaging devices 120 should capture images of the receiver throughout the spherical area 140 surrounding the object 110. A spherical radiosity of the object 110 can thus be established when multiple images are taken in the spherical area 140 surrounding the object 110.
Solar thermal receivers 110A, 110B, 110C operate at high-flux and high-temperature conditions. An imaging device 120 having a smaller camera aperture and/or a faster shutter speed is used to capture images with low exposure, to ensure that the images are not saturated. In one arrangement, neutral density (ND) filters are used to avoid saturation. ND filters nominally reduce the intensity of all wavelengths of light equally; in practice, however, the reduction is not perfectly uniform across wavelengths, which introduces additional measurement error.
In addition to the low exposure images, corresponding images taken at a higher exposure are required for 3D reconstruction (step 530). Higher exposure images capture features of the surrounding objects (e.g., the receiver supporting frame) to provide the necessary features for performing 3D reconstruction. The high exposure images are not valuable for determining the receiver losses, since many pixels will be saturated (at their maximum value) in the brightly illuminated parts of the images.
Therefore, the images received at step 510 are taken by the imaging devices 120 from many directions surrounding the object 110. In particular, the imaging devices 120 capture images of the object from the spherical area 140 surrounding the object 110. Hereinafter, high exposure images will be referred to as the first images, while other images (e.g., low exposure images, infra-red images, hyperspectral images) will be referred to as the second images.
The method 500 proceeds from step 510 to step 520.
In step 520, the method 500 determines the type of the received images. If the received images are the first images, then the method 500 proceeds from step 520 to step 530. Otherwise (if the received images are the second images), the method 500 proceeds from step 520 to sub-process 570. Therefore, the received first images are used to develop the 3D mesh model (steps 530 to 560). Once the 3D mesh model is developed, the radiosity data of the object 110 (which is contained in the received second images) is mapped to the 3D mesh model generated using the first images.
In step 530, the method 500 determines feature points on the first images. The first images are analysed to determine descriptors of image points. The descriptors are the gradients of the local pixel greyscale values in multiple directions, which can be calculated using the scale-invariant feature transform (SIFT). If the same descriptor is found in another image, the point is identified as the same physical point (i.e., a feature point).
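As an illustrative sketch of this step (not the specific implementation of the disclosure), the following Python snippet uses OpenCV's SIFT to compute descriptors for two of the first images and match them with a ratio test; the file names and the 0.75 ratio threshold are placeholder assumptions.

```python
import cv2

# Load two of the first (high exposure) images as greyscale; file names are illustrative only.
img_a = cv2.imread("first_image_a.png", cv2.IMREAD_GRAYSCALE)
img_b = cv2.imread("first_image_b.png", cv2.IMREAD_GRAYSCALE)

# Detect keypoints and compute 128-dimensional SIFT descriptors (local greyscale gradients).
sift = cv2.SIFT_create()
kp_a, desc_a = sift.detectAndCompute(img_a, None)
kp_b, desc_b = sift.detectAndCompute(img_b, None)

# Match descriptors between the two images and keep unambiguous matches (Lowe's ratio test).
matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = matcher.knnMatch(desc_a, desc_b, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]

# Image points whose descriptors match in both images are treated as feature points (step 530).
pts_a = [kp_a[m.queryIdx].pt for m in good]
pts_b = [kp_b[m.trainIdx].pt for m in good]
```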
A solar receiver is exposed to high-flux solar irradiation, and its radiosity may vary in different directions, which can disturb feature detection by SIFT. Thus, the first images, which capture constant features of the surrounding objects, are used in the 3D reconstruction step.
When the feature points in images from different directions are identified, triangulation can be applied to establish their positions in 3D space and the corresponding camera poses. This process is called structure from motion (SFM). It allows images to be taken at arbitrary positions, making it feasible to use a drone flying in the solar field to inspect the performance of the receiver.
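Building on the matched points from the previous sketch, a minimal two-view illustration of the triangulation idea (a simplified stand-in for a full SFM pipeline, with an assumed calibration matrix and placeholder values) is:

```python
import numpy as np
import cv2

# Assumed camera calibration matrix; focal lengths and principal point are placeholders.
K_cam = np.array([[1200.0, 0.0, 960.0],
                  [0.0, 1200.0, 540.0],
                  [0.0, 0.0, 1.0]])

# pts_a, pts_b: matched 2D feature points from the SIFT sketch above, as (N, 2) arrays.
pts_a = np.asarray(pts_a, dtype=np.float64)
pts_b = np.asarray(pts_b, dtype=np.float64)

# Recover the relative camera pose between the two views from the matched feature points.
E, inliers = cv2.findEssentialMat(pts_a, pts_b, K_cam, method=cv2.RANSAC)
_, R, t, _ = cv2.recoverPose(E, pts_a, pts_b, K_cam, mask=inliers)

# Projection (camera) matrices for the two views, taking the first camera as the origin.
P0 = K_cam @ np.hstack([np.eye(3), np.zeros((3, 1))])
P1 = K_cam @ np.hstack([R, t])

# Triangulate the matched points into 3D homogeneous coordinates, then normalise.
pts_h = cv2.triangulatePoints(P0, P1, pts_a.T, pts_b.T)
point_cloud = (pts_h[:3] / pts_h[3]).T  # N x 3 points in arbitrary camera coordinates
```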
In one alternative arrangement, retro-reflective markers or 2D barcodes (e.g. ArUco code) are applied to the object 110 to provide specified feature points in images.
The method 500 proceeds from step 530 to steps 540 and 550.
In step 540, the method 500 determines a 3D point cloud based on the determined feature points. The 3D point cloud comprises the feature points in arbitrary camera coordinates. The generated 3D point cloud contains the object 110 as well as the surrounding objects and drifting noisy points. The method 500 proceeds from step 540 to step 560.
In step 560, the 3D point cloud is aligned with a 3D mesh model. The 3D mesh model is a computer-aided design (CAD) model of the object 110 that is discretised into mesh elements having a triangular shape. In alternative arrangements, the mesh elements can be of any polygonal shape.
Aligning the 3D point cloud to the 3D mesh model enables the object 110 to be distinguished from the surrounding points. Further, the 3D mesh model can be transferred into the camera coordinates and be projected onto each image plane by the corresponding camera matrix. Hence, the alignment of the 3D point cloud with the 3D mesh model provides a link between the surface of the object 110 and pixel data on each second image.
The 3D point cloud is aligned with the 3D mesh model by scaling, rotation, and translation. At least four matching points are required to align the 3D point cloud with the 3D mesh model. The alignment can be optimised by minimising the distance between the two sets of points.
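One way to realise such an alignment, given at least four hypothetical point correspondences between the 3D point cloud and the CAD mesh, is a least-squares similarity transform (Umeyama-style estimation of scale, rotation and translation); this is a sketch of the general technique rather than the optimisation actually used in the disclosure.

```python
import numpy as np

def estimate_similarity(cloud_pts, mesh_pts):
    """Estimate scale s, rotation R and translation t so that mesh_pts ~ s * R @ cloud_pts + t.
    cloud_pts, mesh_pts: (N, 3) arrays of corresponding points, N >= 4 and non-degenerate."""
    mu_c, mu_m = cloud_pts.mean(axis=0), mesh_pts.mean(axis=0)
    cc, mm = cloud_pts - mu_c, mesh_pts - mu_m
    cov = mm.T @ cc / len(cloud_pts)                # 3 x 3 cross-covariance of the two point sets
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:    # guard against a reflection solution
        S[2, 2] = -1.0
    R = U @ S @ Vt
    var_c = (cc ** 2).sum() / len(cloud_pts)
    s = np.trace(np.diag(D) @ S) / var_c
    t = mu_m - s * R @ mu_c
    return s, R, t

# Usage with four or more matched points (arrays are illustrative placeholders):
# s, R, t = estimate_similarity(cloud_matches, mesh_matches)
# aligned_cloud = (s * (R @ point_cloud.T)).T + t
```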
The method 500 proceeds from step 560 to sub-process 570. Before describing sub-process 570, step 550 is described first.
In step 550, the method 500 determines a camera matrix. The camera matrix (also called “projection matrix”) includes camera poses (i.e., the camera position and orientation of each image in the same coordinates) and a camera calibration matrix. The camera matrix is a 3 by 4 matrix that can project a 3D point onto the 2D image plane based on the principle of collinearity of a pinhole camera.
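The role of the camera matrix can be illustrated with a short sketch (all numerical values are placeholders): P = K[R|t] is assembled from the calibration matrix and the camera pose, and maps a homogeneous 3D point to pixel coordinates.

```python
import numpy as np

K_cam = np.array([[1200.0, 0.0, 960.0],     # camera calibration matrix (placeholder values)
                  [0.0, 1200.0, 540.0],
                  [0.0, 0.0, 1.0]])
R = np.eye(3)                                # camera orientation (placeholder)
t = np.array([[0.0], [0.0], [5.0]])          # camera translation (placeholder)

P = K_cam @ np.hstack([R, t])                # 3 x 4 camera ("projection") matrix

X = np.array([0.2, -0.1, 3.0, 1.0])          # a 3D point in homogeneous coordinates
x = P @ X
u, v = x[0] / x[2], x[1] / x[2]              # projected pixel coordinates on the image plane
```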
The method 500 proceeds from step 550 to sub-process 570.
As described above, the method 500 proceeds from step 520 to sub-process 570 if the method 500 determines that the received images are of a type classified as the second images (i.e., low exposure images, infra-red images, hyperspectral images). Similarly, the method 500 proceeds from steps 550 and 560 to sub-process 570. Therefore, sub-process 570 can be performed after aligning the 3D point cloud with the 3D mesh model.
Sub-process 570 maps the pixel values of the second images onto the 3D mesh model based on the alignment (performed at step 560) and the camera matrix (determined at step 550). In other words, sub-process 570 populates the 3D mesh model with the data of the second images. Each of the second images is processed by sub-process 570 in series so that the pixel values of one second image are mapped onto one or more mesh elements before processing the next second image. Sub-process 570 will be described below in relation to
In step 580, the method 500 determines the directional radiosity of each mesh element of the 3D mesh model.
A factor K for converting a pixel value to energy (watt) is first determined.
The general equation to determine the factor K is as follows:
where K is a factor that converts a pixel value on a pixel to watts, E is the rate of energy on the pixel (W/px²), P is the greyscale pixel value representing the brightness of the pixel, and px denotes the side length of the (square) pixel.
The factor K is constant if the imaging device 120 has a linear response to the irradiation 10 and the settings of the imaging devices 120 are kept constant.
In the present disclosure, the equation to determine the factor K is as follows:
where Q_{r,c} is the energy reflected by a reference sample and received by the camera iris aperture A_c; and ΣP_ref is the sum of the pixel values that represent the reference sample in the images.
Q_{r,c} is determined using the equation:
Q_{r,c} = I_n · Ω_c · A_r · cos θ_r  (4)
where I_n is the radiance reflected from the reference sample; A_r is the surface area of the reference sample; Ω_c is the solid angle subtended by the camera sensor iris from the point of view of the surface of interest, which is equal to A_c/l², where A_c is the camera iris aperture and l is the distance between the camera iris and the centre of the reference sample; and θ_r is the direction of the camera.
I_n is determined using the equation:
where ρ is the reflectivity of the reference sample; s⃗ is the direction of the sun; and n⃗ is the normal vector of the reference sample.
The energy reflected by the reference sample is determined using the equation:
DNI · A_r · ρ (s⃗ · n⃗) = π I_n A_r  (6)
where DNI is a measurement of the direct normal irradiance of the sun on the surface of the reference sample.
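Equation (5) is not reproduced in this text; rearranging equation (6), it presumably takes the form

$$I_n = \frac{\mathrm{DNI} \cdot \rho \, (\vec{s} \cdot \vec{n})}{\pi},$$

i.e., the reference sample is treated as a diffuse reflector, so its reflected radiance is the reflected irradiance divided by π.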
To obtain the factor K of the third arrangement, a reference sample having a surface with diffuse reflectance, and known surface reflectivity and surface size and shape is used. The reference sample is arranged horizontally under the sun and images of the reference sample are captured by a camera. Equations (4) to (6) are then used to obtain equation (3).
The K factor of equation (3) can be used to determine the directional radiosity of each mesh element of the 3D mesh model.
Assuming a receiver mesh element i is associated with n pixels in an image from the (θ, φ) direction (see sub-process 570), the radiation leaving the mesh element i that is received by the iris aperture of an imaging device 120 can be calculated using the equation:
where P_{i,j} is the greyscale pixel value representing the brightness of a pixel j mapped onto the mesh element i; and px denotes the side length of the (square) pixel.
Assuming the directional radiosity of the object 110 from the mesh element i in the camera direction is I_i(θ, φ), then
Q̇_{i,c} = I_i(θ, φ) · Ω_c · A_i · cos θ  (8)
where A_i is the area of the mesh element; θ and φ are the zenithal and azimuthal angles between the normal vector of the mesh element and the direction of the imaging device 120 (see the discussion on step 590); and

Ω_c = A_c/L²  (9)

is the solid angle subtended at the camera iris aperture of the imaging device 120 when viewed from the mesh element i, where L is the distance between the imaging device 120 and the mesh element.
The directional radiosity of the object 110 from the mesh element i in the direction of (θ, φ) is then obtained by combining equation (3) with equations (8) and (9). The directional radiosity equation is as follows:
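The equation itself is not reproduced in this text. Rearranging equation (8) and substituting equation (9), it presumably takes the form

$$I_i(\theta, \varphi) = \frac{\dot{Q}_{i,c}}{\Omega_c \, A_i \cos\theta} = \frac{\dot{Q}_{i,c} \, L^{2}}{A_c \, A_i \cos\theta},$$

with the radiation rate $\dot{Q}_{i,c}$ obtained from the mapped pixel values via the factor K of equation (3).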
The method 500 proceeds from step 580 to step 590.
In step 590, the method 500 determines the hemispherical radiosity of the object 110 based on the determined directional radiosity.
The directional radiosity of each mesh element (determined at step 580) is integrated over the hemispherical directions to determine the hemispherical radiosity of the object 110. It should be noted that the camera direction is defined locally at each individual mesh element by the zenithal angle θ and azimuthal angle φ, as shown in
where n⃗ is the normal vector of the mesh element, O is the centre of the mesh element, and C is the position of the imaging device 120 that is obtained at step 550. A global reference vector r⃗ is assigned manually to define the starting point of the local azimuth angle φ. As shown in
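As an illustrative sketch (not the exact formulation of the disclosure), the local angles can be computed from the quantities named above as follows; the function and variable names are hypothetical.

```python
import numpy as np

def local_camera_angles(O, C, n, r):
    """Zenithal angle theta and azimuthal angle phi of the camera position C,
    as seen from the mesh element centre O with normal vector n.
    r is the manually assigned global reference vector defining phi = 0."""
    v = C - O
    v_hat = v / np.linalg.norm(v)
    n_hat = n / np.linalg.norm(n)
    theta = np.arccos(np.clip(v_hat @ n_hat, -1.0, 1.0))

    # Project the viewing direction and the reference vector onto the element's tangent plane,
    # then measure the signed angle between the two projections about the normal.
    v_t = v_hat - (v_hat @ n_hat) * n_hat
    r_t = r - (r @ n_hat) * n_hat
    phi = np.arctan2(np.cross(r_t, v_t) @ n_hat, r_t @ v_t) % (2.0 * np.pi)
    return theta, phi
```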
The total radiative losses from the mesh element i are calculated by integrating the radiance distribution I_i(θ, φ) over the hemisphere:
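The integral is not reproduced in this text; a standard form consistent with the description (stated here as an assumption) is

$$\dot{Q}_i = A_i \int_{0}^{2\pi} \int_{0}^{\pi/2} I_i(\theta, \varphi) \cos\theta \, \sin\theta \; d\theta \, d\varphi,$$

where $\cos\theta \sin\theta \, d\theta \, d\varphi$ is the projected solid angle element of the hemisphere above the mesh element i.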
The radiance distribution can then be used for determining temperature distribution, flux distribution, and the like of the object 110.
The method 500 concludes at the conclusion of step 590.
Sub-process 570 commences at step 610 by determining mesh elements of the 3D mesh model that are facing the direction of the second image (which is currently being processed by the sub-process 570). Step 610 therefore disregards mesh elements that are not relevant for a particular second image.
For example, consider an imaging device 120 facing north to capture an image of the object 110. Such an imaging device 120 would capture the south-facing surfaces (i.e., mesh elements) of the object 110 but would not capture the north-facing surfaces (i.e., mesh elements) of the object 110.
As the imaging devices 120 capture images of the object 110, the positions of the imaging devices 120 are known. As described in step 550, the camera matrix stores the respective positions of the imaging devices 120. For ease of description, the camera positions can be denoted by C(x_C, y_C, z_C) and the camera matrices can be denoted by P = K[R|t].
As described above, the 3D mesh model of the object 110 includes mesh elements.
For each mesh element i, the following is known:
To determine whether a mesh element is facing the second image, sub-process 570 checks whether the angle between the vector OC (from the centre O of the mesh element to the camera position C) and the normal vector n⃗ of the mesh element is greater than or equal to 90°. If this condition is met, then the mesh element is excluded. However, if the condition is not met, then the mesh element is determined to be a mesh element that faces the second image.
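A minimal numpy sketch of this facing test (with hypothetical variable names) is shown below; an element is kept only if the vector from its centre O towards the camera position C makes an angle of less than 90° with its normal, which is equivalent to a positive dot product.

```python
import numpy as np

def faces_camera(O, n, C):
    """Return True if the mesh element with centre O and normal vector n faces the camera at C."""
    OC = C - O
    # angle(OC, n) < 90 degrees  <=>  OC . n > 0
    return float(OC @ n) > 0.0

# Keep only the mesh elements relevant to the current second image (step 610).
# relevant = [i for i, (O, n) in enumerate(zip(centres, normals)) if faces_camera(O, n, camera_pos)]
```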
Once the relevant mesh elements for an image are determined, sub-process 570 proceeds from step 610 to step 620.
In step 620, sub-process 570 determines an order of the relevant mesh elements (determined at step 610) based on the positions of the mesh elements and the second image. In one arrangement, the determination is performed by calculating the distance of each relevant mesh element to the second image, where the distance is between the centre O of each mesh element and the position of the imaging device capturing the second image. In another arrangement, an octree technique is implemented to determine the closest mesh elements.
The determined mesh elements are therefore ordered such that the mesh element closest to the imaging device position is ranked first. Sub-process 570 proceeds from step 620 to sub-process 640 for processing the determined mesh elements according to the ordered rank. Each determined mesh element is processed by sub-process 640 to map the pixel values of the second image to the mesh element. The closest mesh element is processed first by sub-process 640 to map certain pixel values of the second image to that closest mesh element. Once those pixel values are mapped to the closest mesh element, the pixel values are set to 0. Setting the mapped pixel values to 0 prevents the same pixel value from being assigned to multiple mesh elements. More importantly, a pixel value cannot be assigned to a mesh element that lies behind a closer mesh element, so occluded mesh elements are not populated with pixel values belonging to the elements in front of them.
Sub-process 640 is shown in
In step 720, sub-process 640 determines whether a pixel of the second image is within the boundary of the projected mesh element.
In step 740, the pixel value of the determined pixel is associated with the projected mesh element. In other words, the pixel value now belongs to the mesh element. Sub-process 640 proceeds from step 740 to step 750.
In step 750, the pixel value that has been associated with the mesh element is marked as assigned, to prevent the pixel value from being assigned to more than one mesh element. In one arrangement, the associated pixel value is set to zero. In another arrangement, each pixel value has a flag indicating whether the pixel value has been associated with a mesh element; the flag is set when the pixel value is associated with a mesh element. Sub-process 640 proceeds from step 750 to step 760.
In step 760, sub-process 640 determines whether there are more pixels to process in the second image. If YES, sub-process 640 proceeds to step 730. In step 730, sub-process 640 moves to the next pixel, then returns to step 720. If NO, sub-process 640 concludes. At the conclusion of sub-process 640, the pixel values of all the relevant pixels of one second image are assigned to the mesh elements of the 3D mesh model.
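The following sketch illustrates one possible implementation of steps 720 to 760 for a single projected mesh element, using a polygon mask; the helper is hypothetical and assumes the element's vertices have already been projected onto the image plane with the camera matrix of step 550.

```python
import numpy as np
import cv2

def assign_pixels_to_element(image, projected_vertices):
    """Sum the pixel values falling inside the projected mesh element and mark them as assigned.
    image: 2D float array of pixel values of a second image (modified in place).
    projected_vertices: (V, 2) array of the element's vertices in pixel coordinates."""
    mask = np.zeros(image.shape, dtype=np.uint8)
    poly = np.round(projected_vertices).astype(np.int32)
    cv2.fillPoly(mask, [poly], 1)               # step 720: pixels within the element boundary
    inside = mask.astype(bool)
    element_value = float(image[inside].sum())  # step 740: associate pixel values with the element
    image[inside] = 0.0                         # step 750: prevent assignment to another element
    return element_value
```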
Sub-process 570 then proceeds from sub-process 640 to step 650.
In step 650, sub-process 570 checks whether there are more second images to process. If YES, sub-process 570 returns to step 610 to process the next second image. If NO, sub-process 570 concludes. At the conclusion of sub-process 570, all the second images have been processed so that the pixel values of the second images are associated with the mesh elements of the 3D mesh model.
The arrangements described are applicable to the computer and data processing industries and particularly for applications for determining radiosity of an object.
The foregoing describes only some embodiments of the present invention, and modifications and/or changes can be made thereto without departing from the scope and spirit of the invention, the embodiments being illustrative and not restrictive.
In the context of this specification, the word “comprising” means “including principally but not necessarily solely” or “having” or “including”, and not “consisting only of”. Variations of the word “comprising”, such as “comprise” and “comprises” have correspondingly varied meanings.
Number | Date | Country | Kind
---|---|---|---
2019240717 | Oct. 2019 | AU | national

Number | Date | Country
---|---|---
20210104094 A1 | Apr. 2021 | US