This application is based on and claims priority under 35 USC 119 from Japanese Patent Application No. 2021-091788 filed on May 31, 2021, the contents of which are incorporated herein by reference.
The present disclosure relates to a fluid density gradient detection method and a fluid density gradient detection system that visualize a fluid density gradient.
As a method of visualizing a density gradient of an airflow, a Schlieren method is known, which uses the phenomenon that light passing through a transparent gas is refracted by the density gradient, and observes the resulting change in refractive index as a difference in brightness and darkness.
In order to visualize the density gradient of the airflow by using the Schlieren method, an optical system for creating parallel light and a knife edge for blocking a main light flux are necessary. This raises problems: a technique is required for adjusting the optical axis of the optical system, and the installation place and the imaging direction are restricted.
As a method of visualizing the density gradient of the airflow that requires neither an optical system that generates parallel light nor a light blocking unit, a background-oriented schlieren method (hereinafter referred to as a “BOS method”) is known (for example, see Non-Patent Literature 1). For example, Patent Literature 1 discloses a method of visualizing a density gradient using the BOS method by blocking light with a light blocking unit according to a pattern projected by a projector.
However, a camera is generally not provided with such a light blocking unit, and it is difficult to apply the method disclosed in Patent Literature 1 to a general-purpose camera or an existing camera to visualize the density gradient of the airflow.
The present disclosure has been made in view of such problems in the related art. An object of the present disclosure is to provide a fluid density gradient detection method and a fluid density gradient detection system that visualize a density gradient of an airflow with high sensitivity using a general-purpose camera or an existing camera.
The present disclosure provides a fluid density gradient detection method, including: capturing, by an imaging device under a predetermined imaging condition, a background image that forms a periodic pattern over an observation target area; and outputting an image indicating a fluid density gradient in the observation target area based on a captured image captured by the imaging device. The imaging condition is determined based on a relationship between a width of a pixel of the captured image in a direction in which the pattern periodically changes on the captured image and a pattern period width of the pattern on the captured image.
Further, the present disclosure provides a fluid density gradient detection system including: an imaging unit configured to image, under a predetermined imaging condition, a background image forming a periodic pattern over an observation target area; and an image output unit configured to output an image indicating a fluid density gradient in the observation target area based on the captured image captured by the imaging unit. The imaging condition is determined based on a relationship between a width of a pixel of the captured image in a direction in which the pattern periodically changes on the captured image and a pattern period width of the pattern on the captured image.
According to the present disclosure, the detection sensitivity of the density gradient can be improved.
(Knowledge that is the Basis of Fluid Density Gradient Detection According to the Present Disclosure)
Moire (interference fringes) is a stripe pattern generated by optical interference when periodic patterns are superimposed. For example, when a periodic pattern is projected onto the image sensor of a digital camera, which is an aggregate of periodic pixels, moire may be generated. The moire generated by interference with the image sensor is peculiar to the digital camera and may be regarded as a drawback in comparison with a film camera, so imaging is sometimes performed with the focus shifted or with a low-pass filter attached in order to prevent its generation.
However, in the present disclosure, by actively generating moire due to interference with the image sensor, a displacement enlargement effect of the moire is utilized to realize the visualization of a minute density change which is difficult to perform by the general BOS method.
Moire has a characteristic that a small relative displacement between patterns is greatly enlarged as a movement amount of the moire. For example,
In the following embodiments, an example will be described in which high precision visualization of a density gradient by a general-purpose camera is implemented without using special optical equipment by using moire due to interference with an image sensor.
Hereinafter, embodiments specifically disclosing a fluid density gradient detection method and a fluid density gradient detection system according to the present disclosure will be described in detail with reference to the drawings as appropriate. Unnecessarily detailed description may be omitted. For example, a detailed description of a well-known matter or a repeated description of substantially the same configuration may be omitted. This is to avoid unnecessary redundancy in the following description and to facilitate understanding of those skilled in the art. The accompanying drawings and the following description are provided for a thorough understanding of the present disclosure for those skilled in the art, and are not intended to limit the subject matter in the claims.
In the present embodiment, the observation target 10 is assumed to be a gas such as air in which an airflow is generated and which has a density gradient, but may be a liquid, a solid, or the like through which light having a wavelength that can be captured by the camera 20 can be transmitted.
The imaging system 100 may further include a display medium 35 for installing or displaying the background 30. That is, the display medium 35 may be provided on a back side of the background 30 as viewed from the camera 20. The display medium 35 may be configured by a part of a facility such as, for example, a wall. In this case, the background 30 may be formed as a pattern of the wall or the like, or may be projected onto the wall or the like by a projector. Alternatively, the display medium 35 may be a portable medium such as paper or plastic. In this case, the background 30 is printed on the portable medium and attached to the wall, a screen, or the like in the facility. Alternatively, the display medium 35 may be configured by, for example, a display. In this case, the background 30 is displayed on the display.
The imaging system 100 may further include a computer (hereinafter referred to as a “camera control unit 26”) that transmits a control signal for controlling an operation of the camera 20. The control signal includes, for example, a signal for changing optical conditions such as focus and zoom of the camera 20, and a signal for controlling start or stop of imaging. If the camera 20 can pan or tilt, the camera control unit 26 may transmit a control signal for controlling the pan or tilt to the camera 20.
The camera control unit 26 may be integrally incorporated in the camera 20. All or a part of the control of the camera 20 may be executed by a user directly operating an input unit (a button, a lens, a touch panel, or the like) attached to the camera 20 instead of the camera control unit 26.
The processor 41 controls each element of the camera control unit 26 via the bus 42. The processor 41 is configured by using, for example, a general-purpose central processing unit (CPU), a digital signal processor (DSP), or a field programmable gate array (FPGA). The processor 41 may execute a predetermined program stored in the memory 43 to generate a focus adjustment signal or a zoom adjustment signal based on a captured image captured by the camera 20, or to visualize the density gradient by processing to be described later using the captured image captured by the camera 20.
The memory 43 acquires various kinds of information from other elements, and temporarily or permanently holds the information. The memory 43 is a generic term for a so-called primary storage device and a secondary storage device. A plurality of memories 43 may be physically disposed. As the memory 43, for example, a dynamic random access memory (DRAM), a hard disk drive (HDD), or a solid state drive (SSD) is used.
The display 44 is configured by using, for example, a liquid crystal display (LCD) or an organic electroluminescence (EL) device, and may display a captured image sent from the camera 20 or a result of visualization processing of the density gradient.
The input unit 45 is configured by using an operation device for receiving an operation from the user, and may be, for example, a mouse or a keyboard. The input unit 45 may constitute a touch panel integrated with the display 44.
The communication unit 46 is configured by using a communication circuit for functioning as a communication interface with the camera 20, and, for example, outputs the focus adjustment signal or the zoom adjustment signal to the camera 20 or receives a captured image from the camera 20. In addition, the communication unit 46 may transfer the captured image and the result of the visualization processing of the density gradient to another external device via a network such as the Internet.
The optical unit 21 includes, for example, a plurality of lenses, includes a focus function unit 22 and a zoom function unit 23, and changes the optical condition of the camera 20. By adjusting the position or the like of one or more lenses constituting the optical unit 21, the focus function unit 22 for adjusting a focus position and the zoom function unit 23 for adjusting the magnitude of the angle of view by zooming are implemented. The optical unit 21 may operate the focus function unit 22 and the zoom function unit 23 in response to a direct operation by the user, or in response to the focus adjustment signal and the zoom adjustment signal from the camera control unit 26 to be described later.
The image sensor 25 is an aggregate of a plurality of pixels disposed two-dimensionally, and has RH pixels within the length WS of the imaging surface in the direction in which the density change is to be visualized. That is, in the image sensor 25 of the camera 20 according to the present embodiment, a total of RH pixels are arranged in the horizontal direction. The image sensor 25 outputs an image signal or image data obtained by imaging, and may capture visible light incident on the imaging surface or invisible light such as infrared light incident on the imaging surface.
In the present embodiment, the stripe pattern 31 is a monochrome binary stripe pattern, but may be a color stripe pattern, a stripe pattern due to a gradation change, or a lattice pattern.
In the present embodiment, in the captured image 32 shown in
Under the imaging condition described above, a displacement enlargement effect due to moire appears in the captured image 32 obtained by capturing the observation target 10 and the background image by the camera 20. Accordingly, it is possible to enlarge a displacement less than one pixel generated by a density change gradient of an imaging target (in other words, the observation target 10), and it is possible to improve the detection sensitivity of the density gradient of the BOS method.
The reason why IPH=2SH is selected as the optical condition will be described with reference to
Then, in the pixels of the lower two stages due to the shift ΔS, a ratio of the bright portion (white) and the dark portion (black) of the stripe pattern in the pixels changes as compared with the pixels of the upper two stages, and thus values of the pixels constituting the image captured by the image sensor 25 change. When the change of the pixel appears as moire, movement of the background (that is, the refraction of the light due to the density gradient of the gas) can be visualized.
For example, when the optical condition (for example, the period IPH) is larger than 2SH, in a state of IPH=3SH shown in
Here, when the IPH is brought close to 2SH to be in a state of IPH=2.5SH shown in
Even when the optical condition (for example, the period IPH) is smaller than 2SH, the shift ΔS can be visualized as the IPH approaches 2SH.
For example, in a state of IPH=1SH shown in
Here, when the IPH is brought close to 2SH to be in a state of IPH=1.5SH shown in
As described above, in the relationship between IPH and 2SH, in the range where IPH is larger than 1.5SH and smaller than 2.5SH, the density gradient of the gas can be clearly visualized by moire particularly under the condition of IPH=2SH.
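The relationships above can be checked with a small numerical sketch, assuming ideal box sampling of a binary stripe by pixels of unit width (pixel pitch SH = 1). The beat-period expression 2·IPH/|2 − IPH| (IPH in pixels) used below follows from the alias frequency |1/IPH − 1/2| and is our reconstruction, not a formula from the text:

```python
# Sketch: box sampling of a binary stripe pattern by a unit-pitch sensor.
# Pixel i integrates the stripe brightness over the interval [i, i+1).

def pixel_value(i, period, shift):
    """Mean brightness of a square-wave stripe (bright in the first half
    of each period) over pixel interval [i, i+1), after shifting the
    pattern by `shift` pixels."""
    steps = 1000
    total = 0.0
    for s in range(steps):
        x = i + (s + 0.5) / steps - shift
        total += 1.0 if (x % period) < period / 2 else 0.0
    return total / steps

def moire_period(period):
    """Beat period (in pixels) between the stripe frequency 1/period and
    the alias at half the sampling rate; diverges as period -> 2 pixels."""
    return 2.0 * period / abs(2.0 - period)

# The beat period grows as the stripe period approaches 2 pixels:
print(round(moire_period(3.0), 1))   # 6.0
print(round(moire_period(2.5), 1))   # 10.0
print(round(moire_period(2.1), 1))   # 42.0

# At exactly 2 px/period, a 0.1-px background shift changes pixel values
# by about 0.1 -- clearly visible even though the shift is sub-pixel.
change = abs(pixel_value(0, 2.0, 0.1) - pixel_value(0, 2.0, 0.0))
print(round(change, 2))              # 0.1
```

The divergence of the beat period near IPH=2SH is what makes that condition the most sensitive operating point.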
Next, an adjustment procedure of the angle of view α of the camera 20 will be described with reference to
First, the camera control unit 26 sets a zoom to a telephoto end using the zoom function unit 23 of the camera 20, and images the stripe pattern 31 of the background 30 in an enlarged manner (ST100).
Next, the camera control unit 26 adjusts the focus on the background 30 using the focus function unit 22 (ST110). The focus adjustment may be performed by a general autofocus function.
Next, the camera control unit 26 performs rough adjustment (an example of a first adjustment) of the angle of view α by the zoom function unit 23 (ST120). An adjustment target value of the angle of view α is calculated from the following (Formula 1) using the distance L from the camera 20 to the background 30, the horizontal pixel number RH of the image sensor 25, and the period PH of the stripe pattern 31 of the background 30 (see
When a focal length f is used as a unit of zoom adjustment, a relationship between the angle of view α and the focal length f can be obtained using the following (Formula 2). WS is a length of an effective region of the imaging surface of the image sensor 25 (see
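Since the bodies of (Formula 1) and (Formula 2) are given in the drawings, the sketch below uses forms reconstructed from the stated geometry, which is an assumption: with IPH = 2SH, the RH pixels span RH/2 stripe periods, i.e. a background width of RH·PH/2, giving tan(α/2) = RH·PH/(4L); the pinhole relation then gives f = WS/(2·tan(α/2)).

```python
import math

def target_angle_of_view(L, RH, PH):
    """Reconstructed (Formula 1): with IPH = 2*SH, the RH horizontal
    pixels span RH/2 stripe periods, i.e. a background width of RH*PH/2,
    so tan(alpha/2) = (RH*PH/4) / L.  Returns alpha in radians."""
    return 2.0 * math.atan(RH * PH / (4.0 * L))

def focal_length_for(alpha, WS):
    """Reconstructed (Formula 2): pinhole relation between the angle of
    view alpha and the focal length f, namely f = WS / (2*tan(alpha/2))."""
    return WS / (2.0 * math.tan(alpha / 2.0))
```

For example, with L = 2 m, RH = 1920, and PH = 0.5 mm, the target α is about 13.7°.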
In the captured image 32 captured by focusing on the background 30 under the optical condition of (Formula 1), the relationship between the period IPH of the stripe pattern 31 and the pixel width SH of the image is 1.5SH<IPH<2.5SH, and moire occurs due to interference between a pixel row of the image sensor 25 and the stripe pattern 31.
When the image sensor 25 is an image sensor that generates a grayscale image, since one pixel on the image sensor corresponds to one pixel of generated grayscale image data, the horizontal pixel number RH corresponds to the horizontal pixel number on the image sensor 25. On the other hand, when the image sensor 25 is configured by a Bayer array and generates a color image, since one set of arrays corresponds to one pixel in image data, the horizontal pixel number RH corresponds to the number of sets of arrays.
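The pixel-count convention above can be made explicit with a small helper; the assumption that one Bayer set spans two sensor columns follows from the common 2×2 RGGB unit and is illustrative:

```python
def horizontal_pixel_number(sensor_cols, bayer=False):
    """RH used in (Formula 1): one photosite per image pixel for a
    grayscale sensor; for a Bayer sensor, one set of arrays (assumed to
    be a 2x2 RGGB unit, i.e. two sensor columns) yields one image pixel."""
    return sensor_cols // 2 if bayer else sensor_cols
```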
After the rough adjustment (an example of the first adjustment) of the angle of view α in step ST120, the camera control unit 26 performs fine adjustment (an example of a second adjustment) of the zoom position (focal length f) using moire (ST130).
Here, the fine adjustment of the zoom position of the camera will be described in detail with reference to
In step ST131, the camera control unit 26 stores the focal length corresponding to the angle of view adjusted in step ST120 as a focal length f(0) for the repetition count k=0.
In step ST132, the camera control unit 26 measures a moire period PM(0) by a method described later.
In step ST133, the camera control unit 26 changes the zoom position so that the focal length f is shifted by Δf(k).
In step ST134, the camera control unit 26 measures the moire period PM(k) after the zoom position is changed.
In step ST135, the camera control unit 26 calculates a difference ΔPM(k) between PM(k) and PM(k−1) before the zoom change.
In step ST136, the camera control unit 26 compares an absolute value of ΔPM(k) with an end determination threshold PTH. The end determination threshold PTH is set, for example, to a value at which the change in ΔPM(k) is reduced to such an extent that it can no longer be visually recognized.
When |ΔPM(k)| is equal to or larger than PTH, the processing proceeds to the next step. When |ΔPM(k)| is smaller than PTH, the processing is completed assuming that the adjustment is completed.
A processing completion condition may include not only the comparison between ΔPM(k) and PTH, but also a condition to end when a loop counter k exceeds a predetermined value. Accordingly, the processing can be ended even when the zoom adjustment does not converge to the end determination threshold PTH or less.
In step ST137, the camera control unit 26 determines whether ΔPM(k) is equal to or greater than 0. When ΔPM(k) is 0 or more, the processing proceeds to step ST138. When ΔPM(k) is smaller than 0, the processing proceeds to step ST139.
In step ST138, the camera control unit 26 sets a next zoom adjustment width Δf(k+1) to be equal to Δf(k), and the processing returns to step ST133.
In step ST139, the camera control unit 26 sets the next zoom adjustment width Δf(k+1) to −X·Δf(k), and the processing returns to step ST133. X is an update coefficient that satisfies 0<X<1.
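The loop of steps ST131 to ST139 amounts to a one-dimensional hill climb on the moire period. A sketch under the assumption that the camera is driven through two hypothetical callbacks (`measure_moire_period`, `set_focal_length`):

```python
def fine_adjust_zoom(measure_moire_period, set_focal_length, f0, df0,
                     p_th, x=0.5, max_iter=50):
    """Hill climb of steps ST131-ST139: keep stepping the focal length
    while the moire period grows; when it shrinks, reverse direction and
    reduce the step by the update coefficient x (0 < x < 1); stop once
    the change in period drops below the end-determination threshold
    p_th, or after max_iter loops (the loop-counter guard of ST136)."""
    f, df = f0, df0
    pm_prev = measure_moire_period()          # ST131/ST132: PM(0) at f(0)
    for _ in range(max_iter):
        f += df                               # ST133: shift focal length by df
        set_focal_length(f)
        pm = measure_moire_period()           # ST134: PM(k)
        dpm = pm - pm_prev                    # ST135: change in moire period
        if abs(dpm) < p_th:                   # ST136: converged, adjustment done
            break
        if dpm < 0:                           # ST137 -> ST139: period shrank,
            df = -x * df                      # so reverse and shrink the step
        # ST138: otherwise keep the same step width
        pm_prev = pm
    return f
```

Because the moire period is maximized at IPH=2SH, driving the period upward in this way steers the zoom toward that condition.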
Here, a method of measuring the period PM of the moire (interference fringe) will be described.
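The measurement itself is left to the drawings; one plausible implementation (an assumption, not the disclosed method) estimates the period from the strongest low-frequency Fourier component of a brightness profile taken across the stripes:

```python
import math

def moire_period_px(profile):
    """Estimate the moire period from a 1-D brightness profile taken
    across the stripes: remove the mean, then pick the strongest
    low-frequency DFT bin.  Only bins below n/4 are searched so that the
    near-Nyquist stripe component itself (period ~2 px) is excluded and
    only the slow beat remains."""
    n = len(profile)
    mean = sum(profile) / n
    centered = [v - mean for v in profile]
    best_k, best_mag = 1, -1.0
    for k in range(1, n // 4):
        re = sum(centered[i] * math.cos(2.0 * math.pi * k * i / n) for i in range(n))
        im = sum(centered[i] * math.sin(2.0 * math.pi * k * i / n) for i in range(n))
        mag = re * re + im * im
        if mag > best_mag:
            best_k, best_mag = k, mag
    return n / best_k
```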
According to the above procedure, the optical unit 21 of the camera 20 can be adjusted such that the relationship between the stripe period IPH of the captured image 32 and the pixel width SH of the image is IPH≈2SH.
Next, a method of visualizing the density gradient of the observation target 10 using the captured image 32 captured under the above-described optical conditions will be described.
The density gradient of the airflow is visualized by performing image processing using a reference background image, captured by the camera in a state where no density gradient of the airflow is generated in the observation target area, and an observation background image, captured in a state where the density gradient of the airflow is generated in the observation target area.
The image processing is performed by the camera control unit 26, and for example, a method described in Patent Literature 2 may be used.
Alternatively, instead of using the difference between the reference background image and the observation background image, the observation background image may be acquired continuously, and a frame difference between the latest background image and the background image captured a unit time (one frame) before the latest background image may be used.
By using the frame difference, it is possible to suppress the influence of a shift in the camera position or a change in the optical condition between the time of capturing the reference image and the time of capturing the observation image.
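The frame-difference variant can be sketched as follows (grayscale frames represented as lists of rows; illustrative only):

```python
def frame_difference(prev_frame, cur_frame):
    """Per-pixel absolute difference between the latest observation
    background image and the one captured a frame earlier.  Slow drift of
    the camera position or optical condition largely cancels out because
    both frames are affected almost equally."""
    return [[abs(c - p) for p, c in zip(prev_row, cur_row)]
            for prev_row, cur_row in zip(prev_frame, cur_frame)]
```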
Next, a second embodiment will be described. Members described in the first embodiment are denoted by the same reference numerals, and detailed description thereof will be omitted. In the first embodiment, the stripe pattern 31 is fixed on the background 30. The second embodiment is an imaging system in which the background 30 is projected by a projector 50, as shown in
The projector 50 projects an image input from the camera control unit 26.
The camera control unit 26 includes a background creation unit 70 that is connected to the camera 20, analyzes the background image captured by the camera 20, and changes the period PH of the stripe pattern 31 projected onto the display medium 35 such as a screen or a wall by the projector 50. The background creation unit 70 is implemented by a processor 41 (see
In the second embodiment, a single-focus camera can be used as the camera 20. The first embodiment uses a camera having an optical zoom function; in general, however, there are also single-focus cameras without the optical zoom function.
When a single-focus camera without the optical zoom function is used to approach IPH=2SH, the angle of view α cannot be adjusted as in the first embodiment, so the distance L would have to be changed instead. However, when the installation places of the camera 20 and the display medium 35 are restricted, it is difficult to change the distance L.
In the second embodiment, even if the angle of view α of the camera 20 is fixed, or even if the installation places of the camera 20 and the display medium 35 are restricted, the imaging condition of IPH≈2SH can be achieved by changing the period PH, that is, the interval of the stripes of the stripe pattern 31 of the background 30.
Hereinafter, a method of generating the background 30 in the background creation unit 70 will be described. In order to adjust the imaging condition by changing the background 30, the background creation unit 70 first creates the stripe pattern of the background so as to satisfy the above-described (Formula 1), as an initial value PH(0) of the stripe pattern period (an example of the first adjustment).
Here, the distance L between the camera 20 and the background in (Formula 1) may be actually measured and input, or the background creation unit 70 may input a figure of a known size to the projector 50, measure the size of the figure from the image captured by the camera 20, and estimate the distance L from the size.
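The distance estimate from a figure of known size can be sketched with a pinhole model (an assumption; `focal_length` and `pixel_pitch`, both in metres, are taken as known parameters of the fixed-focal-length camera):

```python
def estimate_distance(real_width, image_width_px, focal_length, pixel_pitch):
    """Sketch of the distance estimate described in the text: project a
    figure of known physical width, measure its width in pixels in the
    captured image, and invert the pinhole relation
        image_width_px * pixel_pitch / focal_length = real_width / L."""
    return real_width * focal_length / (image_width_px * pixel_pitch)
```

For example, a 0.5 m wide figure imaged at 1000 px by a 25 mm lens with 5 µm photosites implies L = 2.5 m.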
Next, according to
In step ST201, the background creation unit 70 measures a moire period PM(0).
In step ST202, the background creation unit 70 changes the period PH of the stripe pattern 31 by an adjustment width ΔPH(k) of the stripe pattern.
In step ST203, the background creation unit 70 measures a moire period PM(k).
In step ST204, the background creation unit 70 calculates a difference ΔPM(k) between the PM(k) and PM(k−1) before the stripe pattern is changed.
In step ST205, the background creation unit 70 compares an absolute value of ΔPM(k) with the end determination threshold PTH. When |ΔPM(k)| is equal to or greater than PTH, the processing proceeds to the next step. When |ΔPM(k)| is smaller than PTH, the processing is completed assuming that the adjustment of the period PH of the stripe pattern 31 is completed. The end determination threshold PTH is set, for example, to a value at which the change in ΔPM(k) is reduced to such an extent that it can no longer be visually recognized.
In step ST206, the background creation unit 70 determines whether ΔPM(k) is equal to or greater than 0. When ΔPM(k) is 0 or more, the processing proceeds to step ST207. When ΔPM(k) is smaller than 0, the processing proceeds to step ST208. In step ST207, the background creation unit 70 sets an adjustment width ΔPH(k+1) of the next stripe pattern to be equal to ΔPH(k), and the processing returns to step ST202.
In step ST208, the background creation unit 70 changes the adjustment width ΔPH(k+1) of the next stripe pattern to −X·ΔPH(k), and the processing returns to step ST202. X is an update coefficient that satisfies 0<X<1.
According to the above procedure, the stripe pattern 31 of the background 30 can be adjusted such that the relationship between the stripe period IPH of the captured image 32 and the pixel width SH of the image is IPH≈2SH.
The background creation unit 70 may rotate the stripe pattern 31 of the background 30 so that the stripe pattern 31 is distributed in the direction of the density gradient desired to be visualized.
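A background generator covering both the period setting and the rotation might look like the following sketch (the function name, 8-bit brightness levels, and rotation convention are illustrative assumptions):

```python
import math

def make_stripe_pattern(width_px, height_px, period_px, angle_deg=0.0):
    """Binary stripe image for the projector: stripes of period
    `period_px`, rotated by `angle_deg` so that the direction of periodic
    change can be aligned with the density gradient to be visualized.
    255/0 are the bright/dark levels of an 8-bit projector input."""
    theta = math.radians(angle_deg)
    dx, dy = math.cos(theta), math.sin(theta)
    return [[255 if ((x * dx + y * dy) % period_px) < period_px / 2.0 else 0
             for x in range(width_px)]
            for y in range(height_px)]
```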
As shown in
Further, when the displacement due to the density change is large, the stripe pattern 31 of the background 30 may be changed, and the method may be switched to the BOS method in which the effect of enlarging the moire is not used. As the stripe pattern 31 in this case, for example, the stripe pattern described in Non-Patent Literature 1 may be used.
According to this method, when the density change is large, measurement can be performed by the general BOS method, and the range of measurable density change can be enlarged.
A fluid density gradient detection method according to an aspect of the present disclosure is a method of visualizing a density gradient of an observation target area using a background image in which a stripe pattern is present as a background of the observation target area and the background is captured over the observation target area. The background image is captured under an optical condition in which the period of the stripes of the stripe pattern in the background image falls within a range of 150% to 250% of the width of one pixel.
A fluid density gradient detection method according to an aspect of the present disclosure includes: capturing, by an imaging device under a predetermined imaging condition, a background image that forms a periodic pattern over an observation target area; and outputting an image indicating a fluid density gradient in the observation target area based on a captured image captured by the imaging device. The imaging condition is determined based on a relationship between a width (SH) of a pixel of the captured image in a direction in which the pattern periodically changes on the captured image and a pattern period width (IPH) of the pattern on the captured image. Accordingly, since the observation target area is captured under the imaging condition in which the width of the pixel of the captured image and the pattern period width have an appropriate relationship, it is possible to improve the detection sensitivity of the fluid density gradient without using a special optical system.
A fluid density gradient detection method according to an aspect of the present disclosure includes: capturing, by an imaging device under a predetermined imaging condition, a background image that forms a periodic pattern over an observation target area; and outputting an image indicating a fluid density gradient in the observation target area based on a captured image captured by the imaging device. In the imaging condition, a relationship between a width (SH) of a pixel of the captured image in a direction in which the pattern periodically changes on the captured image and a pattern period width (IPH) of the pattern on the captured image is determined such that the pattern period width (IPH) is in a range of 150% to 250% with respect to the width (SH) of the pixel. Accordingly, it is possible to enlarge a change of less than one pixel of the captured image, and it is possible to improve the detection sensitivity of the fluid density gradient without using a special optical system.
A fluid density gradient detection method according to another aspect of the present disclosure includes: capturing, by an imaging device under a predetermined imaging condition, a background image that forms a periodic pattern over an observation target area; and outputting an image indicating a fluid density gradient in the observation target area based on a captured image captured by the imaging device. The imaging condition is determined based on a relationship between a width (SH) of a pixel of the captured image in a direction in which the pattern on the captured image periodically changes and a pattern period width (IPH) of the pattern on the captured image and a period of moire generated on the captured image. Accordingly, since the observation target area is captured under the appropriate imaging condition in which the relationship between the width of the pixel of the captured image and the pattern period width is determined by the period of the moire generated in the captured image, it is possible to improve the detection sensitivity of the fluid density gradient without using a special optical system.
A fluid density gradient detection method according to an aspect of the present disclosure includes: capturing, by an imaging device under a predetermined imaging condition, a background image that forms a periodic pattern over an observation target area; and outputting an image indicating a fluid density gradient in the observation target area based on a captured image captured by the imaging device. The imaging condition is determined by performing a first adjustment based on a relationship between a width (SH) of a pixel of the captured image in a direction in which the pattern periodically changes on the captured image and a pattern period width (IPH) of the pattern on the captured image, and further performing a second adjustment based on a period of moire generated on the captured image captured under the imaging condition adjusted by the first adjustment. Accordingly, moire is first generated in the captured image, then the width of the pixel of the captured image and the pattern period width are adjusted to an appropriate relationship using the period of the generated moire, and the observation target area is captured, so that the detection sensitivity of the fluid density gradient can be improved without using a special optical system.
A fluid density gradient detection method according to an aspect of the present disclosure includes: capturing, by an imaging device under a predetermined imaging condition, a background image that forms a periodic pattern over an observation target area; and outputting an image indicating a fluid density gradient in the observation target area based on a captured image captured by the imaging device. The imaging condition is determined by performing a first adjustment based on a relationship between a width (SH) of a pixel of the captured image in a direction in which the pattern periodically changes on the captured image and a pattern period width (IPH) of the pattern on the captured image, and further performing a second adjustment, based on a period of moire generated on the captured image captured under the imaging condition adjusted by the first adjustment, such that the period of the moire becomes longest. Accordingly, moire is first generated in the captured image, then the width of the pixel of the captured image and the pattern period width are adjusted to an appropriate relationship using the period of the generated moire, and the observation target area is captured, so that the detection sensitivity of the fluid density gradient can be improved without using a special optical system.
A fluid density gradient detection method according to an aspect of the present disclosure includes: capturing, by an imaging device under a predetermined imaging condition, a background image that forms a periodic pattern over an observation target area; and outputting an image indicating a fluid density gradient in the observation target area based on a captured image captured by the imaging device. The imaging condition is determined based on a relationship between a width (SH) of a pixel of the captured image in a direction in which the pattern periodically changes on the captured image and a pattern period width (IPH) of the pattern on the captured image, and the imaging condition is changed by adjusting an optical condition of the imaging device including an angle of view. Accordingly, since the observation target area is captured under the imaging condition in which the width of the pixel of the captured image and the pattern period width have an appropriate relationship, it is possible to improve the detection sensitivity of the fluid density gradient without using a special optical system.
A fluid density gradient detection method according to an aspect of the present disclosure includes: capturing by an imaging device, under a predetermined imaging condition, a background image that forms a periodic pattern over an observation target area; and outputting an image indicating a fluid density gradient in the observation target area based on a captured image captured by the imaging device. The imaging condition is determined based on a relationship between a width (SH) of a pixel of the captured image in a direction in which the pattern periodically changes on the captured image and a pattern period width (IPH) of the pattern on the captured image, and the imaging condition is changed by adjusting the pattern period width (IPH) of the pattern. Accordingly, since the observation target area is captured under the imaging condition in which the width of the pixel of the captured image and the pattern period width have an appropriate relationship, it is possible to improve the detection sensitivity of the fluid density gradient even in a camera without a zoom function.
A fluid density gradient detection method according to an aspect of the present disclosure includes: capturing, by an imaging device under a predetermined imaging condition, a background image that forms a periodic pattern over an observation target area; and outputting an image indicating a fluid density gradient in the observation target area based on a captured image captured by the imaging device. The imaging condition is determined based on a relationship between a width (SH) of a pixel of the captured image in a direction in which the pattern periodically changes on the captured image and a pattern period width (IPH) of the pattern on the captured image, and the imaging condition is changed by adjusting the pattern period width (IPH) of the pattern. Further, the pattern period width (IPH) is changed in accordance with a magnitude of a gas density gradient in the observation target area. Accordingly, since the observation target area is captured under the imaging condition in which the width of the pixel of the captured image and the pattern period width have an appropriate relationship in accordance with the magnitude of the gas density gradient of the observation target area, it is possible to change the detection sensitivity in accordance with the magnitude of the fluid density gradient without using a special optical system.
A fluid density gradient detection method according to an aspect of the present disclosure includes: capturing, by an imaging device under a predetermined imaging condition, a background image that forms a periodic pattern over an observation target area; and outputting an image indicating a fluid density gradient in the observation target area based on a captured image captured by the imaging device. The imaging condition is determined based on a relationship between a width (SH) of a pixel of the captured image in a direction in which the pattern periodically changes on the captured image and a pattern period width (IPH) of the pattern on the captured image, and a direction in which the pattern periodically changes is further determined in accordance with a direction of a gas density gradient in the observation target area. Accordingly, even when the direction of the density gradient of the measurement target changes, the detection sensitivity of the fluid density gradient can be improved by maintaining an appropriate relationship between the width of the pixel of the captured image and the pattern period width.
A fluid density gradient detection method according to an aspect of the present disclosure includes: capturing, by an imaging device under a predetermined imaging condition, a background image that forms a periodic pattern over an observation target area; and outputting an image indicating a fluid density gradient in the observation target area based on a captured image captured by the imaging device. The imaging condition is determined based on a relationship between a width (SH) of a pixel of the captured image in a direction in which the pattern periodically changes on the captured image and a pattern period width (IPH) of the pattern on the captured image. Further, the background image is divided into a plurality of sections, and the direction of the pattern is determined for each section in accordance with a direction of a gas density gradient in the observation target area. Accordingly, even when the density change of the measurement target has a plurality of directions within the imaging target area, an appropriate relationship between the width of the pixel of the captured image and the pattern period width can be maintained for each direction, and the detection sensitivity of the fluid density gradient can be improved.
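As an illustration of dividing the background into sections with per-section pattern directions, the following sketch (assumed, not from the disclosure) assembles a sinusoidal stripe background in which each section's periodic direction is set independently, e.g. to match an expected local density-gradient direction:

```python
import numpy as np

def striped_background(shape, period_px, angle_rad):
    """Sinusoidal stripe pattern whose direction of periodic change is angle_rad."""
    y, x = np.mgrid[0:shape[0], 0:shape[1]]
    phase = (x * np.cos(angle_rad) + y * np.sin(angle_rad)) * 2 * np.pi / period_px
    return 0.5 + 0.5 * np.cos(phase)  # intensities in [0, 1]

def sectioned_background(section_angles, section_px, period_px):
    """Assemble a background from square sections, each with its own stripe
    direction chosen per the expected local density-gradient direction."""
    rows = []
    for angle_row in section_angles:
        row = [striped_background((section_px, section_px), period_px, a)
               for a in angle_row]
        rows.append(np.hstack(row))
    return np.vstack(rows)

# One row of two sections: horizontal-period stripes next to vertical-period stripes.
bg = sectioned_background([[0.0, np.pi / 2]], section_px=32, period_px=8)
```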
A fluid density gradient detection method according to an aspect of the present disclosure includes: capturing, by an imaging device under a predetermined imaging condition, a background image that forms a periodic pattern over an observation target area; and outputting an image indicating a fluid density gradient in the observation target area based on a captured image captured by the imaging device. Adjustment of the imaging condition is performed in a state where a focus of the imaging device is adjusted to the background image. First, a first adjustment is performed based on a relationship between a width (SH) of a pixel of the captured image in a direction in which the pattern periodically changes on the captured image and a pattern period width (IPH) of the pattern on the captured image, and further a second adjustment is performed based on a period of moire generated on the captured image captured under the imaging condition adjusted by the first adjustment. Accordingly, since the observation target area is captured under the appropriate imaging condition in which the relationship between the width of the pixel of the captured image and the pattern period width is determined by the period of the moire generated in the captured image, it is possible to improve the detection sensitivity of the fluid density gradient without using a special optical system.
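One way to picture the moire-based second adjustment is the classical beat relation between two nearby periods: the moire period grows without bound as the pattern period on the image approaches the pixel pitch (or a multiple of it), so choosing the condition with the longest moire period drives the two toward alignment. The sketch below assumes this beat formula as an illustration; it is not the adjustment procedure defined in the disclosure.

```python
import math

def moire_period(iph, sh):
    """Beat period between a pattern of period iph and a pixel grid of pitch sh.
    Diverges (infinite moire period) as iph approaches sh."""
    if math.isclose(iph, sh):
        return math.inf
    return 1.0 / abs(1.0 / iph - 1.0 / sh)

def select_condition(candidate_iphs, sh):
    """Second adjustment sketch: among candidate imaging conditions (here
    represented by their resulting pattern period widths), pick the one
    whose moire period on the captured image is longest."""
    return max(candidate_iphs, key=lambda iph: moire_period(iph, sh))
```

For example, with a pixel pitch of 1.0, a candidate pattern period of 1.05 yields a far longer beat period than 0.8 or 1.5, so it would be selected.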
A fluid density gradient detection system according to an aspect of the present disclosure includes: an imaging unit configured to image, under a predetermined imaging condition, a background image forming a periodic pattern over an observation target area; and an image output unit configured to output an image indicating a fluid density gradient in the observation target area based on the captured image captured by the imaging unit. The imaging condition is determined based on a relationship between a width (SH) of a pixel of the captured image in a direction in which the pattern periodically changes on the captured image and a pattern period width (IPH) of the pattern on the captured image. Accordingly, since the observation target area is captured under the imaging condition in which the width of the pixel of the captured image and the pattern period width have an appropriate relationship, it is possible to improve the detection sensitivity of the fluid density gradient without using a special optical system.
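The image output step in a BOS-style system typically estimates how far the background pattern has shifted between a reference image (no flow) and a measurement image, since the local shift is proportional to the refraction caused by the density gradient. The following minimal sketch is assumed, not the disclosure's processing: it uses sum-of-squared-differences block matching as a simple stand-in for the cross-correlation commonly used in BOS implementations.

```python
import numpy as np

def block_displacement(ref, img, y, x, block=16, search=4):
    """Estimate the local (dy, dx) shift of the background pattern around
    (y, x) by exhaustive sum-of-squared-differences block matching."""
    tpl = ref[y:y + block, x:x + block].astype(float)
    best_sse, best_dxdy = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = img[y + dy:y + dy + block, x + dx:x + dx + block].astype(float)
            sse = np.sum((cand - tpl) ** 2)
            if best_sse is None or sse < best_sse:
                best_sse, best_dxdy = sse, (dy, dx)
    return best_dxdy
```

Evaluating this over a grid of blocks yields a displacement field; its magnitude can be rendered as the output image indicating the fluid density gradient.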
A fluid density gradient detection system according to an aspect of the present disclosure includes: an imaging unit configured to image, under a predetermined imaging condition, a background image forming a periodic pattern over an observation target area; an image output unit configured to output an image indicating a fluid density gradient in the observation target area based on the captured image captured by the imaging unit; and a control unit configured to change an optical condition of the imaging unit. The imaging condition is determined based on a relationship between a width (SH) of a pixel of the captured image in a direction in which the pattern periodically changes on the captured image and a pattern period width (IPH) of the pattern on the captured image. The control unit adjusts the optical condition such that a period of moire of the captured image is longest. Accordingly, moire is first generated in the captured image, the width of the pixel of the captured image and the pattern period width are then adjusted to an appropriate relationship using the period of the generated moire, and the observation target area is captured, so that the detection sensitivity of the fluid density gradient can be improved without using a special optical system.
A fluid density gradient detection system according to an aspect of the present disclosure includes: an imaging unit configured to image, under a predetermined imaging condition, a background image forming a periodic pattern over an observation target area; an image output unit configured to output an image indicating a fluid density gradient in the observation target area based on the captured image captured by the imaging unit; and a background creation unit configured to change the background image. The imaging condition is determined based on a relationship between a width (SH) of a pixel of the captured image in a direction in which the pattern periodically changes on the captured image and a pattern period width (IPH) of the pattern on the captured image. The background creation unit adjusts the background image such that a period of moire of the captured image is longest. Accordingly, moire is first generated in the captured image, the width of the pixel of the captured image and the pattern period width are then adjusted to an appropriate relationship using the period of the generated moire, and the observation target area is captured, so that the detection sensitivity of the fluid density gradient can be improved even in a camera without a zoom function.
The present disclosure can improve the detection sensitivity of a method of visualizing a density gradient of an airflow with a general-purpose camera, and is useful for a fluid density gradient detection method and a fluid density gradient detection system that visualize minute airflows and temperature changes in an indoor space.
Number | Date | Country | Kind
---|---|---|---
2021-091788 | May 31, 2021 | JP | national