This application claims the priority benefit of Taiwan application serial no. 112147842, filed on Dec. 8, 2023. The entirety of the above-mentioned patent application is hereby incorporated by reference herein and made a part of this specification.
The disclosure relates to an object detection technology, and in particular relates to a blur object detection system and a blur object detection method.
In order to ensure the safe flight or travel of drones or unmanned vehicles, it is important to accurately detect the positions of dynamic objects. The traditional method of identifying a blur object requires recovering a blur image first and then performing object identification. Specifically, after a dynamic object is captured by a camera as a blur image, the blur image must first be recovered (e.g., through HDR processing). Next, image processing such as white balance, autofocus, and background removal is performed. Then, general object detection is performed to obtain the object position. The disadvantage of this traditional approach is that it is time-consuming and consumes additional computing resources to recover the blur object. Therefore, how to identify the blur object in an image without first recovering it, thereby saving time and reducing terminal resource requirements, is a problem that the field intends to solve.
The disclosure provides a blur object detection system, which includes a memory, an image capturing device, and a processor. The image capturing device is configured to capture a dynamic image, in which the dynamic image includes a blur object and a general object. The processor is coupled to the image capturing device and the memory, and configured to simultaneously or non-simultaneously perform image compression, blur object detection, and general object detection based on the dynamic image to obtain an image compression file, a blur object position of the blur object in the dynamic image, and a general object position of the general object in the dynamic image, and store the image compression file, the blur object position, and the general object position to the memory.
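The parallel processing stage described above can be sketched as follows. This is a minimal illustration, assuming hypothetical stand-in functions `compress_image`, `detect_blur_objects`, and `detect_general_objects` (the actual compression and detection algorithms are not specified here); it shows only how the three results could be produced concurrently from one frame and stored together.

```python
# Sketch of producing the image compression file, the blur object position,
# and the general object position concurrently from one dynamic image.
from concurrent.futures import ThreadPoolExecutor

def compress_image(frame):
    # Stand-in: pretend compression just records that the frame was encoded.
    return {"compressed": True, "size": len(frame)}

def detect_blur_objects(frame):
    # Stand-in: return bounding boxes (x, y, w, h) of blurred regions.
    return [(10, 10, 32, 32)]

def detect_general_objects(frame):
    # Stand-in: return bounding boxes of clear (general) objects.
    return [(50, 50, 64, 64)]

def process_dynamic_image(frame):
    # Run compression, blur detection, and general detection concurrently,
    # then store all three results together (here, a dict acting as memory).
    with ThreadPoolExecutor(max_workers=3) as pool:
        f_comp = pool.submit(compress_image, frame)
        f_blur = pool.submit(detect_blur_objects, frame)
        f_gen = pool.submit(detect_general_objects, frame)
        return {
            "compressed_file": f_comp.result(),
            "blur_positions": f_blur.result(),
            "general_positions": f_gen.result(),
        }

memory = process_dynamic_image(b"\x00" * 1024)
```

The design point is that the positions are stored alongside the compressed file, so later recovery can target only the blurred regions.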
In an embodiment, the processor is further configured to read the image compression file from the memory, decompress the image compression file into an image stream, and recover the blur object in the image stream based on the blur object position.
In an embodiment, the processor is further configured to determine that the blur object is a physical object based on an object feature of the blur object, and recover the blur object in the image stream based on the blur object position.
In an embodiment, the processor is further configured to select the recovered blur object in the image stream based on the blur object position.
In an embodiment, the processor is further configured to perform blur object detection and general object detection on the dynamic image through an optical flow method.
In an embodiment, the optical flow method further includes an image calibration method, a clustering algorithm, and a motion compensation method.
The disclosure provides a blur object detection method, which includes the following steps. A dynamic image is captured by an image capturing device, in which the dynamic image includes a blur object and a general object. Image compression, blur object detection, and general object detection are simultaneously or non-simultaneously performed based on the dynamic image by a processor to obtain an image compression file, a blur object position of the blur object in the dynamic image, and a general object position of the general object in the dynamic image. The image compression file, the blur object position, and the general object position are stored to a memory by the processor.
Based on the above, the blur object detection system and the blur object detection method of the disclosure can identify the blur object without requiring the object in the image to be clear. Since only partial recovery of the blur object is required when identifying the blur object in the image, the consumption of computing resources can be reduced.
Some embodiments of the disclosure accompanied with drawings are described in detail as follows. When the same reference numerals appear in different drawings, they are regarded as referring to the same or similar elements. These embodiments are only a part of the disclosure, and do not disclose all the possible implementation modes of the disclosure.
The image capturing device 11 is configured to capture a dynamic image Imgm of a dynamic object Om. Practically speaking, the image capturing device 11 can be a device with an image capturing function built into a computer device implementing the blur object detection system 1, or can be an external device with an image capturing function, such as a camera or a video recorder connected to the computer device by wire or wirelessly, and the disclosure is not limited thereto.
The processor 12 is coupled to the image capturing device 11 and the memory 13. Practically speaking, the processor 12 can be a central processing unit (CPU), a micro-processor, or an embedded controller built into the computer device implementing the blur object detection system 1, and the disclosure is not limited thereto.
Practically speaking, the memory 13 is, for example, a static random-access memory (SRAM), a dynamic random access memory (DRAM), or other memory, and the disclosure is not limited thereto.
The image capturing device 11, the processor 12, and the memory 13 included in the blur object detection system 1 cooperate together to execute a blur object detection method 2.
First, in step S202, the dynamic image Imgm of the dynamic object Om is captured by the image capturing device 11, in which the dynamic image Imgm includes a blur object and a general object. It should be noted that when the image capturing device 11 captures the dynamic image Imgm of the dynamic object Om, the dynamic object Om must exist in a capturing range. However, other objects may also enter the capturing range when the image capturing device 11 captures the dynamic image Imgm. These other objects may be static or in motion. Therefore, the blur object included in the dynamic image Imgm may be part of the dynamic object Om and/or other dynamic objects. Next, the blur object and the general object included in the dynamic image Imgm will be further explained through
Referring to
In an embodiment, the processor 12 can simultaneously perform image processing such as image compression, blur object detection, and general object detection based on the dynamic image Imgm to obtain the image compression file, the blur object position of the blur object in the dynamic image Imgm, and the general object position of the general object in the dynamic image Imgm. As shown in
In another embodiment, the processor 12 can perform image processing such as image compression, blur object detection, and general object detection based on the dynamic image Imgm using time-division multiplexing to obtain the image compression file, the blur object position of the blur object in the dynamic image Imgm, and the general object position of the general object in the dynamic image Imgm. That is, the processor 12 can perform step S211, step S221, and step S231 using time-division multiplexing.
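The time-division multiplexing alternative can be illustrated with a simple round-robin scheduler. The task bodies below are hypothetical generators standing in for steps S211, S221, and S231; the sketch shows only how one processor can interleave short slices of the three tasks instead of running them in parallel.

```python
# A minimal sketch of time-division multiplexing: the processor gives each
# task one time slice per cycle until all tasks finish. Each yield represents
# one slice of work; the task bodies are illustrative placeholders.

def compress_step():
    for i in range(3):
        yield ("compress", i)

def blur_detect_step():
    for i in range(2):
        yield ("blur", i)

def general_detect_step():
    for i in range(2):
        yield ("general", i)

def time_division_schedule(tasks):
    # Round-robin: advance each still-running task by one slice per cycle.
    trace = []
    pending = [iter(t()) for t in tasks]
    while pending:
        still_running = []
        for task in pending:
            try:
                trace.append(next(task))
                still_running.append(task)
            except StopIteration:
                pass
        pending = still_running
    return trace

trace = time_division_schedule(
    [compress_step, blur_detect_step, general_detect_step])
```

Either scheduling (parallel or time-multiplexed) yields the same three outputs; only the interleaving on the processor differs.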
Once the image compression file, the blur object position, and the general object position are obtained in steps S211, S222, and S232 respectively, the processor 12 stores the image compression file, the blur object position, and the general object position to the memory 13 in step S241. The image compression file corresponding to the dynamic image Imgm can be further used for other applications.
In the blur object detection system 1 and the blur object detection method 2 of the disclosure, the processor 12 reads the image compression file corresponding to the dynamic image Imgm from the memory 13, decompresses the image compression file into an image stream, and recovers the blur object in the image stream based on the blur object position. In step S212, the processor 12 decompresses the image compression file into an image stream. Once obtaining the blur object position of the blur object in the dynamic image in step S222, the processor 12 recovers the blur object in the image stream based on the blur object position in step S213.
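The decompress-and-recover flow can be sketched as follows, using `zlib` as a stand-in codec and a placeholder "recovery" that merely marks pixels inside the stored blur object position; a real system would run a deblurring step on just that crop, which is the point of storing the position alongside the compressed file.

```python
# Sketch: decompress a stored frame and recover only the region given by the
# stored blur object position, leaving the rest of the frame untouched.
import zlib

def store_frame(frame_bytes):
    # Stand-in for image compression (step S211).
    return zlib.compress(frame_bytes)

def recover_blur_region(compressed, width, box):
    x, y, w, h = box  # blur object position stored alongside the file
    frame = bytearray(zlib.decompress(compressed))  # step S212
    # Placeholder recovery (step S213): mark each pixel in the blurred box
    # as restored; a real system would deblur only this crop.
    for row in range(y, y + h):
        for col in range(x, x + w):
            frame[row * width + col] = 255
    return bytes(frame)

width, height = 8, 8
compressed = store_frame(bytes(width * height))  # an all-zero 8x8 frame
restored = recover_blur_region(compressed, width, (2, 2, 3, 3))
```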
In an embodiment, as shown in
Specifically, when performing blur object detection on the dynamic image Imgm through the artificial neural network 41, the processor 12 will first detect all blur objects 42 and obtain the blur object positions of a blur object 43, a blur object 44, etc., in the dynamic image Imgm respectively. When performing general object detection on the dynamic image Imgm through the artificial neural network 41, the processor 12 will first detect all clear general objects 45 and obtain the general object positions in the dynamic image Imgm.
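As an illustrative stand-in for the artificial neural network 41, a simple sharpness score can separate blurred patches from clear ones: blurred regions have small pixel gradients while sharp edges have large ones. The threshold, patch format, and sample values below are assumptions for illustration only.

```python
# Hand-rolled sketch of separating blur objects from general objects by a
# per-patch sharpness score (mean absolute horizontal gradient).

def sharpness(patch):
    # Mean absolute difference between horizontally adjacent pixels:
    # blurred patches yield small gradients, sharp edges yield large ones.
    diffs = [abs(row[i + 1] - row[i])
             for row in patch for i in range(len(row) - 1)]
    return sum(diffs) / len(diffs)

def classify_patches(patches, threshold=10.0):
    # Patches below the sharpness threshold are treated as blur objects;
    # the rest are treated as clear (general) objects.
    blur_positions, general_positions = [], []
    for pos, patch in patches.items():
        target = blur_positions if sharpness(patch) < threshold \
            else general_positions
        target.append(pos)
    return blur_positions, general_positions

sharp = [[0, 255, 0, 255]] * 4        # strong edges -> "general object"
blurred = [[100, 104, 108, 112]] * 4  # gentle ramp  -> "blur object"
blur_pos, general_pos = classify_patches({(0, 0): blurred, (0, 4): sharp})
```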
In an embodiment, if the blur objects (such as the propeller 32 and the other objects 33 of the drone as shown in
In order to improve optical flow quality, reduce the probability of misjudging an object due to optical flow noise, and improve the accuracy of determining whether the blur object is a real object, the optical flow method 62 also includes an image calibration method, a clustering algorithm, and a motion compensation method.
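The role of the clustering algorithm in suppressing optical flow noise can be sketched as follows: per-pixel motion vectors are thresholded by magnitude and grouped by connectivity, and clusters that are too small are discarded as noise. The flow field, magnitude threshold, and minimum cluster size below are illustrative assumptions, not values from the disclosure.

```python
# Sketch of clustering optical flow vectors so that isolated noisy vectors
# are discarded and only coherent groups remain as candidate moving objects.

def cluster_motion(flow, min_magnitude=1.0, min_cluster=3):
    # Keep pixels whose motion magnitude exceeds the threshold.
    moving = {p for p, (dx, dy) in flow.items()
              if (dx * dx + dy * dy) ** 0.5 >= min_magnitude}
    clusters, seen = [], set()
    for start in sorted(moving):
        if start in seen:
            continue
        # Flood-fill 4-connected neighbors into one cluster.
        stack, cluster = [start], set()
        while stack:
            x, y = stack.pop()
            if (x, y) in cluster or (x, y) not in moving:
                continue
            cluster.add((x, y))
            stack += [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
        seen |= cluster
        # Small clusters are treated as optical flow noise and dropped.
        if len(cluster) >= min_cluster:
            clusters.append(cluster)
    return clusters

flow = {(0, 0): (2.0, 0.0), (1, 0): (2.0, 0.1),
        (2, 0): (1.9, 0.0),   # three coherent neighbors -> one cluster
        (9, 9): (3.0, 3.0)}   # isolated vector -> discarded as noise
clusters = cluster_motion(flow)
```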
Based on the above, the blur object detection system and the blur object detection method of the disclosure can identify the blur object without requiring the object in the image to be clear. Since only partial recovery of the blur object is required when identifying the blur object in the image, the consumption of computing resources can be reduced.
| Number | Date | Country | Kind |
|---|---|---|---|
| 112147842 | Dec 2023 | TW | national |