This application claims priority under 35 U.S.C. § 119 to Japanese Patent Application No. 2017-254163, which was filed on Dec. 28, 2017, the entire disclosure of which is hereby incorporated by reference.
The present disclosure relates to a device, method, and program which detect a target object around a ship, as well as a device, method, and program which trains a model used for detecting the target object around the ship.
Radar devices mounted on a ship display a radar image indicating target objects around the ship based on echo signals corresponding to radio waves emitted from a radar antenna. The target objects displayed in the radar image are typically other ships. Sailors visually check the radar image where other ships are displayed to grasp an exact situation around the ship, thereby enabling safe cruising. A suitable gain adjustment is made in signal processing for generating the radar image from the echo signals so that the images of other ships are displayed clearly. Since echo signals caused by sea surface reflections, and rain and/or snow clutter may become noise in the ship images, a further adjustment of the signals is performed to remove the noise.
Here, the target objects to be observed by the radar device are not limited to other ships. For example, images of land and banks may also be useful information for grasping the situation around the ship. In addition, there is also a need to observe birds. Although there are various reasons for detecting birds, for example, if a flock of birds can be identified, the existence of a school of fish below the flock can naturally be predicted.
Generally, radar devices for ships are mainly built to display the images of other ships around the ship, and therefore, images of target objects other than ships are often filtered out as noise. In particular, since the echo signals from a bird, a small rock, a current rip, ice, rain, clouds, a SART (Search And Rescue Transponder), etc. are generally weak compared with the echo signal of a ship, these target objects are difficult to identify clearly in the radar image. Therefore, it is still difficult to accurately detect various target objects other than ships in the radar image, and it can be said that such technologies are still at a developmental stage. Moreover, this demand is not limited to radar devices; a similar demand for accurately detecting various target objects exists in the fields of other target object detecting devices, such as fish finders and sonar.
One purpose of the present disclosure is to provide a device, method, and program which can accurately detect various target objects around a ship, and a device, method, and program which train a model used for accurately detecting various target objects around the ship.
According to one aspect of the present disclosure, a target object detecting device is provided, which may include an acquisition part, a generation part, and a detecting part. The acquisition part may acquire echo signals from target objects around a ship. The generation part may generate a first echo image based on the echo signals. The detecting part may input the first echo image into a model built by a program that implements a machine learning algorithm, and may detect a first target object that is a target object other than a ship corresponding to the model, based on an output from the model.
According to this configuration, the first echo image may be generated based on the echo signal from the target object. The first echo image may be inputted into the model built by machine learning, and based on the output from this model, the first target object other than a ship may be accurately detected. Thus, various target objects around the ship (other than ships) may accurately be detected. In addition, according to another aspect of the present disclosure, a model suitable for detecting the target objects other than ships may be built.
The present disclosure is illustrated by way of example and not by way of limitation in the figures of the accompanying drawings, in which like reference numerals indicate like elements and in which:
Hereinafter, a target object detecting device, method, and program, and a learning device, method, and program will be described with reference to the accompanying drawings according to one embodiment of the present disclosure.
The radar device 1 may include a radar antenna 10, and a radar indicator 20 connected to the radar antenna 10. The radar antenna 10 may emit a pulse-shaped radio wave, and receive an echo signal, which is the emitted radio wave reflected by a target object. The radar antenna 10 may repeatedly transmit the radio wave and receive the corresponding echo signal while swiveling in a horizontal plane, thereby scanning 360° around the ship. The echo signal from the target object, which is received by the radar antenna 10, may be sequentially converted into digital data by an A/D converter (not illustrated), and the converted echo signal may be sequentially outputted to the radar indicator 20.
The radar indicator 20 may be connected to a GPS compass 60 which is also mounted on the ship. The GPS compass 60 may measure, at a given time interval, information on the bow direction or heading of the ship (hereinafter referred to as "the directional information"), information on the latitude and longitude of the ship (hereinafter referred to as "the LL information"), and information on the speed of the ship. The GPS compass 60 may sequentially output this information to the radar indicator 20.
The display part 21 may be a user interface which displays a screen for presenting a variety of information to the user, and in this embodiment, is comprised of a liquid crystal display. The input part 22 may be a user interface which receives various operations to the radar indicator 20 from the user, and in this embodiment, is comprised of a keyboard 22a and a trackball 22b. The input part 22 may also include a touch panel laminated on the display part 21.
The memory part 23 may be a nonvolatile storage device comprised of a hard-disk drive and/or a flash memory. The memory part 23 may store an analytical model 41 built by machine learning. Details of the analytical model 41 will be described later. The control part 24 may be comprised of a CPU 30, a ROM 31, and a RAM 32. The ROM 31 may store a computer program 40 which causes the CPU 30 to perform various operations. The CPU 30 may read and execute the program 40 in the ROM 31 to virtually operate as a screen generating module 31a, a bird detecting module 31b, a bird tracking module 31c, and an information outputting module 31d. Details of the operations of these modules 31a-31d will be described later. Note that the program 40 may be stored not in the ROM 31 but in the memory part 23, or distributedly stored in both the memory part 23 and the ROM 31.
Next, various processings executed by the radar device 1 are described. The radar indicator 20 included in the radar device 1 can execute a display processing and a bird detection processing. The display processing may be to generate a radar screen 50 (see
The display processing may be performed mainly by the screen generating module 31a. The screen generating module 31a may sequentially acquire the echo signals through the radar interface part 25, and then generate the radar screen 50 (see
The screen generating module 31a may display, in addition to the radar image 51, an information display area 52 on the radar screen 50. In the example of
A main menu button 55 may be displayed on the radar screen 50. The main menu button 55 may desirably be disposed so as not to overlap with the radar image 51. A cursor 56 is also displayed on the radar screen 50. The cursor 56 may be freely moved within the radar screen 50 by the user operating the input part 22. In this embodiment, when the user performs a given operation by using the input part 22 while the cursor 56 is located on the main menu button 55, the main menu button 55 may open hierarchically to show various sub menu buttons. The sub menu buttons may desirably be disposed so as not to overlap with the radar image 51. The user may execute a desired function implemented in the radar indicator 20 by operating the input part 22 and selecting a suitable one of the sub menu buttons.
The radar image 51 may have a normal mode (auto mode) and a bird mode. The user may arbitrarily switch between the normal mode and the bird mode by performing a given operation through the input part 22. The radar image 51 in the normal mode may be an echo image generated mainly for the purpose of clearly indicating images T1 of other ships. On the other hand, the radar image 51 in the bird mode may be an echo image generated for the purpose of indicating images T2 of flocks of birds, which return weak echo signals compared with other ships. The normal mode is typically used for observing the movements of other ships while the ship is traveling, in order to avoid collisions with other ships and to grasp the locations of sister ships, etc. On the other hand, the bird mode is typically used for finding a flock of birds, which may lead to a school of fish normally existing below the flock.
The screen generating module 31a may adjust the echo signals according to the currently-selected mode, either the normal mode or the bird mode, when generating the radar image 51. In the bird mode, the screen generating module 31a may adjust the echo signals so that the reflected waves from flocks of birds, which are weaker than the reflected waves from other ships, may be caught, and then generate the radar image 51 based on the adjusted echo signals. The method of adjusting the echo signals includes, for example, an adjustment of the gain (sensitivity) and a removal of the noise caused by sea surface reflections and rain and snow clutter. In the bird mode, the gain may be raised compared with the normal mode, in order to capture the images T2 of flocks of birds based on echo signals weaker than those of the images T1 of other ships. In addition, in the bird mode, the levels of the removal of sea surface reflections and the removal of rain and snow clutter may be lowered compared with the normal mode so that the images of flocks of birds will not disappear through excessive noise removal. Moreover, in the bird mode, it may be desirable not to perform the removal of rain and snow clutter at all.
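As a minimal sketch of the mode-dependent adjustment described above, the following Python code raises the gain and relaxes the clutter suppression in the bird mode; the gain values, suppression levels, and the crude local-mean clutter stand-in are illustrative assumptions, not parameters of the actual radar indicator 20.

```python
import numpy as np

def adjust_echoes(echoes, mode="normal"):
    """Illustrative, mode-dependent gain and clutter handling.

    `echoes` is assumed to be a 2D array of received echo amplitudes
    (sweep x range bin); all constants below are placeholders.
    """
    if mode == "bird":
        gain = 2.0               # raise the gain to catch weak bird echoes
        sea_suppression = 0.2    # weaker sea-surface clutter suppression
        rain_suppression = 0.0   # rain/snow clutter removal not performed
    else:                        # normal mode
        gain = 1.0
        sea_suppression = 0.6
        rain_suppression = 0.5

    amplified = gain * np.asarray(echoes, dtype=float)
    # Crude stand-in for the clutter filters: subtract a fraction of the
    # mean level of each sweep so that extended, diffuse returns are damped.
    local_mean = amplified.mean(axis=1, keepdims=True)
    cleaned = amplified - (sea_suppression + rain_suppression) * local_mean
    return np.clip(cleaned, 0.0, None)
```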
Moreover, the radar image 51 may have a relative-motion mode and a true-motion mode. The relative-motion mode may be a mode in which the location of the ship is always set at a fixed location in the radar image 51 (typically, at the center of the radar image 51). On the other hand, the true-motion mode may be a mode in which the locations of stationary target objects, such as land, are fixed in the radar image 51. The user may switch the mode between the relative-motion mode and the true-motion mode by performing a given operation through the input part 22.
The screen generating module 31a may also display a heading bright line U1 on the radar image 51. The heading bright line U1 may be displayed on the radar image 51 by a line extending in the bow direction of the ship from the current location of the ship to the perimeter of the radar image 51. That is, an inner end of the heading bright line U1 may represent the current location of the ship. In the true-motion mode, the inner end of the heading bright line U1, i.e., the location of the ship, may be located at various locations on the radar image 51. On the other hand, in the relative-motion mode, the inner end of the heading bright line U1 may always be located at the fixed location, such as the center on the radar image 51.
The screen generating module 31a may also display on the radar image 51 the images T4 of echo trails of target objects, such as other ships and flocks of birds. For example, the images T4 of the echo trails of target objects may be formed by superimposing the echo images of target objects, such as the images T1-T3 indicated in past radar images 51, on the latest radar image 51. The user may switch the display setting of the images T4 of the echo trails between displayed and not displayed by performing a given operation through the input part 22.
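As one hedged illustration of the superposition just described, the following sketch overlays fading copies of past echo images on the latest image; the decay factor and the per-pixel maximum used for blending are assumptions made only for illustration.

```python
import numpy as np

def with_echo_trails(latest_image, past_images, decay=0.8):
    """Overlay fading copies of past echo images onto the latest radar image.

    All images are assumed to be aligned 2D intensity arrays of equal shape,
    with `past_images` ordered from oldest to newest.
    """
    trail = np.zeros_like(latest_image, dtype=float)
    for k, past in enumerate(reversed(past_images), start=1):
        # The most recent past image (k = 1) fades the least.
        trail = np.maximum(trail, np.asarray(past, dtype=float) * (decay ** k))
    return np.maximum(np.asarray(latest_image, dtype=float), trail)
```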
Generally, since birds fly up and down between the water surface and the sky, the intensity of the echo signal from birds, particularly from a flock of birds, may vary, resulting in a variation in the echo image.
As described above, the image T2 of a flock of birds and its echo trail image T4 may exhibit different features from the image T1 of a ship and its echo trail image T4. Therefore, the user may learn of the existence and locations of flocks of birds around the ship by finding, on the radar image 51 in the bird mode, the images T2 of flocks of birds and the echo trail images T4 having the above features.
Next, based on the radar image 51, the bird detection processing in which flocks of birds are automatically detected is described. In the bird detection processing, flocks of birds may be detected based on the analytical model 41 which uses the radar image 51 as an input and uses information indicative of the existence of the flocks of birds as an output. The radar image 51 used in the bird detection processing of this embodiment may be the radar image 51 in the bird mode. The analytical model 41 may be built by a program executing a machine learning algorithm, typically carried out beforehand, and information which defines the analytical model 41 may be stored in the memory part 23. Below, after describing a machine learning process of the analytical model 41, details of a flow of the bird detection processing will be described.
The learning device 101 may be a general-purpose computer as hardware, and include a display part 121, an input part 122, a memory part 123, a control part 124, and a communication part 125. These parts 121-125 may be communicably connected with each other through bus lines.
The display part 121 may be a user interface which displays a screen for indicating a variety of information to the user, and may be comprised of a liquid crystal display. The input part 122 may be a user interface which receives various operations to the learning device 101 from the user, and may be comprised of a mouse, a keyboard, a touch panel, etc. The memory part 123 may be a nonvolatile storage device comprised of a hard disk and/or a flash memory. The control part 124 may be comprised of a CPU, a ROM, and a RAM, and read and execute a computer program 140 stored in the memory part 123 to virtually operate as a screen generating module 124a and a learning module 124b.
The communication part 125 may be a port for communicating with external apparatuses, and may receive many radar images 51 in the bird mode from a device like the radar indicator 20. Alternatively, the communication part 125 may receive the echo signals, and if needed, the directional information, the LL information, and the ship speed information, from devices like the radar antenna 10 and the GPS compass 60. In the latter case, the screen generating module 124a may create the radar image 51 in the bird mode based on the echo signals, and if needed, the directional information, the LL information, and the ship speed information, by a method similar to that of the screen generating module 31a. The radar image 51 which is acquired by the communication part 125 or generated by the screen generating module 124a may be stored in the memory part 123.
In the learning process, the echo signals obtained when various target objects, such as other ships, rocks, current rips, ice, rain, clouds, and SART, exist in addition to flocks of birds, and if needed the directional information, the LL information, and the ship speed information, or the radar image 51 based on these echo signals and information, may be inputted into the communication part 125. In addition, information indicative of the types of target objects caught in the echo signals or the radar image 51 (hereinafter referred to as "the correct answer information") may be inputted to the communication part 125. Alternatively, the user may input the correct answer information through the input part 122. The correct answer information may indicate the location and type of each object. The inputted correct answer information may be stored in the memory part 123 so as to be associated with the radar image 51. As one particular example, during the training phase the radar indicator 20 may be configured as a data capture device on board the ship. The radar indicator 20 captures radar signals of various objects and stores them as a set of successively captured radar images. During the training phase, an initial filtering is performed on the radar images to identify objects (e.g., "blobs" appearing in the image) that are of an appropriate size, position, shape, and/or intensity to be candidate objects that might be determined to be birds. Acceptable ranges of values for these parameters are set according to a corresponding function for each parameter that varies based on the distance from the ship to the blob, since these parameters vary with the distance to the imaged object, and blobs that are not within the acceptable ranges are filtered out, leaving only candidate objects remaining. Once the candidate objects are determined at an imaging time (T1), in the training phase, human observation is then performed. For example, the ship is steered toward the real-world location of a candidate object in the radar image, and human observation is performed at an observation time (To) to confirm whether or not birds are present at the identified location. If birds are present, then the corresponding blob that was identified as a candidate object is tagged with appropriate label data in the stored radar image captured at imaging time (T1). The label data may include a confirmation that birds were present, the number of birds in the observed flock, the species of the birds, and the activity of the birds (feeding, resting on the surface of the water, etc.), for example. It will be appreciated that the present disclosure is particularly useful for identifying birds that are out of sight of the ship, for example, at distances greater than 2 or 3 kilometers away. At such distances, the ship may take a while to arrive at the candidate object location for observation. Thus, a time lag may result between the imaging time (T1) and the observation time (To). This can be acceptable in the present application because the information of particular use to fishing ships is the presence of flocks of birds that are relatively stationary and engaged in feeding activity on large schools of fish. To cut the time lag down during the training phase, for example, remote observation ships or aircraft, such as drones, may be used to confirm the presence of birds via direct human observation or indirect observation through cameras.
As a result of such a training phase, a tagged training data set is produced that contains radar images that are tagged with metadata in the form of the labeling information discussed above, for confirmed bird sightings. This tagged metadata is referred to herein as the correct answer information.
It will also be appreciated that at such distances, a flock of birds will often appear in the radar images as a unitary or congealed blob representing the flock, rather than as a pattern of individual dots for each bird. For this reason, discriminating the flock from other objects can be difficult, since the pattern of echoes of individual birds cannot be seen in the radar image. Further, as discussed elsewhere herein, the candidate blob for a flock of birds may disappear or fade in successive images. This is due to the behavior of the flock as it flies around and dives in unison toward its prey in the ocean. As the flock approaches the ocean surface, the radar echoes from the flock become weaker, due to interference by waves at the surface of the sea, for example. As the flock rises into the sky, the radar echoes typically become stronger. In frames in which the radar echo from a flock is weak, the prefiltering discussed above for the various parameters that are used to identify candidate blobs will also filter out the weakly reflected radar echoes of the flock, resulting in a false negative. In a later frame, as the flock rises after feeding in the same general latitudinal and longitudinal position, the echoes from the flock will strengthen and it will be included in the candidate blobs once again. The processing shown and described below in relation to
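As an illustration of the distance-dependent pre-filtering described above, the following sketch keeps only those blobs whose measured parameters fall within acceptance ranges that vary with the distance from the ship; the parameter names, functional forms, and constants are hypothetical, since the disclosure only states that such ranges are set by a per-parameter function of distance.

```python
def acceptable_ranges(distance_m):
    """Hypothetical per-parameter acceptance ranges as functions of distance."""
    scale = 1.0 + (distance_m / 1000.0) ** 2
    return {
        # Apparent blob area shrinks roughly with the square of the distance.
        "area_px": (max(2.0, 2000.0 / scale), 40000.0 / scale),
        # Echo intensity falls off with distance; keep a wide band here.
        "intensity": (0.05, 1.0),
        # Aspect ratio of the blob's bounding box.
        "aspect": (0.2, 5.0),
    }

def is_candidate(blob, distance_m):
    """Keep only blobs whose measured parameters fall inside all ranges."""
    for key, (lo, hi) in acceptable_ranges(distance_m).items():
        if not (lo <= blob[key] <= hi):
            return False
    return True

# Example: each blob is a dict of measured parameters plus its distance.
blobs = [
    {"area_px": 400.0, "intensity": 0.3, "aspect": 1.4, "distance_m": 2500.0},
    {"area_px": 90000.0, "intensity": 0.9, "aspect": 1.1, "distance_m": 800.0},
]
candidates = [b for b in blobs if is_candidate(b, b["distance_m"])]  # first blob only
```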
A set of the radar image 51 and the correct answer information may serve as training data for training the analytical model 41. The analytical model 41 of this embodiment may be a convolutional neural network. As one specific example, the AlexNet convolutional neural network may be used. The convolutional neural network is typically fed one radar image at a time, and feature extraction is performed by the neural network itself. Alternatively, the analytical model 41 may be a recurrent convolutional neural network configured to analyze a series of images over time and to analyze relationships between candidate objects in a current radar image and candidate objects in radar images from prior time steps. With such a recurrent convolutional neural network, time-dependent features, such as the pulsing intensity of echo signals, may be extracted as features by the recurrent neural network itself. As another alternative, a non-recurrent convolutional neural network may be fed two or more images taken at two or more respective time steps in a series. In this way, the convolutional neural network may analyze relationships between the intensities of the echo signals in the different images, without the complexity of training a recurrent convolutional neural network. As yet another alternative, to simplify the training of the convolutional neural network, feature descriptors such as scale invariant feature transforms (SIFT) may be defined by the user in the input layer, instead of solely inputting pixel values from regions of interest in the radar image. The learning module 124b may learn where the various target objects, including flocks of birds, actually exist in the radar image 51 based on the correct answer information, and build the analytical model 41. For example, the learning module 124b sequentially reads the radar images 51 in the memory part 123, cuts a partial image indicative of the echo image of each target object from the radar image 51, and inputs the partial images into the analytical model 41. Then, the learning module 124b may acquire, as an output of the analytical model 41, information indicative of the type of each target object indicated by the inputted partial image, and update the parameters of the analytical model 41 so that the information indicative of the type of each target object corresponds with the correct answer information. Here, the parameters that are adjusted during training are, for example, the various coefficients which define the convolutional neural network serving as the analytical model 41. The analytical model 41 may be optimized as the training data is applied to it, one sample after another. Since various learning methods for supervised learning using a neural network are known, a detailed description thereof is herein omitted; however, it will be appreciated that a backpropagation algorithm may be used. Once the learning process is completed as described above, the analytical model 41 is derived. The derived analytical model 41 may be stored in the memory part 23 of the radar indicator 20.
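For concreteness, the following is a minimal supervised-training sketch in the spirit of the description above, written with PyTorch; the network is far smaller than AlexNet, and the layer sizes, class count, partial-image size, and optimizer settings are illustrative assumptions rather than values from the embodiment.

```python
import torch
import torch.nn as nn

class EchoClassifier(nn.Module):
    """A small stand-in for the analytical model 41 (a convolutional network)."""
    def __init__(self, num_classes=8):  # e.g., bird flock, ship, rock, rain, ...
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x):                  # x: (batch, 1, 64, 64) partial images
        return self.classifier(self.features(x).flatten(1))

def train_step(model, optimizer, partial_images, labels):
    """One parameter update so the output matches the correct answer information."""
    criterion = nn.CrossEntropyLoss()
    optimizer.zero_grad()
    loss = criterion(model(partial_images), labels)
    loss.backward()                        # backpropagation, as noted above
    optimizer.step()
    return loss.item()

model = EchoClassifier()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
dummy_images = torch.rand(4, 1, 64, 64)   # cut-out partial echo images
dummy_labels = torch.randint(0, 8, (4,))  # class indices from the correct answers
train_step(model, optimizer, dummy_images, dummy_labels)
```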
The bird detection processing of this embodiment is executed when the bird mode is selected in the radar indicator 20. When the radar image 51 in the bird mode is generated, first, the bird detecting module 31b may sequentially cut the partial image indicative of the echo image of each target object indicated in the radar image 51. Further, the bird detecting module 31b may sequentially input these partial images into the analytical model 41, and acquire the information indicative of the types of the target objects indicated in the inputted partial images as the output of the analytical model 41. Then, the bird detecting module 31b may sequentially determine whether each target object indicated in the partial image is a flock of birds based on the output.
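A hedged sketch of this inference step might look as follows; the class index assigned to a flock of birds and the use of a softmax over the model output are assumptions made for illustration.

```python
import torch

BIRD_CLASS = 0  # hypothetical index of "flock of birds" in the model output

def detect_birds(model, partial_images):
    """Classify each cut-out partial image and return those judged to be birds.

    `partial_images` is assumed to be an iterable of (location, tensor) pairs,
    each tensor shaped (1, H, W) to match the model's expected input.
    """
    detections = []
    model.eval()
    with torch.no_grad():
        for location, image in partial_images:
            probs = torch.softmax(model(image.unsqueeze(0)), dim=1)[0]
            if probs.argmax().item() == BIRD_CLASS:
                detections.append((location, probs[BIRD_CLASS].item()))
    return detections
```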
If the bird detecting module 31b determines that the target object indicated in the partial image is not a flock of birds, it may then continuously perform a similar processing to the next partial image or radar image 51. On the other hand, if the bird detecting module 31b determines that the target object indicated in the partial image is a flock of birds (i.e., if a flock of birds is detected in the partial image), a tracking processing to track the flock of birds may be performed as illustrated in
In the tracking processing of this embodiment, tracking using a tracking filter, more specifically an αβ filter, may be performed.
E(n)=M(n)−P(n)
S(n)=P(n)+αE(n)
V(n)=V(n−1)+βE(n)/T
P(n+1)=S(n)+V(n)T
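The equations above translate directly into code; the following sketch performs one αβ-filter update for a two-dimensional position on the radar image, with α, β, and the scan interval T treated as designer-chosen tuning parameters.

```python
def alpha_beta_step(M_n, P_n, V_prev, alpha, beta, T):
    """One update of the alpha-beta tracking filter defined above.

    M_n: measured location, P_n: predicted location, V_prev: previous speed;
    positions and speeds are (x, y) tuples.
    """
    E = (M_n[0] - P_n[0], M_n[1] - P_n[1])                          # E(n) = M(n) - P(n)
    S = (P_n[0] + alpha * E[0], P_n[1] + alpha * E[1])              # S(n) = P(n) + αE(n)
    V = (V_prev[0] + beta * E[0] / T, V_prev[1] + beta * E[1] / T)  # V(n) = V(n-1) + βE(n)/T
    P_next = (S[0] + V[0] * T, S[1] + V[1] * T)                     # P(n+1) = S(n) + V(n)T
    return S, V, P_next
```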
The tracking processing of
At subsequent Step S2, the bird tracking module 31c may set a prediction gate G(n) based on the derived predicted location P(n). Then, the radar image 51 in the bird mode in the n-th scan may be acquired, and an image T2 of a flock of birds is searched for within the prediction gate G(n) (in other words, the flock of birds is tracked). More specifically, the partial images indicative of the echo images of the target objects may be cut from the area within the prediction gate G(n) on the radar image 51 in the n-th scan. Further, the bird tracking module 31c may input the partial images into the analytical model 41, and then acquire information indicative of the types of the target objects indicated in the inputted partial images as the output of the analytical model 41. The bird tracking module 31c may then determine whether the target object indicated in each partial image is a flock of birds based on the output. As a result, if a flock of birds is detected, the bird tracking module 31c may determine that the tracking of the flock of birds is successful (Step S3), and then transition to Step S4. At Step S4, the location (coordinates) of the flock of birds detected on the radar image 51 in the current scan may be set as the measured location M(n), and the predicted location P(n+1) in the next scan may be derived based on the measured location M(n).
On the other hand, at Step S2, if no partial image indicative of the echo image of a target object can be cut from the area within the prediction gate G(n) on the radar image 51 in the n-th scan, or if partial images can be cut but none of them is determined to be a flock of birds, the bird tracking module 31c may determine that the tracking of the flock of birds has failed (Step S3), and then transition to Step S6. At Step S6, the predicted location P(n+1) in the next scan may be derived based on the location (coordinates) of the most recently detected flock of birds.
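The gate search of Steps S2 and S3 could be sketched as follows; the circular gate shape, its radius, and the choice of the highest-scoring detection as the measured location M(n) are illustrative assumptions.

```python
def track_step(P_n, gate_radius, detections):
    """Search the prediction gate G(n) for a detected flock of birds.

    `detections` are (location, score) pairs produced by the analytical model
    for the current scan; returns (success flag, measured location M(n)).
    """
    in_gate = [
        (loc, score) for loc, score in detections
        if (loc[0] - P_n[0]) ** 2 + (loc[1] - P_n[1]) ** 2 <= gate_radius ** 2
    ]
    if in_gate:                        # tracking succeeded (Step S3 -> Step S4)
        M_n = max(in_gate, key=lambda d: d[1])[0]
        return True, M_n
    return False, None                 # tracking failed (Step S3 -> Step S6)
```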
As illustrated in the flowchart of
At Step S5, a text message expressing that the flock of birds has been detected may also be displayed in addition to the symbol T5. Moreover, since the moving speed V(n) of the flock of birds is calculated in the above processing, a vector symbol representing the speed and direction of the flock of birds may also be displayed in addition to or instead of the symbol T5.
When the information indicating that the flock of birds exists is outputted, the tracking processing of
By the above tracking processing, a confirmation of whether the flock of birds really exists may be made after the "temporary" detection of the flock of birds based on the analytical model 41. That is, in the above tracking processing, the past detection results of the flock of birds and the latest detection result of the flock of birds may be averaged in time. Therefore, even if a flock of birds is detected based on the analytical model 41, it will not be finally detected as a flock of birds if, because of the time averaging, its probability of occurrence is low. Moreover, even when a flock of birds exists but could not be detected I or fewer times due to an omission in detection, etc., the tracking can be continued without missing the flock of birds. Therefore, a false detection of the flock of birds may be prevented, improving the detection accuracy of the flock of birds. Moreover, the bird tracking module 31c may skip the tracking processing of a flock of birds temporarily detected by the bird detecting module 31b in an area located farther from the ship than an echo of a target object determined to be land. Thus, the bird tracking processing for areas on the other side of land can be omitted to reduce the calculation load.
Note that, if it is finally determined that the flock of birds exists and the tracking processing of
Although the radar image 51 in the normal mode can capture a flock of birds depending on the adjustment of sensitivity, the user may overlook the echo image of the flock of birds because its echo signal is weak. In this regard, the radar image 51 in the bird mode is useful, since it can capture the image T2 of the flock of birds comparatively clearly. However, for example, a user of a pleasure boat seldom uses the bird tracking function, and the image T2 of a flock of birds does not appear very frequently, either. Therefore, even if the image T2 of a flock of birds appears on the radar image 51, it may be difficult for this type of user, who is not accustomed to it, to recognize the image T2 as a flock of birds. Moreover, for a safe cruise, the user cannot focus only on the existence of flocks of birds by constantly watching the radar image 51. In this regard, according to the bird detection processing of this embodiment, the flock of birds may be automatically detected and the detection result may be outputted intelligibly for the user. In addition, since the flock of birds is detected with high precision by using a model trained by machine learning and by the tracking processing, the user may discover the flock of birds easily and accurately.
As described above, although one embodiment of the present disclosure is described, the present disclosure is not limited to the above embodiment and various changes may be possible without departing from the subject matters of the present disclosure. For example, the following changes may be possible. Note that the following modifications may suitably be combined.
<4-1>
Although the bird detection processing of the above embodiment is applied to the ship radar device 1, it is similarly applicable to other detecting devices, such as weather and aviation radars, fish finders, and sonar.
<4-2>
In the above embodiment, the example in which birds are detected based on the analytical model 41 is illustrated. However, the analytical model 41 may be built so that at least one of a school of fish, a rock, a current rip, ice, rain, clouds, and SART may be detected based on its output, instead of or in addition to the birds. Note that the function for detecting a school of fish may desirably be mounted on an apparatus such as a fish finder or sonar. Moreover, as part of the function for detecting a school of fish, the analytical model 41 may also be built so that fish types, such as sardine and tuna, can be distinguished based on its output. Moreover, the function for detecting ice may desirably be mounted on an apparatus mounted on a ship which cruises the Arctic Ocean, etc.
Moreover, although in the above embodiment the echo signals obtained when various target objects, such as other ships, rocks, current rips, ice, rain, clouds, and SART, exist in addition to the flocks of birds are used as the teacher data for learning of the analytical model 41, the analytical model 41 may also be trained with teacher data including only the echo signals obtained when the flocks of birds exist.
<4-3>
Although the αβ filter is used in the tracking processing of the above embodiment, other filters, such as an αβγ tracking filter and a Kalman filter, can of course be used.
<4-4>
In the above embodiment, the bird detection processing may be executed by the radar indicator 20. However, as illustrated in
<4-5>
Although in the above embodiment the directional information and the LL information are acquired using the GPS compass 60, similar information may also be acquired using a magnetic compass, a gyrocompass, or a GPS sensor.
<4-6>
In the above embodiment, the bird detecting module 31b may calculate the accuracy of the final detection of the flock of birds. In this case, the information indicative of the existence of the flock of birds may be outputted only when the accuracy of the detection exceeds a given threshold. Further, the user may then adjust the threshold. The accuracy of the detection may be determined based on, for example, the output value from the analytical model 41 (in this case, the analytical model 41 is trained beforehand so as to output a value corresponding to the probability that the target object indicated in the inputted image is a flock of birds) and the probability of succeeding in the tracking processing at Step S3.
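As one hedged illustration of this modification, the detection accuracy could be combined and thresholded as follows; the multiplicative combination rule and the default threshold value are assumptions, and the threshold would be adjustable by the user.

```python
def should_report_birds(model_probability, tracking_success_rate, threshold=0.7):
    """Output the bird detection only when the combined accuracy exceeds
    the user-adjustable threshold."""
    accuracy = model_probability * tracking_success_rate
    return accuracy > threshold
```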
<4-7>
Although in this embodiment the bird detection processing is executed when the bird mode is selected in the radar indicator 20, the bird detection processing may also be executed when the bird mode is not selected. For example, when the user commands an execution of the bird detection processing through the input part 22, the radar image 51 in the bird mode may be internally generated without being displayed on the display part 21, and the detection of the flock of birds may be executed based on that radar image 51. Then, when the flock of birds is finally detected, audio or a message expressing the final detection may be outputted. In this case, the symbol of the flock of birds may also be displayed over the radar image 51 in the normal mode.
<4-8>
Although in this embodiment the bird tracking module 31c cuts the partial image for every scan up to the n-th scan, and detects and tracks the flock of birds based on the analytical model 41, the detection of the flock of birds based on the analytical model 41 may be executed only when the tracking begins (i.e., when n=1). In this case, the bird tracking module 31c may track the echo image from which the flock of birds is detected for the first time (i.e., when n=1) by the conventional TT (Target Tracking), and then determine the success or failure of tracking of the flock of birds by determining whether the location of the tracked echo image is located within the area of the prediction gate G(n) in the subsequent scan (n).
It is to be understood that not necessarily all objects or advantages may be achieved in accordance with any particular embodiment described herein. Thus, for example, those skilled in the art will recognize that certain embodiments may be configured to operate in a manner that achieves or optimizes one advantage or group of advantages as taught herein without necessarily achieving other objects or advantages as may be taught or suggested herein.
All of the processes described herein may be embodied in, and fully automated via, software code modules executed by a computing system that includes one or more computers or processors. The code modules may be stored in any type of non-transitory computer-readable medium or other computer storage device. Some or all the methods may be embodied in specialized computer hardware.
Many other variations than those described herein will be apparent from this disclosure. For example, depending on the embodiment, certain acts, events, or functions of any of the algorithms described herein can be performed in a different sequence, can be added, merged, or left out altogether (e.g., not all described acts or events are necessary for the practice of the algorithms). Moreover, in certain embodiments, acts or events can be performed concurrently, e.g., through multi-threaded processing, interrupt processing, or multiple processors or processor cores or on other parallel architectures, rather than sequentially. In addition, different tasks or processes can be performed by different machines and/or computing systems that can function together.
The various illustrative logical blocks and modules described in connection with the embodiments disclosed herein can be implemented or performed by a machine, such as a processor. A processor can be a microprocessor, but in the alternative, the processor can be a controlling module, microcontrolling module, or state machine, combinations of the same, or the like. A processor can include electrical circuitry configured to process computer-executable instructions. In another embodiment, a processor includes an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable device that performs logic operations without processing computer-executable instructions. A processor can also be implemented as a combination of computing devices, e.g., a combination of a digital signal processor (DSP) and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. Although described herein primarily with respect to digital technology, a processor may also include primarily analog components. For example, some or all of the signal processing algorithms described herein may be implemented in analog circuitry or mixed analog and digital circuitry. A computing environment can include any type of computer system, including, but not limited to, a computer system based on a microprocessor, a mainframe computer, a digital signal processor, a portable computing device, a device controlling module, or a computational engine within an appliance, to name a few.
Conditional language such as, among others, "can," "could," "might" or "may," unless specifically stated otherwise, is otherwise understood within the context as used in general to convey that certain embodiments include, while other embodiments do not include, certain features, elements and/or steps. Thus, such conditional language is not generally intended to imply that features, elements and/or steps are in any way required for one or more embodiments or that one or more embodiments necessarily include logic for deciding, with or without user input or prompting, whether these features, elements and/or steps are included or are to be performed in any particular embodiment.
Disjunctive language such as the phrase “at least one of X, Y, or Z,” unless specifically stated otherwise, is otherwise understood with the context as used in general to present that an item, term, etc., may be either X, Y, or Z, or any combination thereof (e.g., X, Y, and/or Z). Thus, such disjunctive language is not generally intended to, and should not, imply that certain embodiments require at least one of X, at least one of Y, or at least one of Z to each be present.
Any process descriptions, elements or blocks in the flow views described herein and/or depicted in the attached figures should be understood as potentially representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or elements in the process. Alternate implementations are included within the scope of the embodiments described herein in which elements or functions may be deleted, executed out of order from that shown, or discussed, including substantially concurrently or in reverse order, depending on the functionality involved as would be understood by those skilled in the art.
Unless otherwise explicitly stated, articles such as “a” or “an” should generally be interpreted to include one or more described items. Accordingly, phrases such as “a device configured to” are intended to include one or more recited devices. Such one or more recited devices can also be collectively configured to carry out the stated recitations. For example, “a processor configured to carry out recitations A, B and C” can include a first processor configured to carry out recitation A working in conjunction with a second processor configured to carry out recitations B and C. The same holds true for the use of definite articles used to introduce embodiment recitations. In addition, even if a specific number of an introduced embodiment recitation is explicitly recited, those skilled in the art will recognize that such recitation should typically be interpreted to mean at least the recited number (e.g., the bare recitation of “two recitations,” without other modifiers, typically means at least two recitations, or two or more recitations).
It will be understood by those within the art that, in general, terms used herein, are generally intended as “open” terms (e.g., the term “including” should be interpreted as “including but not limited to,” the term “having” should be interpreted as “having at least,” the term “includes” should be interpreted as “includes but is not limited to,” etc.).
It should be emphasized that many variations and modifications may be made to the above-described embodiments, the elements of which are to be understood as being among other acceptable examples. All such modifications and variations are intended to be included herein within the scope of this disclosure and protected by the following claims.