The present disclosure is generally related to electromagnetic imaging of containers.
The safe storage of grains is crucial to securing the world's food supply. Estimates of storage losses vary from 2 to 30%, depending on geographic location. Grains are usually stored in large containers, referred to as grain silos or grain bins, after harvest. Because of non-ideal storage conditions, spoilage and grain loss are inevitable. Consequently, continuous monitoring of the stored grain is an essential part of the post-harvest process for the agricultural industry. Recently, electromagnetic inverse imaging (EMI) using radio frequency (RF) excitation has been proposed to monitor the moisture content of stored grain. The possibility of using electromagnetic waves to quantitatively image grains, and the motivation to do so, derive from the well-known fact that the dielectric properties of agricultural products vary with their attributes, such as the moisture content and the temperature, which, in turn, indicate their physiological state.
Deep learning (DL) techniques, and in particular convolutional neural networks (CNNs), have been applied to a very broad range of scientific and engineering problems. These include applications such as natural language processing, computer vision, and speech recognition. Convolutional neural networks have also been applied to medical imaging for segmentation, as well as detection and classification. In the case of medical imaging, DL techniques have been well investigated for many of the common modalities. CNNs are deep neural networks that were designed specifically for handling images as inputs. As is known, in CNNs, the parameterized local convolutions, applied at successively subsampled image sizes, allow feature maps to be learned at multiple scales of pixel organization. Historically, the most popular use of CNNs was image classification. However, with the advent of encoder-decoder architectures, CNNs and their variants are increasingly being used for learning tensor-to-tensor (e.g., image-to-image, or vector-to-image) transformations, thereby enabling various data-driven, learning-based image reconstruction applications. In the case of electromagnetic inverse problems, researchers have been applying machine learning techniques to improve the performance of microwave imaging (MWI).
State-of-the-art, deep-learning-based MWI techniques generally fall into two categories. In the first category, CNNs have been combined with one of the traditional algorithms to enhance the performance of electromagnetic inversion. Using DL as a prior (or regularization) term, or using DL techniques as a post-processing method for denoising and artifact removal, has been studied to demonstrate the performance of combining deep learning with traditional methods. In the second category, DL techniques are employed to reconstruct the image directly from the measurement data. This second category is still quite preliminary, but promising results have been obtained. While promising studies have used DL techniques to reconstruct the image directly from the measurement data for other imaging modalities, such as MRI and ultrasound, there is a need to investigate how deep learning can be utilized to perform the inversion in microwave imaging. Most recently, Li et al. ("DeepNIS: Deep neural network for nonlinear electromagnetic inverse scattering", L. Li, L. G. Wang, F. L. Teixeira, C. Liu, A. Nehorai, T. J. Cui, IEEE Transactions on Antennas and Propagation, vol. 67, no. 3, pp. 1819-1825, March 2019) applied a deep neural network to nonlinear electromagnetic inverse scattering. They showed that the proposed deep neural network can learn a general model approximating the underlying EM inverse scattering system. However, the targets were simple homogeneous targets with low contrast, and the work was limited to two-dimensional (2D) inverse problems. In real-world imaging problems, the electromagnetic fields scatter from, and propagate through, three-dimensional (3D) objects. However, researchers usually attempt to simplify this 3D problem to a 2D model to reduce the time of image reconstruction and decrease the computational complexity. Studies have shown that using a 2D model can increase the level of artifacts in reconstructed images. In addition, when the object of interest is small, it may fall between two consecutive imaging slices, in which case the reconstruction algorithm may not discover the target. Therefore, a viable 3D imaging technique is important for an appropriate and practically useful reconstruction.
Many aspects of the disclosure can be better understood with reference to the following drawings. The components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Moreover, in the drawings, like reference numerals designate corresponding parts throughout the several views.
In one embodiment, a system comprises a neural network configured to: receive electromagnetic field measurement data from an object of interest as input to the neural network, the neural network trained on labeled data; and reconstruct a three-dimensional (3D) distribution image of a physical property of the object of interest from the received electromagnetic field measurement data, the reconstruction implemented without performing a forward solve during the reconstruction.
Certain embodiments of a deep learning system and method are disclosed that are used to solve electromagnetic inverse scattering problems for grain storage applications. In one embodiment, a deep learning system comprises a convolutional neural network that is trained with data from thousands of forward solves covering many possible combinations of features, including grain heights, cone angles, and moisture distributions. Once trained, the neural network may determine a grain distribution for grain bins of similar structures, and even for different cases, without performing any iterative steps of a forward solve for new input data. That is, when applied after training, the neural network produces a three-dimensional (3D) image reconstruction for a given physical property (e.g., moisture distribution) of the grain in a matter of seconds, without the need for further forward solves. In some embodiments, a deep learning system directly reconstructs the 3D images of the physical property from the acquired electromagnetic field measurement data. For instance, in the case of grain monitoring and for a physical property of moisture content, certain embodiments of a deep learning system learn a reconstruction mapping from sensor-domain data (e.g., a complex-valued data array of transmitter-receiver measurements) to a 3D image of the moisture content, which avoids the need for explicit modelling of a nonlinear transformation from acquired raw data to the 3D image of the moisture content and hence reduces the modeling error that tends to plague traditional inverse scattering approaches.
Digressing briefly, in addition to some of the shortcomings of deep learning approaches described above, past approaches to solve the associated quantitative inverse scattering problem, which is ill-posed and nonlinear, have their own set of challenges. Obtaining highly accurate reconstructions of the complex-valued permittivity generally requires the use of computationally expensive iterative techniques, such as those found in contrast source inversion (CSI) techniques (e.g., finite-element method (FEM) forward model CSI). This is especially true when trying to image highly inhomogeneous scatterers with high contrast values. Despite the advances made during the last twenty years, reconstruction artifacts still remain an issue in images, and for biomedical imaging the resolution is still much lower when compared to other available modalities. For industrial applications, such as the monitoring of stored grain, the resolution may not be as much an issue, but the accuracy of the reconstructed complex-valued permittivity is an issue, as is the high computational cost of traditional electromagnetic inversion techniques. In addition, for most cases, the permittivity, an electromagnetic property, is not the desired final outcome. In biomedical imaging, for example, the desired result may be an image of tissue types, or a classification of cancerous versus noncancerous tissues (e.g., tumor or cancerous tissue detection). In the stored-grain application, ultimately, interest primarily lies in the moisture content of the grain as a function of position within the grain bin. Thus, there is an implied mapping from the complex-valued permittivity to the physical property of interest. Such a mapping is difficult to incorporate directly into traditional inverse scattering algorithms. This subsequent mapping also adds to the inverse problem, which is now defined as going from the electromagnetic field data to the property of interest. In some cases, an analytic expression for such a mapping may not be available. In contrast, certain embodiments of a deep learning system directly reconstruct 3D images of the physical property from the acquired electromagnetic field measurement data, thus providing a practical approach to solving the electromagnetic inverse problem while improving image quality and reducing modeling errors. Additionally, the deep learning system improves robustness to data noise. As for reconstruction time, the traditional CSI approach with its iterative approach may consume hours of processing time and require extensive computational resources, whereas after the initial training, the deep learning system may provide results almost instantly, thus improving upon the speed of processing and lowering the computational resource requirement for each case.
Having summarized certain features of a deep learning system of the present disclosure, reference will now be made in detail to the description of a deep learning system as illustrated in the drawings. While a deep learning system will be described in connection with these drawings, there is no intent to limit it to the embodiment or embodiments disclosed herein. For instance, in the description that follows, one focus is on grain bin monitoring. However, certain embodiments of a deep learning system may be used to determine other contents of a container, including one or any combination of other materials or solids, fluids, or gases, as long as such contents reflect electromagnetic waves. Additionally, certain embodiments of a deep learning system may be used in other industries, including the medical industry, among others. Further, although the description identifies or describes specifics of one or more embodiments, such specifics are not necessarily part of every embodiment, nor are all various stated advantages necessarily associated with a single embodiment or all embodiments. On the contrary, the intent is to cover all alternatives, modifications and equivalents included within the spirit and scope of the disclosure as defined by the appended claims. Further, it should be appreciated in the context of the present disclosure that the claims are not necessarily limited to the particular embodiments set out in the description.
As shown in
Note that in some embodiments, the antenna acquisition system 16 may include additional circuitry, including a global navigation satellite systems (GNSS) device or triangulation-based devices, which may be used to provide location information to another device or devices within the environment 10 that remotely monitors the container 18 and associated data. The antenna acquisition system 16 may include suitable communication functionality to communicate with other devices of the environment.
The uncalibrated, raw data collected from the antenna acquisition system 16 is communicated (e.g., via uplink functionality of the antenna acquisition system 16) to one or more devices of the environment 10, including devices 20A and/or 20B. Communication by the antenna acquisition system 16 may be achieved using near field communications (NFC) functionality, Bluetooth functionality, 802.11-based technology, satellite technology, streaming technology, including LoRa, and/or broadband technology including 3G, 4G, 5G, etc., and/or via wired communications (e.g., hybrid-fiber coaxial, optical fiber, copper, Ethernet, etc.) using TCP/IP, UDP, HTTP, DSL, among others. The devices 20A and 20B communicate with each other and/or with other devices of the environment 10 via a wireless/cellular network 22 and/or wide area network (WAN) 24, including the Internet. The wide area network 24 may include additional networks, including an Internet of Things (IoT) network, among others. Connected to the wide area network 24 is a computing system comprising one or more servers 26 (e.g., 26A, 26N).
The devices 20 may be embodied as a smartphone, mobile phone, cellular phone, pager, stand-alone image capture device (e.g., camera), laptop, tablet, personal computer, workstation, among other handheld, portable, or other computing/communication devices, including communication devices having wireless communication capability, including telephony functionality. In the depicted embodiment of
The devices 20 provide (e.g., relay) the (uncalibrated, raw) data sent by the antenna acquisition system 16 to one or more servers 26 via one or more networks. The wireless/cellular network 22 may include the necessary infrastructure to enable wireless and/or cellular communications between the device 20 and the one or more servers 26. There are a number of different digital cellular technologies suitable for use in the wireless/cellular network 22, including: 3G, 4G, 5G, GSM, GPRS, CDMAOne, CDMA2000, Evolution-Data Optimized (EV-DO), EDGE, Universal Mobile Telecommunications System (UMTS), Digital Enhanced Cordless Telecommunications (DECT), Digital AMPS (IS-136/TDMA), and Integrated Digital Enhanced Network (iDEN), among others, as well as Wireless-Fidelity (Wi-Fi), 802.11, streaming, etc., for some example wireless technologies.
The wide area network 24 may comprise one or a plurality of networks that in whole or in part comprise the Internet. The devices 20 may access the one or more servers 26 via the wireless/cellular network 22, as explained above, and/or the Internet 24, which may be further enabled through access to one or more networks including PSTN (Public Switched Telephone Networks), POTS, Integrated Services Digital Network (ISDN), Ethernet, Fiber, DSL/ADSL, Wi-Fi, among others. For wireless implementations, the wireless/cellular network 22 may use wireless fidelity (Wi-Fi) to receive data converted by the devices 20 to a radio format and processed (e.g., formatted) for communication over the Internet 24. The wireless/cellular network 22 may comprise suitable equipment that includes a modem, router, switching circuits, etc.
The servers 26 are coupled to the wide area network 24, and in one embodiment may comprise one or more computing devices networked together, including an application server(s) and data storage. In one embodiment, the servers 26 may serve as a cloud computing environment (or other server network) configured to perform processing required to implement an embodiment of a deep learning system. When embodied as a cloud service or services, the server 26 may comprise an internal cloud, an external cloud, a private cloud, a public cloud (e.g., commercial cloud), or a hybrid cloud, which includes both on-premises and public cloud resources. For instance, a private cloud may be implemented using a variety of cloud systems including, for example, Eucalyptus Systems, VMWare vSphere®, or Microsoft® Hyper-V. A public cloud may include, for example, Amazon EC2®, Amazon Web Services®, Terremark®, Savvis®, or GoGrid®. Cloud-computing resources provided by these clouds may include, for example, storage resources (e.g., Storage Area Network (SAN), Network File System (NFS), and Amazon S3®), network resources (e.g., firewall, load-balancer, and proxy server), internal private resources, external private resources, secure public resources, infrastructure-as-a-service (IaaS), platform-as-a-service (PaaS), or software-as-a-service (SaaS) offerings. The cloud architecture of the servers 26 may be embodied according to one of a plurality of different configurations. For instance, if configured according to MICROSOFT AZURE™, roles are provided, which are discrete scalable components built with managed code. Worker roles are for generalized development, and may perform background processing for a web role. Web roles provide a web server and listen for and respond to web requests via an HTTP (hypertext transfer protocol) or HTTPS (HTTP secure) endpoint. VM roles are instantiated according to tenant-defined configurations (e.g., resources, guest operating system). Operating system and VM updates are managed by the cloud. A web role and a worker role run in a VM role, which is a virtual machine under the control of the tenant. Storage and SQL services are available to be used by the roles. As with other clouds, the hardware and software environment or platform, including scaling, load balancing, etc., are handled by the cloud.
In some embodiments, the servers 26 may be configured into multiple, logically-grouped servers (run on server devices), referred to as a server farm. The servers 26 may be geographically dispersed, administered as a single entity, or distributed among a plurality of server farms. The servers 26 within each farm may be heterogeneous. One or more of the servers 26 may operate according to one type of operating system platform (e.g., WINDOWS-based O.S., manufactured by Microsoft Corp. of Redmond, Wash.), while one or more of the other servers 26 may operate according to another type of operating system platform (e.g., UNIX or Linux). The group of servers 26 may be logically grouped as a farm that may be interconnected using a wide-area network connection or metropolitan-area network (MAN) connection. The servers 26 may each be referred to as, and operate according to, a file server device, application server device, web server device, proxy server device, or gateway server device.
In one embodiment, one or more of the servers 26 may comprise a web server that provides a web site that can be used by users interested in the contents of the container 18 via browser software residing on a device (e.g., device 20). For instance, the web site may provide visualizations that reveal physical properties (e.g., moisture content) and/or geometric and/or other information about the container and/or contents (e.g., the volume geometry, such as cone angle, height of the grain along the container wall, etc.).
The functions of the servers 26 described above are for illustrative purposes only and are not intended to be limiting. For instance, functionality of the deep learning system may be implemented at a computing device that is local to the container 18 (e.g., edge computing), or in some embodiments, such functionality may be implemented at the devices 20. In some embodiments, functionality of the deep learning system may be implemented in different devices of the environment 10 operating according to a primary-secondary configuration or peer-to-peer configuration. In some embodiments, the antenna acquisition system 16 may bypass the devices 20 and communicate with the servers 26 via the wireless/cellular network 22 and/or the wide area network 24 using suitable processing and software residing in the antenna acquisition system 16.
Note that cooperation between the devices 20 (or in some embodiments, the antenna acquisition system 16) and the one or more servers 26 may be facilitated (or enabled) through the use of one or more application programming interfaces (APIs) that may define one or more parameters that are passed between a calling application and other software code such as an operating system, a library routine, and/or a function that provides a service, that provides data, or that performs an operation or a computation. The API may be implemented as one or more calls in program code that send or receive one or more parameters through a parameter list or other structure based on a call convention defined in an API specification document. A parameter may be a constant, a key, a data structure, an object, an object class, a variable, a data type, a pointer, an array, a list, or another call. API calls and parameters may be implemented in any programming language. The programming language may define the vocabulary and calling convention that a programmer employs to access functions supporting the API. In some implementations, an API call may report to an application the capabilities of a device running the application, including input capability, output capability, processing capability, power capability, and communications capability.
An embodiment of a deep learning system may include any one or a combination of the components (or sub-components) of the environment 10. For instance, in one embodiment, the deep learning system may include a single computing device (e.g., one of the servers 26 or one of the devices 20) comprising, in whole or in part, a convolutional neural network, and in some embodiments, the deep learning system may comprise the antenna array 12, the antenna acquisition system 16, and one or more of the servers 26 and/or devices 20 embodying the neural network. For purposes of illustration and convenience, implementation of an embodiment of a deep learning system is described in the following as being implemented in a computing device (e.g., comprising one or a plurality of GPUs or CPUs) that may be one of the servers 26, with the understanding that functionality may be implemented in other and/or additional devices.
In one example operation (and assuming a neural network that has been trained using labeled data (synthetic/numerical and optionally experimental field data)), a user (via the device 20) may request measurements of the contents of the container 18. This request is communicated to the antenna acquisition system 16. In some embodiments, the triggering of measurements may occur automatically based on a fixed time frame, based on certain conditions, or based on detection of an authorized user device 20. In some embodiments, the request may trigger the communication of measurements that have already occurred. The antenna acquisition system 16 activates (e.g., excites) the antenna probes 14 of the antenna array 12, such that the acquisition system (via the transmission of signals and receipt of the scattered signals) collects a set of raw, uncalibrated electromagnetic data at a set of (a plurality of) discrete, sequential frequencies (e.g., 10-100 megahertz (MHz), though not limited to this range of frequencies nor limited to collecting the frequencies in sequence). In one embodiment, the uncalibrated data comprises total-field, S-parameter measurements (which are used to generate both a calibration model or information and a prior model or information as described below). As is known, S-parameters are ratios of voltage levels (e.g., reflecting the decay between the sent and received signals). Though S-parameter measurements are described, in some embodiments, other mechanisms for describing voltages on a line may be used. For instance, power may be measured directly (without the need for phase measurements), or various transforms may be used to convert S-parameter data into other parameters, including transmission parameters, impedance, admittance, etc. Since the uncalibrated S-parameter measurement is corrupted by the switching matrix and/or varying lengths and/or other differences (e.g., manufacturing differences) in the cables connecting the antenna probes 14 to the antenna acquisition system 16, some embodiments of the deep learning system may use only magnitude (i.e., phaseless) data as input, which is relatively unperturbed by the measurement system. The antenna acquisition system 16 communicates (e.g., via a wired and/or wireless communications medium) the uncalibrated (S-parameter) data to the device 20, which in turn communicates the uncalibrated data to the server 26. At the server 26, data analytics are performed using a trained neural network as described further below.
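To illustrate the phaseless-input strategy noted above, the following minimal sketch (the array layout and helper name are hypothetical, introduced only for illustration) converts a raw complex-valued S-parameter array into a magnitude-only feature vector:

```python
import numpy as np

# Hypothetical layout: n_freq discrete frequencies by n_tx transmitters by
# n_rx receivers; s_params holds raw, uncalibrated total-field S-parameters.
def to_phaseless_input(s_params: np.ndarray) -> np.ndarray:
    """Return magnitude-only (phaseless) features flattened into a vector.

    Taking |S| discards the phase, which is the component most corrupted by
    the switching matrix and cable differences described above.
    """
    magnitudes = np.abs(s_params)   # drop phase, keep magnitude
    return magnitudes.reshape(-1)   # flatten for the network's input layer
```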
Explaining further to highlight these differences, methods for solving the inverse scattering problems can broadly be categorized into objective-function based approaches and data-driven learning techniques. Traditional electromagnetic inverse scattering iterative methods, such as the CSI method described above, are classified as objective-function approaches, also known as model-based approaches. These methods attempt to solve for a desired unknown, say a property image Ip, by minimizing an inverse problem cost-function in terms of collected data d. For the above CSI formulation, the property image is Ip=x(r) or εr(r), where x(r) is a contrast function and εr(r) is complex-valued permittivity as a function of position. The general form of the inverse problem cost-function may then be written as Eqn. 1 below:
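In a standard data-misfit-plus-regularization form (the specific functionals shown here are illustrative assumptions consistent with the surrounding description), Eqn. 1 may be written as

$$I_p^{*} = \arg\min_{I_p} \; \mathcal{D}\big(F(I_p),\, d\big) + \lambda\, \mathcal{R}(I_p) \qquad \text{(Eqn. 1)}$$

where $F$ is the forward model mapping the property image $I_p$ to predicted data, $\mathcal{D}$ is a data-misfit functional comparing the predicted data with the collected data $d$, $\mathcal{R}$ is a regularization (prior) term, and $\lambda$ is a regularization weight.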
Unlike the objective cost-function approaches, which require an accurate forward model to solve the inverse problem, certain embodiments of learning approaches do not require that an explicit forward model be known beforehand. Rather, they utilize a large amount of data to implicitly learn a forward model while solving the inverse problem. To be able to train a network, labeled data is needed (e.g., data annotated with almost everything about each training sample, including the grain height, cone angle, moisture distribution, etc.). As would be expected, obtaining measured data from actual on-site storage bins is difficult and impractical for all bin dimensions, commodities, and combinations thereof. Accordingly, in some embodiments of a deep learning system, numerically generated data is used as the labeled data. For instance, if there are identical bins in the field with identical installations, numerical data generated for one bin may be used with the other bin as well (e.g., the CNN created for one bin can be used for all bins with the same physical properties, independent of the commodities being stored). Numerical or synthetic data may be the sole labeled data used for training in some embodiments. In some embodiments, a combination of numerically generated data and experimental data (e.g., measured data for different combinations of bin dimensions and content characteristics) may be used. Training is generally intended for a storage bin of a particular specification, and for different storage bins of different specifications (e.g., geometric specifications), the CNN may be trained specifically for those bin characteristics.
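A minimal sketch of how such labeled training pairs might be assembled from numerical forward solves follows; forward_solve() is a hypothetical stand-in for the electromagnetic simulation (e.g., an FEM solver), not an actual interface of the system:

```python
import itertools
import numpy as np

# Sweep combinations of bin-content features; each forward solve yields
# synthetic S-parameter data for one labeled training example.
def build_dataset(grain_heights, cone_angles, moisture_fields, forward_solve):
    inputs, labels = [], []
    for h, a, m in itertools.product(grain_heights, cone_angles, moisture_fields):
        s_params = forward_solve(height=h, cone_angle=a, moisture=m)
        inputs.append(np.abs(s_params).reshape(-1))  # phaseless input features
        labels.append(m)                             # ground-truth 3D moisture image
    return np.stack(inputs), np.stack(labels)
```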
Learning approaches for inverse problems are classified as supervised learning because they employ a set of N ground truth images {Ipn} and their corresponding measurements {dn} in the training phase. Learning approaches learn a map IMθ, defined by a set of training parameters θ in a given space. In the training phase, the parameters θ are learned by solving a regression problem (Eqn. 2):
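In a standard supervised-learning form (the loss functional here is an illustrative assumption), Eqn. 2 may be written as

$$\theta^{*} = \arg\min_{\theta} \; \sum_{n=1}^{N} \mathcal{L}\big(IM_{\theta}(d^{n}),\, I_p^{n}\big) \qquad \text{(Eqn. 2)}$$

where $\mathcal{L}$ is a training loss (e.g., the mean-squared error between the predicted image $IM_{\theta}(d^{n})$ and the ground truth image $I_p^{n}$).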
The objective-function approach requires that an optimization problem be solved with each new data set, and this is typically quite computationally expensive. On the other hand, the learning approach of certain embodiments of a deep learning system shifts the computational load to the training phase, which is performed only once. When new data is obtained, the learning approach efficiently produces an estimate corresponding to that new data. In addition, obtaining an accurate forward model is crucial for objective-function approaches, and this can often be quite difficult for some applications like grain monitoring, where the physical property of interest is the moisture content of the stored grain. A forward model that produces predicted scattered-field data, given the inhomogeneous moisture content of an unknown amount of grain stored in a grain bin, is quite difficult to construct in its own right. For example, even the mapping from complex-valued permittivity to moisture content is quite difficult to obtain. In comparison, the learning approach of certain embodiments of a deep learning system may be implemented to directly reconstruct any physical property desired, assuming a sufficient amount of training data.
Referring specifically to
The CNN block 32, as expressed above, comprises a convolutional decoder that consists of two main stages. The first stage consists of a stack of four fully connected layers 36, though in some embodiments, other quantities of layers may be used. The vertical arrow symbols between each of the layers 36 signify that the layers are fully connected. The CNN block 32 further comprises a reshaping of the output of the fourth layer into a 3D image, as denoted by reshaping block 38 (with the vertical arrow within the block 38 signifying the reshaping). In effect, the first stage serves at least one purpose of transforming the input domain from scattered field data to a 3D moisture distribution image. In some embodiments, dropout layers (signified by the vertical arrow located between the layers 36 and the reshaping block 38) are used after each fully connected layer to prevent overfitting. The second stage comprises successive deconvolutional and upsampling layers 40 to produce the reconstructed 3D volume of moisture content of output block 34. Batch normalization has been used after each convolutional layer to accelerate convergence in the training phase. Each horizontal arrow located in and between the layers 40 signifies the operations of convolution, batch normalization, and an activation function (e.g., a rectifier, also referred to as a ramp function), and each vertical arrow located between layers signifies upconversion operations as understood in the field of convolutional neural networks. In effect, the CNN block 32 is trained to output the corresponding true 3D volume of moisture content.
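The following is a minimal sketch of such a two-stage convolutional decoder in PyTorch; the layer widths, dropout rate, coarse grid size, and channel counts are illustrative assumptions rather than values specified in this disclosure:

```python
import torch
import torch.nn as nn

# A sketch of the two-stage convolutional decoder (architecture1) described
# above. All dimensions here are illustrative assumptions.
class Architecture1(nn.Module):
    def __init__(self, n_inputs: int, grid: int = 8, ch: int = 64):
        super().__init__()
        self.grid, self.ch = grid, ch
        # Stage 1: a stack of four fully connected layers, each followed by
        # an activation and a dropout layer to prevent overfitting.
        dims = [n_inputs, 512, 1024, 2048, ch * grid ** 3]
        fc = []
        for d_in, d_out in zip(dims[:-1], dims[1:]):
            fc += [nn.Linear(d_in, d_out), nn.ReLU(), nn.Dropout(0.5)]
        self.fc = nn.Sequential(*fc)

        # Stage 2: successive deconvolution/upsampling layers, each with
        # batch normalization (to accelerate training convergence) and a
        # rectifier (ramp) activation.
        def up(c_in, c_out):
            return nn.Sequential(
                nn.ConvTranspose3d(c_in, c_out, kernel_size=4, stride=2, padding=1),
                nn.BatchNorm3d(c_out),
                nn.ReLU(),
            )
        self.decode = nn.Sequential(
            up(ch, ch // 2),
            up(ch // 2, ch // 4),
            nn.Conv3d(ch // 4, 1, kernel_size=3, padding=1),
        )

    def forward(self, x):
        x = self.fc(x)
        # Reshape the fully connected output into a coarse 3D feature volume
        # (this corresponds to the reshaping block 38).
        x = x.view(-1, self.ch, self.grid, self.grid, self.grid)
        return self.decode(x)  # e.g., a (batch, 1, 32, 32, 32) moisture volume
```

In this sketch, the view() call plays the role of the reshaping block 38, and each ConvTranspose3d/BatchNorm3d/ReLU group plays the role of one deconvolution-and-upsampling layer 40.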
Referring now to
Referring to the CNN block 46, the CNN block 46 comprises a first branch comprising the four fully connected layers 36, the reshaping block 38, and the successive deconvolutional and upsampling layers 40, as explained above in conjunction with architecture1 28 of
The 3D U-Net 52 comprises successive convolutional and downsampling layers 52A, followed by successive deconvolutional and upsampling layers 52B, where the quantity of layers may differ in some embodiments. The horizontal arrow symbols within each of the layers 52A, 52B signify the operations of convolution, batch normalization, and an activation function, the downward arrow symbols between layers 52A signify dropouts as explained above, and the upward arrow symbols between layers 52B signify upconversion operations, as understood in the field of convolutional neural networks. The successive convolutional and downsampling layers 52A function as a feature extraction stage (e.g., encoder), while the successive deconvolutional and upsampling layers 52B function as a reconstruction network (e.g., decoder). Concatenative layers, represented by dashed horizontal arrows extending between layers 52A, 52B, have been added between the corresponding contractive and expansive layers to prevent the loss of information along the contractive path. In one embodiment, the outputs of the two branches 40, 52 are then fused together through, for instance, a parameterized linear combination.
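A minimal PyTorch sketch of such a 3D U-Net branch follows; the two resolution levels and the channel counts are illustrative assumptions, not values fixed by this disclosure:

```python
import torch
import torch.nn as nn

# A sketch of a 3D U-Net: a contractive (encoder) path, an expansive
# (decoder) path, and concatenative skip connections between them.
class UNet3D(nn.Module):
    def __init__(self, c_in: int = 1, c: int = 16):
        super().__init__()
        def block(ci, co):  # convolution + batch normalization + activation
            return nn.Sequential(nn.Conv3d(ci, co, 3, padding=1),
                                 nn.BatchNorm3d(co), nn.ReLU())
        self.enc1, self.enc2 = block(c_in, c), block(c, 2 * c)
        self.pool = nn.MaxPool3d(2)          # downsampling (contractive path)
        self.bottom = block(2 * c, 4 * c)
        self.up2 = nn.ConvTranspose3d(4 * c, 2 * c, 2, stride=2)
        self.dec2 = block(4 * c, 2 * c)      # 4c channels after concatenation
        self.up1 = nn.ConvTranspose3d(2 * c, c, 2, stride=2)
        self.dec1 = block(2 * c, c)
        self.out = nn.Conv3d(c, 1, 1)

    def forward(self, x):  # x: (batch, c_in, D, H, W), with D, H, W divisible by 4
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottom(self.pool(e2))
        # Concatenative skips preserve information along the contractive path.
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.out(d1)
```

The torch.cat calls realize the concatenative layers between corresponding contractive and expansive layers.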
One benefit of using a simple additive fusion approach (signified by the right-most summation symbol in the CNN block 46) is that it forces the individual branches to contribute as much as possible to the reconstruction task, by learning meaningful feature representations along the layers of each branch. In some embodiments, a more complicated fusion model may be used, though such an embodiment risks putting more burden on the fusion model itself, which, given its complexity, may learn idiosyncratic mappings at the cost of not learning intrinsically useful representations along each of the input branches. Moreover, a simple fusion strategy has the added advantage of introducing interpretability to architecture2 42 in terms of how much the scattered field data and the prior information contribute to the final reconstruction.
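As a sketch of this additive fusion, the two branch outputs may be combined through a parameterized linear combination with learnable scalar weights (the initialization values below are assumptions); inspecting the learned weights after training then indicates how much each branch contributed:

```python
import torch
import torch.nn as nn

# A sketch of parameterized additive fusion of the two branch outputs.
class AdditiveFusion(nn.Module):
    def __init__(self):
        super().__init__()
        self.alpha = nn.Parameter(torch.tensor(0.5))  # scattered-field branch weight
        self.beta = nn.Parameter(torch.tensor(0.5))   # prior-information branch weight

    def forward(self, branch1_out, branch2_out):
        # The learned scalars expose each branch's contribution to the
        # final reconstruction (interpretability).
        return self.alpha * branch1_out + self.beta * branch2_out
```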
The output block 48 comprises a relatively higher resolution 3D image (compared to the architecture 28 of
Note that certain intermediate neural network training functions, known to those skilled in the art, such as the generation of validation and/or test sets, are omitted here for brevity.
Having described an embodiment of a neural network-based parametric inversion system, attention is directed to
In one embodiment, the application software 68 comprises an input block module 70, neural network module 72, and output block module 74. The input block module 70 is configured to receive, format, and process scattered field data and prior information, in addition to electromagnetic measurement data for a given field bin (e.g., for input to the trained neural network). Functionality of the input block module 70 is similar to that described for input block 30 (
Memory 62 also comprises communication software that formats data according to the appropriate format to enable transmission or receipt of communications over the networks and/or wireless or wired transmission hardware (e.g., radio hardware). In general, the application software 68 performs the functionality described in association with the architectures depicted in
In some embodiments, one or more functions of the application software 68 may be implemented in hardware. In some embodiments, one or more functions of the application software 68 may be performed in more than one device. It should be appreciated by one having ordinary skill in the art that in some embodiments, additional or fewer software modules (e.g., combined functionality) may be employed in the memory 62 or additional memory. In some embodiments, a separate storage device may be coupled to the data bus 64, such as a persistent memory (e.g., optical, magnetic, and/or semiconductor memory and associated drives).
The processor 56 may be embodied as a custom-made or commercially available processor, a central processing unit (CPU), graphics processing unit (GPU), or an auxiliary processor among several processors, a semiconductor based microprocessor (in the form of a microchip), a macroprocessor, one or more application specific integrated circuits (ASICs), a plurality of suitably configured digital logic gates, and/or other well-known electrical configurations comprising discrete elements both individually and in various combinations to coordinate the overall operation of the computing device 54.
The I/O interfaces 58 provide one or more interfaces to the networks 22 and/or 24. In other words, the I/O interfaces 58 may comprise any number of interfaces for the input and output of signals (e.g., analog or digital data) for conveyance over one or more communication mediums. For instance, inputs may be received at the I/O interfaces 58 under management/control/formatting of the input block module 70 and the I/O interfaces 58 may output information under management/control/formatting of the output block module 74.
The user interface (UI) 60 may be a keyboard, mouse, microphone, touch-type display device, head-set, and/or other devices that enable visualization of the contents, container, and/or physical property or properties of interest, as described above. In some embodiments, the output may take other or additional forms, including audible output or, on the visual side, rendering via virtual reality or augmented reality based techniques.
Note that in some embodiments, the manner of connections among two or more components may be varied. Further, the computing device 54 may have additional software and/or hardware, or fewer software components.
The application software 68 comprises executable code/instructions that, when executed by the processor 56, causes the processor 56 to implement the functionality shown and described in association with the deep learning system. As the functionality of the application software 68 has been described in the description corresponding to the aforementioned figures, further description here is omitted to avoid redundancy.
Execution of the application software 68 is implemented by the processor(s) 56 under the management and/or control of the operating system 66. In some embodiments, the operating system 66 may be omitted. In some embodiments, functionality of application software 68 may be distributed among plural computing devices (and hence, plural processors), or among plural cores of a single processor.
When certain embodiments of the computing device 54 are implemented at least in part with software (including firmware), as depicted in
When certain embodiments of the computing device 54 are implemented at least in part with hardware, such functionality may be implemented with any or a combination of the following technologies, which are all well-known in the art: a discrete logic circuit(s) having logic gates for implementing logic functions upon data signals, an application specific integrated circuit (ASIC) having appropriate combinational logic gates, a programmable gate array(s) (PGA), a field programmable gate array (FPGA), etc.
Having described certain embodiments of a deep learning system, it should be appreciated within the context of the present disclosure that one embodiment of a deep learning method, denoted as method 76, illustrated in
Any process descriptions or blocks in flow diagrams should be understood as representing logic (software and/or hardware) and/or steps in a process, and alternate implementations are included within the scope of the embodiments in which functions may be executed out of order from that shown or discussed, including substantially concurrently, or with additional steps (or fewer steps), depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present disclosure.
Certain embodiments of a deep learning system and method use deep machine learning techniques to create maps of the physical parameters of stored grain relevant to monitoring the health of the grain. The machine learning algorithms are trained on data acquired using electromagnetic and other types of sensors and produce the shape of the stored grain as well as maps of such physical parameters as the grain's moisture content, temperature, and density. The machine learning algorithms include convolutional neural networks in various forms, as well as fully connected neural networks.
It should be emphasized that the above-described embodiments of the present disclosure are merely possible examples of implementations, merely set forth for a clear understanding of the principles of the disclosure. Many variations and modifications may be made to the above-described embodiment(s) of the disclosure without departing substantially from the scope of the disclosure. All such modifications and variations are intended to be included herein within the scope of this disclosure and protected by the following claims.
This application claims the benefit of U.S. Provisional Application No. 63/163,957, filed Mar. 22, 2021, which is hereby incorporated by reference in its entirety.
Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/IB2022/052391 | 3/16/2022 | WO |
Number | Date | Country
---|---|---
63163957 | Mar 2021 | US