A portion of the disclosure of this patent document contains material, which is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent file or records, but otherwise reserves all copyright rights whatsoever.
The present invention relates to analyzing experimental data related to a quantum system, and more particularly, to a method and device for classifying particles in the quantum system (e.g., SU(N) fermions) and exploring thermodynamics from the density profiles of those particles by performing image analysis on the experimental data of those particles using a neural network.
Pronounced effects include atom number and fugacity, which can be extracted by simple analysis or even distinguished by eye. Less-pronounced effects include compressibility and Tan's contact, which are challenging to extract with conventional methods. Although less-pronounced effects are hard to extract, they are very important physical properties. Tan's contact is a central quantity that governs many physical observables, such as the momentum distribution, the energy, the pressure, and a variety of spectroscopies. The atom number fluctuation/compressibility in fermions is also a key quantity, which reflects the Pauli exclusion principle.
In other words, it is essential, for those skilled in the art, to study how to efficiently probe the subtle effects directly related to physical observables and to understand the underlying physics from ordinary experimental data.
In accordance with one aspect of the present invention, a computer-implemented method for analyzing experimental data related to a quantum system by using a neural network, performed by an electronic device, is provided. The method includes: generating a training dataset according to experimental data; performing one or more filtering operations on the training dataset to generate one or more filtered training datasets respectively corresponding to the filtering operations; training a first neural network by inputting the training dataset into the first neural network; training a second neural network by inputting the filtered training datasets into the second neural network; inputting a test dataset into the trained first neural network, so as to obtain a standard classification accuracy of the trained first neural network, wherein the test dataset is generated from further experimental data; performing the one or more filtering operations on the test dataset to generate one or more filtered test datasets; inputting the filtered test datasets into the trained first neural network, so as to obtain one or more first classification accuracies respectively corresponding to the filtered test datasets; inputting the filtered test datasets into the trained second neural network, so as to obtain one or more second classification accuracies respectively corresponding to the filtered test datasets; identifying the differences between first pairs of the standard classification accuracy and the first classification accuracies, second pairs of the standard classification accuracy and the second classification accuracies, and third pairs of the first classification accuracies and the second classification accuracies; and determining an impact level of each piece of information preserved or removed by each of the filtering operations according to the differences, wherein information having a higher impact level affects the accuracy of the first neural network and the second neural network more, such that, while generating the experimental data in the future, the information with higher impact levels serves as guidance for high-sensitivity image analysis of the quantum system with the neural network.
In accordance with another aspect of the present invention, an electronic device for analyzing experimental data related to quantum system by using neural network is provided, and the electronic device includes one or more processors configured to execute machine instructions to implement the method described above.
The patent or application file contains at least one drawing executed in color. Copies of this patent or patent application publication with color drawing(s) will be provided by the Office upon request and payment of the necessary fee.
Embodiments of the invention are described in more detail hereinafter with reference to the drawings, in which:
In the following description, a method and electronic device for applying machine learning to image analysis in quantum gas experiments are set forth as preferred examples; one application of this invention is to classify SU(N) gases and explore the thermodynamics in the density profile of an SU(N) gas. It will be apparent to those skilled in the art that modifications, including additions and/or substitutions, may be made without departing from the scope and spirit of the invention. In other words, the conventional problem is that it is hard to probe the subtle effects directly related to the spin multiplicity N and to understand the underlying physics from the experimental density profiles of SU(N) fermions. However, by applying the provided method and electronic device, a thermodynamic compressibility can be directly measured from density fluctuations within a single image. Although the invention takes the momentum distribution of SU(N) Fermi gases in bulk as an example, it can be easily generalized to other quantum systems, such as studying the thermometry of degenerate Fermi gases and detecting temperature-dependent physical properties. Specific details may be omitted so as not to obscure the invention; however, the disclosure is written to enable one skilled in the art to practice the teachings herein without undue experimentation.
Multi-component fermions with SU(N)-symmetric interactions hold a singular position as a prototype system for understanding quantum many-body phenomena in condensed matter physics, high-energy physics, and atomic physics. In condensed matter, e.g., interacting electrons usually possess SU(2) symmetry, whereas emergent higher spin symmetries arise in the low-energy properties of systems, such as the SU(4) symmetry in graphene due to the combination of spin and valley degrees of freedom. In quantum chromodynamics, nuclear interactions are represented by SU(3) symmetry. In the past decades, developments in the cooling and trapping of alkaline-earth-like fermions have opened possibilities to achieve even higher spin symmetries, owing to their distinctive inter-particle interactions, and have thus provided ideal platforms to study various SU(N) fermionic systems. Although the role of SU(N) symmetry has been probed in optical lattices, the comprehensive characterization of interacting SU(N) fermions in bulk, wherein the SU(N) Fermi liquid description is valid, has remained challenging. One of the bottlenecks is that the interaction-induced effect enhanced by the enlarged SU(N) symmetry is sufficiently pronounced under tight confinement only in the one-dimensional (1D) or two-dimensional (2D) cases. Only recently have thermodynamics and contact interactions been investigated in three-dimensional (3D) SU(N) fermions, but a comprehensive experimental study of SU(N) fermions still remains to be done. Developing experimental techniques or designing approaches to uncover the subtle connection of various spin multiplicity-dependent properties with the available experimental measurements in SU(N) interacting fermions is crucial to advance the understanding of SU(N) symmetry and the corresponding many-body phenomena.
The provided method uses machine learning (ML) as guidance for image analysis in quantum gas experiments and demonstrates the thermodynamic study of SU(N) fermions. The main idea of this heuristic approach can be summarized as a three-step process: (1) manually control the amount of information within each of the images fed to the neural networks (NNs) during the training or testing processes; (2) determine the relative importance of the given (or removed) information based on the changes in the accuracy of the training or testing processes; and (3) identify the connection between the information and specific physical observables, on which analytical efforts can then be focused.
To demonstrate the proposed method concretely, the density profile of SU(N) Fermi gases is taken as an example, and it is shown how the density profile can guide analytical studies. Besides the pronounced effects such as atom number and fugacity, the connection between the spin multiplicity and less-pronounced features in the density profile, such as compressibility and Tan's contact, is explored. Based on this method, one can extract less-pronounced effects even from the most ordinary density profiles and successfully reveal thermodynamic features, which depend on the spin multiplicity, from density fluctuations and high-momentum distributions. This allows one to detect the spin multiplicity with a high accuracy of ~94% in a single-snapshot classification of SU(N) density profiles.
To further verify the validity of the connection between the less-pronounced effects and physical observables, the thermodynamic compressibility is measured from density fluctuations within a single image, benchmarking the ML processes, and turns out to be in very good agreement with SU(N) Fermi liquid descriptions. Besides providing general-purpose methods to extract various less-pronounced effects and consolidating the understanding of SU(N) fermions, the provided method complements recent ML studies of quantum many-body physics in exploring the underlying physics. More details are described below.
Referring to
The data communication circuit 120 is configured to establish a connection with other electronic device (e.g., an experimental device) 200. The non-transient memory circuit 130 is configured to store programs 131 (or machine instructions 131) and to host the database 132. The database 132 may be used to store trained neural network (including the hyper-parameters thereof), experiment data ED, control data CD, and/or analysis result (e.g., outputted classification results, or called as result data RD).
The processor 110 executes the machine instructions 131 to implement the methods provided by the present disclosure. The foregoing neural network is executed by the processor 110.
The electronic device 100 can receive experimental data ED from other electronic device 200 via the established connection. The control data CD may comprise data for training/testing the neural network and predetermined parameters.
The experimental data is prepared by the following steps: preparing degenerate SU(N) Fermi gases with predetermined values of N respectively in optical traps, wherein N=1, 2, 5, 6; and taking, after time-of-flight expansion, spin-insensitive absorption images of the degenerate SU(N) Fermi gases as experimental images, so as to record the density profiles of the degenerate SU(N) Fermi gases, wherein each experimental image shows the atoms' momentum distribution via its density profile. Each experimental image is generated by performing a time-of-flight (TOF) experiment on an SU(N) fermion gas. The size of the experimental image inputted to the neural network is set as 201×201 pixels, and the predicted classification result comprises one of the following classes: SU(1), SU(2), SU(5) and SU(6), together with a probability of the outputted class.
In more detail, the experimental measurements (experimental data) with appropriate labels are prepared. Here, the density profile is selected for studying SU(N) Fermi gases, and the corresponding spin multiplicity of each of the SU(N) Fermi gases is indicated by the label. In the experiment, a degenerate SU(N) Fermi gas with N=1, 2, 5, 6 is prepared in an optical trap, and the density profile is recorded by taking spin-insensitive absorption images after time-of-flight (TOF) expansion, yielding the momentum distribution. The spin multiplicity is confirmed by optical "Stern-Gerlach" measurements. In principle, the density profile contains the momentum-space information of SU(N)-interacting fermions, which reflects various thermodynamic observables, such as Tan's contact or the compressibility; this is the underlying reason for the success of using ML techniques to detect the spin multiplicity. However, the effect of spin multiplicity on the momentum distribution is small compared to other features, such as the fugacity and atom number, because of the small interaction strength.
Referring to
Therefore, the dataset should be prepared in such a way that images are indistinguishable based on the pronounced features (i.e., atom number or temperature), which forces the NN to seek less-pronounced features. Less-pronounced effects include compressibility and Tan's contact, which are challenging to extract with conventional methods. Although less-pronounced effects are hard to extract, they are very important physical properties. Tan's contact is a central quantity that governs many physical observables, such as the momentum distribution, the energy, the pressure, and a variety of spectroscopies. The atom number fluctuation/compressibility in fermions is also a key quantity, which reflects the Pauli exclusion principle.
The datasets are post-selected so that possible correlations between spin multiplicity and atom number or temperature are minimized. For example, a post-selection procedure with Gaussian fitting is used to select data with similar density profiles, consequently making the dataset indistinguishable by simple analysis.
More specifically, all snapshots are first preprocessed by the fringe removal algorithm reported in Song, B. et al. [Effective statistical fringe removal algorithm for high-sensitivity imaging of ultracold atoms. Phys. Rev. Appl. 14, 034006 (2020)]. Then, cropped images are inputted into the NN for further classification. For SU(N) data, it is natural to prepare the same number of atoms per spin at constant T/TF, in which case the normalized density profile is the same for the different SU(N) cases. In the embodiment, however, it is found that diffraction of the imaging light induces fringe patterns that depend on the total atom number in the experiment. One can normalize the image by the total atom number, but doing so inevitably changes the level of background noise. Therefore, the total atom number is kept unchanged; otherwise, the NN uses the background fringe patterns or noise to classify the SU(N) data. In the experiment, 200 images are post-selected for each SU(N) class by using a Gaussian fitting, which allows one to obtain samples with similar profiles but different T/TF. If different SU(N) gases are kept at constant T/TF, the profiles are identical in units of kF, instead of pixels. Subsequently, 75% of the data (the training dataset) is used for training the NNs, and the remainder (the test dataset) is used for testing.
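For illustration, the width-based post-selection and the 75%/25% split may be sketched as follows (a minimal numpy sketch; the second-moment width estimate stands in for a full 2D Gaussian fit, and the function names and tolerance are illustrative, not taken from the disclosure):

```python
import numpy as np

def gaussian_width(image):
    """Estimate the cloud width from the second moment of the optical-density
    distribution (a stand-in for a full 2D Gaussian fit)."""
    ny, nx = image.shape
    y, x = np.mgrid[0:ny, 0:nx]
    total = image.sum()
    cx, cy = (x * image).sum() / total, (y * image).sum() / total
    var = (((x - cx) ** 2 + (y - cy) ** 2) * image).sum() / total
    return np.sqrt(var)

def post_select_and_split(images, labels, width_tol=0.05, train_frac=0.75, seed=0):
    """Keep images whose widths lie within width_tol of the median width,
    then split the survivors 75%/25% into training and test sets."""
    widths = np.array([gaussian_width(im) for im in images])
    median = np.median(widths)
    keep = np.abs(widths - median) / median < width_tol
    kept_images, kept_labels = images[keep], labels[keep]
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(kept_images))
    n_train = int(train_frac * len(kept_images))
    tr, te = order[:n_train], order[n_train:]
    return (kept_images[tr], kept_labels[tr]), (kept_images[te], kept_labels[te])
```

In use, profiles whose fitted widths deviate strongly from the bulk of the dataset are rejected before any image reaches the NN.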
In the embodiment, the density profiles with the interaction parameter kFas ≃ 0.3 are focused on, where kF is the Fermi wave vector and as is the s-wave scattering length. The profiles are selected only based on similarities in the widths of Gaussian fits of the density profiles, resulting in indistinguishable momentum profiles as shown in
Referring to
The density profile of each experimental image contains momentum-space information of the degenerate SU(N) Fermi gases, and the momentum-space information reflects thermodynamic observables comprising Tan's contact and compressibility, such that the spin multiplicity of each of the degenerate SU(N) Fermi gases is predicted according to the thermodynamic observables.
Furthermore, in step S220, the processor 110 performs one or more filtering operations on the training dataset to generate one or more filtered training datasets respectively corresponding to the filtering operations.
The filtering operations include: a binned image operation; a radially averaged image operation; a Fermi-Dirac fitting profile operation; a Gaussian fitting profile operation; a low momentum mask operation; and a high momentum mask operation.
In step S230, the processor 110 trains a first neural network by inputting the training dataset into the first neural network. Furthermore, in step S240, the processor 110 trains a second neural network by inputting the filtered training datasets into the second neural network. It should be mentioned that the structures of the first neural network and the second neural network are the same, but they are trained with different datasets.
To maximize the accuracy of the NNs, a convolutional NN (CNN) architecture is chosen, since it is suited to exploring the less-pronounced effects in an image, as shown in
In the embodiment, supervised learning/training is performed on the provided neural network, such that the NN can find a correspondence rule between the inputted experimental image and the corresponding label, allowing it to predict the labels of data beyond the training dataset.
Convolutional neural network. Machine learning (ML) techniques used in the embodiment are based on CNNs, which take a supervised learning approach to the classification task. NNs, inspired by the biological NNs that constitute animal brains, are composed of a series of artificial neurons, among which each connection is a real-valued function ƒ: R^k → R, parameterized by a vector of weights w = (w1, w2, …, wi, …) ∈ R^k and an activation function Φ: R → R, given by
ƒ(x) = Φ(w·x), with x = (x1, …, xi, …) ∈ R^k.
By combining the artificial neurons in a network or in a layer of a network, NNs are obtained. A CNN has one or more convolutional layers in which neurons are arranged in two dimensions, providing an efficient way of detecting spatial structure. The convolutional layer first accepts an input 2D image from the previous layer or the input of the whole NN. Then, the kernel of the convolutional layer slides (i.e., convolves) across the width and height of the input image, with dot products between the kernel and the input being computed. Consequently, a 2D feature map is obtained in which each pixel is the response at the corresponding position. If the convolutional layer has N different kernels, the same procedure is repeated for each kernel, and finally N 2D feature maps are produced. These 2D feature maps are then loaded into the next layer as input. N is predetermined as 24, but is not limited thereto.
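The sliding-kernel computation described above can be sketched in plain numpy (a minimal, illustrative valid-mode convolution with stride 1; real implementations use optimized library routines):

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Slide the kernel across the width and height of the image (stride 1,
    no padding) and return the 2D feature map of dot products, as described
    for the convolutional layer."""
    ih, iw = image.shape
    kh, kw = kernel.shape
    out = np.empty((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def conv_layer(image, kernels):
    """Apply each of the N kernels in turn, producing N feature maps."""
    return np.stack([conv2d_valid(image, k) for k in kernels])
```

With 24 kernels, `conv_layer` produces 24 feature maps that would be passed to the next layer.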
Referring to
The proposed NN (e.g., the first neural network or the second neural network) includes: an input layer, wherein an experimental image of the training dataset, the filtered training datasets, the test dataset or the filtered test datasets is inputted to the first neural network or the second neural network by the input layer; a convolutional layer, connected from the input layer, comprising 24 different kernels, wherein the size of each kernel is set to 24, and the stride of each kernel is set to 1; an activation function layer, connected from the convolutional layer, wherein the activation function of the activation function layer is the ReLU function; an average pooling layer, connected from the activation function layer, wherein the size of the pooling area is 2×2 and the stride of the pooling area is 1; a dropout layer, connected from the average pooling layer, wherein the dropout percentage is 50%; a fully connected layer, connected from the dropout layer, comprising 4 neurons; a further activation function layer, connected from the fully connected layer, wherein the activation function of the further activation function layer is the Softmax function; and an output layer, wherein the output layer outputs a predicted classification result. The size of the experimental image is 201×201 pixels (the invention is not limited to this size), and the predicted classification result comprises one of the following classes: SU(1), SU(2), SU(5) and SU(6), together with a probability of the outputted class.
For example, the CNNs used in the embodiment can be realized by using TensorFlow in Python. The concrete parameters taken in the embodiment are listed in Table 1 below.
To train the network on the data with different spin configurations, the CNN is compiled with a cross-entropy loss function. During the training process, the weights of the model are updated based on the Adam algorithm to minimize the loss function with a learning rate of 1×10−4, which is a hyper-parameter that controls how much the network changes the model each time. The maximum number of training epochs is limited to 1000, and the accuracy and loss are monitored during the training process to select the model with the best performance. After full training, the trained CNN is evaluated on the test dataset.
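As a sketch, the described architecture and training configuration may be expressed with the TensorFlow Keras API as follows (a minimal illustration, not the disclosed implementation: the Flatten step before the fully connected layer and the sparse categorical cross-entropy variant are assumptions, and all names are illustrative):

```python
import tensorflow as tf

def build_sun_classifier(input_shape=(201, 201, 1), n_classes=4):
    """CNN following the described embodiment: one convolutional layer with
    24 kernels of size 24 (stride 1), ReLU activation, 2x2 average pooling
    with stride 1, 50% dropout, and a 4-neuron softmax output for the
    SU(1)/SU(2)/SU(5)/SU(6) classes."""
    inputs = tf.keras.Input(shape=input_shape)
    x = tf.keras.layers.Conv2D(24, kernel_size=24, strides=1,
                               activation="relu")(inputs)
    x = tf.keras.layers.AveragePooling2D(pool_size=2, strides=1)(x)
    x = tf.keras.layers.Dropout(0.5)(x)
    x = tf.keras.layers.Flatten()(x)  # assumed bridge to the dense layer
    outputs = tf.keras.layers.Dense(n_classes, activation="softmax")(x)
    model = tf.keras.Model(inputs, outputs)
    # Cross-entropy loss and Adam with learning rate 1e-4, as described.
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Training would then run for up to 1000 epochs while monitoring accuracy/loss:
# model.fit(train_images, train_labels, epochs=1000, validation_split=0.1)
```

The model with the best monitored validation performance would be retained and evaluated on the test dataset.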
The performance of the trained NNs is characterized by the overall classification accuracy, which is defined as the ratio of the number of samples with predictions matching the true labels to the total sample number. For one single image loaded into the NNs, the softmax activation function normalizes the output values {σc} by P(c) = e^(σc) / Σc′ e^(σc′), and the class with the largest probability P(c) is taken as the prediction.
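The softmax normalization and the overall-accuracy definition above can be sketched as follows (a minimal numpy sketch; names are illustrative):

```python
import numpy as np

def softmax(logits):
    """Normalize raw network outputs {sigma_c} into probabilities
    P(c) = exp(sigma_c) / sum_c' exp(sigma_c')."""
    z = np.exp(logits - np.max(logits, axis=-1, keepdims=True))  # stable form
    return z / z.sum(axis=-1, keepdims=True)

def overall_accuracy(logits, true_labels):
    """Ratio of samples whose argmax prediction matches the true label."""
    preds = np.argmax(softmax(logits), axis=-1)
    return np.mean(preds == np.asarray(true_labels))
```

Subtracting the per-sample maximum before exponentiating leaves the probabilities unchanged but avoids overflow for large logits.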
Return to
Next, in step S260, the processor 110 performs the one or more filtering operations on the test dataset to generate one or more filtered test datasets. Each of the training dataset, the filtered training datasets, the test dataset, and the filtered test datasets includes one or more experimental images and one or more labels respectively corresponding to the experimental images, wherein each of the labels indicates a class to which the corresponding experimental image belongs.
The attributes processed by the well-trained NNs are analyzed, and the less-pronounced effects determined by the spin multiplicity are extracted step by step. Due to the limited interpretability of NNs, it is usually difficult to identify what kinds of features the NNs use for classification. In the provided method, parts of the density profile are examined to find the correlation with the spin multiplicity as described in
In the embodiment, for studying the interacting SU(N) fermions, non-interacting fermions and the associated energy (length) scales are used in choosing the various filters (filtering operations) in momentum space.
To do this, the experimental images are manipulated and the classification accuracy on the manipulated images is checked. As different information is removed in the different types of manipulated/filtered images generated by the different filtering operations, the classification accuracy decreases by different amounts, which unfolds what kind of information is more important (has a higher impact level) for classification. In other words, different information is removed in different types of images to find out which information affects/decreases the classification accuracy, while the raw data is the original data with all the information preserved.
For example, the information removed by the different filtering operations includes: in the Fermi-Dirac fitting profile operation or the Gaussian fitting profile operation, any information except for the fitting parameters is removed; in the radially averaged image operation, information on azimuthal density fluctuations is removed, and most of the high momentum information is also removed; in the low momentum mask operation, information located in the low momentum region is removed, wherein a central mask covers from the center of the experimental image to a cut-off circle, and the cut-off circle has a radius of a predetermined cut-off momentum kc, wherein the atom density at the center of the experimental image is largest; in the high momentum mask operation, information located in the high momentum region is removed, wherein an edge mask covers from the edge of the experimental image to the cut-off circle; and in the binned image operation, information on density fluctuations is partially removed.
In
As shown in
The blurring of adjacent pixels due to finite optical resolution effectively changes subtle features in SU(N) gases, e.g., it decreases the measured atom number variance. This effect can be minimized by binning the data using a sufficiently large bin size. The whole image is partitioned into non-overlapping bins with an area of n μm−1 × n μm−1. In each bin, the averaged optical density within the bin is calculated, and the value is subsequently used to fill all the pixels of the bin to maintain the original size of the image. Referring to
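The binning procedure just described can be sketched as follows (a minimal numpy sketch; the bin size is given in pixels here, and the function name is illustrative):

```python
import numpy as np

def bin_image(image, bin_size):
    """Partition the image into non-overlapping bins, then replace every
    pixel in each bin by the bin's average optical density, keeping the
    original image size (edge bins may be partial)."""
    out = np.empty_like(image, dtype=float)
    ny, nx = image.shape
    for i0 in range(0, ny, bin_size):
        for j0 in range(0, nx, bin_size):
            block = image[i0:i0 + bin_size, j0:j0 + bin_size]
            out[i0:i0 + bin_size, j0:j0 + bin_size] = block.mean()
    return out
```

Because each bin is filled with its own average, the binned image keeps the coarse density profile while local pixel-to-pixel fluctuations are averaged out.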
For the radially averaged images, all the pixels are divided into several bins based on the distance from the center of the atom cloud, and then the pixels in the same bin are averaged. The degenerate Fermi-Dirac and Gaussian profiles are fitted by the 2D Thomas-Fermi and Gaussian distributions, respectively. It is worth noting that both density fluctuations and high-momentum information are effectively removed in both fitting cases (i.e., both density fluctuations and high-momentum information are removed in the Fermi-Dirac/Gaussian fitting profiles). Therefore, the comparison between Fermi-Dirac and Gaussian profiles may allow one to investigate possible next-order effects by which the NNs detect the changes in T/TF.
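The radial-averaging operation can be sketched as follows (a minimal numpy sketch; the number of annular bins and the function name are illustrative):

```python
import numpy as np

def radial_average(image, center=None, n_bins=100):
    """Group pixels into annular bins by distance from the cloud center and
    replace each pixel by the average of its bin, removing azimuthal density
    fluctuations while preserving the radial profile."""
    ny, nx = image.shape
    if center is None:
        center = ((ny - 1) / 2.0, (nx - 1) / 2.0)
    y, x = np.mgrid[0:ny, 0:nx]
    r = np.hypot(y - center[0], x - center[1])
    edges = np.linspace(0.0, r.max() + 1e-9, n_bins + 1)
    idx = np.digitize(r.ravel(), edges) - 1          # bin index per pixel
    sums = np.bincount(idx, weights=image.ravel(), minlength=n_bins + 1)
    counts = np.bincount(idx, minlength=n_bins + 1)
    means = sums / np.maximum(counts, 1)             # average per annulus
    return means[idx].reshape(ny, nx)
```

Pixels at the same radius end up with identical values, so any angle-dependent (azimuthal) fluctuation is erased.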
Referring to
In
where Pk(θi) represents the optical density for a specific pixel at k ≈ 58.5 μm−1 and angle θi. The same correlation can be further evaluated through the Fourier transform.
The azimuthal correlation spectrum shows how similar the image is to its own copy after rotation by a specific angle. For a radially averaged image, the correlation becomes 1 at any angle, indicating no density fluctuations. When the image is binned, densities are only locally averaged, resulting in an azimuthal correlation less than 1 at nonzero angles.
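One plausible normalized form of this correlation can be sketched as follows (a minimal numpy sketch; the exact normalization used in the embodiment may differ, and names are illustrative):

```python
import numpy as np

def azimuthal_correlation(ring, shifts):
    """Correlation of a ring of optical densities P(theta_i) at fixed |k|
    with a rotated copy of itself, normalized so that a constant ring
    (e.g., from a radially averaged image) gives 1 at every shift."""
    ring = np.asarray(ring, dtype=float)
    denom = np.mean(ring * ring)
    return np.array([np.mean(ring * np.roll(ring, s)) / denom
                     for s in shifts])
```

For a fluctuating ring the correlation equals 1 only at zero shift and drops below 1 at nonzero shifts, consistent with the behavior described above.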
During the filtering operation shown by
More specifically, a cutoff momentum of kc (e.g., 70 μm−1) is set, such that >99% of atoms are contained within the low-momentum region (
High-momentum tails. In light of these results, it is speculated that the NNs utilize less-pronounced effects in the high-momentum part. To confirm this, the dependence of the classification accuracy of the fully trained NNs on each SU(N=1, 2, 5, 6) class is checked, and it is found that the classification accuracy increases with N in
In
The contributions of the low-momentum part and the high-momentum part are examined by classifying the masked images with the well-trained NN as shown in
This is motivated by the observation that the classification accuracy significantly decreases with filters using various fitting functions that remove the SU(N)-dependent effect in the high-momentum tail. Two different types of masks are used for the regions of replacement, referred to as the edge mask and the central mask, respectively. The edge mask covers from the edge of the image to some atomic cut-off momentum kc, whereas the central mask covers from the center to kc. Then, the masked region is replaced with a fake image generated by averaging the corresponding region of all the images in the dataset, and the test accuracies of the pre-trained NN are re-evaluated.
In other words, in the low momentum mask operation, the central mask region is replaced by a fake image, wherein the fake image is generated by averaging the central mask regions of all experimental images in the dataset corresponding to the experimental image; and in the high momentum mask operation, the edge mask region is replaced by a further fake image, wherein the further fake image is generated by averaging the edge mask regions of all experimental images in the dataset corresponding to the experimental image.
Reason for the replacement: without replacing the masked region, the images would be totally different from the original images. For example, with the central masks, the high momentum region only contains less than 1% of the total atoms. In this case, the neural network trained on the original images may not recognize the image correctly.
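The mask construction and the fake-image replacement can be sketched as follows (a minimal numpy sketch; the cut-off radius is in pixels here, and all names are illustrative):

```python
import numpy as np

def momentum_masks(shape, center, kc):
    """Boolean masks: the central mask covers radii < kc (low momentum),
    the edge mask covers radii >= kc (high momentum)."""
    ny, nx = shape
    y, x = np.mgrid[0:ny, 0:nx]
    r = np.hypot(y - center[0], x - center[1])
    central = r < kc
    return central, ~central

def replace_with_dataset_average(images, mask):
    """Replace the masked region of every image with the pixel-wise average
    of that region over the whole dataset (the 'fake image'), so the NN
    still sees a plausible-looking profile there."""
    fake = images.mean(axis=0)
    out = images.copy()
    out[:, mask] = fake[mask]
    return out
```

The pre-trained NN is then re-evaluated on the replaced images to isolate the contribution of the remaining (unmasked) region.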
Return to
After the second neural network is well trained by the filtered training datasets, in step S280, the processor 110 inputs the filtered test datasets into the trained second neural network, so as to obtain one or more second classification accuracies respectively corresponding to the filtered test datasets.
Next, in step S290, the processor 110 identifies the differences between first pairs of the standard classification accuracy and the first classification accuracies, second pairs of the standard classification accuracy and the second classification accuracies, and third pairs of the first classification accuracies and the second classification accuracies.
Referring to
Next, in step S300, the processor 110 determines the impact level of each piece of information preserved or removed by each of the filtering operations according to the differences, wherein information having a higher impact level affects the accuracy of the first neural network and the second neural network more.
For example, regarding the first NN (the pre-trained NN), the accuracy difference related to the binned image filtering operation is 4%, and the accuracy difference related to the Gaussian fitting profile filtering operation is 69%. In this case, the processor 110 determines that the impact level of the information removed by the Gaussian fitting profile is higher than the impact level of the information removed by the binned image operation. In other words, comparing the information removed by the Gaussian fitting filtering operation with the information removed by the binned image filtering operation, the information removed by the Gaussian fitting filtering operation affects the classification accuracy of the first neural network more, since it has a higher impact level. As such, while generating the experimental data in the future, the information with higher impact levels serves as guidance for high-sensitivity image analysis of the quantum system with the neural network. For example, the information with higher impact levels is kept and other information with lower impact levels is removed, so as to improve the efficiency and accuracy of the classification operation performed by the neural network. After the impact level of each piece of information is obtained, it can be known what information differs among the different classes, and experiments or analyses can be designed to explore these properties in the quantum system. In the quantum system of the embodiment, these properties are the contact and the density fluctuation, which are N dependent. That is, unlike cases without the provided method, the valuable physical property can be selected from many irrelevant properties more efficiently when handling/analyzing a quantum system with limited prior knowledge. It also validates which theoretical model (in this case, the Gaussian distribution or the Fermi-Dirac distribution) is more consistent with the experimental result.
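The impact-level determination of step S300 reduces to ranking the filtering operations by their accuracy drops, which can be sketched as follows (the numbers in the usage below are the illustrative 4%/69% values from the example above; names are illustrative):

```python
def rank_impact(standard_accuracy, filtered_accuracies):
    """Rank filtering operations by how much the removed information degrades
    classification accuracy: a larger drop from the standard accuracy means
    the removed information has a higher impact level."""
    drops = {name: standard_accuracy - acc
             for name, acc in filtered_accuracies.items()}
    return sorted(drops.items(), key=lambda kv: kv[1], reverse=True)

# e.g., standard accuracy 0.94; binned image 0.90 (4% drop),
# Gaussian fitting profile 0.25 (69% drop):
ranking = rank_impact(0.94, {"binned": 0.90, "gaussian_fit": 0.25})
```

The top-ranked entry identifies the filtering operation whose removed information matters most for classification.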
After the impact level of each piece of information preserved or removed by each of the filtering operations is determined, the processor 110 can further identify correlations between the thermodynamic observables and each piece of information preserved or removed by each of the filtering operations according to the determined impact levels, such that the correlations can be further focused on with analytical efforts. Specifically, first, by comparing the differences in classification accuracy as well as the images before and after filtering, the importance of the information preserved or removed is determined. For example, when the original image, the binned image and the radially averaged image are compared, it is determined that the azimuthal density fluctuation is very important for the classification, which means it should be N-dependent. Then, analytical efforts can be focused on the azimuthal density fluctuation to develop a scheme to extract the thermodynamic observables (compressibility) from the azimuthal density fluctuation of a single experimental image like
For example, the processor 110 may further determine which dominant feature classifies the spin multiplicity in the low-momentum regime. Based on the significant decrease of the accuracy with profiles being radially averaged in
To understand how the density fluctuations reveal the spin multiplicity, consider the fluctuation-dissipation theorem, by which the thermodynamic compressibility κ = (1/n²)(∂n/∂μ) can be measured through density fluctuations (i.e., atom number fluctuations), where n is the local density and μ is the local chemical potential. For repulsively interacting SU(N) fermions, it is known that the compressibility κ decreases with increasing spin multiplicity N as
where κ0 is the compressibility of an ideal Fermi gas and
Here, the atom number fluctuations are further suppressed by Pauli blocking in the degenerate regime, showing sub-Poissonian fluctuations, σN²/⟨N⟩<1.
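The sub-Poissonian criterion can be checked numerically. A minimal sketch with synthetic (not experimental) counts: for a Poissonian source the variance equals the mean, so the normalized fluctuation σN²/⟨N⟩ is 1, and suppression pushes it below 1.

```python
import numpy as np

# Sketch: testing for sub-Poissonian atom number statistics from per-bin
# counts. The synthetic data below are illustrative, not experimental.
def normalized_fluctuation(counts):
    """Return sigma_N^2 / <N>; equals 1 for Poissonian statistics."""
    counts = np.asarray(counts, dtype=float)
    return counts.var(ddof=1) / counts.mean()

rng = np.random.default_rng(0)
poissonian = rng.poisson(lam=450, size=2000)   # shot-noise-limited reference
suppressed = 450 + 0.5 * (poissonian - 450)    # variance artificially reduced 4x

print(normalized_fluctuation(poissonian))      # close to 1
print(normalized_fluctuation(suppressed))      # close to 0.25, sub-Poissonian
```

In the experiment the suppression would come from Pauli blocking rather than the artificial rescaling used here for illustration.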
In the performed experiment, an atomic sample ballistically expands from the harmonic trap, preserving the occupation statistics of the phase space during the expansion. Instead of repeatedly producing identical samples and monitoring a small region at a certain position, the relative atom number fluctuations can be extracted along azimuthal bins containing the same number of atoms on average (therefore resulting in equivalent optical density) within a single image, even though a grouping of ideally equivalent bins is challenging and the fluctuation measurement is susceptible to systematic variations. The successful classification of the spin multiplicity with NNs can provide a guidance to subsequently investigate the atom number fluctuations with conventional analysis.
To verify this less-pronounced feature, a series of bins containing ˜450 atoms on average is chosen in a line-of-sight integrated density profile along the azimuthal direction (as shown by
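The equal-population azimuthal binning can be sketched as follows. This is an illustrative implementation, not the claimed one: bin edges are placed on the cumulative atom number along the azimuthal angle of a ring of pixels, so each bin holds approximately the same number of atoms; all names and the test image are hypothetical.

```python
import numpy as np

# Sketch: group pixels at a fixed distance from the cloud center into
# azimuthal bins containing approximately equal atom numbers.
def azimuthal_bins(image, center, r_in, r_out, atoms_per_bin):
    ny, nx = image.shape
    y, x = np.indices((ny, nx))
    dx, dy = x - center[0], y - center[1]
    r = np.hypot(dx, dy)
    ring = (r >= r_in) & (r < r_out)
    theta = np.arctan2(dy[ring], dx[ring])   # azimuthal angle of each ring pixel
    order = np.argsort(theta)
    counts = image[ring][order]              # atom number per pixel, sorted by angle
    cum = np.cumsum(counts)
    n_bins = max(1, int(cum[-1] // atoms_per_bin))
    # split the cumulative atom number into n_bins equal-population groups
    edges = np.searchsorted(cum, np.linspace(0, cum[-1], n_bins + 1)[1:-1])
    return np.split(counts, edges)           # list of per-bin pixel counts

# Illustrative uniform ring: every bin then holds a similar number of atoms.
img = np.ones((101, 101))
bins = azimuthal_bins(img, center=(50, 50), r_in=20, r_out=30, atoms_per_bin=450)
totals = [b.sum() for b in bins]
```

The per-bin statistics (e.g., the normalized fluctuation of `totals` over many bins) would then feed the compressibility analysis described in the text.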
Therefore, relative atom fluctuations are normalized by the temperature as
to reveal SU(N) interaction effect from a single snapshot. In
with the uncertainty represented by the shaded region considering the SE of ζSU(1). The inset shows the distribution of the atom number per bin from three images for each spin multiplicity. The distribution is plotted around the average normalized by the degenerate temperature, (N−
Whereas the scaling of the measured density fluctuation with N is in good agreement with the theoretical prediction, the experimental results for SU(N>1) lie systematically below the theoretical ones. The discrepancy may be due to interactions that remain finite during the expansion, which could slightly perturb the occupation statistics of the phase space. Considering the fact that the change of the compressibility is not significant for N=5 and 6 in
Referring to
The bins are selected within single images along azimuthal directions as shown in
If the bin is too close to the center of the atom cloud, the pixel number is not enough to calculate the density fluctuation; if it is too far away from the center of the atom cloud, the OD/atom number is too low.
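The radial-range rule above can be encoded as two cuts. A minimal sketch under stated assumptions: the thresholds `min_pixels` and `min_od`, the ring width, and the Gaussian test cloud are illustrative choices, not values from the embodiment.

```python
import numpy as np

# Sketch: keep only radii whose ring has enough pixels for statistics
# (inner cut) and enough optical density (outer cut). Thresholds are
# illustrative assumptions.
def usable_radii(image, center, radii, ring_width=2.0, min_pixels=200, min_od=0.05):
    ny, nx = image.shape
    y, x = np.indices((ny, nx))
    r = np.hypot(x - center[0], y - center[1])
    keep = []
    for r0 in radii:
        ring = (r >= r0) & (r < r0 + ring_width)
        n_pix = int(ring.sum())
        mean_od = float(image[ring].mean()) if n_pix else 0.0
        if n_pix >= min_pixels and mean_od >= min_od:  # enough statistics AND signal
            keep.append(r0)
    return keep

# Illustrative Gaussian cloud: small radii fail the pixel cut, large radii the OD cut.
yy, xx = np.indices((201, 201))
od = np.exp(-((xx - 100) ** 2 + (yy - 100) ** 2) / (2 * 30.0 ** 2))
good = usable_radii(od, center=(100, 100), radii=range(2, 100, 4))
```

Only an intermediate annulus survives both cuts, matching the qualitative rule stated in the text.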
For a further example (evaluating the detection accuracy with tunable masks), in order to examine the less-pronounced effects in both the low-momentum and high-momentum regions, and to build up concrete connections between these less-pronounced effects and the spin multiplicity, the processor 110 quantitatively analyzes the changes in the test accuracies when the cutoff momentum kc is tuned. It is clearly shown in
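The tunable mask can be sketched as a circular cut at the cutoff momentum kc. This is an illustrative sketch: after ballistic expansion, position maps to momentum, so a circular mask (in pixel units here, for simplicity) keeps either the low-momentum core or the high-momentum tail; the call to the trained NN is omitted, and all names are hypothetical.

```python
import numpy as np

# Sketch: mask a time-of-flight density image at a tunable cutoff momentum
# k_c. Feeding the masked test set to the trained NN and recording the
# accuracy as k_c is tuned isolates where the class-distinguishing
# information lives.
def momentum_mask(shape, center, k_c, keep="low"):
    y, x = np.indices(shape)
    r = np.hypot(x - center[0], y - center[1])
    inside = r <= k_c
    return inside if keep == "low" else ~inside

def apply_mask(image, mask):
    return np.where(mask, image, 0.0)

img = np.ones((64, 64))
low = apply_mask(img, momentum_mask(img.shape, (32, 32), k_c=10, keep="low"))
high = apply_mask(img, momentum_mask(img.shape, (32, 32), k_c=10, keep="high"))
# the two masks partition the image: together they restore the original
```

Sweeping `k_c` and re-evaluating the test accuracy on `low` and `high` versions of the dataset yields the accuracy-versus-cutoff curves discussed in the text.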
To scrutinize the effects of the SU(N) symmetric interactions, the NN is provided with altered images, and specific attributes of the profiles are probed independently. It is found that the high-momentum tail and the density fluctuation information significantly contribute to the SU(N) classification process. First of all, the high-momentum tails of the atomic-density distributions are expected to exhibit Tan's contact, which encapsulates the many-body interactions through a set of universal relations. Although the previous work required averaging of hundreds of images for the detection of the SU(N)-dependent contact, the NN's ability makes it possible to obtain single-image distinguishability of the SU(N) class after training.
The second dominant feature for the SU(N) classification is the density fluctuation within the profile. Both the temperature and the spin multiplicity are known to affect the atomic-density fluctuations through the change in compressibility. Sub-Poissonian density distributions have been observed in degenerate Fermi gases of atoms and molecules, where multiple images were used to obtain the statistics. The suppression of the density fluctuation was also observed in SU(N) fermions, allowing for the thermodynamic study. For a single image, there exist multiple sets of density fluctuation measurements at varying momentum, where each measurement forms a ring around the center of the distribution. Considering the decreased SU(N) classification accuracy from the radially averaged datasets, the fluctuation information might have been utilized, in addition to the contact, to reflect the effects of the compressibility. Lastly, it is found that the low-energy (low-momentum) part of the density profile does not exhibit a signature as strong as the previous two features.
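The radially averaged filter mentioned above, which removes exactly this azimuthal fluctuation information, can be sketched as follows. This is an illustrative implementation: each pixel is replaced by the mean over its (integer-binned) radius ring, preserving the radial profile while erasing azimuthal fluctuations; names and the test image are hypothetical.

```python
import numpy as np

# Sketch: radial-averaging filter. Pixels on the same ring become identical,
# so azimuthal density-fluctuation information is removed while the radial
# profile is preserved.
def radially_average(image, center):
    y, x = np.indices(image.shape)
    r = np.hypot(x - center[0], y - center[1]).astype(int)  # integer radius bins
    sums = np.bincount(r.ravel(), weights=image.ravel())
    counts = np.bincount(r.ravel())
    ring_mean = sums / counts
    return ring_mean[r]                                     # broadcast means back

rng = np.random.default_rng(1)
noisy = 1.0 + 0.1 * rng.standard_normal((65, 65))
smooth = radially_average(noisy, center=(32, 32))
# azimuthal fluctuations are gone: pixels on the same ring are now identical
```

Comparing the NN accuracy on such filtered images against the unfiltered baseline is what implicates the density fluctuation as a dominant classification feature.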
In conclusion, the capabilities of the proposed method are demonstrated by classifying SU(N) Fermi gases with their time-of-flight (TOF) density distributions. The NN provides classifications with an accuracy well beyond conventional methods such as principal component analysis. By applying different types of manipulations, it is also found that the NNs combine the features from a high-momentum signal and density fluctuations together to distinguish SU(N).
Based on the above, the provided method can be used to guide thermodynamic studies of the density profile of ultracold fermions interacting within the SU(N) spin symmetry prepared in a quantum simulator. Although such spin symmetry should manifest itself in a many-body wavefunction, it is elusive how the momentum distribution of fermions, the most ordinary measurement, reveals the effect of the spin symmetry. Using the provided fully trained convolutional neural network (NN) with a remarkably high accuracy of ˜94% for detection of the spin multiplicity, how the accuracy depends on various less-pronounced effects can be investigated with filtered experimental images. Guided by the provided method, the thermodynamic compressibility can be directly measured from density fluctuations within a single image. Furthermore, the provided method shows a potential to validate theoretical descriptions of SU(N) Fermi liquids, and to identify less-pronounced effects even for highly complex quantum matter with minimal prior understanding.
The above exemplary embodiments and operations serve only as illustrations of the present invention; an ordinarily skilled person in the art will appreciate that other structural and functional configurations and applications are possible and readily adoptable without undue experimentation and without deviation from the spirit of the present invention.
The functional units of the apparatuses and the methods in accordance to embodiments disclosed herein may be implemented using computing devices, computer processors, or electronic circuitries including but not limited to application specific integrated circuits (ASIC), field programmable gate arrays (FPGA), and other programmable logic devices configured or programmed according to the teachings of the present disclosure. Computer instructions or software codes running in the computing devices, computer processors, or programmable logic devices can readily be prepared by practitioners skilled in the software or electronic art based on the teachings of the present disclosure.
All or portions of the methods in accordance to the embodiments may be executed in one or more computing devices including server computers, personal computers, laptop computers, mobile computing devices such as smartphones and tablet computers.
The embodiments include computer storage media having computer instructions or software codes stored therein which can be used to program computers or microprocessors to perform any of the processes of the present invention. The storage media can include, but are not limited to, floppy disks, optical discs, Blu-ray Disc, DVD, CD-ROMs, and magneto-optical disks, ROMs, RAMs, flash memory devices, or any type of media or devices suitable for storing instructions, codes, and/or data.
Each of the functional units in accordance to various embodiments also may be implemented in distributed computing environments and/or Cloud computing environments, wherein the whole or portions of machine instructions are executed in distributed fashion by one or more processing devices interconnected by a communication network, such as an intranet, Wide Area Network (WAN), Local Area Network (LAN), the Internet, and other forms of data transmission medium.
The foregoing description of the present invention has been provided for the purposes of illustration and description. It is not intended to be exhaustive or to limit the invention to the precise forms disclosed. Many modifications and variations will be apparent to the practitioner skilled in the art.
The embodiments were chosen and described in order to best explain the principles of the invention and its practical application, thereby enabling others skilled in the art to understand the invention for various embodiments and with various modifications that are suited to the particular use contemplated.
Number | Date | Country
---|---|---
63214249 | Jun 2021 | US