UNIVERSAL OUTLIER DETECTION AND CURATION OF MISSING OR NOISY DATA POINTS

Information

  • Patent Application
  • Publication Number
    20230070637
  • Date Filed
    August 11, 2022
  • Date Published
    March 09, 2023
  • Inventors
    • PAL; Prasanta (Shrewsbury, MA, US)
Abstract
A method includes receiving data, flagging outliers in the received data, each of the outliers representing extreme data values in the received data that stand out greatly from an overall pattern of data values at a given scale in the received data, storing the flagged outliers in an outlier file, removing the flagged outliers from the received data, curating the flagged data points with respect to various neighborhood kernel sizes, and further curating data points with small perturbations using contextual custom regression techniques as a second order refinement process.
Description
STATEMENT REGARDING GOVERNMENT INTEREST

None.


BACKGROUND OF THE INVENTION

The invention generally relates to raw signal acquisition, and in particular to a generic suite for outlier detection and curation of missing or noisy, perturbed data points.


In general, real-world signal acquisition through sensors is at the heart of the modern digital revolution. However, almost every signal acquisition system is contaminated with noise and outliers. Precise detection and curation of data is an essential step to reveal the true nature of the uncorrupted observations. With the exploding volumes of digital data sources, there is a critical need for a robust, scalable yet easy-to-operate, low-latency, generic yet highly customizable outlier-detection and curation tool that is easily accessible and adaptable to diverse types of data sources. Existing methods often boil down to data smoothing or rejection of bad data points, which inherently causes valuable information loss as well as distortion of good data points. Our key observation is that pristine information retrieval from raw data based on multiscale regression models is a different class of problem than traditional averaging-measure-based data smoothing techniques.


SUMMARY OF THE INVENTION

The following presents a simplified summary of the innovation in order to provide a basic understanding of some aspects of the invention. This summary is not an extensive overview of the invention. It is intended neither to identify key or critical elements of the invention nor to delineate the scope of the invention. Its sole purpose is to present some concepts of the invention in a simplified form as a prelude to the more detailed description that is presented later.


In an aspect, the invention features a method including receiving data, flagging outliers in the received data, each of the outliers representing extreme data values in the received data that stand out greatly from an overall pattern of data values at a given scale in the received data, storing the flagged outliers in an outlier file, removing the flagged outliers from the received data, curating the flagged data points with respect to various neighborhood kernel sizes, and further curating data points with small perturbations using contextual custom regression techniques as a second order refinement process.


In another aspect, the invention features a method including, in a computer system having at least a processor and a memory, receiving contaminated data having a matrix-like structure or time-series structure that can be transformed into ASCII format to enable easy arithmetic or algebraic operations, the contaminated data comprising real data, background noise and outlier data, and curating the contaminated data in the time domain to produce filtered curated data and outlier data.


In still another aspect, the invention features a method including, in a computer system having at least a processor and a memory, receiving contaminated data having a matrix-like structure in ASCII format, the contaminated data comprising real data, background noise and outlier data, and curating the contaminated data in the spatial and, as appropriate, temporal domains to produce filtered curated data and outlier data.


These and other features and advantages will be apparent from a reading of the following detailed description and a review of the associated drawings. It is to be understood that both the foregoing general description and the following detailed description are explanatory only and are not restrictive of aspects as claimed.





BRIEF DESCRIPTION OF THE DRAWINGS

These and other features, aspects, and advantages of the present invention will become better understood with reference to the following description, appended claims, and accompanying drawings where:



FIG. 1 is a schematic.



FIG. 2 is a flow diagram.



FIG. 3 is another schematic.



FIG. 4 is a block diagram of an exemplary system.



FIG. 5 is another flow diagram.





DETAILED DESCRIPTION

The subject innovation is now described with reference to the drawings, wherein like reference numerals are used to refer to like elements throughout. In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the present invention. It may be evident, however, that the present invention may be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form in order to facilitate describing the present invention.


The present invention is an information separation tool used, for example, to decontaminate time-series and matrix-like data sources, with the goal of recovering the ground truth. The tool decontaminates raw data through conditional flagging of outliers and schema-based curation of the flagged data-points, followed by recovery of the asymptotically converged ground truth. This method immensely enhances the accuracy of a wide range of traditional data processing tasks.


The data-driven world of science, technology and medicine is so deluged by countless engines of digital data collection that clarity is often profoundly blurred by the glut of over-information. More than ever, the pursuit of information clarity matters far more than mere volume. Clarity often comes through curation of the source-level raw input data (𝒟) originating at transducers from a set of digital sensors (⊙).


Data collected from natural systems are often contaminated by various artifacts, background noise and extremely deviant outliers that obfuscate the clarity of the underlying, presumably pristine, nature of the observed system. Noise generally refers to a systematic perturbation on observed measurements, whereas outliers or artifacts are ad-hoc interruptions occurring at discrete time points (or spatial locations), or cases that deviate significantly from a signal's central trends at a given scale (time, space, etc.). Typically, outliers are defined as data points deviating by more than a set number of standard deviations (e.g., often 3.0 for a normal distribution) from a signal's central tendency measures (e.g., mean, median, mode, etc.). As a real-world example, the sound generated by ongoing traffic on an otherwise quiet street can be considered background noise, whereas the sounds emitted by honks at discrete time points would be considered outliers with respect to the otherwise natural equilibrium state of quietness. By contrast, the constant beeping of a cardiac pulse monitor could be considered a signal of interest 𝒮, and any interruption of this rhythmic signal would be considered an outlier with respect to the natural rhythmic pulses. As a simple working definition, we consider steady-state signals over a well-defined period of time (or spatial range) as the signal 𝒮, and any significant deviation from the central tendency measure as outliers.


The present invention is a method by which outlier detection and raw-data curation from noise and perturbations are facilitated for a wide variety of data types, both offline and in real-time scenarios. There are two basic steps to achieve this goal: (a) flagging (or detection) of the outliers and (b) curation of the flagged data-points. Optionally, the flagged (and subsequently curated) as well as the non-flagged data points are further curated at all reasonable scales using contextual custom regression techniques as a second-order refinement process. This is applied after large outliers have been curated, or when large outliers are not present, and is based on custom scale-based analytical models that lead to high-definition sampling of the underlying data.


During regular data-acquisition scenarios, the recording is typically done with a specified sampling rate (Ω); for EEG-like sensors (e.g., an array or montage of sensors), representative values of Ω are 256, 512, 1024, 2048, etc. In our case-studies involving time-series data (e.g., EEG), it is set to Ω = 2048. The source data 𝒟 is read in a block-wise fashion and temporarily stored in a data-container (𝒬) built on the deque data structure. For simplicity, a data-block (or the scale) is represented as B, and 𝒩_B denotes the number of samples in B. The size of B is represented by 𝒩_B^in and is taken as an input parameter through a configuration-file. In practical situations, the parameter values would typically be set from domain knowledge of the system (e.g., its spatial or temporal scales) or from the nature of the specific curation task. For example, to curate a long signal train for very short time-scale chirps, 𝒩_B^in would be only a few sample-lengths, comparable to a few times the sample-length of the chirp. On the other hand, for curation of a long epoch, 𝒩_B^in would scale with the length of the epoch.
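
To make the parameter choice concrete, the following minimal sketch (illustrative only; the 0.25 s time-scale, the sample count and the variable names are assumptions, not values from this disclosure) derives 𝒩_B^in from a target interrogation time-scale and Ω, and counts the resulting data-blocks per Equation (1) below:

    #include <cstdio>

    int main() {
        const double omega = 2048.0;   // sampling rate Ω (Hz), as in the EEG case-study
        const double dt    = 0.25;     // target interrogation time-scale δt in seconds (assumed)
        const long   nb_in = static_cast<long>(dt * omega);   // block size N_B^in in samples
        const long   n     = 1000000;  // effective samples N in the source (assumed)

        // number of blocks: floor(N / N_B^in), plus one partial block if a remainder exists
        const long sigma_b = n / nb_in + (n % nb_in != 0 ? 1 : 0);
        std::printf("N_B^in = %ld samples, data-blocks = %ld\n", nb_in, sigma_b);
        return 0;
    }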


We read the data from 𝒟 in a block-by-block fashion through the entire 𝒟. The total number of data-blocks (Σ_B) is approximately proportional to the number of effective samples 𝒩 present in the data-source. For given values of 𝒩 and 𝒩_B^in, the following relationship holds.












Σ_B = floor(𝒩 / 𝒩_B^in) + min(1, 𝒩 mod 𝒩_B^in)        (1)

where the kth data-block is represented as B_k.



𝒩_{B_k} is effectively the interrogation time-scale δt_k for B_k, upon which the median and MAD (or, equivalently, the mean and standard deviation) parameters are calculated.


The following relationship holds










δt_k = 𝒩_{B_k} / Ω        (2)




Typically, an outlier curation routine is performed by choosing a value of δt_k, and the curated output is obtained for that time-scale; for example, with Ω = 2048, a block of 𝒩_{B_k} = 2048 samples corresponds to δt_k = 1 s. We discuss only the temporal aspects of the operation; the spatial aspects follow a similar logic. A more sophisticated method is to perform outlier curation at various scales in a sequential order. Importantly, as a general recommendation, shorter time-scale outliers should be removed first, followed by relatively longer ones. Often, the system underlying 𝒟 is continuous in nature, and ad-hoc division of a continuous system into discrete blocks may introduce cosmetic discontinuities of trends in the outcome-measures of consecutive data-blocks. To mitigate this kind of processing artifact, we introduce forward (B_f) and backward (B_b) overlap windows (or neighborhood windows), data-blocks whose sizes are usually a percentage (~25%-50%) of 𝒩_B^in. So the effective data-block length, including the overlap-windows in the forward and backward directions (B_k → B̂_k), satisfies










𝒩_{B̂_k} = 𝒩_{B_k} + 𝒩_{B_f} + 𝒩_{B_b}        (3)




All B_k's are uniform by design choice, except near the start and end of the data stream, due to natural nuances around data boundaries (e.g., the number of samples may not be an integer multiple of 𝒩_B^in).


Robustness is intrinsically built into the method by incorporating relatively robust central measures, e.g., the median (μ̂) and the median absolute deviation (σ̂, or simply MAD), or equivalently an inter-quantile distribution of appropriate order. These measures are minimally influenced by the presence of extreme outliers, and quantiles address asymmetries in the distribution of signal values. We demonstrate a simple case study with median and median absolute deviation measures. We define the modified z-score for value x_k as below.











Ẑ_k = (x_k − μ̂) / σ̂        (4)




In the flagging step, if for a given data point k the condition (|Ẑ_k| ≥ λ) is satisfied, x_k (or its location) is flagged (or labeled) as an outlier-candidate and staged to be curated in the subsequent steps if additional constraints (specific to the curation) are satisfied. Curation is not mandatory, and the program may halt at the flagging stage if necessitated by the user. The outlier flags (denoted by NaNs, or Not a Number) by themselves carry important information about the underlying system. Then, using the non-NaN values, we create a linear-regression interpolation model (or any higher order as appropriate) to replace the NaN values (flagged data points), constructing the model from the data-points in the neighborhood of the flagged points. As a vanilla case, we take advantage of the Armadillo library's built-in interp1 function, which is optimized to fill in the flagged values from the unflagged neighborhood data points. interp1 additionally extrapolates values that lie outside the domain on which the model is built.
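
For concreteness, the flagging-plus-curation of a single data-block might be sketched as follows (a minimal illustration, not the actual implementation; curate_block, the degenerate-MAD guard and the early-return conditions are our assumptions):

    #include <armadillo>

    // Flag outliers in one data-block via median/MAD (Eq. (4)) and curate the
    // flagged points with Armadillo's interp1, built from unflagged neighbours.
    arma::vec curate_block(const arma::vec& x, double lambda) {
        const double mu = arma::median(x);               // robust centre, μ̂
        double mad = arma::median(arma::abs(x - mu));    // robust spread, σ̂ (MAD)
        if (mad == 0.0) mad = 1.0;                       // guard for degenerate blocks (assumption)

        const arma::vec  z    = (x - mu) / mad;          // modified z-score, Eq. (4)
        const arma::uvec bad  = arma::find(arma::abs(z) >= lambda);   // flagged locations
        const arma::uvec good = arma::find(arma::abs(z) <  lambda);   // unflagged neighbours
        if (bad.is_empty() || good.n_elem < 2) return x; // nothing to curate

        const arma::vec t  = arma::regspace(0.0, double(x.n_elem) - 1.0);  // sample indices
        const arma::vec tg = t.elem(good), xg = x.elem(good), tb = t.elem(bad);

        arma::vec xi;                                    // curated values at flagged points
        arma::interp1(tg, xg, tb, xi, "linear");         // linear interpolation over the gaps
        arma::vec out = x;
        out.elem(bad) = xi;                              // replace the flagged (NaN) points
        return out;
    }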


Optionally, the default software settings can be extended, through built-in static initializers, to additional outlier-detection modalities based on other statistical measures such as the arithmetic mean, harmonic mean, or mode, or on custom criteria specific to the underlying data-curation task and guided by specific domain constraints (e.g., for images, black backgrounds can be ignored if needed). The method includes a modern CPU-architecture-sensitive, multi-threaded data-IO mechanism, coupled with appropriate data-type conversions, matrix transformations, flagging, curation and iteration processes. Efficient availability of data from the source 𝒟 is facilitated by the multi-threaded environment through lock-based, mutex-enabled IO-synchronization mechanisms. The above steps can be performed recursively for a certain number of iterations (ℐ). Once the targeted iterations are finished (or on any other run-time condition), the flagging and curation steps may continue by piping the data from the last curated output as input, potentially with a different parameter tuple, e.g., (λ, 𝒩_B^in, ℐ). We call this cascading with a parameter tuple. The method also allows multiple processing cascades (𝒞_k) to be sequentially staged by cascading flagging, curation and iteration steps in a serial fashion. FIG. 1 describes the steps for a given cascade level; the number of cascades needed for a specific curation objective can be defined through the configuration file. More specifically, FIG. 1 illustrates a schematic representation of two consecutive curation cascades C_k and C_(k+1) applied in sequence.


ASCII data is read from 𝒟 in floating-point (or double-precision) format. The default data arrangement is presumed to be column-wise in the sensor space and row-wise in the temporal space (or in matrix form, in the case of image-like structures). Each row is an individual record of the uniformly sampled data. Traditionally, the data may contain a few metadata header-lines at the beginning (e.g., file type, channel description, Ω, etc.); depending upon the circumstances, the header information may be utilized for calibration (e.g., sampling rate, image dimensions, etc.) or discarded from processing, with the zero time-stamp starting at the first sample of the actual data after the header. In principle, there may also be more columns present in 𝒟 than those deemed relevant for processing, such as auxiliary information, time-stamp or serial-number channels, and these may likewise be discarded by setting appropriate boundaries of the data through the configuration-file. The curation is performed on the effective samples (𝒩) that are candidates for actual processing (the ROI); this is a sub-matrix of the original 𝒟.


The underlying design principles are easy availability (open-source), flexibility, scalability (it can handle large data from diverse sources), adaptability (various input data formats, new filter implementations), low latency, and robustness (it works even in the presence of extreme data values). It can be used either as a stand-alone tool for offline processing or as a plugin to real-time systems through file or streaming interfaces. The flexibility is achieved by parametric curation of 𝒟, where important parameters (e.g., input file, config file, etc.) are fed into the software through a combination of command-line options. Additional parameters are taken from a configuration file designed using the JSON structure. The command-line options and JSON parsing are done using the open-source cxxopts and JsonCpp libraries, respectively.
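
A hypothetical configuration file in this style (every key name and value below is an assumption for illustration; the actual schema is defined by the software) might look like:

    {
      "sampling_rate": 2048,
      "block_size": 512,
      "overlap_percent": 25,
      "lambda": 3.0,
      "iterations": 2,
      "cascades": [
        { "lambda": 4.0, "block_size": 64 },
        { "lambda": 3.0, "block_size": 1024 }
      ]
    }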


A highly efficient matrix-data processing mechanism, facilitated by the Armadillo linear-algebra library, is at the heart of the software implementation. From the input raw data matrix (𝒟) it is trivial to create a reference sub-matrix ℛ containing only the absolutely necessary chunk of data to be curated by the method's kernel. ℛ is constructed from 𝒟 by taking into account appropriate row and column index ranges. The input data is presumed to be arranged in a columnar manner, separated by standard field separators (e.g., space, comma, etc.), where each column represents time-series data from an individual sensor and each row is a single snapshot record of the measurement. As illustrated in FIG. 2, the operations on 𝒟 are divided into two primary categories based on the operational domain of the data: a) time-domain and b) spatial-domain, where contaminated time-series data is taken as an input field having a matrix-like structure in ASCII format. The method curates the data either in the time domain or in the spatial domain (or image) to produce filtered curated data while separately storing the outlier data-points as well.
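
As an illustration of the reference-based view (a sketch only; the file name and ROI boundaries are assumed values that would normally come from the configuration-file):

    #include <armadillo>

    int main() {
        arma::mat D;
        D.load("raw_input.txt", arma::raw_ascii);   // hypothetical ASCII input matrix

        // Example ROI: skip 3 header rows, drop the first (time-stamp) column
        // and two trailing auxiliary columns -- boundaries are assumptions.
        const arma::uword r0 = 3, c0 = 1;
        const arma::uword r1 = D.n_rows - 1, c1 = D.n_cols - 3;

        // submat() returns a no-copy view: this subtraction modifies the ROI
        // inside D directly, without duplicating the underlying data.
        const double med = arma::median(arma::vectorise(D.submat(r0, c0, r1, c1)));
        D.submat(r0, c0, r1, c1) -= med;
        return 0;
    }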


More specifically, there are two sets of output for each pass of curation, i.e., curated data and outlier data. Both curated data and outlier data carry distinct information about the data.



FIG. 3 is one example data source from a brain electroencephalogram (EEG) and illustrates the spatial arrangement of EEG sensors, where the data points are generated from sensors placed on the head-model of standard EEG data-acquisition systems; the solid arrowheads represent the Cartesian coordinate system (x, y, z) and the tips of the dashed arrows indicate representative outlier data points. The x (y) direction points towards the right (left) ear and the z direction towards the top of the head. Due to the naturally hemispherical geometry of the human-head model, any individual sensor or group of sensors may produce outlier data, but it can be recovered in the context of 𝒟 by constructing a sensor proximity map from the inter-geodesic distances between sensors. In principle, the inter-geodesic distances can be replaced by any other suitable distance measure, e.g., inter-sensor dynamic correlation, functional proximity (how functionally close two sensor locations are), etc. When spatial curation is under consideration, this map is optionally taken as an input from the configuration-file. For brevity, we discuss the time-domain operations only; the spatial-domain operations are done in an equivalent fashion except for a few additional elements like the sensor proximity map.

The scalability of the processing load to available resources was ensured by utilizing a multi-threaded processing framework, paired with block-by-block data-reading, processing and asynchronous storage protocols. At the initialization stage of the software, a thread-pool (𝒯) of size 𝒩_T is allocated so that computation can be delegated to various hardware threads when necessary, while avoiding expensive thread-creation and destruction operations. Generally, 𝒩_T scales with the number of CPU cores of the host device. A run-time data-block container 𝒬, internally using the deque data structure, is maintained as a container of B_k's throughout the lifetime of the data-processing task; it acts as the interface between 𝒟 and the data-processing module while reducing the data-loading time from 𝒟 by making the data-blocks readily available in hardware memory. Before the data-processing operation starts, 𝒬 is initialized to a filled state of size 𝒩_Q ≥ 3, whereas the maximum size of 𝒬 is determined by the memory footprint of the software as pre-defined in the configuration-file. The front element of 𝒬 is popped for processing while an asynchronous request is made to read the next data-block from 𝒟, to be pushed (or en-queued) into the partially empty 𝒬. Before each push or pop operation, 𝒬 is locked with a mutex. This way, the wait time to read data from 𝒟 is avoided while the processing happens simultaneously on a separate thread of 𝒯. The following schematic sequence describes the different states of the data-block container: from the initially empty 𝒬, through complete filling, to consecutive popping and pushing of B_k's until all the data is exhausted. It is to be noted that this is an in-memory architecture, and the same can be accomplished with the help of a standard database for cloud applications.










𝒬: { } → {B_1} → {B_1, B_2, ..., B_Q} → (pop B_1, push B_{Q+1}) → {B_2, ..., B_{Q+1}} → ... → { }
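
A condensed sketch of this mutex-guarded container follows (our illustration; the class name, the condition-variable signalling and the use of C++17 std::optional are assumptions beyond the disclosure):

    #include <condition_variable>
    #include <deque>
    #include <mutex>
    #include <optional>
    #include <vector>

    using Block = std::vector<double>;   // one data-block B_k

    class BlockQueue {                   // the run-time container Q
        std::deque<Block> q_;
        std::mutex m_;
        std::condition_variable cv_;
        bool done_ = false;
    public:
        void push(Block b) {             // reader thread en-queues the next block
            { std::lock_guard<std::mutex> lk(m_); q_.push_back(std::move(b)); }
            cv_.notify_one();
        }
        void finish() {                  // signal that the data source is exhausted
            { std::lock_guard<std::mutex> lk(m_); done_ = true; }
            cv_.notify_all();
        }
        std::optional<Block> pop() {     // processing thread pops the front block
            std::unique_lock<std::mutex> lk(m_);
            cv_.wait(lk, [&] { return !q_.empty() || done_; });
            if (q_.empty()) return std::nullopt;
            Block b = std::move(q_.front());
            q_.pop_front();
            return b;
        }
    };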




The output as well as the outlier instances are asynchronously stored through an output data-sink (𝒪) interface that can optionally be piped to a data file or a streamer (e.g., lab streaming layer (LSL)). The adaptability to variations in data formats is achieved by first reading each line of the input data as a fresh record in the form of a string. String being one of the most universal data types, any kind of ASCII data file can be read in a line-by-line fashion. The data parsing is then done by utilizing the data-domain boundaries that reflect the actual experimental details of 𝒟. As a concrete example, we used EEG data samples collected from Biosemi systems, where the first column usually has time-stamp information, the middle columns have sensor data, and the last few columns may have auxiliary or external trigger information. In most practical scenarios, only a subset of these data columns is relevant, and our method parses it as per the data boundaries set through the config file. The low-latency and efficiency goals were achieved by using highly optimized, low-level, multi-threaded, compiled C++ with modern standards (std ≥ 11), which intrinsically support minimization of expensive data copies through the syntax introduced in standard 11. Through move semantics, reference passing and perfect forwarding, the need for expensive data copying was partially if not completely eliminated, and every unique piece of data is guaranteed to reside in a single memory location. It is to be noted that the introduction of move semantics, perfect forwarding and native language-level multi-threading enabled a very powerful interface for building efficient, close-to-metal, real-time modern software. Parallelized OpenMP loops were used in appropriate code-blocks to make it even more efficient. Additionally, the multi-purpose linear-algebra library Armadillo [49] has been heavily utilized to facilitate a diverse variety of highly complex but efficient matrix and data-transformation operations. One big advantage of the Armadillo library is that it creates a natural interface to form sub-matrix views using references, such that operations on a sub-matrix do not require expensive data-copy operations. This has a significant impact on the efficiency of the method, because 𝒟 is often imported either from a file on the hard drive or from a live-stream that, in its raw form, belongs to a higher-dimensional sensor space than the subspace of the matrix where the data of interest belongs.
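
A sketch of this line-by-line parsing (the column layout follows the Biosemi example above; the function name and parameters are assumptions):

    #include <fstream>
    #include <sstream>
    #include <string>
    #include <vector>

    // Read whitespace-separated ASCII records, dropping header lines and
    // keeping only the sensor columns [first_col, last_col].
    std::vector<std::vector<double>> read_ascii(const std::string& path,
                                                std::size_t first_col,
                                                std::size_t last_col,
                                                std::size_t header_lines) {
        std::vector<std::vector<double>> rows;
        std::ifstream in(path);
        std::string line;
        for (std::size_t n = 0; std::getline(in, line); ++n) {
            if (n < header_lines) continue;            // discard metadata header records
            std::istringstream ss(line);
            std::vector<double> row;
            std::string field;
            for (std::size_t c = 0; ss >> field; ++c)  // each record is read as strings first
                if (c >= first_col && c <= last_col)
                    row.push_back(std::stod(field));
            if (!row.empty()) rows.push_back(std::move(row));
        }
        return rows;
    }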


Without the reference-based sub-matrix view, we would have to perform expensive and memory-intensive copy operations. Functional operations and transformations are performed on a reference-based sub-matrix structure of the imported 𝒟 in matrix form. In the case of streaming data, we added an interface to stream the output to outgoing data-stream clients. In one embodiment, we implemented only the outgoing data-streaming, identified by the name and type of the stream; these parameters can be trivially modified in the configuration-file as needed for specific situations. The incoming data-stream can be easily implemented, but we skip it in the current version for the sake of simplicity. It is to be noted that output from the method described above (e.g., curated data) can be further processed internally using traditional auxiliary filters (e.g., Gaussian, Poisson, etc.). For illustration purposes, we included an implementation of a half-Gaussian filter that can be used optionally by turning appropriate flags on or off.
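
One plausible form of such a half-Gaussian pass is sketched below (an illustration only: the one-sided kernel shape, its normalization and the centred "same" alignment are our assumptions; the disclosure names only the filter type):

    #include <armadillo>
    #include <cmath>

    // Smooth a curated signal with a normalised one-sided (half) Gaussian kernel.
    arma::vec half_gaussian_filter(const arma::vec& x, double sigma, arma::uword radius) {
        arma::vec k(radius + 1);
        for (arma::uword i = 0; i <= radius; ++i)      // one-sided Gaussian lobe
            k(i) = std::exp(-0.5 * (double(i) / sigma) * (double(i) / sigma));
        k /= arma::accu(k);                            // unit-gain normalisation
        return arma::conv(x, k, "same");               // filtered output, same length as x
    }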


Most, if not all, time-series data are intrinsically multi-scale in nature. EEG and similar data-sources, under equilibrium measurement settings, may become intermittently polluted by outliers of varied time-scales. One practical problem is the reliable detection of these time-scale-specific outliers despite significant variations in their temporal length-scales (e.g., τ1, τ2 and τ3). One way to address this problem is to pre-determine a set of time-scales (τ ∈ {τ1, τ2, τ3}) to isolate the temporal spans of the outliers. The parameters of the curation task are adjusted appropriately for those time-scales, followed by the flagging, curation, iteration and cascading steps. Depending on the outcome objectives, users have complete freedom to determine how many curation cascades are appropriate. This can be performed by sequentially applying the flagging and curation algorithms for each time-scale, piping the output of the most recent elemental curation task to the input of the next curation step, as sketched below.
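
A sketch of that sequential, scale-by-scale pipeline (reusing the hypothetical curate_block from the earlier sketch; the non-overlapping blocks here are a simplification that omits the B_f/B_b overlap windows):

    #include <armadillo>
    #include <vector>

    arma::vec curate_block(const arma::vec& x, double lambda);  // from the earlier sketch

    // Apply flagging+curation at each time-scale in ascending order,
    // piping the output of one scale into the next (per Eq. (2), N_Bk = δt·Ω).
    arma::vec cascade_scales(arma::vec x, double omega,
                             const std::vector<double>& scales,  // e.g. {tau1, tau2, tau3}
                             double lambda) {
        for (double tau : scales) {
            const arma::uword nb = static_cast<arma::uword>(tau * omega);
            for (arma::uword s = 0; s + nb <= x.n_elem; s += nb)
                x.subvec(s, s + nb - 1) = curate_block(x.subvec(s, s + nb - 1), lambda);
            // x now holds the curated output at this scale; it becomes the
            // input for the next, longer time-scale.
        }
        return x;
    }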


The present invention can be used in multiple data-contexts beyond EEG time-series data, including audio time-series and image processing, where a raw image is transformed into a data-matrix of gray-scale pixel intensities (or, in the case of a color image, color-plane matrices). In the context of image processing, it is a very effective tool for performing both the flagging and curation steps, through iteration and cascading, by choosing an appropriate kernel radius (K) and λ-values.


The method described for time-series data is easily adapted to image-processing problems for filtering and segmentation tasks. All that is needed is to transform a raw image into a data-matrix using the OpenCV-based imread function, followed by conversion to an Armadillo matrix. The Armadillo library has a very advanced data-I/O facility associated with state-of-the-art matrix operations.
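
A minimal conversion sketch (assuming OpenCV and Armadillo are both available; error handling is elided, and the element-wise copy is one straightforward choice among several):

    #include <armadillo>
    #include <opencv2/opencv.hpp>
    #include <string>

    // Load an image as a matrix of gray-scale pixel intensities for curation.
    arma::mat image_to_matrix(const std::string& path) {
        const cv::Mat img = cv::imread(path, cv::IMREAD_GRAYSCALE);  // 8-bit intensities
        arma::mat M(img.rows, img.cols);
        for (int r = 0; r < img.rows; ++r)
            for (int c = 0; c < img.cols; ++c)
                M(r, c) = static_cast<double>(img.at<unsigned char>(r, c));
        return M;
    }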


As shown in FIG. 4, an exemplary system 400 includes a computer system 405, a mobile device 410, and a tablet computing device 415. The computer system 405, the mobile device 410, and the tablet computing device 415 are linked to a network 420 of interconnected computer systems (e.g., the Internet) via wired or wireless communication links. The network 420 includes at least one server 425. Each of the computer system 405, the mobile device 410, and the tablet computing device 415 includes at least a processor, a memory, an input/output (I/O) device and a storage device. Memories in each of the computer system 405, the mobile device 410, and the tablet computing device 415 include at least a process, sometimes referred to as an application (“App”), to enable interaction with the server 425.


The server 425 includes at least a processor 430, a memory 435, and a storage device 440. The memory 435 includes at least an operating system 445 and an outlier detection and curation suite process 450.


As shown in FIG. 5, the outlier detection and curation suite process 450 includes receiving (452) data (real data, background noise & outliers).


Process 450 flags (454) outliers and stores (456) them in an outlier file.


Process 450 removes (458) the flagged outliers from received data (i.e., curates data).


Process 450 determines (460) whether another pass is to be done.


If another pass is to be done, process 450 repeats the receiving, flagging and removal with the curated data.


If no other passes are warranted, process 450 stores (462) the curated data in a curated data file.


More specifically, steps 452 through 460 are repeated for a different set of curation flags (e.g., threshold, scale) by taking the curated data in step 460 as input and feeding it back to step 452 through a data feedback loop.


In summary, the present invention is a low-latency, robust method to curate large volumes of time-series data without any practical limitation on the size of the input data source (streaming or data files). After operating on the raw data, followed by the curated data, in an iterative and cascaded fashion, the output is separately stored in curated and outlier signal files for each iteration cycle within a given cascade.


It would be appreciated by those skilled in the art that various changes and modifications can be made to the illustrated embodiments without departing from the spirit of the present invention. All such modifications and changes are intended to be within the scope of the present invention except as limited by the scope of the appended claims.

Claims
  • 1. A method comprising: in a computer and network system having at least a processor, a memory, and sensor or data sources, receiving data; flagging outliers in the received data, each of the outliers representing extreme data values in the received data that stand out greatly from an overall pattern of data values at a given scale in the received data; storing the flagged outliers in an outlier file; removing the flagged outliers from the received data; curating the flagged data points with respect to various neighborhood kernel sizes; and further curating data points with small perturbations using contextual custom regression techniques as a second order refinement process.
  • 2. The method of claim 1 further comprising determining whether additional flagging of outliers is needed, where curated data is utilized as input to the next iteration of flagging and curation in a recursive fashion until discovery of new outliers becomes insignificant.
  • 3. The method of claim 2 wherein, if flagging of outliers is further needed with a different set of parameters, after receiving data with the removed flagged outliers: flagging additional outliers in the received data with respect to a new set of curation parameters; storing the additional flagged outliers in the outlier file as an ordered series of outliers at each flagging and curation step; removing the additional flagged outliers from the received data; and curating them further if needed.
  • 4. The method of claim 3 further comprising: storing the data without outliers in a curated data file as a final result or set of final results, depending on the depth and order of curation.
  • 5. A method comprising: in a computer system having at least a processor and a memory, receiving contaminated raw data transformed into a matrix-like structure in ASCII format or converted to raw data format, the contaminated data comprising real data, background noise and infused outlier data; and curating the contaminated data in the time or spatial domain to produce filtered curated data and outlier data with minimal information loss.
  • 6. The method of claim 5, wherein curating the contaminated data comprises: flagging outliers, each of the outliers representing extreme data values that stand out greatly from an overall pattern of data values at various scales; storing the flagged outliers in an outlier file; and removing the flagged outliers from the contaminated data.
  • 7. The method of claim 6 further comprising determining whether additional flagging of outliers is needed.
  • 8. The method of claim 7 wherein, if flagging of outliers is needed, receiving data with the removed flagged outliers; flagging additional outliers in the received data; storing the additional flagged outliers in the outlier file; and removing the additional flagged outliers from the received data.
  • 9. The method of claim 8 further comprising: storing the data without outliers in a curated data file.
  • 10. A method comprising: in a computer system having at least a processor and a memory, receiving contaminated data having a matrix-like structure in ASCII format, the contaminated data comprising real data, background noise and outlier data; and curating the contaminated data in the spatial domain to produce filtered curated data and outlier data.
  • 11. The method of claim 10, wherein curating the contaminated data comprises: flagging outliers, each of the outliers representing extreme data values that stand out greatly from an overall pattern of data values; storing the flagged outliers in an outlier file; and removing the flagged outliers from the contaminated data.
  • 12. The method of claim 11 further comprising determining whether additional flagging of outliers is needed.
  • 13. The method of claim 12 wherein, if flagging of outliers is needed, receiving data with the removed flagged outliers; flagging additional outliers in the received data; storing the additional flagged outliers in the outlier file; and removing the additional flagged outliers from the received data.
  • 14. The method of claim 13 further comprising: storing the data without outliers in a curated data file.
CROSS REFERENCE TO RELATED APPLICATIONS

This application claims benefit from U.S. Provisional Patent Application Ser. No. 63/232,253, filed Aug. 12, 2021, which is incorporated by reference in its entirety.

Provisional Applications (1)
Number Date Country
63232253 Aug 2021 US