This disclosure relates generally to the field of seismic prospecting for hydrocarbons and, more particularly, to seismic data processing. Specifically, the disclosure relates to a system and method for scalable and reliable scheduling of massively parallel computation of full wavefield inversion (“FWI”) of seismic data to infer a physical property model, such as a seismic wave propagation velocity model, of the subsurface.
Geophysical inversion attempts to find a model of subsurface properties that optimally explains observed data and satisfies geological and geophysical constraints. See, e.g., Tarantola, A., “Inversion of seismic reflection data in the acoustic approximation,” Geophysics 49, 1259-1266 (1984); and Sirgue, L., and Pratt, G., “Efficient waveform inversion and imaging: A strategy for selecting temporal frequencies,” Geophysics 69, 231-248 (2004). Acoustic wave propagation velocity is determined by the propagating medium, and hence it is one such physical property for which a subsurface model is of great interest. There are a large number of well-known methods of geophysical inversion, which may be classified as falling into one of two categories: iterative inversion and non-iterative inversion. Non-iterative inversion may be used to mean inversion that is accomplished by assuming some simple background model and updating the model based on the input data, without using the updated model as input to another step of inversion. For the case of seismic data, these methods are commonly referred to as imaging, migration (although typical migration does not update the model), diffraction tomography, or Born inversion. Iterative inversion may be used to mean inversion involving repetitious improvement of the subsurface properties model such that a model is found that satisfactorily explains the observed data. If the inversion converges, the final model will better explain the observed data and will more closely approximate the actual subsurface properties. Iterative inversion may generally produce a more accurate model than non-iterative inversion, but may also be much more expensive to compute.
One iterative inversion method employed in geophysics is cost function optimization. Cost function optimization involves iterative minimization or maximization of the value, with respect to the model M, of a cost function S(M) (sometimes also referred to as the objective function) selected as a measure of the misfit between calculated and observed data, where the calculated data are simulated with a computer using a current geophysical properties model and the physics governing propagation of the source signal in the medium represented by that model. (A computational grid subdivides the subsurface region of interest into discrete cells, and the model consists of assigning a numerical value for at least one physical property, such as seismic wave propagation velocity, to each cell. These numerical values are called the model parameters.) The simulation computations may be done by any of several numerical methods including, but not limited to, finite difference, finite element, or ray tracing.
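One common and illustrative choice of cost function (an example only, not the only possibility) is the least-squares data misfit, summed over sources, receivers, and time samples:

```latex
S(M) = \frac{1}{2} \sum_{g=1}^{N_g} \sum_{r=1}^{N_r} \sum_{t=1}^{N_t}
       \left[ \psi_{\mathrm{calc}}(M; g, r, t) - \psi_{\mathrm{obs}}(g, r, t) \right]^2
```

where g indexes the sources (gathers), r the receivers, t the time samples, and ψ_calc and ψ_obs denote the simulated and observed data, respectively.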
Cost function optimization methods may be classified as either local or global. See, e.g., Fallat, M. R., and Dosso, S. E., “Geoacoustic inversion via local, global, and hybrid algorithms,” Journal of the Acoustical Society of America 105, 3219-3230 (1999). Global methods may involve computing the cost function S(M) for a population of models {M1, M2, M3, . . . } and selecting a set of one or more models from that population that approximately minimize S(M). If further improvement is desired, this selected set of models may then be used as a basis to generate a new population of models that is again tested against the cost function S(M). For global methods, each model in the test population can be considered an iteration, or, at a higher level, each population tested can be considered an iteration. Well-known global inversion methods include Monte Carlo, simulated annealing, genetic, and evolution algorithms. Local cost function optimization may involve: (1) selecting a starting model M; (2) computer-simulating predicted data corresponding to the measured data and computing a cost function S(M); (3) computing the gradient of the cost function with respect to the parameters that describe the model; and (4) searching for an updated model, a perturbation of the starting model in the negative gradient direction in multi-dimensional model parameter space (called a “line search”), that better explains the observed data. This procedure may be iterated by using the updated model as the starting model and repeating steps (1)-(4). The process continues until an updated model is found that satisfactorily explains the observed data; typically, this is taken to be when the updated model agrees with the previous model to within a preselected tolerance, or when another stopping condition is reached. Commonly used local cost function inversion methods include gradient search, conjugate gradients, and Newton's method.
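For concreteness, the following is a minimal sketch (not the method of this disclosure) of local cost-function optimization steps (2)-(4) above. The routine `simulate` and the array `observed` are assumed to be supplied by the user, and the gradient is approximated by finite differences purely for illustration; practical FWI codes compute it with the adjoint method discussed below.

```python
import numpy as np

def finite_difference_gradient(f, m, eps=1e-6):
    """Brute-force gradient of scalar function f at model m (illustration only)."""
    g = np.zeros_like(m)
    for i in range(m.size):
        dm = np.zeros_like(m)
        dm.flat[i] = eps
        g.flat[i] = (f(m + dm) - f(m - dm)) / (2.0 * eps)
    return g

def local_inversion(m0, simulate, observed, max_iters=50, tol=1e-6):
    """Steepest descent with a backtracking line search (steps (2)-(4))."""
    def cost(m):                                    # step (2): misfit between data sets
        residual = simulate(m) - observed
        return 0.5 * np.sum(residual ** 2)

    m = np.array(m0, dtype=float)
    for _ in range(max_iters):
        grad = finite_difference_gradient(cost, m)  # step (3): gradient w.r.t. parameters
        step, current = 1.0, cost(m)
        while step > 1e-8:                          # step (4): line search along -gradient
            trial = m - step * grad
            if cost(trial) < current:
                m = trial
                break
            step *= 0.5
        else:
            break                                   # no improving step found; stop
        if np.linalg.norm(step * grad) < tol:       # successive models agree within tolerance
            break
    return m
```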
As discussed above, iterative inversion may be preferred over non-iterative inversion in some circumstances because it yields more accurate subsurface parameter models. Unfortunately, iterative inversion may be so computationally expensive that it is impractical to apply to some problems of interest. This high computational expense results from the fact that all inversion techniques require many compute-intensive forward and, sometimes, reverse simulations. Forward simulation means computation of the data forward in time, e.g., the simulation of data in step (2) above. Reverse simulation means computation of the data in discrete time steps running backward in time. Back-propagation of the waveform may be used when a particularly efficient method, called the adjoint method, is used to compute the gradient of the cost function. See Tarantola, A., “Inversion of seismic reflection data in the acoustic approximation,” Geophysics 49, 1259-1266 (1984). The compute time required for simulation is generally proportional to the number of sources to be inverted, and there may typically be large numbers of sources in geophysical data. By encoding the sources, multiple sources may be simulated simultaneously in a single simulation. See U.S. Pat. No. 8,121,823 to Krebs et al. The problem is exacerbated for iterative inversion because the number of simulations that must be computed is proportional to the number of iterations in the inversion, and the number of iterations required is typically on the order of hundreds to thousands.
Due to its high computational cost, iterative inversion typically requires a large high performance computing (“HPC”) system in order to reach a solution in a practical amount of time. As described above, a single iterative inversion can be decomposed into many independent components and executed on hundreds of thousands of separate compute cores of an HPC system, thus reducing time to solution. Problem decomposition can be achieved at several levels: for a single forward or reverse simulation, the model domain can be spatially divided among computational cores, with each core communicating only the information at the edge of its subdomain to neighboring cores. This is a common parallel speedup strategy used in iterative and non-iterative inversion, as well as in other scientific computing domains. See, e.g., Araya-Polo, M., et al., “3D Seismic Imaging through Reverse-Time Migration on Homogeneous and Heterogeneous Multi-Core Processors,” Scientific Programming 17(1-2), 185-198 (2009); Brossier, R., “Two-dimensional frequency-domain visco-elastic full waveform inversion: Parallel algorithms, optimization and performance,” Computers and Geosciences 37(4), 444-455 (2010). However, the scalability of such methods is eventually limited by the overhead of sharing edge information, since the computational work done by each core decreases as the number of subdomains is increased to grow the parallel speedup.
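As an illustration of this domain-decomposition strategy (a sketch under the assumption that mpi4py is available, not a description of any particular published code), a 1-D decomposition exchanges only one-cell-wide “halo” edges with neighboring ranks each time step:

```python
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_local = 100                                  # interior cells owned by this rank
field = np.zeros(n_local + 2)                  # two extra ghost cells for the halos

left = rank - 1 if rank > 0 else MPI.PROC_NULL          # boundary ranks talk to no one
right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

for step in range(10):
    # share only the subdomain edges with neighbors (the communication overhead
    # that ultimately limits scalability as subdomains shrink)
    comm.Sendrecv(sendbuf=field[1:2], dest=left, recvbuf=field[-1:], source=right)
    comm.Sendrecv(sendbuf=field[-2:-1], dest=right, recvbuf=field[0:1], source=left)
    # ... update interior cells field[1:-1] with a finite-difference stencil ...
```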
Another parallel speedup method involves dividing an iterative inversion across separate forward or reverse simulations for each source to be inverted. As there are large numbers of sources in geophysical data which can be modeled separately, this approach enables many independent simulations to be mapped onto an HPC system without scalability concerns due to communication overhead. For traditional non-iterative seismic inversion methods, this parallelization scheme is trivial to implement, as the sources can simply be divided into separate simulations and run to completion. See, e.g., Suh, S. Y., et al., “Cluster programming for reverse time migration,” The Leading Edge, 94-97 (January 2010). For iterative inversion, the results of the separate simulations for each source must be combined to determine the gradient of the cost function S(M) for the entire model area, as well as in the line search for the updated model, which requires efficient scheduling and load balancing among the independent simulations in order to execute large inversions efficiently. Due to low communication requirements, parallelization across geophysical sources exhibits excellent scalability and presents the main opportunity to implement iterative seismic inversions efficiently on massively parallel computing systems. (Scalability refers to the algorithm's ability to use more parallel processors on a single problem, with overall time to solution decreasing proportionally as more processors are added.)
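A minimal sketch of this source-parallel strategy (Python's standard library stands in for an HPC scheduler, and `simulate_shot_gradient` is a hypothetical user-supplied routine) shows why it scales so well: each shot is simulated independently, and only the per-shot gradients need to be combined afterwards:

```python
from concurrent.futures import ProcessPoolExecutor
import numpy as np

def invert_one_iteration(model, shots, workers=8):
    """Simulate every shot independently, then stack the per-shot gradients."""
    with ProcessPoolExecutor(max_workers=workers) as pool:
        per_shot = list(pool.map(simulate_shot_gradient,
                                 [(model, shot) for shot in shots]))
    return np.sum(per_shot, axis=0)    # the serial "combine" step the text mentions
```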
The two parallelization strategies described above, parallelization of a single source simulation by domain decomposition and parallelization across separate seismic sources, can be used concurrently to reduce time to solution. Attention is now turned to implementation methods for the second strategy, parallelization across separate seismic sources, for which several different implementation methods for achieving parallel speedup of iterative geophysical inversion have been published.
In typical practice, HPC systems use message passing (e.g., MPI) or shared memory (e.g., OpenMP, OpenACC, Cilk Plus, Intel Threading Building Blocks, Partitioned Global Address Space (PGAS) languages) techniques to decompose a scientific computing problem across many compute cores. This is a common method for parallelization of seismic simulations for a single geophysical source, and it can be used to parallelize across separate sources as well. However, this method has several disadvantages when applied to large-scale iterative seismic inversions, because parallelization via message passing or shared memory encapsulates the entire inversion as a single HPC job. On current HPC systems, failure of a single thread, MPI rank, or compute core may cause the entire job to fail, thereby losing all progress to date. Regular checkpointing to disk may therefore be required, because long geophysical inversion run times coupled with typical HPC failure rates make at least one failure likely to occur during a single inversion. Another limitation of encapsulation into a single job is that it does not allow the system-level scheduler to interject other work onto nodes that are temporarily unused during inversion. Since compute demands can drastically change during the course of iterative geophysical inversion (serial stages of
An alternative to the above approaches is to submit each separate geophysical source simulation as a separate HPC job, and to allow the system's batch job scheduling software (e.g., Moab Cluster Suite, SLURM, PBS Professional, Condor, etc.) to manage these jobs. Separate jobs would share information via files on a shared disk, and workflow support packages like the Condor Directed Acyclic Graph Manager (DAGMan) or Pegasus can coordinate complex workflows with many dependent tasks. This approach is simple to set up and implement, but it suffers from the slow scheduling times of batch scheduling software, which is typically not equipped to handle the tens of thousands to hundreds of thousands of separate jobs that a geophysical inversion may inject into the system each iteration. Additionally, batch schedulers tend to prioritize overall system utilization rather than overall workflow runtime, as they are designed to extract maximum use of an HPC system on which small numbers (e.g., tens) of large workloads are queued up to run.
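As an illustration of this per-job approach (a sketch only: `shot.sh` and `stack.sh` are hypothetical job scripts, and SLURM's sbatch is used as the example batch scheduler), one job is submitted per shot and a stacking job depends on all of them, which is precisely the flood of jobs that strains the batch scheduler at the scales described above:

```python
import subprocess

def sbatch(args):
    """Submit a job with sbatch --parsable and return its job id as a string."""
    out = subprocess.run(["sbatch", "--parsable"] + args,
                         capture_output=True, text=True, check=True)
    return out.stdout.strip().split(";")[0]      # --parsable prints "jobid[;cluster]"

shot_jobs = [sbatch(["shot.sh", str(shot)]) for shot in range(1000)]
sbatch([f"--dependency=afterok:{':'.join(shot_jobs)}", "stack.sh"])
```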
What is needed is a method to efficiently schedule and execute many separate geophysical source simulations within the context of a single iterative seismic inversion. Such a method should balance overall iteration runtime against system utilization, allow efficient scaling of the inversion to many cores in order to make processing of realistic seismic surveys possible, and provide reliability capabilities that enable inversion runs lasting multiple days or weeks. There are a variety of difficulties in delivering such a system. For example, at high levels of parallelism (100,000+ cores, where a core is an independent processor within the HPC system capable of executing a single serial part of a parallel program), keeping a consistent global picture of the computation state is expensive; collecting information across all the tasks can affect the overall performance of the system. Additionally, at such high levels of parallelism the hardware may be inherently unreliable, meaning that one of the parallel tasks is likely to fail over long execution times. Recovery from failure is not trivial, since collection of the global state needed for recovery is expensive (see the first difficulty above). Further, load balancing and smart scheduling may become increasingly important as the level of parallelism increases, because the “jitter” between task run times (e.g., the unevenness of tasks shown in
This disclosure concerns a system and method for scalable and reliable scheduling for massively parallel computing of iterative geophysical inversion of seismic data to infer a physical property model, such as a seismic wave propagation velocity model, of the subsurface. In one embodiment, the invention is a parallel computing system for full wavefield inversion (“FWI”) of seismic data, comprising: a pool of computational units, called workers, each programmed to operate independently of the others and to initiate and perform one or more computational tasks that arise in the course of the full wavefield inversion, and each having an input stage adapted to receive information needed to perform a task, and an output stage, wherein the full wavefield inversion is organized into a series of parallel computational stages and serial computational stages, and each parallel stage is divided into a plurality of parallel computational tasks sized for a worker according to a selected job scale; a central dispatcher processor programmed to (a) provide task queues where tasks are made available to the workers and where the computational units output the results of completed tasks, wherein dependencies between tasks are enforced, and (b) monitor and store information relating to a current state of the full wavefield inversion; and a controller, interconnected with the central dispatcher and programmed with an optimization algorithm that decides whether to halt a computational task that has not converged after a predetermined number of iterations.
In another embodiment, the invention is a seismic prospecting method for exploring for hydrocarbons, comprising: obtaining seismic survey data; processing the seismic survey data by full wavefield inversion to infer a subsurface model of velocity or another physical parameter, wherein the processing is performed on a system of parallel processors, called workers; dividing the full wavefield inversion into a sequence of parallel stages and serial stages, and defining one or more computational tasks to be performed at each stage, wherein the computational tasks are parallel tasks for a parallel stage; providing a pool of workers, each programmed to perform at least one of the computational tasks, and to perform them independently and without knowledge of the other workers, wherein a task is sized for a worker according to a selected job scale; providing a central dispatcher unit, being a processor programmed to maintain task queues, where tasks are placed for pickup by the workers and where completed tasks are returned by the workers, to enforce dependencies between tasks, and to monitor and store information relating to a current state of the full wavefield inversion; and providing a controller unit, interconnected with the central dispatcher and programmed with an optimization algorithm that decides whether to halt a computational task that has not converged after a predetermined number of iterations.
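For illustration only (names and thresholds are assumptions, not limitations of the claimed method), the controller's halting decision can be as simple as an iteration budget combined with a convergence test on the cost history:

```python
class Controller:
    """Decides whether to halt a task that has not converged within its budget."""

    def __init__(self, max_iterations=200, tol=1e-4):
        self.max_iterations = max_iterations    # predetermined number of iterations
        self.tol = tol                          # relative-change convergence tolerance

    def converged(self, cost_history):
        if len(cost_history) < 2:
            return False
        previous, latest = cost_history[-2], cost_history[-1]
        return abs(latest - previous) <= self.tol * abs(previous)

    def should_halt(self, iteration, cost_history):
        # halt a computational task that has used up its budget without converging
        return iteration >= self.max_iterations and not self.converged(cost_history)
```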
The advantages of the present invention are better understood by referring to the following detailed description and the attached drawings.
The invention will be described in connection with example embodiments. However, to the extent that the following detailed description is specific to a particular embodiment or a particular use of the invention, this is intended to be illustrative only, and is not to be construed as limiting the scope of the invention. On the contrary, it is intended to cover all alternatives, modifications and equivalents that may be included within the scope of the invention, as defined by the appended claims.
The present invention implements a distributed parallel framework for scalable and reliable scheduling of iterative geophysical inversions across large HPC systems. The invention decentralizes and separates computation, global state, and control mechanisms within the system, allowing scalability to hundreds of thousands of cores while still providing the ability to apply smart scheduling, load balancing, and reliability techniques specific to seismic inversion problems.
Data parallelization means dividing the data processing by source shot among multiple processors. Data parallelization is inherently scalable. However, for full wavefield inversion (“FWI”) at a large scale, serial sections (gradient stacking) and coordination of the inversion (how many shots to select, when finished, etc.) can complicate the pure data parallelization (across shots) and compromise scalability. The present invention avoids such problems.
As examples of the foregoing characteristics of independent workers, a worker might remove itself from the system if, for example, it was unable to run any jobs due to hardware failure. Another reason might be to give its HPC nodes back to the system, possibly for another user's job on a shared system. Nodes may be temporarily borrowed, or returned, when available and when advantageous. A policy might be set whereby, if a worker has had no work for a specified length of time, the worker is shut down and its nodes are returned to the system. Alternatively, this decision can be made manually by the user, observing information compiled by the central dispatcher on the status of the inversion. Regarding task failures, the user may specify at the beginning of the inversion how many shot simulations will be allowed to fail before the inversion is halted.
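A sketch of the two policies just described (idle shutdown and a failure budget), with illustrative names and values that are assumptions rather than requirements:

```python
import time

IDLE_SHUTDOWN_SECONDS = 600   # policy: give nodes back after 10 idle minutes
MAX_FAILED_SHOTS = 25         # policy: user-specified failure budget for the inversion

def worker_should_shut_down(last_time_work_was_received):
    """True when the worker has had no work for the specified length of time."""
    return time.time() - last_time_work_was_received > IDLE_SHUTDOWN_SECONDS

def inversion_should_halt(failed_shot_count):
    """True once more shot simulations have failed than the user allowed."""
    return failed_shot_count > MAX_FAILED_SHOTS
```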
The central dispatcher can be thought of as a passive responder to the actions initiated by the independent workers, but the central dispatcher provides the jobs, along with the data and information needed to perform them, and monitors the status of all the jobs being performed by the workers, i.e., the state of the inversion process, for example how many shot simulation failures have occurred. Viewed another way, the central dispatcher may be thought of as a centralized information store that specifies the structure of the inversion (i.e., how tasks are connected to each other) and keeps track of the progress of the work. The central dispatcher does not initiate action; it only responds to requests. Workers function in the opposite way: they initiate actions but have limited information. In some published descriptions of conventional parallel processing of seismic inversion problems, there is a central controller that schedules seismic inversion tasks and hands out work to the parallel processors. That central controller stores information about where the tasks are but, unlike in the present inventive method, it initiates performance of those tasks by its version of the workers. Yet even though certain tasks are decentralized in the present invention compared to conventional parallel processing, this pushing of more responsibility to the workers is not true of every aspect of what the workers do. MPI (probably the most popular parallelization strategy, and not just for seismic jobs) requires the capability for a message to be addressed from one parallel “worker” to another. That means each worker needs to know the address that connects it to each other worker and, at a minimum, how many workers exist. A feature of the present invention is that a worker may not even “know” that there are other workers, much less who they are and what they are doing. A single word that describes this aspect of a worker in the present invention, as well as the initiating aspect, is independent.
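The worker side of this relationship can be sketched as a simple pull loop (an illustration only; `dispatcher` stands for any request/response channel to the central dispatcher, and `run_task` is the worker's local compute routine, both assumed here):

```python
import time

def worker_loop(dispatcher, run_task, poll_seconds=5):
    """The worker initiates every exchange; it never learns about other workers."""
    while True:
        work_item = dispatcher.request_task()        # ask for work; may return None
        if work_item is None:
            time.sleep(poll_seconds)                 # nothing available yet; ask again
            continue
        try:
            result = run_task(work_item)             # e.g., simulate one shot
            dispatcher.return_result(work_item, result)
        except Exception as err:                     # the dispatcher tracks failures
            dispatcher.report_failure(work_item, str(err))
```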
Additionally, workers can execute on heterogeneous hardware, allowing a single type of parallel task to be executed on several different architectures simultaneously. This feature allows the framework to operate on a single seismic problem at high parallelism in environments where several generations of different hardware coexist. Similarly, a single inversion can be performed on a heterogeneous system where some components have traditional processors while others have accelerators (graphics processing units, or GPUs; field programmable gate arrays, or FPGAs; etc.).
The central dispatcher, shown as 22 in the middle of
The flow chart in
Workers may not be aware of task dependencies; they may simply have an input pool of work from which they grab their assignments and an output pool in which they place their results. An example of a task dependency in FWI is that the line search cannot begin until the gradient is computed. The central dispatcher in
It may be desirable to run several big inversion problems simultaneously, each one loaded into its own central dispatcher, but with a common pool of independent workers. The user can control which inversion problem will get priority on the workers, thus scaling up some problems and scaling down others.
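One simple way to realize such prioritization (a sketch with assumed names) is to have each free worker choose which inversion's dispatcher to ask for work according to user-set weights:

```python
import random

def pick_dispatcher(dispatchers, priorities):
    """Ask higher-priority inversions for work more often."""
    return random.choices(dispatchers, weights=priorities, k=1)[0]

# Example: inversion A receives roughly three times the worker attention of B.
# dispatcher = pick_dispatcher([dispatcher_a, dispatcher_b], [3, 1])
```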
By connecting work pools together inside the central dispatcher, complex workflows that involve many dependent computational tasks can be mapped onto the proposed parallel computation system (
As a typical example from an FWI problem, 42 may represent a task that filters the input data and removes all frequencies above 30 Hz. The arrow going from task 42 to the right represents the mapping of task 42 onto any number of workers 46. Other tasks are similarly mapped to workers 46. At task 43, the velocity model is prepared for simulation by cropping the model to an aperture just around the shot area. There are two paths from task 42 to task 43 because (in this illustrative example) the user decided to use larger apertures for shots at the edge of the survey, which sets up two types of tasks (shots in the center go to one box 43, shots on the edge are processed at the other box 43). Task 44 may be simulation and gradient computation, and task 45 may be stacking of each shot gradient into the master gradient.
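A sketch of how such connected work pools and their dependencies might be represented inside a central dispatcher (illustrative data structures only; the queue names mirror tasks 42-45 above, and the rule that the line search waits for the full gradient mirrors the dependency example given earlier):

```python
from collections import deque

class CentralDispatcher:
    """Work pools chained so that the output of one task type feeds the next."""

    def __init__(self, n_shots):
        self.n_shots = n_shots
        self.pools = {"filter": deque(range(n_shots)),   # task 42: one item per shot
                      "aperture": deque(),               # task 43
                      "simulate_gradient": deque(),      # task 44
                      "stack": deque(),                  # task 45
                      "line_search": deque()}            # held back until gradient is done
        self.stacked = 0

    def request_task(self, pool_name):
        pool = self.pools[pool_name]
        return pool.popleft() if pool else None          # respond only; never push work

    def return_result(self, pool_name, work_item):
        next_pool = {"filter": "aperture",
                     "aperture": "simulate_gradient",
                     "simulate_gradient": "stack"}.get(pool_name)
        if next_pool:
            self.pools[next_pool].append(work_item)      # forward along the workflow
        elif pool_name == "stack":
            self.stacked += 1
            if self.stacked == self.n_shots:             # dependency satisfied
                self.pools["line_search"].append("start_line_search")
```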
It may be noted that the workers performing task 44 in the work flow, the simulation and gradient computations, will need to know various parameters in order to execute, for example the source location, other source parameters, and the receiver locations. Some of these parameters are static (meaning they are known at the beginning of the inversion), and some are dynamic (calculated by a preceding task). Each work item stored in the central dispatcher is essentially such a “bag” of parameters. The launcher of the inversion may populate all the static parameters, including source/receiver locations, source signature files, and parameters that specify the physics of the simulation, into the appropriate task, in this case task 44. For dynamic parameters, when the worker returns this bag of data (work item) to the central dispatcher to be placed in the output queue, the central dispatcher can delete, modify, or add parameters in the work item.
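A work item can be pictured as nothing more than a dictionary of parameters (the field names and values below are purely illustrative assumptions):

```python
work_item = {
    # static parameters, populated by the launcher at the start of the inversion
    "shot_id": 42,
    "source_location": (451200.0, 6781350.0, 5.0),
    "receiver_locations_file": "receivers_shot_0042.h5",
    "source_signature_file": "wavelet_30hz.bin",
    "physics": "acoustic",
    # dynamic parameters, added or modified by the central dispatcher as
    # preceding tasks return their results
    "velocity_model_aperture": None,
    "gradient_output_path": None,
}
```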
The type of work flow described above, with dependent tasks, is typical not only in FWI but in seismic processing in general, and
The distributed nature of the system and the design of the central dispatcher allow for several performance and reliability improvements that are crucial to execution of seismic inversion at large scales. Using the present state of the ongoing inversion as monitored by the central dispatcher, global progress of the application can be tracked and displayed to the user, some of whose various possible interactions with the FWI system are represented by the management tools 48 in
Another application of the invention is to parallelize the “stacking” stage in seismic processing. Stacking combines results calculated for portions of the seismic survey into a full-survey result. Although computationally simple, at high levels of parallelism stacking can become a bottleneck in the overall inversion time if done serially. Using the system described above, parallelization can be achieved simply by adding additional workers into the system to perform the stacking task. “Partial stacks” from each parallel stacker can then be further combined into the final result. To optimize “reduce” operations such as stacking, several other modifications to the worker-dispatcher communications can be made, including the following, which also give some idea of the capabilities that may be programmed into the independent workers: (A) workers may place partial results back on their input work pools, allowing results to be accumulated recursively; and/or (B) workers may request multiple items to stack before producing a partial result, reducing stacking time by reducing disk I/O; the number of items to accumulate before producing a partial result can be held static, or can be based on a heat-up/cool-down heuristic, or based on the number of available items to be stacked in the central dispatcher state.
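A sketch of a stacking worker that follows options (A) and (B) above (the queue and array types are assumptions; any thread-safe pool of per-shot gradient arrays would do): it grabs several items, combines them, and places the partial stack back on its own input pool so that results accumulate recursively until one final stack remains:

```python
import queue
import numpy as np

def stacking_worker(input_pool, batch_size=4):
    """Recursively reduce a pool of gradient arrays to a single stacked result."""
    while True:
        batch = []
        while len(batch) < batch_size:            # option (B): take several items at once
            try:
                batch.append(input_pool.get_nowait())
            except queue.Empty:
                break
        if len(batch) <= 1:
            if batch:
                input_pool.put(batch[0])          # nothing left to combine with
            return
        partial = np.sum(batch, axis=0)           # combine into a partial stack
        input_pool.put(partial)                   # option (A): recurse via the input pool
```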
In some of its embodiments, the present invention includes one or more of the following features: (1) application of a distributed, loosely coupled, many-task computing paradigm to seismic inversion algorithms on HPC systems, composed of (a) independent workers that are not aware of each other's existence that are either assigned tasks by a master controller (represented by “Inversion Control” among the management tools in
The foregoing description is directed to particular embodiments of the present invention for the purpose of illustrating it. It will be apparent, however, to one skilled in the art, that many modifications and variations to the embodiments described herein are possible. All such modifications and variations are intended to be within the scope of the present invention, as defined by the appended claims.
This application claims the benefit of U.S. Provisional Patent Application 62/093,991 filed Dec. 18, 2014 entitled SCALABLE SCHEDULING OF PARALLEL ITERATIVE SEISMIC JOBS, the entirety of which is incorporated by reference herein.
Number | Name | Date | Kind |
---|---|---|---|
3812457 | Weller | May 1974 | A |
3864667 | Bahjat | Feb 1975 | A |
4159463 | Silverman | Jun 1979 | A |
4168485 | Payton et al. | Sep 1979 | A |
4545039 | Savit | Oct 1985 | A |
4562650 | Nagasawa et al. | Jan 1986 | A |
4575830 | Ingram et al. | Mar 1986 | A |
4594662 | Devaney | Jun 1986 | A |
4636957 | Vannier et al. | Jan 1987 | A |
4675851 | Savit et al. | Jun 1987 | A |
4686654 | Savit | Aug 1987 | A |
4707812 | Martinez | Nov 1987 | A |
4715020 | Landrum, Jr. | Dec 1987 | A |
4766574 | Whitmore et al. | Aug 1988 | A |
4780856 | Becquey | Oct 1988 | A |
4823326 | Ward | Apr 1989 | A |
4924390 | Parsons et al. | May 1990 | A |
4953657 | Edington | Sep 1990 | A |
4969129 | Currie | Nov 1990 | A |
4982374 | Edington et al. | Jan 1991 | A |
5260911 | Mason et al. | Nov 1993 | A |
5469062 | Meyer, Jr. | Nov 1995 | A |
5583825 | Carrazzone et al. | Dec 1996 | A |
5677893 | de Hoop et al. | Oct 1997 | A |
5715213 | Allen | Feb 1998 | A |
5717655 | Beasley | Feb 1998 | A |
5719821 | Sallas et al. | Feb 1998 | A |
5721710 | Sallas et al. | Feb 1998 | A |
5790473 | Allen | Aug 1998 | A |
5798982 | He et al. | Aug 1998 | A |
5822269 | Allen | Oct 1998 | A |
5838634 | Jones et al. | Nov 1998 | A |
5852588 | de Hoop et al. | Dec 1998 | A |
5878372 | Tabarovsky et al. | Mar 1999 | A |
5920838 | Norris et al. | Jul 1999 | A |
5924049 | Beasley et al. | Jul 1999 | A |
5991695 | Wang | Nov 1999 | A |
5999488 | Smith | Dec 1999 | A |
5999489 | Lazaratos | Dec 1999 | A |
6014342 | Lazaratos | Jan 2000 | A |
6021094 | Ober et al. | Feb 2000 | A |
6028818 | Jeffryes | Feb 2000 | A |
6058073 | VerWest | May 2000 | A |
6125330 | Robertson et al. | Sep 2000 | A |
6219621 | Hornbostel | Apr 2001 | B1 |
6225803 | Chen | May 2001 | B1 |
6311133 | Lailly et al. | Oct 2001 | B1 |
6317695 | Zhou et al. | Nov 2001 | B1 |
6327537 | Ikelle | Dec 2001 | B1 |
6374201 | Grizon et al. | Apr 2002 | B1 |
6381543 | Guerillot et al. | Apr 2002 | B1 |
6388947 | Washbourne et al. | May 2002 | B1 |
6480790 | Calvert et al. | Nov 2002 | B1 |
6522973 | Tonellot et al. | Feb 2003 | B1 |
6545944 | de Kok | Apr 2003 | B2 |
6549854 | Malinverno et al. | Apr 2003 | B1 |
6574564 | Lailly et al. | Jun 2003 | B2 |
6593746 | Stolarczyk | Jul 2003 | B2 |
6662147 | Fournier et al. | Dec 2003 | B1 |
6665615 | Van Riel et al. | Dec 2003 | B2 |
6687619 | Moerig et al. | Feb 2004 | B2 |
6687659 | Shen | Feb 2004 | B1 |
6704245 | Becquey | Mar 2004 | B2 |
6714867 | Meunier | Mar 2004 | B2 |
6735527 | Levin | May 2004 | B1 |
6754590 | Moldoveanu | Jun 2004 | B1 |
6766256 | Jeffryes | Jul 2004 | B2 |
6826486 | Malinverno | Nov 2004 | B1 |
6836448 | Robertsson et al. | Dec 2004 | B2 |
6842701 | Moerig et al. | Jan 2005 | B2 |
6859734 | Bednar | Feb 2005 | B2 |
6865487 | Charron | Mar 2005 | B2 |
6865488 | Moerig et al. | Mar 2005 | B2 |
6876928 | Van Riel et al. | Apr 2005 | B2 |
6882938 | Vaage et al. | Apr 2005 | B2 |
6882958 | Schmidt et al. | Apr 2005 | B2 |
6901333 | Van Riel et al. | May 2005 | B2 |
6903999 | Curtis et al. | Jun 2005 | B2 |
6905916 | Bartsch et al. | Jun 2005 | B2 |
6906981 | Vauge | Jun 2005 | B2 |
6927698 | Stolarczyk | Aug 2005 | B2 |
6944546 | Xiao et al. | Sep 2005 | B2 |
6947843 | Fisher et al. | Sep 2005 | B2 |
6970397 | Castagna et al. | Nov 2005 | B2 |
6977866 | Huffman et al. | Dec 2005 | B2 |
6999880 | Lee | Feb 2006 | B2 |
7046581 | Calvert | May 2006 | B2 |
7050356 | Jeffryes | May 2006 | B2 |
7069149 | Goff et al. | Jun 2006 | B2 |
7027927 | Routh et al. | Jul 2006 | B2 |
7072767 | Routh et al. | Jul 2006 | B2 |
7092823 | Lailly et al. | Aug 2006 | B2 |
7110900 | Adler et al. | Sep 2006 | B2 |
7184367 | Yin | Feb 2007 | B2 |
7230879 | Herkenoff et al. | Jun 2007 | B2 |
7271747 | Baraniuk et al. | Sep 2007 | B2 |
7330799 | Lefebvre et al. | Feb 2008 | B2 |
7337069 | Masson et al. | Feb 2008 | B2 |
7373251 | Hamman et al. | May 2008 | B2 |
7373252 | Sherrill et al. | May 2008 | B2 |
7376046 | Jeffryes | May 2008 | B2 |
7376539 | Lecomte | May 2008 | B2 |
7400978 | Langlais et al. | Jul 2008 | B2 |
7436734 | Krohn | Oct 2008 | B2 |
7480206 | Hill | Jan 2009 | B2 |
7584056 | Koren | Sep 2009 | B2 |
7599798 | Beasley et al. | Oct 2009 | B2 |
7602670 | Jeffryes | Oct 2009 | B2 |
7616523 | Tabti et al. | Nov 2009 | B1 |
7620534 | Pita et al. | Nov 2009 | B2 |
7620536 | Chow | Nov 2009 | B2 |
7646924 | Donoho | Jan 2010 | B2 |
7672194 | Jeffryes | Mar 2010 | B2 |
7672824 | Dutta et al. | Mar 2010 | B2 |
7675815 | Saenger et al. | Mar 2010 | B2 |
7679990 | Herkenhoff et al. | Mar 2010 | B2 |
7684281 | Vaage et al. | Mar 2010 | B2 |
7710821 | Robertsson et al. | May 2010 | B2 |
7715985 | Van Manen et al. | May 2010 | B2 |
7715986 | Nemeth et al. | May 2010 | B2 |
7725266 | Sirgue et al. | May 2010 | B2 |
7791980 | Robertsson et al. | Sep 2010 | B2 |
7835072 | Izumi | Nov 2010 | B2 |
7840625 | Candes et al. | Nov 2010 | B2 |
7940601 | Ghosh | May 2011 | B2 |
8121823 | Krebs et al. | Feb 2012 | B2 |
8213261 | Imhof et al. | Jul 2012 | B2 |
8248886 | Neelamani et al. | Aug 2012 | B2 |
8380435 | Kumaran et al. | Feb 2013 | B2 |
8428925 | Krebs et al. | Apr 2013 | B2 |
8437998 | Routh et al. | May 2013 | B2 |
8489632 | Breckenridge | Jul 2013 | B1 |
8547794 | Gulati et al. | Oct 2013 | B2 |
8688381 | Routh et al. | Apr 2014 | B2 |
8781748 | Laddoch et al. | Jul 2014 | B2 |
8849623 | Carvallo et al. | Sep 2014 | B2 |
8923094 | Jing et al. | Dec 2014 | B2 |
9176247 | Imhof et al. | Nov 2015 | B2 |
9194968 | Imhof et al. | Nov 2015 | B2 |
9195783 | Mullur et al. | Nov 2015 | B2 |
20020099504 | Cross et al. | Jul 2002 | A1 |
20020120429 | Ortoleva | Aug 2002 | A1 |
20020183980 | Guillaume | Dec 2002 | A1 |
20040073529 | Stanfill | Apr 2004 | A1 |
20040199330 | Routh et al. | Oct 2004 | A1 |
20040225438 | Okoniewski et al. | Nov 2004 | A1 |
20060235666 | Assa et al. | Oct 2006 | A1 |
20070036030 | Baumel et al. | Feb 2007 | A1 |
20070038691 | Candes et al. | Feb 2007 | A1 |
20070274155 | Ikelle | Nov 2007 | A1 |
20080175101 | Saenger et al. | Jul 2008 | A1 |
20080306692 | Singer et al. | Dec 2008 | A1 |
20090006054 | Song | Jan 2009 | A1 |
20090067041 | Krauklis et al. | Mar 2009 | A1 |
20090070042 | Birchwood et al. | Mar 2009 | A1 |
20090083006 | Mackie | Mar 2009 | A1 |
20090164186 | Haase et al. | Jun 2009 | A1 |
20090164756 | Dokken et al. | Jun 2009 | A1 |
20090187391 | Wendt et al. | Jul 2009 | A1 |
20090248308 | Luling | Oct 2009 | A1 |
20090254320 | Lovatini et al. | Oct 2009 | A1 |
20090259406 | Khadhraoui et al. | Oct 2009 | A1 |
20100008184 | Hegna et al. | Jan 2010 | A1 |
20100018718 | Krebs et al. | Jan 2010 | A1 |
20100039894 | Abma et al. | Feb 2010 | A1 |
20100054082 | McGarry et al. | Mar 2010 | A1 |
20100088035 | Etgen et al. | Apr 2010 | A1 |
20100103772 | Eick et al. | Apr 2010 | A1 |
20100118651 | Liu et al. | May 2010 | A1 |
20100142316 | Keers et al. | Jun 2010 | A1 |
20100161233 | Saenger et al. | Jun 2010 | A1 |
20100161234 | Saenger et al. | Jun 2010 | A1 |
20100185422 | Hoversten | Jul 2010 | A1 |
20100208554 | Chiu et al. | Aug 2010 | A1 |
20100212902 | Baumstein et al. | Aug 2010 | A1 |
20100246324 | Dragoset, Jr. et al. | Sep 2010 | A1 |
20100265797 | Robertsson et al. | Oct 2010 | A1 |
20100270026 | Lazaratos et al. | Oct 2010 | A1 |
20100286919 | Lee et al. | Nov 2010 | A1 |
20100299070 | Abma | Nov 2010 | A1 |
20110000678 | Krebs et al. | Jan 2011 | A1 |
20110040926 | Donderici et al. | Feb 2011 | A1 |
20110048731 | Imhof et al. | Mar 2011 | A1 |
20110051553 | Scott et al. | Mar 2011 | A1 |
20110090760 | Rickett | Apr 2011 | A1 |
20110131020 | Meng | Jun 2011 | A1 |
20110134722 | Virgilio et al. | Jun 2011 | A1 |
20110182141 | Zhamikov et al. | Jul 2011 | A1 |
20110182144 | Gray | Jul 2011 | A1 |
20110191032 | Moore | Aug 2011 | A1 |
20110194379 | Lee et al. | Aug 2011 | A1 |
20110222370 | Downton et al. | Sep 2011 | A1 |
20110227577 | Zhang et al. | Sep 2011 | A1 |
20110235464 | Brittan et al. | Sep 2011 | A1 |
20110238390 | Krebs et al. | Sep 2011 | A1 |
20110246140 | Abubakar et al. | Oct 2011 | A1 |
20110267921 | Mortel et al. | Nov 2011 | A1 |
20110267923 | Shin | Nov 2011 | A1 |
20110276320 | Krebs et al. | Nov 2011 | A1 |
20110288831 | Tan et al. | Nov 2011 | A1 |
20110299361 | Shin | Dec 2011 | A1 |
20110320180 | Al-Saleh | Dec 2011 | A1 |
20120010862 | Costen | Jan 2012 | A1 |
20120014215 | Saenger et al. | Jan 2012 | A1 |
20120014216 | Saenger et al. | Jan 2012 | A1 |
20120051176 | Liu | Mar 2012 | A1 |
20120073824 | Routh | Mar 2012 | A1 |
20120073825 | Routh | Mar 2012 | A1 |
20120082344 | Donoho | Apr 2012 | A1 |
20120143506 | Routh et al. | Jun 2012 | A1 |
20120215506 | Rickett et al. | Aug 2012 | A1 |
20120218859 | Soubaras | Aug 2012 | A1 |
20120275264 | Kostov et al. | Nov 2012 | A1 |
20120275267 | Neelamani et al. | Nov 2012 | A1 |
20120290214 | Huo et al. | Nov 2012 | A1 |
20120314538 | Washbourne et al. | Dec 2012 | A1 |
20120316786 | Liu | Dec 2012 | A1 |
20120316790 | Washbourne et al. | Dec 2012 | A1 |
20120316844 | Shah et al. | Dec 2012 | A1 |
20120316845 | Grey | Dec 2012 | A1 |
20120316850 | Liu et al. | Dec 2012 | A1 |
20130030777 | Sung et al. | Jan 2013 | A1 |
20130060539 | Baumstein | Mar 2013 | A1 |
20130081752 | Kurimura et al. | Apr 2013 | A1 |
20130090906 | AlShaikh et al. | Apr 2013 | A1 |
20130116927 | DiCaprio et al. | May 2013 | A1 |
20130138408 | Lee | May 2013 | A1 |
20130151161 | Imhof et al. | Jun 2013 | A1 |
20130238246 | Krebs et al. | Sep 2013 | A1 |
20130238249 | Xu et al. | Sep 2013 | A1 |
20130279290 | Poole | Oct 2013 | A1 |
20130282292 | Wang et al. | Oct 2013 | A1 |
20130311149 | Tang | Nov 2013 | A1 |
20130311151 | Plessix | Nov 2013 | A1 |
20140095131 | DiCaprio et al. | Apr 2014 | A1 |
20140118350 | Imhof et al. | May 2014 | A1 |
20140136170 | Leahy et al. | May 2014 | A1 |
20140180593 | Schmedes et al. | Jun 2014 | A1 |
20140257780 | Jing et al. | Sep 2014 | A1 |
20140278311 | Dimitrov et al. | Sep 2014 | A1 |
20140278317 | Dimitrov et al. | Sep 2014 | A1 |
20140350861 | Wang et al. | Nov 2014 | A1 |
20140358445 | Imhof et al. | Dec 2014 | A1 |
20140358504 | Baumstein et al. | Dec 2014 | A1 |
20140365132 | Imhof et al. | Dec 2014 | A1 |
20140372043 | Hu et al. | Dec 2014 | A1 |
20150123825 | De Corral | May 2015 | A1 |
20150293247 | Imhof et al. | Oct 2015 | A1 |
20150301225 | Dimitrov et al. | Oct 2015 | A1 |
20150309197 | Dimitrov et al. | Oct 2015 | A1 |
20150316685 | Dimitrov et al. | Nov 2015 | A1 |
20160033661 | Bansal et al. | Feb 2016 | A1 |
20160139282 | Dimitrov | May 2016 | A1 |
Other Publications:

Hellerstein et al., “Science in the Cloud,” IEEE Internet Computing, pp. 64-68 (2012).

Wang et al., “Distributed Parallel Processing Based on Master/Worker Model in Heterogeneous Computing Environment,” Advances in Information Sciences and Service Sciences (AISS) 4(2), pp. 49-57 (Feb. 2012).

Bonham et al., “Seismic data modelling using parallel distributed MATLAB,” SEG Houston 2009 International Exposition and Annual Meeting, pp. 2692-2696 (2009).

Lee et al., “An Optimized Parallel LSQR Algorithm for Seismic Tomography,” Computers & Geosciences 62, pp. 184-197 (2013).

U.S. Appl. No. 14/329,431, filed Jul. 11, 2014, Krohn et al.

U.S. Appl. No. 14/330,767, filed Jul. 14, 2014, Tang et al.

Araya-Polo, M. et al., “3D Seismic Imaging through Reverse-Time Migration on Homogeneous and Heterogeneous Multi-Core Processors,” Scientific Programming 17(1-2), pp. 185-198 (2009).

Brossier, R., “Two-dimensional frequency-domain visco-elastic full waveform inversion: Parallel algorithms, optimization and performance,” Computers and Geosciences 37(4), pp. 444-455 (2010).

Fallat, M. R. et al., “Geoacoustic inversion via local, global, and hybrid algorithms,” Journal of the Acoustical Society of America 105, pp. 3219-3230 (1999).

Sirgue, L. et al., “Efficient waveform inversion and imaging: A strategy for selecting temporal frequencies,” Geophysics 69, pp. 231-248 (2004).

Suh, S. Y. et al., “Cluster programming for reverse time migration,” The Leading Edge, pp. 94-97 (Jan. 2010).

Tarantola, A., “Inversion of seismic reflection data in the acoustic approximation,” Geophysics 49, pp. 1259-1266 (1984).