In certain embodiments, a circuit may comprise a digital data channel including a pre-processor module configured to sample a signal to generate sample values, a detector module configured to determine bit values based on the sample values, a least squares function module configured to determine a first parameter set for the digital data channel based on the sample values and a least squares algorithm, and a general cost function module configured to determine a second parameter set for the digital data channel based on a general cost algorithm. The digital data channel may also include a limiter module configured to generate a third parameter set based on constraining the second parameter set with the first parameter set, and modify applied parameters of the digital data channel based on the third parameter set.
In certain embodiments, an apparatus may comprise a circuit configured to select receiver parameters. The circuit may determine a first parameter set based on a least squares function, limit results of a general cost function based on the first parameter set to determine a second parameter set, and perform signal processing at the receiver using the second parameter set.
In certain embodiments, a method may comprise performing a parameter optimization procedure for a receiver, including determining a first parameter set based on a first function, determining a second parameter set based on a second function different from the first function, determining a third parameter set by using the first parameter set to define a subset of a parameter space to which to limit values from the second parameter set, and performing signal processing in the receiver using the third parameter set.
In the following detailed description of certain embodiments, reference is made to the accompanying drawings which form a part hereof, and in which are shown by way of illustration example embodiments. It is also to be understood that features of the embodiments and examples herein can be combined, exchanged, or removed, that other embodiments may be utilized or created, and that structural changes may be made without departing from the scope of the present disclosure.
In accordance with various embodiments, the methods and functions described herein may be implemented as one or more software programs running on a computer processor or controller. Dedicated hardware implementations including, but not limited to, application specific integrated circuits, programmable logic arrays, and other hardware devices can likewise be constructed to implement the methods and functions described herein. Methods and functions may be performed by modules, which may include one or more physical components of a computing device (e.g., logic, circuits, processors, etc.) configured to perform a particular task or job, or may include instructions that, when executed, can cause a processor to perform a particular task or job, or any combination thereof. Further, the methods described herein may be implemented as a computer readable storage medium or memory device including instructions that, when executed, cause a processor to perform the methods.
System 100 may include a pre-processor 104 configured to perform initial processing on the signal 102 in order to convert the signal 102 into a form from which individual bit values may be detected. The pre-processor 104 may include an interface configured to receive the signal 102, an analog front end (AFE) configured to condition an analog signal via amplifiers, filters, and other operations, an analog to digital converter (ADC) configured to periodically sample the conditioned analog signal, and an equalizer configured to reverse or reduce distortions in the signal 102. The equalized signal samples yn may be provided to a detector 106, a subcomponent of a receiver which may determine a sequence of data bits 122 provided by the signal 102 based on the sampled values from the ADC (e.g. whether the sample values indicate a 1 or a 0).
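As a rough illustration of this dataflow, the following sketch chains hypothetical stand-ins for the stages described above (a gain-and-filter front end, a periodic sampler, a short FIR equalizer, and a simple threshold detector standing in for a full Viterbi detector); the function names are illustrative and do not come from the disclosure:

```python
import numpy as np

def analog_front_end(x):
    # Hypothetical conditioning: apply gain, then a short smoothing filter.
    return np.convolve(2.0 * x, np.ones(3) / 3.0, mode="same")

def adc_sample(x, period=4):
    # Periodically sample the conditioned waveform.
    return x[::period]

def equalize(samples, taps=(-0.1, 1.2, -0.1)):
    # Short FIR equalizer to reduce residual distortion.
    return np.convolve(samples, np.array(taps), mode="same")

def detect(y):
    # Stand-in detector: positive sample -> bit 1, otherwise bit 0.
    return (y > 0).astype(int)

waveform = np.repeat([1.0, -1.0, 1.0, 1.0, -1.0], 4)  # toy analog signal
bits = detect(equalize(adc_sample(analog_front_end(waveform))))
```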
In many signal processing applications, an optimization procedure may be used to determine a set of receiver parameters that minimizes a specific cost function. For example, parameters used by the detector 106 may be selected to minimize a bit error rate (BER) of the detected bit sequence. Parameters may include weight and variable values applied by the channel components when executing functions and calculating results, other values, or any combination thereof. For example, the detector 106 may include a partial response maximum likelihood (PRML) detector configured to implement a SOVA (soft output Viterbi algorithm). For a PRML detector in a read channel, the parameters can include the branch biases used in the Viterbi detector. If the Viterbi detector includes data dependent noise prediction, the parameters can additionally include the data dependent noise whitener coefficients and variances. Additionally, once an initial solution to the optimization procedure has been found, it may be advantageous to continue running the optimization procedure to track channel variations.
Modules within the system 100 may produce parameter sets that may be provided to receiver components, such as the detector 106, to influence those components' behavior. Functions may determine parameters to minimize or maximize a selected value. For an HDD, the general cost function goal could be BER. The system 100 may determine a set of detector parameters which minimizes BER at the detector 106 output.
Various approaches or equations may be used in the optimization procedure to generate or estimate the optimal receiver parameters. For example, a least squares (LS) cost function may be used because its convergence is generally well behaved, it is less prone to dynamic range issues (since it minimizes error magnitude), and low complexity implementations are available, such as the least mean squares (LMS) algorithm. For example, a PRML Viterbi detector may produce a “soft” output indicating both bit value estimates and the reliability of the estimates. The estimate reliability value Ln may be expressed as a log likelihood ratio (LLR). In an example embodiment, the sign (e.g. ‘+’ or ‘−’) may indicate the bit value (e.g. 1 or 0), while the magnitude of Ln may indicate a reliability of the bit estimate. A Viterbi detector 106 may use the expected means of a set of equalized samples corresponding to different data patterns to make bit value estimates. For an additive white Gaussian noise (AWGN) and intersymbol interference (ISI) channel, the means of the equalized samples corresponding to different data patterns may correspond to the least squares error. For an HDD, however, the channel noise may be nonlinear or data-dependent, so equalized samples that minimize the least squares error may not be optimal for detection. In many applications minimizing least squares cost may provide a satisfactory result, but may not minimize the system performance figure-of-merit (e.g. BER). Additionally, the channel may be subject to nonlinear perturbations such as nonlinear distortion and data dependent noise. So while LS functions may produce workable results, they may still produce sub-optimal results.
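The sign/magnitude reading of Ln described above can be stated compactly; a minimal sketch, assuming the common convention that a positive LLR maps to bit 1:

```python
def bit_estimate(llr):
    # Sign of the LLR carries the bit decision; magnitude carries reliability.
    bit = 1 if llr > 0 else 0
    return bit, abs(llr)

bit, reliability = bit_estimate(-2.7)  # -> (0, 2.7): a fairly confident 0
```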
To explain it another way, LS is a convex cost function. The LS function may have a unique global minimum (e.g. the bottom of the “bowl”), such that a determined minimum value will be the global minimum, and there may be no issues with arriving at local minima. The LS cost function may be a continuous function and hence varies smoothly with respect to the applied parameters. BER, on the other hand, may be a nonlinear function of the detector parameters. A BER function may have multiple local minima and saddle points, which can be problematic during optimization.
Therefore it may be desirable to perform optimization with respect to a general cost function, such as one based on BER. As used herein, the term “general cost function” may be used to mean any cost function other than the mean squared or least squares error cost functions. Some examples of general cost functions could be: BER, sector failure rate, LLR distribution or shape, or a weighted combination of quality metrics. As the general cost function is not least squares, the system can become more prone to saturation issues. Additionally, this cost function may not be a globally well behaved function of the receiver parameters. There may exist multiple solutions which locally minimize the cost function, but which are impractical to implement due to parameter dynamic range limitations. For example, a cost function may produce a result that is optimal but that is outside a realistic parameter range for the system 100. To phrase it another way, a least squares algorithm may produce a single “minimum” value (e.g. set of parameters) that may not be optimal for the selected performance metric. On the other hand, a general cost function may produce multiple local solutions or minima, with some solutions resulting in very large parameter values being chosen for the detector 106 that fall well outside the practical range for a fixed-point implementation.
Accordingly, a method is presented for constraining an optimization procedure driven with respect to a general cost function to search in a subset of the parameter space which includes parameters that are feasible to implement. The presented method may also be adaptive in order to track channel variations.
A first well-behaved algorithm may be used to define or establish a parameter “range” within which a second algorithm may select the receiver parameter set. For example, a least squares (LS) procedure may be used to estimate a set of optimal parameters with respect to a LS cost function. This set of LS parameters may be used to center and limit the parameter space searched during a parallel optimization procedure with respect to a general cost function. The result may be a parameter solution that is more reliable than that produced by the LS algorithm, and which is within an acceptable parameter range.
Given a parameter set [p1 . . . pn] the cost function or value C(p1, . . . , pn) may be a measure of how well that parameter set performs. For a given cost function, an optimization procedure may be applied to seek the parameter set with lowest cost. For the least squares approach, the cost function may be the mean square error. The least squares solution can minimize the mean squared error. This LS cost function may generally be a convex function of the parameter set, and hence amenable to simple mathematical formulation and analysis. However, in a communication system it may be advantageous to find a parameter set which minimizes bit error rate (or some other parameter). For non-ideal channels (e.g. nonlinear or data-dependent), least squares optimization may find an acceptable solution, but parameter sets in the vicinity of the least squares solution may result in even lower BER. A general cost function may be used to identify the parameter sets within the vicinity of the least squares solution that produce superior BER values.
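In the document's notation, one standard way to write this mean squared error cost over M samples, where the samples yn depend on the applied parameters and dn is the ideal sample value, is:

$$C_{LS}(p_1, \ldots, p_N) = \frac{1}{M} \sum_{n=1}^{M} \big( y_n(p_1, \ldots, p_N) - d_n \big)^2$$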
In regard to system 100, the equalized sample values yn from the pre-processor may be provided to a least squares (LS) algorithm or estimator 108 (e.g. using LMS), which may produce a least squares solution parameter set [p1 . . . pN]LS. The detector output Ln (e.g. SOVA detector LLRs) and the LS parameter set may be applied to a general cost function 110, which may produce a general cost function parameter set [p1 . . . pN]G. The results of the general cost function 110 may be limited or constrained by the LS parameter set, producing parameters that may be better optimized than the LS parameters while constrained within an acceptable parameter range. For example, constrained parameter value ranges may be centered on or otherwise limited by the LS parameter values, and the results of the general cost function 110 may be limited to falling within the constrained ranges. The results of the general cost function 110 may be constrained in a number of ways. For example, the LS parameter set may be used as an input to the general cost function 110 so that the general cost function 110 only searches for parameter values within a range based on the LS parameter set. The general cost function 110 may try all solutions within a range defined by the LS parameter set, and select the one that minimizes the general cost function. In another example, the general cost function 110 may generate parameter values based on the detector output Ln alone, and those general cost parameter values may then be reduced or modified based on the LS parameter values (e.g. the general cost solution based on Ln could be limited to fall within a range defined by the LS parameter values, if necessary). Once selected, the general cost parameter set may be applied to the detector 106 to adaptively adjust the detector parameters in response to changing signal and channel conditions. Parameter values may be selected for other components instead of or in addition to the detector 106, such as for the pre-processor 104 or components thereof. The proposed parameter optimization procedure is discussed in greater detail in regard to
In an example system 200 having a PRML detector 206, the pre-processor 204 may include timing or gain stages along with magneto-resistive asymmetry (MRA) and offset cancellation, with equalization to a desired target response. The general cost function 210 can be a function of the expected data bits bn, the SOVA LLRs Ln, and auxiliary information In, such as system quality indicators. Examples of auxiliary information may include quality or performance metrics from the channel, such as an average iteration count required to decode a codeword from a low density parity check (LDPC) decoder 218, which can be used as a quality measure or cost which the general cost function 210 seeks to reduce. When the general cost function 210 is BER, the estimated bit values from Ln may be compared against the expected bit values bn. A more sophisticated general cost function might be able to additionally exploit the reliability information In to improve system performance or reliability. For example, the general cost function 210 may be used to optimize the receiver parameters such that the LLR distribution achieves a certain shape, dynamic range, or both.
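When the general cost is BER, the comparison of LLR-derived bit estimates against the expected bits bn reduces to a small computation; a minimal sketch, using a hypothetical ber_cost helper and the sign convention above:

```python
import numpy as np

def ber_cost(llrs, expected_bits):
    # Estimated bit is 1 when the LLR is positive (sign convention above).
    estimated = (np.asarray(llrs) > 0).astype(int)
    # Fraction of estimates disagreeing with the expected bits b_n.
    return np.mean(estimated != np.asarray(expected_bits))

cost = ber_cost([3.1, -0.4, 2.2, -1.8], [1, 1, 1, 0])  # -> 0.25
```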
The results JG of the general cost function 210 may be provided to the adaptive algorithm 212, which may use the results to generate the general cost function parameter set [p1 . . . pN]G′. The adaptive algorithm 212 may be an algorithm that changes its behavior based on information available at the time it is run. This information may include the general cost function results, information provided to the general cost function 210, or other available information about the channel or signal. For example, the adaptive algorithm 212 could be a brute force search for optimal parameter values, or a directed search driven by measurements of the cost function JG(Ln, bn, In). The general cost function 210 and adaptive algorithm 212 can operate in either a training mode, where a known bit pattern is read so that bn is known beforehand, or in a decoder directed mode, where unknown bits are determined via error correction code (ECC)-decoding and used as the expected bn. The estimated bit sequence from the detector 206 may be passed to a decoder 218 which performs ECC decoding to correct erroneous bit estimates and determine final bit values 222, which may in turn be provided as the values for bn.
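A brute-force variant of the adaptive algorithm 212 could look like the following sketch, where cost_of is a hypothetical callback that applies a candidate parameter set to the channel and measures JG (e.g. via a BER measurement like ber_cost above); the directed-search option would replace the exhaustive loop with a search guided by cost measurements:

```python
import itertools

def brute_force_search(candidate_ranges, cost_of):
    # Exhaustively try every parameter combination in the candidate
    # ranges and keep the set with the lowest measured cost J_G.
    best_params, best_cost = None, float("inf")
    for params in itertools.product(*candidate_ranges):
        cost = cost_of(params)
        if cost < best_cost:
            best_params, best_cost = params, cost
    return best_params, best_cost

# Toy usage: two parameters, each searched over a small candidate range.
params, cost = brute_force_search(
    [range(3, 18), range(0, 8)], lambda p: (p[0] - 10) ** 2 + p[1]
)
```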
A least squares based estimator 208 (such as using LMS) may be used to estimate an optimal set of detector parameters [p1 . . . pN]LS with respect to a least squares cost function and sample values yn.
The general cost parameter set and the LS parameter set may be provided to the limiter 214. The limiter 214 may limit or modify the general cost parameter set based on the LS parameter set and a set of parameter constraint range values [Δ1 . . . ΔN]. The parameter constraint range values may define a numeric range around the values of the LS parameter set within which the values of the final applied constrained parameter set must fall. In particular, given a least squares estimate [p1 . . . pN]LS, the limiter 214 may limit the general cost parameter set to the range [p1 . . . pN]LS ± [Δ1 . . . ΔN]. The underlying assumption is that optimal solutions with respect to the general cost function may lie in the vicinity of an optimal solution with respect to the LS estimates. Conceptualized in a three-dimensional space, the LS estimator 208 may select a solution “point” of the various parameter values (e.g. a coordinate made up of parameter values). The limiter 214 may then select a constrained parameter set [p1 . . . pN]G by limiting the general cost parameters to an area around that solution point, with the area defined by the delta corresponding to each parameter. The deltas may be programmable values that may be set in the firmware, or adaptively adjusted by the system 200.
As an example, an LS parameter estimate may be the value 10, with a corresponding delta of ±7, establishing a parameter range of 3 to 17. The general cost function 210 may generate a corresponding parameter value of 30. The limiter 214 may adjust the general cost parameter to the nearest value within the parameter range; here, reduced from 30 to 17. Accordingly, the constrained parameter value may be set to 17 and provided to the detector 206 or other channel component. Phrased another way: the value of a constrained parameter pG may be set to the value of the general cost parameter pG′ if pG′ falls within the permissible range pi ± Δi of the LS parameter, or set to the value within the permissible range closest to the general cost parameter value when the general cost parameter value falls outside the permissible range.
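Since the limiter behavior in this example reduces to per-parameter clamping, it can be sketched directly (the constrain name is hypothetical):

```python
import numpy as np

def constrain(p_general, p_ls, delta):
    # Clamp each general cost parameter into [p_LS - delta, p_LS + delta].
    p_general, p_ls, delta = map(np.asarray, (p_general, p_ls, delta))
    return np.clip(p_general, p_ls - delta, p_ls + delta)

constrain([30], [10], [7])  # -> array([17]), matching the example above
```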
In an example embodiment the parameters may be 8-bit quantities or values (e.g. within a range of 0 to 255), and the deltas may be ±3 bits (e.g. within a value of 8 from the estimated LS parameter value). If the delta values were set to 0, then the parameter values would be constrained to exactly match the LS estimates for the parameter values.
Once an initial constrained parameter solution has been selected, the system 100 can continue to adapt in order to track changes in the channel response and noise statistics. Another implementation of a system for performing constrained parameter optimization is discussed in regard to
In system 300, some or all of the components of the first pre-processor 304 may be duplicated with a second pre-processor 316. The first and second pre-processors may be implemented by duplicating separate physical components for each pre-processor, or by using a multiplexer to adjust input signals and parameters to achieve two different pre-processor behaviors with a single set of physical pre-processor components. The first pre-processor 304 may search and optimize parameter values with respect to a general cost function. The second pre-processor 316 may be used to run least squares optimization and provide input to the LS estimator 308. The second pre-processor 316 and the LS estimator 308 may be used to estimate a least squares solution via an adaptive algorithm, and the LS solution can be used to center a search space for the first pre-processor 304 (e.g. via limiter 314). In doing so, the first pre-processor's parameter set can achieve better performance with respect to the figure-of-merit of interest.
For example, the pre-processor parameters to be modified could be equalizer coefficients. The first set of coefficients used in the first pre-processor 304 may minimize BER as measured at the detector 306 output. The second set of equalizer coefficients may minimize mean squared error as measured at the equalizer output of the second pre-processor 316.
The first pre-processor 304 may generate sample values yn based on a general cost function constrained based on a least squares solution. The first pre-processor 304 may provide the sample values yn to the detector 306, which may generate detected bit values and reliability information Ln (e.g. SOVA LLRs). The general cost function 310 may generate an output JG as a function of (Ln, bn, In). The output JG may be provided to an adaptive algorithm 312, which may generate a general cost parameter set [p1 . . . pN]G′, and provide it to a limiter 314.
The second pre-processor 316 may be adaptively adjusted based on LS parameter optimization to produce a set of samples ynLS. The system 300 may know what the “ideal” sample values dn would be, and those values may be subtracted from the observed values ynLS to obtain error values en. The error values en may be provided to the LS estimator 308. In some embodiments, expected data bits bn could be provided to the LS estimator 308 instead of dn or en.
Similar to bn, the ideal or desired values dn may be learned through training (e.g. reading or receiving a known value and comparing against the observed values), or learned after error correction is performed on the signal 302. For learning after error correction, an error-corrected bit sequence can be reversed into ideal sample values: given the target response of the equalizer and a sequence of corrected data bits, the ideal sample values can be computed, as shown in the sketch below. For training mode, the sequence of data bits may be known beforehand, typically implemented via a pseudo-random binary sequence (PRBS) generator. For a decoder-directed adaptation mode, the LS updates may be delayed until the decoded bits are available, at which time the updates can be computed and applied.
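Computing the ideal samples amounts to filtering the bit sequence with the equalizer's target response; a minimal sketch, assuming a bipolar bit mapping and a hypothetical [1, 2, 1] target:

```python
import numpy as np

def ideal_samples(corrected_bits, target=(1.0, 2.0, 1.0)):
    # Map bits {0, 1} to symbols {-1, +1}, then apply the target response
    # to obtain the ideal sample values d_n.
    symbols = 2 * np.asarray(corrected_bits) - 1
    return np.convolve(symbols, np.array(target), mode="same")

d_n = ideal_samples([1, 0, 1, 1, 0])
```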
The LS adaptive algorithm may use the error values en to generate a LS parameter set [p1 . . . pN]LS. The LS parameter set may be provided to the second pre-processor 316 in order to adjust the pre-processor parameters, which may improve the sample values ynLS. In this manner the LS estimator 308 may adaptively improve the sample values generated by the second pre-processor 316.
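A single LMS step consistent with this description, using the error convention above (en = ynLS − dn, ideal value subtracted from observed sample) and a hypothetical step size mu:

```python
import numpy as np

def lms_update(params, inputs, error, mu=0.01):
    # One LMS step toward lower squared error: with e_n = y_n^LS - d_n,
    # gradient descent on e_n^2 / 2 moves each parameter by -mu * e_n * x_n.
    return np.asarray(params, dtype=float) - mu * error * np.asarray(inputs, dtype=float)

taps = lms_update(np.zeros(3), inputs=[0.9, -1.1, 1.0], error=0.4)
```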
The LS parameter set, along with a parameter constraint range [Δ1 . . . ΔN], may also be provided to the limiter 314, which may constrain the general cost parameter set [p1 . . . pN]G′ in order to generate the constrained parameter set [p1 . . . pN]G used in the main data path of the channel, including the first pre-processor 304. For example, if a parameter value from the general cost parameter set exceeds the range set by an LS parameter pi ± Δi, the limiter may generate a constrained parameter that is the closest value to the general cost parameter still within the set range of the LS parameter. The constrained parameter set may be provided to the detector 306, the first pre-processor 304, or other components of the system 300 in order to adjust parameter settings and behavior of those components. For example, an equalizer of the first pre-processor 304 may be neural network based, and the measured BER may be used to generate a set of constrained parameters to prune hidden or non-helpful nodes in the neural network.
The LS estimator 308 may apply different update equations for the first pre-processor 304 (e.g. for equalizer coefficients) and for detector 306 parameters. Similarly, the limiter 314 may apply different constrained parameter sets for each component or parameter set to be modified. Accordingly, the LS parameter set [p1 . . . pN]LS, the parameter constraint range [Δ1 . . . ΔN], and the constrained parameter set [p1 . . . pN]G may include multiple sets of data for the different parameters to be limited, or separate sets may be provided for each set of parameters to be limited. In contrast, the LS estimator 208 of
The DSD 404 may include a memory 406 and a read/write (R/W) channel 408, such as the receiver described in regard to
DSD 404 may include a parameter selection module (PSM) 410. The PSM 410 may perform the methods and processes described herein to constrain a first parameter set generated using a first process by a second parameter set generated using a second process, and to apply the constrained parameter set for signal processing in a data channel.
The buffer 512 can temporarily store data during read and write operations, and can include a command queue (CQ) 513 where multiple pending operations can be temporarily stored pending execution. Commands arriving over the interface 504 may automatically be received in the CQ 513 or may be stored there by controller 506, interface 504, or another component.
The DSD 500 can include a programmable controller 506, which can include associated memory 508 and processor 510. The controller 506 may control data access operations, such as reads and writes, to one or more memories, such as disc memory 509. The DSD 500 may include an additional memory 503 instead of or in addition to disc memory 509. For example, additional memory 503 can be a solid state memory, which can be either volatile memory such as DRAM or SRAM, or non-volatile memory, such as NAND Flash memory. The additional memory 503 can function as a cache and store recently or frequently read or written data, or data likely to be read soon. Additional memory 503 may also function as main storage instead of or in addition to disc(s) 509. A DSD 500 containing multiple types of nonvolatile storage mediums, such as disc(s) 509 and Flash 503, may be referred to as a hybrid storage device.
The DSD 500 can include a read-write (R/W) channel 517, which can encode data during write operations and reconstruct user data retrieved from a memory, such as disc(s) 509, during read operations. A preamplifier circuit (preamp) 518 can apply write currents to the head(s) 519 and provide pre-amplification of read-back signals. In some embodiments, the preamp 518 and head(s) 519 may be considered part of the R/W channel 517. A servo control circuit 520 may use servo data to provide the appropriate current to the coil 524, sometimes called a voice coil motor (VCM), to position the head(s) 519 over a desired area of the disc(s) 509. The controller 506 can communicate with a processor 522 to move the head(s) 519 to the desired locations on the disc(s) 509 during execution of various pending commands in the command queue 513.
DSD 500 may include a parameter selection module (PSM) 530. The PSM 530 may perform the methods and processes described herein to generate a first parameter set using a first algorithm or process, and a second parameter set using a second algorithm or process. For example, the PSM 530 may generate the first parameter set using a least mean squares function, and generate the second parameter set using a general cost function. The PSM 530 may then constrain the second parameter set based on the first parameter set to determine a constrained parameter set, and use the constrained parameter set to establish settings used in the R/W channel 517. The PSM 530 may be a processor, controller, other circuit, or a portion thereof. The PSM 530 may include a set of software instructions that, when executed by a processing device, perform the functions of the PSM 530. The PSM 530 may be part of or executed by the R/W channel 517, included in or performed by other components of the DSD 500, a stand-alone component, or any combination thereof.
The illustrations of the embodiments described herein are intended to provide a general understanding of the structure of the various embodiments. The illustrations are not intended to serve as a complete description of all of the elements and features of apparatus and systems that utilize the structures or methods described herein. Many other embodiments may be apparent to those of skill in the art upon reviewing the disclosure. Other embodiments may be utilized and derived from the disclosure, such that structural and logical substitutions and changes may be made without departing from the scope of the disclosure. Moreover, although specific embodiments have been illustrated and described herein, it should be appreciated that any subsequent arrangement designed to achieve the same or similar purpose may be substituted for the specific embodiments shown.
This disclosure is intended to cover any and all subsequent adaptations or variations of various embodiments. Combinations of the above embodiments, and other embodiments not specifically described herein, will be apparent to those of skill in the art upon reviewing the description. For example, the adaptive algorithm 212 and the general cost function 210 of