Radiation detection with non-parametric decompounding of pulse pile-up

Information

  • Patent Grant
  • 12066473
  • Patent Number
    12,066,473
  • Date Filed
    Monday, March 23, 2020
  • Date Issued
    Tuesday, August 20, 2024
Abstract
A method of determining a spectrum of energies of individual quanta of radiation received in a radiation detector is disclosed. Spectrum sensitive statistics are computed from a time series of digital observations from the radiation detector, defining a mapping from a density of amplitudes of the pulses to the spectrum sensitive statistics. The spectrum is determined by estimating the density of amplitudes of the pulses by applying an inversion of the mapping to the spectrum sensitive statistics. The statistics may be based on a first set of nonoverlapping time intervals of constant length L at least as long as a duration of the pulses without regard to entirety of clusters of the pulses; and a second set of nonoverlapping time intervals of constant length L1 less than L also without regard to entirety of clusters of the pulses. A method of estimating count rate is also disclosed.
Description
FIELD OF THE INVENTION

A novel method is provided for estimating the energy distribution of quanta of radiation such as photons incident upon a detector in spectroscopic systems such as in X-ray or gamma-ray spectroscopy. The method is particularly useful for count-rate regimes where pulse pile-up is an issue. A key step in the derivation of the estimator of an embodiment of the invention is the novel reformulation of the problem as a decompounding problem of a compound Poisson process. The method can be applied to any form of radiation detector detecting quanta or other corpuscles of radiation, such as x-rays, gamma rays or other photons, neutrons, atoms, molecules, or seismic pulses. Applications of spectroscopy from such detectors are well known. Such applications are described widely in the prior art, including in international patent applications PCT/AU2005/001423, PCT/AU2009/000393, PCT/AU2009/000394, PCT/AU2009/000395, PCT/AU2009/001648, PCT/AU2012/000678, PCT/AU2014/050420, PCT/AU2015/050752, PCT/AU2017/050514, and PCT/AU2017/050512, each of which is incorporated herein in its entirety for the purpose of describing potential applications of the current invention and any other background material needed to understand the current invention.


1 BACKGROUND

X-RAY and gamma-ray spectroscopy underpin a wide range of scientific, industrial and commercial processes. One goal of spectroscopy is to estimate the energy distribution of photons incident upon a detector. From a signal processing perspective, the challenge is to convert the stream of pulses output by a detector into a histogram of the area under each pulse. Pulses are generated according to a Poisson distribution whose rate corresponds to the intensity of X-rays or gamma-rays used to illuminate the sample. Increasing the intensity results in more pulses per second on average and hence a shorter time before an accurate histogram is obtained. In applications such as baggage scanning at airports, this translates directly into greater throughput. Pulse pile-up occurs when two or more pulses overlap in the time domain. As the count rate (the average number of pulses per second) increases so does the incidence of pulse pile-up. This increases the difficulty in determining the number of pulses present and the area under each pulse. In the limit, the problem is ill-conditioned: if two pulses start at essentially the same time, their superposition is indistinguishable from a single pulse. The response of an X-ray or gamma-ray detector to incident photons can be modelled as the superposition of convolutions of pulse shape functions Φj(t) with Dirac-delta impulses,










r(t) = Σ_{j=−∞}^{∞} aj δ(t − τj) ★ Φj(t).  (1)







The arrival times . . . , τ−1, τ0, τ1, . . . are unknown and form a Poisson process. Each photon arrival is modelled as a Dirac delta at time τj, with amplitude aj that is proportional to the photon energy and induces a detector pulse shape response Φj. The amplitudes aj are realizations of identically distributed random variables Aj whose common probability density function ƒA(x) is unknown. The pulse shape function Φj is determined by the geometry of the detector and the photon interaction. In some systems the variation in pulse shape is minimal and may be ignored, while in other systems (e.g., HPGe) individual pulse shapes may differ significantly from one another [1]. It is assumed all pulse shapes are causal, i.e., Φj(t) = 0 for t < 0, uni-modal, of finite energy, and decay exponentially toward zero as t → ∞. Each pulse shape function is normalized to unit area, i.e., ∫_{−∞}^{∞} Φj(t) dt = 1, so that the area under the pulse is given by Aj. The observed signal consists of the detector output corrupted by noise, i.e.,










s(t) = r(t) + w(t).  (2)







The mathematical goal of pulse pile-up correction is to estimate ƒA(x) given a uniformly-sampled, finite-length version of s(t). We assume throughout that the noise w(t) is zero-mean Gaussian with known variance σ2. We also assume the photon arrival times form a homogeneous Poisson process with a known rate. Let S, R and W be the uniformly sampled time-series corresponding to these signals, consisting of the detector response R corrupted by a noise process W, where Sk, Rk and Wk denote the kth elements of each series and 0 ≤ k < K:









S = R + W  (3)
  = {s(tk): 0 ≤ k < K}  (4)
R = {r(tk): 0 ≤ k < K}  (5)
W = {w(tk): 0 ≤ k < K}  (6)







where t0 < t1 < . . . < t_{K−1}.
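As a concrete illustration of the model in (1)-(3), the following sketch simulates a sampled detector output: Poisson arrival times, i.i.d. amplitudes, a causal exponentially-decaying pulse shape, and additive Gaussian noise. All numeric values (rate, decay constant, noise level, the two-point amplitude law) are illustrative assumptions, not parameters from the disclosure.

```python
import math
import random

random.seed(0)
K = 2000           # number of samples (illustrative)
rate = 0.05        # mean photon arrivals per sample (illustrative)
sigma = 0.02       # noise standard deviation (assumed known)
decay = 8.0        # pulse decay constant in samples (illustrative)

def pulse(t):
    # causal, uni-modal, exponentially decaying pulse with unit area
    return math.exp(-t / decay) / decay if t >= 0 else 0.0

# Poisson process: exponential inter-arrival gaps; amplitudes a_j ~ f_A
arrivals = []
t = random.expovariate(rate)
while t < K:
    arrivals.append((t, random.choice([1.0, 2.0])))   # (tau_j, a_j)
    t += random.expovariate(rate)

# eq. (1): r(k) is a superposition of scaled, shifted pulse shapes
r = [sum(a * pulse(k - tau) for tau, a in arrivals) for k in range(K)]
# eq. (2)-(3): the observed samples are the response plus Gaussian noise
s = [rk + random.gauss(0.0, sigma) for rk in r]
```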













Summary of Pulse Processing Methodologies: Numerous approaches have been proposed over the decades to address the issue of pulse pile-up. Approaches can be broadly categorized into two types: time-domain based and energy-domain based.

A popular strategy is to attempt to detect when pile-up has occurred in the time domain, and either reject or compensate for the affected pulses. Early spectroscopic systems adopted a rejection-based approach along with matched filtering. The disadvantage of this approach is that an increasing proportion of pulses are rejected as the probability of pile-up grows. The system rapidly succumbs to paralysis, placing an upper limit on the count rate [1]. Strategies that compensate or correct for pile-up have grown in number with the increase of cheap computational power. These include template fitting [2], baseline subtraction [3], adaptive filtering [4, 5], sparse regression [6, 7] and more. These approaches all attempt to identify and compensate for pile-up in the time domain, and are generally best suited to systems with low pulse shape variation. The complexity of these approaches increases significantly with increasing variability between pulse shapes Φj. It can be shown that any method that attempts to characterise individual pulses will suffer from pile-up. The best that these approaches can do is to delay the onset of pile-up.

Energy-based approaches attempt to address pile-up based on the statistics of an ensemble of pulses rather than individual pulses. They typically operate on histograms of estimated energy (the areas under clusters of pulses). The early work of Wielopolski and Gardner [8] and more recent extensions of their idea [9] operate primarily in the energy domain using ensemble-based strategies. Trigano et al. [10, 11] estimate the incident spectrum utilizing marginal densities from the joint distribution of the statistical properties of variable-length clusters of pulses, where the beginning and end of each cluster is detected. This circumvents the need to identify individual pulses, and is robust to pulse shape variation. Ilhe et al. [12] examine exponential shot-noise processes, restricting pulse shapes to a simple exponential to obtain tractable results. Further work [13] has been done to allow a wider range of pulse shapes. In both cases, knowledge is required of the pulse shape, along with estimates of the characteristic function and its derivative.


2 SUMMARY OF THE INVENTION

We chose an energy-based pile-up correction approach in order to i) avoid the limitations associated with the detection of individual pulses [14] and ii) handle pulse shape variation without undue increase in computational complexity. Rather than utilizing the joint distribution [10, 11] or shot-process [12] approaches, we recast the pile-up problem as a ‘decompounding’ problem of a compound Poisson process. A compound Poisson process is a discrete-time random process where each component consists of the sum of a random number of independent identically distributed random variables, the number of random variables in each sum being Poisson distributed [15]. ‘Decompounding’ of a compound Poisson process is the task of using the random sums to estimate the distribution from which the random variables have been drawn. Buchmann and Grübel [16] formulated the decompounding of compound Poisson processes in the context of insurance claims and queuing theory. Decompounding of uniformly sampled compound Poisson processes has received some attention in recent times [16, 17, 18, 12, 19]. These derivations frequently assume (reasonably) that each event is detectable (i.e., there is no ambiguity regarding the number of events), or that the density estimators are conditioned on at least one event occurring in each observation [20]. These assumptions are of limited value when addressing the spectroscopic pile-up problem.


The investigation of non-parametric decompounding without conditioning on event detection has received relatively little attention in the literature. Gugushvili [18] proposes a non-parametric, kernel-based estimator for the decompounding problem in the presence of Gaussian noise. In embodiments of the invention, the inventors have conceived that once a method for selecting the kernel bandwidth is obtained, along with a method for transforming the observed detector output to fit the mathematical model, this estimator can be readily extended and applied to a reformulation of the spectroscopic pile-up problem.


In accordance with a first broad aspect of the invention there is provided a method of determining a spectrum of energies of individual quanta of radiation received in a radiation detector, the method comprising the steps of: (1) obtaining a time series of digital observations from the radiation detector comprising pulses corresponding to the detection of the individual quanta; (2) computing spectrum sensitive statistics from the detector signal, the spectrum sensitive statistics defining a mapping from a density of amplitudes of the pulses to the spectrum sensitive statistics; (3) determining the spectrum by estimating the density of amplitudes of the pulses by applying an inversion of the mapping to the spectrum sensitive statistics.


In embodiments, the spectrum sensitive statistics may be based on a sum of the digital observations over a plurality of time intervals and the mapping may be defined using an approximate compound Poisson process, which may be augmented by a modelled noise. The mapping may be expressed as a relation between characteristic functions of the amplitudes, the spectrum sensitive statistics and the modelled noise. The characteristic functions of the spectrum sensitive statistics may be computed with the use of a histogram of the sum of the digital observations to which is applied an inverse Fourier transform. Computation of a characteristic function of the amplitudes may comprise the use of a low pass filter.
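The histogram route to the characteristic function described above can be sketched as follows: the empirical probability mass vector of the (binned) interval sums is transformed, which agrees with the direct empirical characteristic function. The data values below are arbitrary placeholders, assumed already binned to an integer lattice.

```python
import cmath

# hypothetical interval sums x_j, already binned to an integer lattice
xs = [0, 1, 1, 2, 3, 1, 0, 2, 2, 1]
M = 8                               # histogram length (covers 0..M-1)
hist = [0.0] * M
for x in xs:
    hist[x] += 1.0 / len(xs)        # empirical probability mass vector

def phi_from_hist(u):
    # characteristic function as a transform of the histogram:
    # phi_X(u) = sum_x p(x) e^{iux}
    return sum(p * cmath.exp(1j * u * x) for x, p in enumerate(hist))

# agrees with the direct empirical characteristic function of the samples
direct = sum(cmath.exp(1j * 0.7 * x) for x in xs) / len(xs)
```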


In a first embodiment, the plurality of time intervals are nonoverlapping and have a constant length L, and each interval is selected to encompass zero or more approximately entire clusters of the pulses. This may be accomplished by requiring that the detector signal not exceed a maximum value at the beginning and end of each time interval. In this embodiment, the compound Poisson process may be defined as a sum of the amplitudes of the pulses in each time interval. The mapping may be expressed as defined in equations (40) and (41), which may be augmented by windowing functions.


In a second embodiment, the plurality of intervals comprises a first set of nonoverlapping time intervals of constant length L selected without regard to entirety of clusters of the pulses, and a second set of nonoverlapping time intervals of constant length L1 less than L, also selected without regard to entirety of clusters of the pulses. L is at least as long as a duration of the pulses, and preferably L1 is less than the duration of the pulses. In this embodiment, the compound Poisson process may be defined as in Section 6. The mapping may be expressed as defined in Section 6. The second embodiment may utilise the processes and calculations defined for the set of time intervals in the first embodiment, applied to each set of time intervals.


In embodiments, a data-driven strategy is used that results in a near optimal choice for a kernel parameter, which minimises the integrated-square-of-errors (ISE) of the estimated probability density function of incident photon energies.


According to a second broad aspect of the invention there is provided a method of estimating count rate of individual quanta of radiation received in a radiation detector, the method comprising the steps of: (1) obtaining a time series of digital observations from the radiation detector comprising pulses corresponding to the detection of the individual quanta; (2) computing spectrum sensitive statistics from the detector signal, the spectrum sensitive statistics using the intervals of constant length L and constant length L1 as described above in relation to the 1st broad aspect; (3) determining an estimate of a characteristic function of the compound Poisson process using formula (109); (4) estimating the count rate from the estimate of the characteristic function. Step (4) above may be achieved by using an optimisation routine or some other means to fit a curve, estimating a DC offset of a logarithm of the estimate of the characteristic function, or fitting a curve to the logarithm of the estimate of the characteristic function.
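A minimal sketch of the idea behind step (4) above: for a compound Poisson process, log ϕY(u) = λ(ϕA(u) − 1), and since ϕA(u) decays for a continuous amplitude density, the negative real part of the log characteristic function at large u approaches λ, i.e., a DC offset. This toy version ignores noise and uses an assumed uniform amplitude density; formula (109) itself is outside this excerpt.

```python
import cmath
import math
import random

random.seed(4)
lam_true = 1.5      # photons per interval (to be recovered)
N = 40000           # number of simulated intervals

def poisson(l):
    # Knuth's method: count uniform draws until their product < e^{-l}
    k, prod, target = 0, random.random(), math.exp(-l)
    while prod >= target:
        k += 1
        prod *= random.random()
    return k

# noise-free compound Poisson sums with a continuous amplitude density
ys = [sum(random.uniform(0.5, 1.5) for _ in range(poisson(lam_true)))
      for _ in range(N)]

def phi_hat(u):
    # empirical characteristic function of Y
    return sum(cmath.exp(1j * u * y) for y in ys) / N

# log phi_Y(u) = lam*(phi_A(u) - 1); phi_A(u) -> 0 for large u, so the
# DC offset of -Re log phi_hat(u) over a high-frequency band estimates lam
band = [20.0 + 0.5 * j for j in range(10)]
lam_est = -sum(cmath.log(phi_hat(u)).real for u in band) / len(band)
```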


The rest of this application is organized as follows. Sections 3, 4 and 5 relate to the 1st embodiment of the 1st aspect of the invention. Section 3 provides preliminary background, defines notation, outlines the mathematical model and gives a derivation of the estimator of the 1st embodiment, including modifications. Section 4 shows the performance of the modified estimator of the 1st embodiment for both simulated and experimental data, and discusses the results. Section 5 provides a conclusion for the 1st embodiment. Section 6 describes the 2nd embodiment, with reference to the 1st embodiment where relevant. Section 7 describes the 2nd aspect of the invention, a novel method of estimating count rate.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 shows the relationship between realizations of the interval Y and the sampled detector response.



FIG. 2 plots a typical estimate made by the data-driven logistic shaped filter for an experiment with a parameter pair.



FIG. 3 plots a typical estimate made at the same operating point as FIG. 2, but with an estimator having a rectangular filter and a selected bandwidth.



FIG. 4 shows the distribution densities of the integrated square of the error (ISE) measures as a function of sample count using a rectangular filter and various fixed bandwidths.



FIG. 5 shows the distribution densities of the ISE measure as a function of the total number of estimates N in each histogram at a first count rate.



FIG. 6 shows the distribution densities of the ISE measure as a function of the total number of estimates N in each histogram at a second count rate.



FIG. 7 shows the distribution densities of the ISE measure as a function of the total number of estimates N in each histogram at a third count rate.



FIG. 8 shows plots of the observed and estimated probability mass vectors. FIG. 9 plots various quantities obtained during the estimation process.



FIG. 10 illustrates the observed and true probability density of input photon energies for the experiment from which FIGS. 9-13 were derived.



FIG. 11 illustrates the trajectory of the curve γ in the complex plane.



FIG. 12 illustrates internal quantities similar to FIG. 9, however there are some additional signals.



FIG. 13 relates to the 2nd broad aspect of calculating the count rate.





3 DERIVATION OF ESTIMATOR OF THE FIRST EMBODIMENT

The general approach we take to addressing pile-up is based on the following strategy: i) obtain statistics from s(t) that are sensitive to the distribution of incident photon energies, and estimate those statistics using the observed, finite-length sampled version of s(t); ii) obtain a mapping from the density of incident photon energies to the statistical properties of the observed statistics; iii) estimate the density of the incident photon energies by inverting the mapping, thereby producing an estimate of the spectrum. Section 3.1 describes our choice of statistics. Section 3.2 argues that these statistics (approximately) have the same distribution as a compound Poisson process. Section 3.3 introduces a decompounding technique for recovering the spectrum from these statistics. It is based on the decompounding algorithm in [18] but further developed to obtain near optimal performance in terms of the integrated square of error.


3.1 Choice of Statistic


We wish to obtain estimates of the photon energies from the observed signal given in (2). In typical modern spectroscopic systems, the detector output s(t) is uniformly sampled by an ADC. Without loss of generality, we assume the raw observations available to the algorithm are {s(k): k ∈ ℤ≥0}. Since identification of individual pulses can be difficult, we look instead for intervals of fixed length L ∈ ℤ>0 containing zero or more clusters of pulses. Precisely, we define these intervals to be [Tj, Tj+L) where










T0 = inf{k: |s(k)| ≤ ϵ, |s(k+L)| ≤ ϵ, k ≥ 0}  (7)
Tj = inf{k: |s(k)| ≤ ϵ, |s(k+L)| ≤ ϵ, T_{j−1} + L ≤ k, k ≥ 0}.  (8)







Here, ϵ is chosen as a trade-off between errors in the energy estimate and the probability of creating an interval. The value of ϵ should be sufficiently small to ensure the error in the estimate of total photon energy arriving within each interval is acceptably low, yet sufficiently large with respect to the noise variance to ensure a large number of intervals are obtained. Although the probability of partitioning the observed data into intervals approaches zero as the count-rate goes to infinity, the onset of paralysis occurs at higher count-rates than for pile-up rejection strategies based on individual pulses, since multiple photons are permitted to pile up within each interval. Section 4.2 describes the selection of L and ϵ for real data. Each interval contains an unknown, random number of pulses and may contain zero pulses.


We estimate the total photon energy xj in the interval [Tj, Tj+L) using the sampled raw observations. Since the area under each pulse is proportional to the photon energy Aj defined in (1), we let










xj = Σ_{k=Tj}^{Tj+L−1} s(k)  (9)







The number of photon arrivals, the energy of each arriving photon and the detector output noise in each interval [Tj, Tj+L) are assumed to be random and independent of other intervals. For pulse shapes with exponential decay, a small amount of the photon energy arriving in an interval may be recorded in the next interval. The amount of leakage is proportional to ϵ, and is negligible for sufficiently small ϵ. Consequently, the estimates x1, x2, . . . may be treated as the realization of a weakly-dependent, stationary process where each estimate is identically distributed according to the random variable X. This relationship is illustrated in FIG. 1 for the noise free case using a typical pulse shape.
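The interval construction of (7)-(9) can be sketched directly on a synthetic low-rate signal. The values of L, ϵ and the signal parameters below are illustrative assumptions, not values from the disclosure.

```python
import math
import random

random.seed(2)
K, L, eps = 3000, 50, 0.05    # sample count, interval length, threshold
sigma, rate, decay = 0.01, 0.01, 5.0

def pulse(t):
    return math.exp(-t / decay) / decay if t >= 0 else 0.0

arrivals = []
t = random.expovariate(rate)
while t < K:
    arrivals.append((t, random.choice([1.0, 2.0])))
    t += random.expovariate(rate)
s = [sum(a * pulse(k - tau) for tau, a in arrivals)
     + random.gauss(0.0, sigma) for k in range(K)]

def find_starts(s, L, eps):
    # eq. (7)-(8): T_j is the first k with |s(k)| <= eps and
    # |s(k+L)| <= eps, subject to T_j >= T_{j-1} + L
    starts, k = [], 0
    while k + L < len(s):
        if abs(s[k]) <= eps and abs(s[k + L]) <= eps:
            starts.append(k)
            k += L
        else:
            k += 1
    return starts

starts = find_starts(s, L, eps)
# eq. (9): the statistic x_j is the sum of samples over [T_j, T_j+L)
xs = [sum(s[k] for k in range(T, T + L)) for T in starts]
```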


3.2 Approximation with Compound Poisson Process


In this subsection we describe the distribution of X in terms of ƒA(x). We will then invert this in Section 3.3 to obtain an estimator for the density ƒA(x). Using (9), (2), (1) and the fact that Φ(t) is causal, we have










xj = Σ_{{ℓ: τℓ < Tj}} aℓ Σ_{k=Tj}^{Tj+L−1} Φℓ(k − τℓ) + Σ_{{ℓ: Tj ≤ τℓ < Tj+L}} aℓ Σ_{k=Tj}^{Tj+L−1} Φℓ(k − τℓ) + Σ_{{ℓ: τℓ ≥ Tj+L}} aℓ Σ_{k=Tj}^{Tj+L−1} Φℓ(k − τℓ) + Σ_{k=Tj}^{Tj+L−1} w(k)  (10)







As justified below, this simplifies to










xj ≈ yj + zj  (11)







where






yj = Σ_{{ℓ: Tj ≤ τℓ < Tj+L}} aℓ  (12)







and






zj = Σ_{k=Tj}^{Tj+L−1} w(k).  (13)







Both {yj} and {zj} are i.i.d. sequences of random variables. We denote their distributions by Y and Z. The distribution of Z is fully determined by the distribution of w(t), which is assumed zero-mean Gaussian with known variance σ2. Moreover, Y is a compound Poisson process, since the number of terms in the summation (the number of photon arrivals in an interval of length L) has Poisson statistics. Equations (11)-(13) are justified as follows. The first term of (10) represents leakage from earlier intervals and is approximately zero. This is easily shown for Gaussian noise by performing a Taylor expansion about ϵ = 0:










Pr(|s(k)| < ϵ) = (1/2) erf((r(k) + ϵ)/(√2 σ)) − (1/2) erf((r(k) − ϵ)/(√2 σ))  (14)
≈ 0.79788 (ϵ/σ) e^{−r(k)²/(2σ²)} + O(ϵ³).  (15)
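A quick numerical check of the expansion (14)-(15), using illustrative values of r(k), ϵ and σ, confirms that the first-order term dominates as ϵ shrinks:

```python
import math

def pr_exact(r, eps, sigma):
    # eq. (14): probability that |s(k)| < eps when s(k) ~ N(r, sigma^2)
    return (0.5 * math.erf((r + eps) / (math.sqrt(2.0) * sigma))
            - 0.5 * math.erf((r - eps) / (math.sqrt(2.0) * sigma)))

def pr_taylor(r, eps, sigma):
    # eq. (15): first-order term; 0.79788... = sqrt(2/pi)
    return math.sqrt(2.0 / math.pi) * (eps / sigma) * math.exp(
        -r * r / (2.0 * sigma * sigma))

r, sigma = 0.3, 0.1                       # illustrative values
exact_big, approx_big = pr_exact(r, 0.01, sigma), pr_taylor(r, 0.01, sigma)
exact_small, approx_small = pr_exact(r, 0.001, sigma), pr_taylor(r, 0.001, sigma)
# agreement improves as eps -> 0, consistent with the O(eps^3) remainder
```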







Thus there is a finite but small probability that some energy belonging to a previous interval will be included in the current estimate. In practice, this contribution is comparable to the noise for sufficiently small ϵ. The third term in (10) is zero since Φ(t) is causal. The second term in (10) can be written as













Σ_{{ℓ: Tj ≤ τℓ < Tj+L}} aℓ Σ_{k=Tj}^{Tj+L−1} Φℓ(k − τℓ) ≈ Σ_{{ℓ: Tj ≤ τℓ < Tj+L}} aℓ  (16)








where we assume the pulse shapes Φℓ(t) are sufficiently smooth such that









Σ_{k=Tj}^{Tj+L−1} Φℓ(k − τℓ) ≈ ∫_{−∞}^{∞} Φℓ(t) dt = 1.






It approximates the total energy of all the photons arriving in the interval [Tj, Tj+L). Let νj designate the number of photon arrivals in the interval [Tj, Tj+L). We assume νj is a realization of a homogeneous Poisson process with rate parameter λ, where λ is expressed in terms of the expected number of photons per interval of length L. Henceforth we shall assume that (11) holds exactly, and write









X = Y + Z.  (17)







Finally, we write xj as










xj = yj + zj  (18)








where we assume Z has known variance σ2. In this subsection we model the statistic of Section 3.1 using a compound Poisson process. This allows us to derive an estimator for the density ƒA(x) in terms of observable quantities. The number of photons arriving in the interval [Tj, Tj+L) is a Poisson random variable which we designate νj. The total energy in the interval, Y, can be modelled as a compound Poisson process, i.e.,










yj = Σ_{k=0}^{νj−1} a_{ℓj,1+k} for νj > 0, and yj = 0 for νj = 0  (19)
νj ∼ Pn(λ)  (20)








where ℓj,1 = min{ℓ: Tj ≤ τℓ < Tj+L} is the index of the first photon arrival time in the interval, the arrival times are assumed ordered, and the aℓ representing photon energy are independent realizations of the random variable A with density function ƒA(x). The {νj} form a homogeneous Poisson process with rate parameter λ. The Poisson rate λ is expressed in terms of the expected number of photons per interval of length L.
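The compound Poisson structure can be checked by simulation: the standard identities E[Y] = λE[A] and Var[Y] = λE[A²] [15] should hold for the interval sums. The amplitude law and rate below are illustrative assumptions.

```python
import math
import random

random.seed(3)
lam = 2.0                   # photons per interval (illustrative)
amps = [1.0, 2.0]           # A uniform on {1, 2}: E[A]=1.5, E[A^2]=2.5
N = 50000

def poisson(l):
    # Knuth's method for Poisson(l) variates
    k, prod, target = 0, random.random(), math.exp(-l)
    while prod >= target:
        k += 1
        prod *= random.random()
    return k

# eq. (19)-(20): y_j is a sum of nu_j i.i.d. amplitudes, nu_j ~ Pn(lam)
ys = [sum(random.choice(amps) for _ in range(poisson(lam))) for _ in range(N)]
mean_y = sum(ys) / N
var_y = sum((y - mean_y) ** 2 for y in ys) / N
# theory: E[Y] = lam*E[A] = 3.0, Var[Y] = lam*E[A^2] = 5.0
```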










yj ≈ Σ_{k=Tj}^{Tj+L} r(k).  (21)







The relationship between realizations of Y and the sampled detector response is illustrated in FIG. 1. The observed xj can be approximated in terms of yj and zj by substituting (2) into (9),










xj = Σ_{k ∈ [T̂j, T̂j+L]} (r(k) + w(k))  (22)
   = Σ_{k=Tj}^{Tj+L} r(k) + Σ_{k=Tj}^{Tj+L} w(k)  (23)
   ≈ yj + zj  (24)








where yj is a realization of the unobservable random variable Yj that represents the photon energy in an interval of the discrete-time detector response,










Yj = Σ_{k ∈ [Tj, Tj+L]} Rk  (25)







where zj is a realization of Z, an independent random variable representing errors in the sampling process and estimation of Tj. We assume Z has known variance σ2. With these definitions of X and Y, the number of intervals which can be found in a finite length of detector output is a random variable N. At high count-rates this approach succumbs to paralysis, as the probability of being able to partition the observed data into intervals approaches zero. The onset of paralysis occurs at higher count-rates than for pile-up rejection-based strategies, since multiple photons are permitted to pile up within each interval. Assume the time-series defined in (3)-(6) has been sampled uniformly. Without loss of generality, assume unit sample intervals beginning at t0 = 0, i.e., tk = k, 0 ≤ k < K. Let R be a discrete-time random process representing the sampled detector response of (1). Let Y = {Yj: 0 ≤ j < N} be a discrete-time random process whose components Yj represent the total photon energy arriving during a fixed time interval. A compound Poisson process can be used to model Y, i.e.,










Yj = Σ_{k=1}^{νj} Ak for νj > 0, and Yj = 0 for νj = 0  (26)
νj ∼ Pn(λ)  (27)








where νj is an independent Poisson random variable, and the Ak are independent identically distributed random variables with density function ƒA(x). The {νj} form a homogeneous Poisson process with rate parameter λ. The process Y is not directly observable. Assume the pulse shape Φ(t) has finite support. Let 𝟙A(t) be the indicator function for the set A. Let the pulse length ℓ be given by ℓ = sup({t: Φ(t) > 0}) − inf({t: Φ(t) > 0}). Let S = R + W = {s(k): 0 ≤ k < K} be a discrete-time random process representing the observed detector output given by (2). It consists of the detector response R corrupted by a noise process W. Without loss of generality, we assume unit sample intervals. From the observations S we form the process X, where










Xj = Yj + Zj  (28)








and where Zj is a random variable from an independent noise process of known variance σ2. A simple model for testing the theory is obtained when we let the pulse shape Φ(t) = 𝟙(0,1)(t) in (1), in which case we let Xj = Sj, and N is simply the sample length K. Obtaining Xj from S is more complicated for real data. In that case we partition the process S into non-overlapping blocks of length L, where L > ℓ. The Poisson rate λ is expressed in photons per block. The start of each block, Tj ∈ ℤ≥0, is chosen such that the total energy of any pulse is fully contained within the block in which it arrives:










Tj = min{k ∈ ℤ: Rk = 0, R_{k+L} = 0, T_{j−1} + L < k < K − L}  (29)








FIG. 1 shows that







Yj = Σ_{k=Tj}^{Tj+L} Rk.







We let







Xj = Σ_{k=T̂j}^{T̂j+L} Sk  (30)








where T̂j is an estimate of Tj. Section 4.2 describes the selection of L and ϵ for real data. With this definition of Xj, the number of components in Y becomes a random variable for a given sample length K. At high count-rates this approach succumbs to paralysis, as the probability of being able to create a block approaches zero. The onset of paralysis occurs at higher count-rates than for pile-up rejection-based strategies, since multiple photons are permitted to pile up within each block. Let Y = {Yj: 0 ≤ j < N} be a discrete-time random process whose components Yj are given by










Yj = Σ_{k=Tj}^{Tj+L} Rk  (31)
Tj = min{k > T_{j−1} + L: Rk < d, R_{k+L} < d}  (32)







where L ∈ ℤ is a constant chosen such that L > ℓ, and d is a small threshold value close to zero. The random variable Yj thus represents the total photon energy arriving during a fixed time interval of length L. The value of d ensures the signal associated with photon arrivals is very small at the start and end of each interval. This is illustrated in FIG. 1. A compound Poisson process can be used to model Y, i.e.,










Yj = Σ_{k=1}^{νj} Ak for νj > 0, and Yj = 0 for νj = 0  (33)
νλ = {νj: 0 ≤ j < N}  (34)
νj ∼ Pn(λ)  (35)








where νλ is a homogeneous Poisson process with rate parameter λ, and the Ak are independent identically distributed random variables with density function ƒA(x). Let S = R + W be a discrete-time random process representing the sampled detector output given by (2). It consists of the detector response R corrupted by a noise process W. The process Y is not directly observable. Using (2), (25) and (32), we model the observations by the process X = {Xj: 0 ≤ j < N}, i.e.,











Xj = Σ_{k=Tj}^{Tj+L} Sk  (36)
   = Σ_{k=Tj}^{Tj+L} (Rk + Wk)  (37)
   = Yj + Σ_{k=Tj}^{Tj+L} Wk  (38)
   ≜ Yj + Zj  (39)










where Z is a noise process of known variance σ2. All the random variables (νj, A1, . . . , Aνj, Zj) involved in modelling a given observation Xj are assumed independent. Let X1, X2, . . . , XN be N independent, identically distributed observations. Let X, Y, Z, A be the collections of Xj, Yj, Zj, Aj for 0 ≤ j < N. Let the corresponding characteristic functions be ϕX, ϕY, ϕZ, ϕA.


3.3 Basic Form of Estimator





We seek to invert the mapping from the distribution of photon energy A to the distribution of X. Our strategy is to first obtain the characteristic function of X in terms of ƒA, then invert the mapping assuming the count-rate and noise characteristics are known. Let ϕX, ϕY, ϕZ, ϕA be the characteristic functions of X, Y, Z, A. It is well known [15] that for the compound Poisson process Y with rate λ,











ϕY(u) = e^{−λ} e^{λ ϕA(u)}  (40)








and since X=Y+Z then











ϕX(u) = ϕY(u) ϕZ(u).  (41)
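Relation (40) can be verified by Monte Carlo at a test frequency, using an assumed two-point amplitude law so that ϕA(u) is known in closed form (noise-free, so ϕZ ≡ 1 and ϕX = ϕY):

```python
import cmath
import math
import random

random.seed(5)
lam, N, u = 1.0, 30000, 1.0

def poisson(l):
    # Knuth's method for Poisson(l) variates
    k, prod, target = 0, random.random(), math.exp(-l)
    while prod >= target:
        k += 1
        prod *= random.random()
    return k

# compound Poisson sums with A uniform on {1, 2}
ys = [sum(random.choice([1.0, 2.0]) for _ in range(poisson(lam)))
      for _ in range(N)]
phiY_hat = sum(cmath.exp(1j * u * y) for y in ys) / N

# eq. (40): phi_Y(u) = e^{-lam} e^{lam * phi_A(u)}
phiA = 0.5 * cmath.exp(1j * u) + 0.5 * cmath.exp(2j * u)
phiY_theory = cmath.exp(-lam) * cmath.exp(lam * phiA)
```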







Given the observations xj we can form an empirical estimate ϕ̂X of the characteristic function of X. Treating this as the true characteristic function, we can invert (40), (41) to obtain the characteristic function of A and then take the Fourier transform to find the amplitude spectrum ƒA. Specifically, using (40), (41) and exploiting the assumption that Z is Gaussian to ensure ϕZ(u) is non-zero for all u ∈ ℝ, we let γ: ℝ → ℂ be the curve described by










    γ(u) = ϕ_X(u) / (e^{−λ} ϕ_Z(u))    (42)

         = e^{λ ϕ_A(u)}    (43)







Temporarily assuming ∀u, γ(u)≠0, after taking the distinguished logarithm of (43) and rearranging we have











    ϕ_A(u) = (1/λ) dlog(γ)(u).    (44)







Ideally, ƒA is recovered by taking a Fourier transform











    f_A(x) = ∫_{−∞}^{∞} e^{−i 2π u x} ϕ_A(u) du    (45)







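Numerically, the distinguished logarithm in (44) can be realised for a densely sampled, nonvanishing curve by unwrapping the phase so that log γ is continuous and real at u = 0. A minimal sketch (our construction; the function name is ours):

```python
import numpy as np

def dlog(curve):
    """Distinguished logarithm of a sampled nonvanishing curve gamma(u),
    u increasing from 0. Assumes the sampling is dense enough that the true
    phase changes by less than pi between consecutive samples."""
    mag = np.log(np.abs(curve))
    phase = np.unwrap(np.angle(curve))
    phase -= phase[0]          # pin the branch so dlog is real at u = 0 (gamma(0) > 0)
    return mag + 1j * phase

# Check on gamma(u) = e^{lam * phi_A(u)} for a point-mass spectrum at mu,
# for which phi_A(u) = e^{i mu u}:
lam, mu = 2.0, 5.0
u = np.linspace(0.0, 4.0, 2001)
gamma = np.exp(lam * np.exp(1j * mu * u))
phi_A_rec = dlog(gamma) / lam          # (44)
err = np.abs(phi_A_rec - np.exp(1j * mu * u)).max()
```

With the principal-branch logarithm the recovered phase would jump by 2π wherever λℑϕ_A leaves (−π, π]; unwrapping restores the continuous branch that (44) requires.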
The basic form of our proposed estimator is given in (88) and is derived from (45) via a sequence of steps. First, ϕX is estimated from the data (Step 1). Simply substituting this estimate for ϕX in (42) does not produce an ISE optimal estimate of γ. The approximate ISE is obtained from an approximate estimate of the error distribution of ϕX (Step 2). We then determine a sensible windowing function G(u) (in Step 3) and estimate γ by












    γ̂(u) = G(u) ϕ̂_X(u) / (e^{−λ} ϕ_Z(u)).    (46)







The windowing function G(u) is designed to minimise the approximate ISE between ƒA and our estimate of ƒA based on (44), (45) and (46), but with γ in (44) replaced by (46). A similar idea is used for estimating ϕA from (44): a weighting function H(u) is found (in Step 4) such that replacing ϕA in (45) by












    ϕ̂_A(u) = H(u) (1/λ) dlog(γ̂)(u)    (47)








produces a better estimate of ƒA than using the unweighted estimate (1/λ) dlog(γ̂). Finally, the weighting function H(u) is modified (in Step 5) to account for the integral in (45) having to be replaced by a finite sum in practice. The following subsections expand on these five steps.


3.4 Estimating ϕX


An estimate of ϕX(u) is required to estimate γ(u). In this subsection we define a histogram model and describe our estimation of ϕX(u) based on a histogram of the xj values. Assume N intervals (and corresponding xj values) have been obtained from a finite length data sample. Although the empirical characteristic function












    ϕ̂_X^{emp}(u) = (1/N) Σ_{j=0}^{N−1} e^{i u x_j}    (48)








provides a consistent, asymptotically normal estimator of the characteristic function [21], it has the disadvantage of rapid growth in computational burden as the number of data points N and the required number of evaluation points u ∈ ℝ increase. Instead, we use a histogram based estimator that has a lower computational burden. Assume that a histogram of the observed X values is represented by the 2M×1 vector n, where the count in the mth bin is given by











    n_m = Σ_{k=0}^{N−1} 1_{[m−0.5, m+0.5)}(x_k),  m ∈ {−M, . . . , M−1}.    (49)







All bins of the histogram have equal width. The bin-width is chosen in relation to the magnitude of the xj values. Since the effect of choosing a different bin width is simply equivalent to scaling the xj values, we assume the bin-width to be unity without loss of generality. The bins are apportioned equally between non-negative and negative data values. The number of histogram bins 2M influences the estimator in various ways, as discussed in later subsections. For now, it is sufficient to assume that 2M is large enough to ensure the histogram includes all xj values. We estimate ϕX(u) by forming a histogram of scaled xj values and take the inverse Discrete Fourier transform i.e.,











    ϕ̂_X(u) = Σ_{m=−M}^{M−1} (n_m/N) e^{i 2π u m/(2M)}.    (50)







This is a close approximation of the empirical characteristic function, but with the xj terms rounded to the nearest histogram bin centre (and u contracted by a factor of 2π). The term nm simply counts the number of rounded terms with the same value. Clearly, this function can be efficiently evaluated at the discrete points u ∈ {−M, . . . , M−1} using the fast Fourier transform (FFT).
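Steps (49)-(50) reduce to a single inverse FFT once the bins are placed in FFT order. A sketch (the data here are an arbitrary stand-in):

```python
import numpy as np

rng = np.random.default_rng(1)
M = 256                                   # 2M = 512 histogram bins
x = rng.normal(20.0, 4.0, 10_000)         # stand-in observations x_j

# Unit-width bins centred on the integers m = -M .. M-1, as in (49)
edges = np.arange(-M, M + 1) - 0.5
n, _ = np.histogram(x, bins=edges)
N = n.sum()

# (50): phi_X_hat(u) = sum_m (n_m/N) e^{i 2 pi u m / (2M)} for u = 0..M-1, -M..-1.
# Reordering the bins into FFT order (np.fft.ifftshift) lets ifft evaluate this
# directly; ifft divides by 2M, so multiply it back out.
phi_X_hat = np.fft.ifft(np.fft.ifftshift(n / N)) * 2 * M
```

The result agrees with the empirical characteristic function of the bin-rounded data, evaluated at the contracted frequencies.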


3.5 Error Distribution of {circumflex over (ϕ)}X


The design of the filters G(u) and H(u) in (46) and (47) relies on the statistics of the errors between ϕ̂X and the true characteristic function. In this subsection we define and describe the characteristics of these errors. We assume the density function ƒX is sufficiently smooth (i.e., |dⁿƒX(u)/duⁿ| ≤ Cₙ < ∞ for all n ∈ ℕ) and that the width of the histogram bins is sufficiently small (relative to the standard deviation of the additive noise Z) that the errors introduced by rounding xj values to the centre of each histogram bin are approximately uniformly distributed across each bin, have zero mean and are small relative to the peak spreading caused by Z. In other words, the source of error arising from the binning of xj values is considered negligible. Due to both the statistical nature of Poisson counting and the expected count in each bin being non-integer (𝔼[nm] ∈ ℝ≥0), discrepancies exist between the observed number of counts in any given histogram bin and the expected number of counts for that bin. We combine these two sources of error in our model and refer to the result as 'histogram noise'. We emphasize that this noise is distinct from the additive noise Z modelled in (11), which causes peak spreading in the histogram. Let the probability that a realization of X falls in the m-th bin be










    px_m = Pr(m − 0.5 ≤ X < m + 0.5)    (51)







Let the normalized histogram error ϵm in the m-th bin be the difference between the observed count nm and the expected count custom character[nm]=Npxm in the mth bin, relative to the total counts in the histogram N i.e.,










    ϵ_m = (n_m − N px_m) / N    (52)








Using (50), (51) and (52) we have















    ϕ̂_X(u) = Σ_{m=−M}^{M−1} (n_m/N) e^{i 2π u m/(2M)}    (53)

            = Σ_{m=−M}^{M−1} px_m e^{i 2π u m/(2M)} + Σ_{m=−M}^{M−1} ϵ_m e^{i 2π u m/(2M)}    (54)

            ≜ ϕ_X(u) + ϕ_ϵ(u)    (55)










If the histogram is modelled as a Poisson vector, it can be shown that










    𝔼[ϵ_i] = 0    (56)

    𝔼[ϵ_i ϵ_j] = { px_j/N,  i = j;  0,  i ≠ j }    (57)

    𝔼[|ϕ_ϵ|²] = 1/N.    (58)







Since the characteristics of the histogram noise can be expressed in terms of the total number of observed intervals N, the impact of using observation data of finite length may be accounted for by incorporating this information into the design of G(u) and H(u).
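Properties (56)-(58) are straightforward to verify by Monte Carlo under the Poisson model; the three-bin distribution and all numbers below are arbitrary toy choices of ours:

```python
import numpy as np

rng = np.random.default_rng(3)
p = np.array([0.5, 0.3, 0.2])        # bin probabilities px_m (toy example)
N, trials, u = 1000, 4000, 0.37
m = np.arange(len(p))
phase = np.exp(1j * 2 * np.pi * u * m / (2 * len(p)))

sq = np.empty(trials)
for t in range(trials):
    nm = rng.poisson(N * p)          # Poisson-distributed bin counts
    eps = (nm - N * p) / N           # normalized histogram errors (52)
    sq[t] = np.abs(np.sum(eps * phase)) ** 2

mean_sq = sq.mean()                  # should be close to 1/N, as in (58)
```

Note that (58) is independent of the evaluation point u, because the cross terms in (57) vanish and the bin probabilities sum to one.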


3.6 Estimating γ


Having obtained ϕ̂X, the next task is to estimate γ. Rather than substitute ϕ̂X(u) for ϕX(u) in (42), we instead use (46) as the estimator, which requires us to choose a windowing function G(u). In this subsection we attempt to find a function G(u) that is close to optimal. When the distribution of errors in ϕ̂X(u) is considered, the windowing function G(u)=Gopt(u) that results in the lowest-ISE estimator of the form given in (46) is











    G_opt(u) = 1 / ( 1 + e^{−2λ ℜ{ϕ_A(u)}} / (N e^{−2λ} |ϕ_Z(u)|²) )    (59)








where ℜ{z} denotes the real component of z ∈ ℂ. We cannot calculate Gopt(u) since ϕA(u) is unknown, so instead we attempt to find an approximation. We let










    G(u) = 1 / ( 1 + 1/(N e^{−2λ} |ϕ_Z|²(u)) ).    (60)







This is justified by considering the magnitude of the relative error between the functions gopt(u) and g1(u) where











    g_opt(u) = 1 + e^{−2λ ℜ{ϕ_A(u)}} / (N e^{−2λ} |ϕ_Z|²(u))    (61)

    g_1(u) = 1 + 1/(N e^{−2λ} |ϕ_Z|²(u)).    (62)







The magnitude of the relative error is given by














    |g_opt − g_1| / g_1 = |e^{−2λ ℜ{ϕ_A}} − 1| / (N e^{−2λ} |ϕ_Z|² + 1).    (63)







Since ℜ{ϕA} ∈ [−1, 1], we see the right hand side of (63) is maximized when ℜ{ϕA(u)} = −1. The relative error is thus bounded by














    |g_opt − g_1| / g_1 ≤ (e^{2λ} − 1) / (N e^{−2λ} |ϕ_Z|² + 1)    (64)








which justifies the approximation when λ is small, or when N e^{−2λ}|ϕZ|²(u) ≫ e^{2λ}−1. Furthermore, we note that the above bound is quite conservative. The distribution of photon energies in spectroscopic systems can typically be modelled as a sum of K Gaussian peaks, where the kth peak has location μk and scale σk i.e.,












    f_A(x) = Σ_{k=0}^{K−1} α_k (1/(√(2π) σ_k)) e^{−(x−μ_k)²/(2σ_k²)}    (65)

where

    Σ_{k=0}^{K−1} α_k = 1.    (66)







Consequently, the characteristic function will have the form











    ϕ_A(u) = Σ_{k=0}^{K−1} α_k e^{−2π² σ_k² u²} e^{i 2π μ_k u}.    (67)








i.e., oscillations within an envelope that decays as e^{−cu²} for some c>0. The upper bound given by (64) is quite conservative since |ℜ{ϕA}| ≪ 1 for most values of u. The approximation error will be significantly smaller at most evaluation points across the spectrum. Having chosen G(u), we can form an estimate of γ using (46). The windowing function reduces the impact of histogram noise arising from the finite number of data samples. For large values of N e^{−2λ}|ϕZ(u)|², the impact of windowing is negligible and the estimator is essentially the same as using (42) directly. However, in the regions where










    ln N < 2λ + 4π² σ² u²    (68)








the windowing becomes significant and acts to bound our estimate of γ. Using the fact that the noise Z is Gaussian (so ϕZ(u) ∈ ℝ and hence |ϕZ|² = ϕZ²), and since e^{−2λ} > 0, we see that

















    |γ̂(u)| = | ϕ̂_X(u) / (e^{−λ} ϕ_Z(u)) · 1 / ( 1 + 1/(N e^{−2λ} ϕ_Z²(u)) ) |    (69)

            = | ϕ̂_X(u) · e^{−λ} ϕ_Z(u) / ( e^{−2λ} ϕ_Z²(u) + 1/N ) |    (70)

            < √N.    (71)










This ensures the argument to the distinguished logarithm in (47) remains finite even though limu→∞ϕZ(u)=0.
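For Gaussian noise under the characteristic-function convention of (67), so that ϕ_Z(u) = e^{−2π²σ²u²}, the window (60) can be computed directly; by (68) it crosses one half exactly where ln N = 2λ + 4π²σ²u². A sketch (our illustration):

```python
import numpy as np

def window_G(u, lam, sigma, N):
    """Approximate ISE-optimal window (60), assuming Gaussian noise with
    phi_Z(u) = exp(-2 * pi**2 * sigma**2 * u**2), as in (67)."""
    phi_Z_sq = np.exp(-4.0 * np.pi ** 2 * sigma ** 2 * u ** 2)
    return 1.0 / (1.0 + 1.0 / (N * np.exp(-2.0 * lam) * phi_Z_sq))

lam, sigma, N = 1.0, 1.5, 10 ** 6
# Half-power point predicted by (68): ln N = 2*lam + 4*pi^2*sigma^2*u^2
u_half = np.sqrt((np.log(N) - 2 * lam) / (4 * np.pi ** 2 * sigma ** 2))
```

Below u_half the window is essentially unity and (46) reduces to (42); beyond it the window decays rapidly, bounding the estimate of γ as in (69)-(71).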


3.7 Estimating ϕA


Once {circumflex over (γ)} has been obtained, we proceed to estimate ϕA using (47). This requires another windowing function H(u). In this subsection we find a function H(u) for estimating ϕA that is close to ISE optimal. We begin by defining a function ψ(u) for notational convenience










    ψ(u) = e^{−λ} ϕ_Z(u) / G(u).    (72)







The ISE is minimized when H(u)=Hopt(u), where the optimal filter Hopt(u) is given by














    H_opt(u) = ϕ_A(u) / ϕ̂_A(u) = [ (1/λ) dlog(ϕ_X/(e^{−λ}ϕ_Z))(u) ] / [ (1/λ) dlog((ϕ_X + ϕ_ϵ)/ψ)(u) ]    (73)

             = ϕ_A(u) / ( ϕ_A(u) + (1/λ) dlog(ϕ̂_X/ϕ_X)(u) + (1/λ) dlog(G)(u) )    (74)




















Again, we cannot calculate the optimal filter by using (73)-(74) since ϕX(u), ϕA(u) and ϕϵ(u) are unknown. We instead make the following observations to obtain an approximation of the ISE-optimal filter.


3.7.1 Initial Observations


The optimal filter remains close to unity as long as the estimated {circumflex over (ϕ)}A(u) remains close to the true value of ϕA(u). This will invariably be the case for small values of u since













    𝔼[ |ϕ_ϵ(u)| ] = √(π/(4N))    (75)

    |ϕ_X(u)| ≈ 1 for small u    (76)










Furthermore, equation (73) shows that if |ϕϵ(u)| ≪ |ϕX(u)|, then ϕ̂X(u) = ϕX(u) + ϕϵ(u) ≈ ϕX(u), so Hopt(u) ≈ 1. For larger values of u, when the magnitude of |ϕX(u)| becomes comparable to or less than |ϕϵ(u)|, the estimator









    ϕ̂_A(u) = (1/λ) dlog( (ϕ_X + ϕ_ϵ)/ψ )(u)





is dominated by noise and no longer provides useful estimates of ϕA(u). In the extreme case |ϕX(u)| ≪ |ϕϵ(u)|, so |ϕ̂X(u)| ≈ |ϕϵ(u)| and hence













    ℜ{ϕ̂_A} ≈ (1/λ) ln|ϕ_ϵ/ψ|    (77)







The window H(u) should exclude these regions from the estimate, as the bias introduced in doing so will be less than the variance of the unfiltered noise. Unfortunately the estimate of ϕA(u) can be severely degraded well before this boundary condition is reached, so (77) is not particularly helpful. A more useful method for detecting when noise begins to dominate is as follows.


3.7.2 Filter Design Function


Further manipulation of (67) shows that for typical spectroscopic systems, the magnitude of ϕA will have the form














    |ϕ_A|²(u) = Σ_{k=0}^{K−1} α_k² e^{−4π² σ_k² u²} + Σ_{k=0}^{K−1} Σ_{j=0, j≠k}^{K−1} α_k α_j cos(2π(μ_k − μ_j)u) e^{−2π²(σ_k² + σ_j²)u²}    (78)








i.e., a mean component that decays according to the peak widths σk, and a more rapidly decaying oscillatory component that varies according to the locations of the spectral peaks μk. In designing the window H(u), we are interested in attenuating the regions in |ϕ̂A| where |ϕA|² ≲ |ϕϵ/ψ|², i.e., where the signal power is less than the histogram noise that has been enhanced by the removal of ϕZ during the estimation of γ. To obtain an estimate of |ϕA|, a low-pass, Gaussian-shaped filter Hlpf(u) is convolved with |ϕ̂A| to attenuate all but the slowly varying, large-scale features of |ϕ̂A|. We denote this |ϕ̂Asmooth|(u)














    |ϕ̂_Asmooth|(u) = | (1/λ) dlog(γ̂)(u) | ∗ H_lpf(u).    (79)







We see that |ϕϵ(u)| has a Rayleigh distribution with scale parameter σ_Ray = 1/√(2N).
Consequently











    (1/λ) |ϕ_ϵ(u)| / ψ(u) ∼ Rayleigh( σ_Ray = 1/(λ ψ(u) √(2N)) ).    (80)







It is well known that the cumulative distribution function of a Rayleigh distributed random variable XRay is given by











    F_Ray(x; σ_Ray) = Pr(X_Ray < x; σ_Ray)    (81)

                    = 1 − e^{−x²/(2σ_Ray²)}.    (82)







Hence, to assist with computing the window H(u) we will make use of the function











    α_min(u) = 1 − e^{−½ |ϕ̂_Asmooth(u)|² λ² |ψ(u)|² · 2N}    (83)

             ≈ Pr( (1/λ) |ϕ_ϵ(u)| / ψ(u) < |ϕ̂_Asmooth(u)| )    (84)








to control the shape of H(u). The function αmin(u) provides an indication of how confident we can be that the estimate {circumflex over (ϕ)}A(u) contains more signal energy than noise energy. The approximation in (84) arises from the fact that |{circumflex over (ϕ)}Asmooth| is also a random variable slightly affected by the noise ϵ. On occasion—particularly for larger values of |u|—the histogram noise may result in sufficiently large values of αmin(u) to give a false sense of confidence, and potentially allow noisy results to corrupt the estimate of ϕA. To overcome this problem, the function was modified to be uni-modal in u











    α_mod(u) = inf{ α_min(v) : |v| ≤ |u| }    (85)







This modification was justified on the assumption that Gaussian noise causes ϕZ(u) to be decreasing in |u|. Consequently we expect 𝔼[|ϕϵ(u)|/ψ(u)] to be increasing in |u|. If we ignore the local oscillations in ϕA(u) that are due to peak locations in ƒA(x), the envelope approximated by the smoothed |ϕAsmooth|(u) will be non-increasing in |u|. Equation (74) indicates the optimal window has the form λϕA(u)/(λϕA(u) + dlog(ϕ̂X/ϕX)(u) + dlog(G)(u)), so the overall window shape will be decreasing in |u|. Hence, if the estimated characteristic function in the region of some u0 (where the signal-to-noise ratio is high) has determined that the window value should be H(u0)<1, then it is reasonable to reject the suggestion that in the region u1>u0 (where the signal-to-noise ratio will be worse) H(u1)>H(u0). Using the knowledge that |Hopt(u)| should be close to unity for small |u|, close to zero for large |u|, and should 'roll off' as the signal-to-noise ratio decreases, we consider two potential windowing functions as approximations of Hopt(u).
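Equations (83) and (85) are cheap to evaluate on a sampled grid. A sketch (our illustration, one-sided in u for brevity, with all inputs as plain arrays):

```python
import numpy as np

def alpha_min(phi_A_smooth_abs, lam, psi_abs, N):
    """Confidence (83) that the smoothed estimate exceeds the Rayleigh-
    distributed noise level (1/lam)|phi_eps|/psi of (80). The exponent
    -N*(lam*psi*s)**2 equals -(1/2)*s^2*lam^2*psi^2*2N."""
    return 1.0 - np.exp(-N * (lam * psi_abs * phi_A_smooth_abs) ** 2)

def alpha_mod(a_min):
    """Uni-modal modification (85) as a running infimum, for a one-sided
    grid u = 0, 1, 2, ... (a simplification of the |v| <= |u| form)."""
    return np.minimum.accumulate(a_min)

# A dip in confidence at one frequency is locked in for all higher ones:
s = np.array([1.0, 0.5, 0.01, 0.2])
a = alpha_min(s, 1.0, np.ones(4), 1000.0)
m = alpha_mod(a)
```

The running infimum is what prevents a spuriously large smoothed magnitude at high |u| from reopening the window once noise has begun to dominate.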


3.7.3 Rectangle Window


The indicator function provides a very simple windowing function










    H(u) = 1_{{α_mod(u) > α_0}}(u).    (86)







The threshold value α0 determines the point at which cut-off occurs, and can be selected manually as desired (e.g., α0=0.95). Once the threshold is chosen, the estimator exhibits similar ISE performance regardless of peak locations in the incident spectra. Rather than requiring the user to select a window width depending on the incident spectrum, the width of the window is automatically selected by the data via αmod(u). While simplicity is the primary advantage of the rectangular window, the abrupt transition region provides a poor model for the roll-off region of the optimal filter. The second filter shape attempts to improve on that.


3.7.4 Logistic Window


A window based on the logistic function attempts to model smoother roll-off. It is given by










    H(u) = ( 1 + e^{−β_0 (1.0 − α_0)} ) / ( 1 + e^{−β_0 (α_mod(u) − α_0)} )    (87)








where α0 again acts as a threshold of acceptance of the hypothesis that the signal energy is greater than the noise energy in the estimate ϕ̂A(u). The rate of filter roll-off in the vicinity of the threshold region is controlled by β0>0. This provides a smoother transition region than the rectangle window, reducing Gibbs oscillations in the final estimate of ϕA. Once again, although the parameters α0, β0 are chosen manually, they are much less dependent on ϕA and can be used to provide close to optimal filtering for a wide variety of incident spectra. Typical values used were α0=0.95, β0=40.0. The performance of the rectangle and logistic window functions are compared in section 4.
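Both window shapes are one-liners; note that (87) is normalised so that H = 1 when α_mod(u) = 1. A sketch with the typical parameter values quoted above:

```python
import numpy as np

def H_rect(a_mod, alpha0=0.95):
    """Rectangle window (86): pass frequencies whose confidence exceeds alpha0."""
    return (a_mod > alpha0).astype(float)

def H_logistic(a_mod, alpha0=0.95, beta0=40.0):
    """Logistic window (87): same threshold, but a smooth roll-off at rate beta0."""
    return (1.0 + np.exp(-beta0 * (1.0 - alpha0))) / (
        1.0 + np.exp(-beta0 * (a_mod - alpha0)))

a = np.array([1.0, 0.95, 0.5, 0.0])   # sample confidence values
hl = H_logistic(a)
hr = H_rect(a)
```

The smoother transition of the logistic shape is what reduces the Gibbs oscillations in the final estimate.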


3.8 Estimating ƒA


Having designed a window function H(u) and thus an estimator ϕ̂A(u), the final task is to estimate ƒA(x) by inverting the Fourier transform. This sub-section describes several issues that arise with numerical implementation. Firstly, it is infeasible to evaluate ϕ̂X, γ̂(u) and ϕ̂A numerically on the whole real line. Instead we estimate them at discrete points over a finite interval. The finite interval is chosen sufficiently large that a tolerably small error is incurred as a result of excluding signal values outside the interval. This is justified for ƒA(x) being a Gaussian mixture, since the magnitudes of ϕX and ϕA will decay as e^{−cu²} for some c>0. The Fast Fourier Transform (FFT) is used to evaluate ϕ̂X at discrete points, and hence also determines the points where γ̂(u) and ϕ̂A are evaluated. Likewise, the FFT is used to evaluate the final estimate ƒ̂A at discrete points. In order to use the FFT, the signals outside the interval should be sufficiently small to reduce the impact of aliasing. The evaluation points also need to be sufficiently dense to avoid any 'phase wrap' ambiguity when evaluating dlog(γ̂)(u). Both these objectives can be achieved by increasing the number of bins 2M in the histogram (zero-padding) until a sufficiently large number of bins is attained. As M increases, the sampling density of γ̂ increases, which allows phase wrapping to be detected and managed. A larger M also allows aliasing (caused by the Gaussian shaped tails of |ϕX|) to be negligible. Typically M was chosen as the smallest power of two such that the non-zero values of the histogram were confined to the 'lower half' indexes, i.e., M = min{M : n_m = 0 ∀ |m| ∈ {M/2, . . . , M}, M = 2^N, N ∈ ℕ}.
Secondly, the distinguished logarithm in (47) is undefined if γ̂(u)=0. In estimating γ(u) from the data, there is a small but non-zero probability that the estimate will be zero; in that case the distinguished logarithm in (47) is undefined and the technique fails. As |u| increases, |ϕX|(u) decreases and may approach |ϕϵ|(u). When |ϕX|(u) and |ϕϵ|(u) have similar magnitudes, the probability of |ϕ̂X| (and hence γ̂) being close to zero can become significant. The filter H(u) should roll off faster than |ϕX|(u) approaches |ϕϵ|(u) to reduce the impact this may have on the estimate. Ideally H(u) should be zero in regions where noise may result in |γ̂|(u) being close to zero. Gugushvili has shown [18] that for a rectangular window, the probability of inversion failure approaches zero as the length of the data set increases (N→∞).


3.9 Discrete Notation


We digress momentarily to introduce additional notation. Throughout the rest of the paper, bold font will be used to indicate a 2M×1 vector corresponding to a discretely sampled version of the named function, e.g., {circumflex over (ϕ)}A represents a 2M×1 vector whose values are given by the characteristic function {circumflex over (ϕ)}A(u) evaluated at the points u∈{0, 1, . . . , M−1, −M, . . . , −2, −1}. Square bracket notation [k] is used to index a particular element in the vector, e.g., {circumflex over (ϕ)}A[M−1] has the value of {circumflex over (ϕ)}A(M−1). We also use negative indexes for accessing elements of a vector in a manner similar to the python programming language. Negative indexes should be interpreted relative to the length of the vector, i.e., {circumflex over (ϕ)}A[−1] refers to the last element in the vector (which is equivalent to {circumflex over (ϕ)}A[2M−1]).


3.10 Summary of Estimator


The estimation procedure we use may be summarized in the following steps.

    • 1. Partition sampled time series into intervals using (8).
    • 2. Calculate xj value for each interval according to (9).
    • 3. Generate histogram n from the xj values.
    • 4. Calculate {circumflex over (ϕ)}X using the inverse FFT to efficiently evaluate (50) at various sample points.
    • 5. Calculate ϕZ and G at the appropriate points.
    • 6. Calculate {circumflex over (γ)} via (46) using {circumflex over (ϕ)}X, G and ϕZ.
    • 7. Calculate |ϕ̂_Asmooth(u)|, a low-pass filtered version of |(1/λ) dlog(γ̂)(u)|.
    • 8. Calculate αmod via (83) and (85).

    • 9. Calculate H using αmod and either (86) or (87).

    • 10. Calculate {circumflex over (ϕ)}A via (47) using {circumflex over (γ)} and H. If any element of {circumflex over (γ)} is zero and the corresponding element of H is non-zero, the estimation has failed as the distinguished logarithm is undefined.

    • 11. Calculate {circumflex over (ƒ)}A using the FFT of {circumflex over (ϕ)}A according to















    f̂_A[k] = (1/(2M)) Σ_{m=−M}^{M−1} ϕ̂_A[m] e^{−i 2π m k/(2M)}.    (88)







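Steps 4-11 can be condensed into a single routine. The sketch below is our illustrative implementation, not the patented embodiment: it assumes the histogram is supplied in FFT order, the noise is zero-mean Gaussian with known standard deviation σ (in bin widths), λ is known, and it substitutes a boxcar smoother for the Gaussian low-pass filter of (79) and a one-sided running infimum for (85):

```python
import numpy as np

def estimate_spectrum(n, lam, sigma, alpha0=0.95, beta0=40.0, smooth_width=9):
    """Sketch of steps 4-11 for a 2M-bin histogram n in FFT order
    (bins m = 0..M-1, -M..-1). All simplifications here are ours."""
    twoM = len(n)
    M = twoM // 2
    N = n.sum()
    omega = 2 * np.pi * np.fft.fftfreq(twoM)          # angular frequency per bin

    # Step 4: histogram-based characteristic function (50)
    phi_X = np.fft.ifft(n / N) * twoM

    # Step 5: Gaussian noise characteristic function and window G (60)
    phi_Z = np.exp(-0.5 * sigma ** 2 * omega ** 2)
    G = 1.0 / (1.0 + 1.0 / (N * np.exp(-2 * lam) * phi_Z ** 2))

    # Step 6: estimate gamma (46)
    gamma = G * phi_X / (np.exp(-lam) * phi_Z)

    # Distinguished logarithm: unwrap the phase, pinned to be real at u = 0
    g = np.fft.fftshift(gamma)                        # order u = -M .. M-1
    phase = np.unwrap(np.angle(g))
    phase -= phase[M]
    dlog = np.log(np.abs(g)) + 1j * phase
    phi_A_raw = dlog / lam                            # (44), shifted order

    # Step 7: smoothed magnitude (79); boxcar in place of a Gaussian filter
    kernel = np.ones(smooth_width) / smooth_width
    smooth = np.convolve(np.abs(phi_A_raw), kernel, mode="same")

    # Step 8: confidence (83) and uni-modal modification (85), one-sided
    psi = np.fft.fftshift(np.exp(-lam) * phi_Z / G)   # (72)
    a_min = 1.0 - np.exp(-N * (lam * np.abs(psi) * smooth) ** 2)
    a_mod = np.empty_like(a_min)
    a_mod[M:] = np.minimum.accumulate(a_min[M:])
    a_mod[:M] = np.minimum.accumulate(a_min[:M][::-1])[::-1]

    # Step 9: logistic window (87)
    H = (1 + np.exp(-beta0 * (1 - alpha0))) / (1 + np.exp(-beta0 * (a_mod - alpha0)))

    # Steps 10-11: windowed phi_A (47) and inverse transform (88)
    phi_A_hat = np.fft.ifftshift(H * phi_A_raw)
    return np.fft.fft(phi_A_hat).real / twoM
```

Applied to a noise-free forward model built per (93)-(94), this routine recovers the input mass vector up to windowing bias and mild Gibbs ringing.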

3.11 Performance Measures


The performance of the estimator is measured using the integrated square of the error (ISE). The ISE measures the global fit of the estimated density.










    ISE(f̂_A, f_A) = ∫_{−∞}^{∞} ( f̂_A(x) − f_A(x) )² dx    (89)







The discrete ISE measure is given by










    ISE(p̂_A, p_A) = Σ_{m=−M}^{M−1} ( p̂_A[m] − p_A[m] )²    (90)








where pA is a 2M×1 vector whose elements contain the probability mass in the region of each histogram bin i.e.,











    p_A[m] = ∫_{m−0.5}^{m+0.5} f_A(x) dx.    (91)







The vector {circumflex over (p)}A represents the corresponding estimated probability mass vector.
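Both measures are a few lines each; the bin-mass integral (91) is computed below with a simple trapezoid rule (an implementation choice of ours):

```python
import numpy as np

def discrete_ise(p_hat, p):
    """Discrete ISE (90) between estimated and true probability-mass vectors."""
    d = np.asarray(p_hat) - np.asarray(p)
    return float(np.sum(d * d))

def mass_from_density(f, M, pts=33):
    """Per-bin probability mass (91): integrate f over [m-0.5, m+0.5) for
    m = -M .. M-1, with a `pts`-point trapezoid rule per bin."""
    masses = np.empty(2 * M)
    for i, m in enumerate(range(-M, M)):
        xs = np.linspace(m - 0.5, m + 0.5, pts)
        ys = f(xs)
        masses[i] = np.sum((ys[:-1] + ys[1:]) * np.diff(xs)) / 2.0
    return masses

# Example: unit-width bin masses of a standard normal density
f = lambda x: np.exp(-0.5 * x * x) / np.sqrt(2 * np.pi)
p = mass_from_density(f, 8)
```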


4 NUMERICAL RESULTS OF THE FIRST EMBODIMENT

Experiments were performed using simulated and real data.


4.1 Simulations


The ideal density used by Trigano et al. [11] was used for these simulations. It consists of a mixture of six Gaussians and one gamma distribution, the latter simulating Compton background. The mixture density is given by









    f ∝ 0.5 g + 10 𝒩(40, 1) + 10 𝒩(112, 1) + 1 𝒩(50, 2) + 1 𝒩(63, 1) + 2 𝒩(140, 1)    (92)








where 𝒩(μ, σ²) is the density of a normal distribution with mean μ and variance σ². The density of the gamma distribution is given by g(x)=(0.5+x/200)e^{−(0.5+x/200)}. The density was sampled at 8192 equally spaced integer points to produce the discrete vector pA of probability mass. The FFT was taken to obtain ϕA, a sampled vector of ϕA values.

    • A particular count rate λ was chosen for an experiment, corresponding to the expected number of events per observation interval. The expected pile-up density was obtained via (40): the discrete vector ϕA was scaled by λ, exponentiated, then scaled by e^{−λ}, and finally an FFT was applied











    p_Y[m] = FFT( e^{−λ} e^{λ ϕ_A} )[m].    (93)







Equation (93) was convolved with a Gaussian to simulate the effect of noise Z smearing out the observed spectrum










    p_X = p_Y ∗ (1/(√(2π) σ)) e^{−m²/(2σ²)}.    (94)







This represents the expected density of the observed spectrum, including pile-up and additive noise. Observation histograms were created using random variables distributed according to (94). Experiments were parameterized by the pair (N, λ) where N ∈ {10⁴, 10⁵, 10⁶, 10⁷, 10⁸, 10⁹} and λ ∈ {1.0, 3.0, 5.0}. For each parameter pair (N, λ), one thousand observed histograms were made. Estimates of the probability mass vector pA were made using (88) with both (86) and (87) used for H(k). A threshold value of α0=0.95 was used for both window shapes, and β0=40.0 for the logistic shape. The discrete ISE measure of the error between each estimate p̂A and the true vector pA was recorded. For comparison with asymptotic bandwidth results, estimates were made using a rectangular window whose bandwidth was selected according to the condition 1.3 specified by Gugushvili in [18], i.e., h_N = (ln N)^{−β} where β < ½. We emphasize that the β of Gugushvili's filter is not to be confused with the β0 of (87). The asymptotic bandwidth criterion was implemented by using










    H[k] = 1_{{|k| < α_0}}[k]    (95)







where






    α_0 = (M/π) (ln N)^{β}.    (96)







Three values for Gugushvili's β were trialed, namely β=½, ⅓, ¼.
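The forward simulation of (93)-(94) used to generate the experiment data can be sketched as follows; the two-peak stand-in spectrum below is our toy example, not the density (92):

```python
import numpy as np

rng = np.random.default_rng(2)
twoM, lam, sigma = 1024, 3.0, 2.0

# Toy amplitude mass vector: two Gaussian peaks at bins 120 and 300
m = np.arange(twoM)
p_A = 0.7 * np.exp(-0.5 * ((m - 120) / 3.0) ** 2) \
    + 0.3 * np.exp(-0.5 * ((m - 300) / 3.0) ** 2)
p_A /= p_A.sum()

phi_A = np.fft.ifft(p_A) * twoM                 # sampled characteristic function
# (93): expected pile-up density
p_Y = (np.fft.fft(np.exp(-lam) * np.exp(lam * phi_A)) / twoM).real
# (94): Gaussian noise smearing, applied in the Fourier domain
omega = 2 * np.pi * np.fft.fftfreq(twoM)
phi_Z = np.exp(-0.5 * sigma ** 2 * omega ** 2)
p_X = (np.fft.fft(np.exp(-lam) * np.exp(lam * phi_A) * phi_Z) / twoM).real

# Draw one observation histogram distributed according to p_X
pvals = np.clip(p_X, 0, None)
counts = rng.multinomial(10 ** 6, pvals / pvals.sum())
```

Pile-up peaks appear at sums of the input peak locations (e.g., around bin 240 for two photons from the peak at 120), exactly as seen in the observed histograms of FIG. 2.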

    • Estimates were also made using a rectangular filter (95) with fixed bandwidths of various values α0/M ∈ [0.2, 0.4, 0.6, 0.8]. Finally, time-series data was created according to (1) with an idealised rectangular pulse shape and 10⁷ pulses whose energies were distributed according to (92). The pulse length and count rate were chosen to give a Poisson rate λ=1.0. The algorithm described by Trigano et al. [11] was used to estimate the underlying amplitude density from a bi-dimensional histogram containing 32×1024 (duration × energy) bins, this choice of bins reportedly giving the best accuracy and reasonable execution times. The performance and processing time of the core algorithm were recorded for comparison with our proposed algorithm. FIG. 2 plots a typical estimate p̂A made by the data-driven logistic shaped filter for an experiment with parameter pair (N=10⁶, λ=3.0). The true vector pA (thin solid line), and the observed histogram p̂X (lower curve containing some noise) are also plotted. Pile-up peaks can be clearly seen in the observed histogram. Although the estimated density suffers from ringing (due to the Gibbs phenomenon), it otherwise estimates the true density and corrects the pile-up that was present in the observed histogram. FIG. 3 plots a typical estimate made at the same operating point as FIG. 2, but with an estimator having a rectangular filter where the bandwidth was selected using (96) and β=¼. This corresponds to the operating region in FIG. 6 where the performance of the fixed bandwidth filter (β=¼) is approaching that of the data-driven filters. It is evident that while also correcting pile-up, the resulting estimate contains more noise. FIG. 4 shows the distribution densities of ISE measures as a function of sample count using a rectangular filter and various fixed bandwidths. Lines were plotted between distribution means (MISE) to assist visualization.
The results for the data-driven rectangular filter (86) were also plotted, connected with a thicker curve. This clearly illustrates the weakness of fixed bandwidth filtering. For any fixed bandwidth, the ISE decreases as sample count increases, eventually asymptoting as the bias becomes the dominant source of error. At that point (which is noise and bandwidth dependent) the ISE remains largely constant despite increases in sample count. The fixed bandwidth excludes the use of some estimates ϕ̂A[k] in the final calculation, even when they have high signal-to-noise ratio (SNR). FIG. 4 also shows the results given by the rectangular filter with our proposed data-driven bandwidth selection. This curve lies close to the inflection point of each fixed bandwidth curve. This indicates the bandwidth selected for the data-driven rectangular filter is close to the optimal bandwidth value (for a rectangular filter) across the range of sample counts. FIG. 5-FIG. 7 show the distribution densities of the ISE measure as a function of the total number of estimates N in each histogram at three count rates λ ∈ {1.0, 3.0, 5.0}. The MISE curves for the logistic and rectangular filters are lower than those obtained using the bandwidth given by (96) for much of the region of application interest. There are various regions where the non-data-driven bandwidth (β=¼) gives similar performance to the data-driven bandwidths; however, this is not maintained across the whole range of sample counts. The logistic filter shape has slightly better performance than the rectangular filter shape, although the differences between the two filters appear relatively minor to the ISE measure. Table 1 compares the results between the proposed algorithm and the algorithm recently described in [11]. The ISE for both methods was similar at the operating point under test (λ=1.0, N=10⁷); however, our proposed algorithm requires considerably less computation.









TABLE 1
Comparison With Algorithm Described in [11]

Algorithm                                            Avg. ISE     Avg. Time (sec)
Fast Trigano Algorithm,
 32 × 1024 (duration × energy) bins                  1.3 × 10⁻⁵   3.19
Proposed Algorithm                                   1 × 10⁻⁵     0.019
4.2 Real Data


The estimator was applied to real data to assess its usefulness in practical applications. The threshold value ϵ found in (8) was chosen to be approximately one half the standard deviation of the additive noise w(t). This ensured a reasonably high probability of creating intervals, yet ensured errors in the estimation of interval energy were low. A value for the interval length L was chosen approximately four times the 'length' of a typical pulse, that is, four times the length of the interval {t: Φ(t)>ϵ}. An energy histogram was obtained from a manganese sample, with a photon flux rate of nominally 10⁵ events per second. A slight negative skew was present in the shape of the main peaks of the observed histogram, suggesting a complicated noise source had influenced the system. This is barely visible in FIG. 8. The noise was modelled as a bimodal Gaussian mixture rather than a single Gaussian peak. A simple least-squares optimization routine was used to fit bimodal Gaussian parameters Z ~ α1𝒩(μ1, σ1²) + α2𝒩(μ2, σ2²) to the noise peak located around bin index zero. A suitable value for λ was chosen manually. The logistic filter with data-driven bandwidth was used to estimate the true density. FIG. 8 shows plots of the observed and estimated probability mass vectors. The main peaks (bins 450-600) have been enhanced while the pile-up has been attenuated though not fully removed. The first order pile-up peaks have been reduced. The peak-to-pile-up ratio (ratio of the height of the main peak to that of the first pile-up peak) has increased from around 6 to around 120. These improvements are comparable to other state of the art systems (e.g., [11]). There are several possible reasons the estimator fails to fully resolve pile-up. The accuracy of the estimator depends on correctly modelling the Gaussian noise peak. The bimodal Gaussian mixture modelled the noise peak such that the maximum error was less than 1% of the noise density peak.
Given that the residual pile-up peaks in the estimated spectrum are below 1% of the main peak, the sensitivity of the estimator to errors in noise modelling may have contributed to this in some part. A second reason for the unresolved pile-up may be due to the uncertainty in the estimation of the observed spectrum. Several of the residual pile-up peaks are relatively close to the floor of the observed histogram. The residual peaks may simply be a noise induced artefact of the estimator. Finally, the mathematical model may be an overly simple approximation of the observed spectrum. The detection process includes numerous second-order effects that have not been included in the model (e.g., ballistic deficit, supply charge depletion, correlated noise, non-linearities, etc. . . . ). These minor effects may limit the accuracy of the pile-up correction estimator.
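The bimodal least-squares fit described above can be sketched as follows. This is a minimal illustration using SciPy on synthetic data; the function names and the synthetic noise-peak parameters are hypothetical and are not taken from the manganese experiment.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(x, mu, sigma):
    # Normalised Gaussian density
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def bimodal(x, a1, mu1, s1, a2, mu2, s2):
    # Two-component mixture Z ~ a1*N(mu1, s1^2) + a2*N(mu2, s2^2)
    return a1 * gauss(x, mu1, s1) + a2 * gauss(x, mu2, s2)

# Synthetic noise peak around bin index zero (stand-in for the measured histogram)
bins = np.arange(-50.0, 51.0)
peak = bimodal(bins, 0.7, -2.0, 5.0, 0.3, 4.0, 8.0)

# Least-squares fit of the six mixture parameters
p0 = [0.6, -1.0, 4.0, 0.4, 3.0, 7.0]          # rough initial guess
popt, _ = curve_fit(bimodal, bins, peak, p0=p0)
max_err = np.max(np.abs(bimodal(bins, *popt) - peak))
```

In practice the fit would be performed against the measured noise peak; the text reports a maximum modelling error below 1% of the noise density peak.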


5 SUMMARY OF THE FIRST EMBODIMENT

We have taken the estimator proposed by Gugushvili [18] for decompounding under Gaussian noise and adapted it for correcting pulse pile-up in X-ray spectroscopy. We have proposed a data-driven bandwidth selection mechanism that is easily implemented and provides a significant reduction in ISE/MISE across a broad range of sample counts of interest to spectroscopic applications (10⁴ to 10⁹ counts). The data-driven rectangular bandwidth selection is close to optimal (for rectangular filters) and, over the range of interest, outperforms bandwidth selection based on asymptotic results or a fixed bandwidth.

    • Although initial results appear promising, further work is required to improve the performance for practical implementations. The estimation still contains 'ringing' artefacts associated with the Gibbs phenomenon. Additional filter shaping attempts to reduce this; there are other filter shapes that are closer to MSE optimal.


6 SECOND EMBODIMENT

This section gives a summary of the spectrum estimator of the 2nd embodiment. The 2nd embodiment removes the restriction of the 1st embodiment that entire clusters be approximately encompassed in each interval. In the 2nd embodiment the entire data series can be used if desired, and the overlap of pulses across interval boundaries is compensated by the introduction of two different interval lengths L and L1.

    • We need to include a few additional terms not mentioned in the first embodiment, in particular {circumflex over (ϕ)}X1. The spectrum estimator is based on












$$\hat{f}_A(x) = \frac{1}{2\pi\lambda} \int_{-\infty}^{\infty} e^{-iux}\, H(u)\, [\mathrm{d}\log(\gamma)](u)\, du. \tag{97}$$







The introduction of the filter H(u; α) allows us to address several implementation issues that arise. The estimation procedure we use may be summarized in the following steps.

    • 1. Partition the sampled time-series into fixed length intervals [Tj, Tj+L), j∈ℤ≥0.
    • 2. Calculate the value xj for each interval according to xj = Σk∈[Tj, Tj+L) s(k).
    • 3. Generate a histogram n from the xj values.
    • 4. Calculate {circumflex over (ϕ)}X using the inverse FFT of n.
    • 5. Partition the sampled time series into a different set of intervals with length L1, and follow similar calculations to obtain {circumflex over (ϕ)}X1.
    • 6. Calculate ϕZ and G.
    • 7. Calculate {circumflex over (γ)} using {circumflex over (ϕ)}X, G, ϕZ and {circumflex over (ϕ)}X1,










$$\hat{\gamma} = \frac{\hat{\phi}_X}{\phi_Z\, e^{-\lambda}\, \hat{\phi}_{X_1}} \tag{98}$$









    • 8. Calculate |ϕAsmooth(u)|, a low-pass filtered version of |(1/λ) d log({circumflex over (γ)})(u)|.






    • 9. Calculate αmod.

    • 10. Calculate H using αmod.

    • 11. Calculate {circumflex over (ϕ)}A using {circumflex over (γ)} and H. If any element of {circumflex over (γ)} is zero and the corresponding element of H is non-zero, the estimation has failed as the distinguished logarithm is undefined.

    • 12. Calculate {circumflex over (ƒ)}A using the FFT of {circumflex over (ϕ)}A according to















$$\hat{f}_A[k] = \frac{1}{2M} \sum_{m=-M}^{M-1} \hat{\phi}_A[m]\, e^{-i \frac{2\pi mk}{2M}}. \tag{99}$$








6.1 Algorithm Details


Partition the detector output stream into a set of non-overlapping intervals of length L, i.e., [Tj, Tj+L), T0 ∈ ℤ≥0, Tj+1 ≥ Tj+L, j ∈ ℤ≥0. Let xj be the sum of the detector output samples in the jth interval, i.e.,










$$x_j = \sum_{k=T_j}^{T_j+L-1} s(k) \tag{100}$$







Assuming L is greater than a pulse length, the jth interval may contain 'complete' pulses as well as pulses which have been truncated by the ends of the interval. It can be shown that xj consists of a superposition of the energies of 'complete' pulses, which we denote y0j, the energies of truncated pulses, which we denote y1j, and noise zj.

    • Let the detector output stream be partitioned into a second set of non-overlapping intervals [T1j, T1j+L1), T1,0 ∈ ℤ≥0, T1,j+1 ≥ T1,j+L1, j ∈ ℤ≥0, where L1<L. Let x1j be given by










$$x_{1j} = \sum_{k=T_{1j}}^{T_{1j}+L_1-1} s(k) \tag{101}$$







If L1 is chosen to be slightly less than the pulse length, the x1j term will contain no 'complete' pulses, but will consist of a superposition of only the energies of truncated pulses y1j and noise zj. The number of truncated pulses in any interval has a Poisson distribution. We have











$$X_1 = Y_1 + Z_1, \tag{102}$$
$$\phi_{X_1} = \phi_{Y_1}\, \phi_{Z_1}. \tag{103}$$









    • We can decompose the total energy in the interval [Tj, Tj+L) into the energy contribution Y1 from pulses that have been truncated and the energy contribution Y0 from pulses that are fully contained in the interval [Tj, Tj+L), i.e.,












$$X = Y_0 + Y_1 + Z_0 + Z_1 \tag{104}$$








where Z0 represents noise in the regions where pulses are fully contained in the interval (a length of L−L1), and Z1 represents noise in the regions where pulses are truncated (a length of L1). Hence,










$$\phi_X = \phi_{Y_0}\, \phi_{Y_1}\, \phi_{Z_0}\, \phi_{Z_1}. \tag{105}$$









    • By combining (103) with (105) we have













$$\phi_X = \phi_{X_1}\, \phi_{Y_0}\, \phi_{Z_0} \tag{106}$$







Rearranging gives










$$\phi_{Y_0} = \frac{\phi_X}{\phi_{X_1}\, \phi_{Z_0}} \tag{107}$$
$$= e^{-\lambda_0}\, e^{\lambda_0 \phi_A(u)} \tag{108}$$







We can estimate ϕX1 in a similar manner to that used to estimate ϕX, or by some other method, e.g., via the empirical characteristic function or by performing an FFT on the normalized histogram of the x1j values.
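For integer-valued interval sums, the two estimation routes mentioned here coincide exactly on the FFT frequency grid. The following is a small sketch on synthetic data; the variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
K = 64                                         # number of frequency grid points
x1 = rng.integers(0, 8, size=10_000)           # stand-in for the x_1j interval sums
u = 2 * np.pi * np.arange(K) / K               # FFT frequency grid

# Route 1: empirical characteristic function, phi(u) = mean(exp(i*u*x))
phi_ecf = np.exp(1j * np.outer(u, x1)).mean(axis=1)

# Route 2: FFT of the normalised histogram of the x_1j values
pmf = np.bincount(x1, minlength=K) / len(x1)
phi_fft = np.conj(np.fft.fft(pmf))             # conjugate matches the ECF sign convention

assert np.allclose(phi_ecf, phi_fft)
```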

    • When performing the decompounding operation, the Poisson rate λ0 for the reduced interval length L−L1 is used to account for the sub-interval over which the compound Poisson process Y0 occurs.
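The decompounding at the heart of the procedure can be illustrated end to end in a deliberately simplified setting: noise-free synthetic data with no pulse truncation, so that ϕZ and ϕX1 drop out, γ reduces to ϕX e^λ, and the smoothing filter H is omitted. All names and parameter values below are illustrative, not the patented processing chain.

```python
import numpy as np

rng = np.random.default_rng(0)
K = 1024                                # FFT length (energy histogram bins)
lam = 1.0                               # mean number of pulses per interval
n_int = 200_000                         # number of intervals

# Synthetic two-line spectrum: amplitudes 10 and 25 with weights 0.6 / 0.4
N = rng.poisson(lam, n_int)                           # pulses per interval
amps = rng.choice([10, 25], p=[0.6, 0.4], size=N.sum())
x = np.zeros(n_int, dtype=int)
np.add.at(x, np.repeat(np.arange(n_int), N), amps)    # interval sums (pile-up)

# Steps 3-4: histogram of interval sums -> characteristic-function estimate
pmf = np.bincount(x, minlength=K)[:K] / n_int
phi_X = np.fft.fft(pmf)

# Steps 7, 11-12 (with phi_Z = phi_X1 = 1): gamma = phi_X * exp(lam), and
# phi_A = (1/lam) * log(gamma) = 1 + log(phi_X)/lam, then invert the FFT
phi_A = 1.0 + np.log(phi_X) / lam       # principal log suffices at this small lam
f_hat = np.fft.ifft(phi_A).real

# f_hat recovers the two-line spectrum despite heavy pile-up in pmf
```

For larger λ, or once noise and truncation are included, the distinguished logarithm and the filter H from the steps above are needed in place of the principal logarithm.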


      6.2 Visualization of Internal Quantities


To aid the reader's understanding, FIG. 9 plots various quantities obtained during the estimation process. The upper blue curve (with a value around 0.3 at bin zero) plots |{circumflex over (ϕ)}X|, the estimated characteristic function of the observed spectrum. A brown curve is used to show the true value of |ϕƒ|, which is distinctly visible as the lower curve with periodic nulls in the region [6000, 10000]. The quantity |ϕϵ|/(λϕZe−λ) is shown in transparent red and appears as 'noise' whose average density peaks around bin #8000. The expected value of |ϕϵ|/(λϕZe−λ) is shown with a black dashed line. This is obtained using (75), the known value of λ, and assuming Gaussian noise with known σ to obtain ϕZ(k). The quantity |{circumflex over (ϕ)}ƒ| is shown with a transparent blue curve. This is barely visible as it coincides closely with |ϕƒ| in the intervals [0, 4000] and [12000, 16000], and closely with |ϕϵ|/(λϕZe−λ) in the interval [5000, 11000]. Note that the colour of |ϕϵ|/(λϕZe−λ) appears to change from red to purple in the interval [5000, 11000] as both transparent plots overlap. A solid black line shows |{circumflex over (ϕ)}fsmooth|, a low pass filtered version of |{circumflex over (ϕ)}ƒ|. The low pass filtering removes any local oscillations in |{circumflex over (ϕ)}ƒ(k)| due to the peak localities, as described in the paragraph on smoothing at the beginning of this section. The term |{circumflex over (ϕ)}fsmooth(k)| serves as an estimate of 𝔼[|ϕƒ(k)|]. It can be seen that |{circumflex over (ϕ)}ƒ| provides a reasonably good estimate of |ϕƒ| in the region where |{circumflex over (ϕ)}fsmooth|>>𝔼[|ϕϵ|/(λϕZe−λ)]. As these two quantities approach each other, the quality of the estimate deteriorates until it is eventually dominated by noise. The filter H(k) should include good estimates of |ϕƒ| while excluding poor estimates.
To find the regions where good estimates of |ϕƒ| are obtained, we address the question: given 𝔼[|ϕϵ|/(λϕZe−λ)], what is the probability that the calculated values of {circumflex over (ϕ)}ƒ in a local region arise largely from noise?


7 COUNT RATE ESTIMATION

The previous estimator assumed λ was known. An estimate of λ can be obtained without prior knowledge as follows.

    • 1. Using {circumflex over (ϕ)}X, {circumflex over (ϕ)}X1, {circumflex over (ϕ)}Z, G from the previous section, calculate











$$\hat{\phi}_Y = G\!\left(\frac{\hat{\phi}_X}{\phi_Z\, \phi_{X_1}}\right) \tag{109}$$









    • 2. Using {circumflex over (ϕ)}Y, estimate the count rate. This can be done in a number of ways.

    • 3. One way is to use an optimization routine or some other means to fit a curve to {circumflex over (ϕ)}Y. The fitted parameters can be used to obtain an estimate of the count rate.

    • 4. Another way involves estimating the DC offset of Ψ=d log({circumflex over (ϕ)}Y). This can be done by averaging a suitable number of points of Ψ. The points obtained by filtering by H(u) in the previous section are usually suitable, although fewer points may also produce an adequate estimate.

    • 5. Another way involves using an optimization engine or some other means to fit a curve to Ψ=d log({circumflex over (ϕ)}Y). A suitable parameterized curve for fitting d log({circumflex over (ϕ)}Y) is given by













$$f(u; \lambda, \alpha, \sigma, \mu) = -\lambda + \lambda \sum_{k=0}^{K-1} \alpha_k\, G(\sigma_k u)\, e^{-j 2\pi u \mu_k} \tag{110}$$







where





$$\alpha = (\alpha_0, \ldots, \alpha_{K-1}) \tag{111}$$
$$\sigma = (\sigma_0, \ldots, \sigma_{K-1}) \tag{112}$$
$$\mu = (\mu_0, \ldots, \mu_{K-1}) \tag{113}$$











      • and where K ∈ ℕ is chosen to allow the curve to be fitted to sufficient accuracy. The parameter λ provides an estimate of the count rate. The optimization engine is not required to give equal weighting to each point in Ψ.
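The DC-offset route (option 4 above) can be sketched on synthetic, noise-free data. Because log ϕY(u) = −λ + λϕA(u), and ϕA averages to fA(0) = 0 over the full FFT frequency grid when no amplitude mass sits at zero, the negated mean of Re log {circumflex over (ϕ)}Y recovers λ. The setup below is illustrative only, not the patented processing chain.

```python
import numpy as np

rng = np.random.default_rng(2)
K = 1024                                # FFT length (energy histogram bins)
lam_true = 1.5                          # true mean pulses per interval
n_int = 100_000

# Synthetic compound-Poisson interval sums (no additive noise, no truncation)
N = rng.poisson(lam_true, n_int)
amps = rng.choice([10, 25], p=[0.6, 0.4], size=N.sum())
y = np.zeros(n_int, dtype=int)
np.add.at(y, np.repeat(np.arange(n_int), N), amps)

# log phi_Y(u) = -lam + lam * phi_A(u); averaging Re(.) over the whole
# frequency grid removes the phi_A term, leaving the DC offset -lam
pmf = np.bincount(y, minlength=K)[:K] / n_int
lam_hat = -np.mean(np.log(np.abs(np.fft.fft(pmf))))
```

Using only the real part of the logarithm (i.e., log of the magnitude) sidesteps any branch-cut issues of the distinguished logarithm in this simple setting.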







8 DESCRIPTION OF FIGURES

The following figures are to aid understanding of the process. FIG. 1 shows one possible scheme used to partition the detector output. The illustration depicts the sampled detector response to three incident photons. To aid clarity of the figure, the effects of noise have been removed. The output response has been partitioned into several regions of equal length (L). The number of pulses arriving in each region is unknown to the processing system. One pulse has arrived in the first interval. Two pulses have arrived in the second interval. No pulses have arrived in the third interval. The total photon energy arriving within each interval is calculated as the statistic of interest, being the sum of all sample values in each interval. Intervals are not temporally aligned with pulse arrivals. FIG. 2 illustrates the output of the estimation procedure. The true probability density of incident photon energy is plotted as a solid black line. The photon arrival rate is such that three photons on average arrive during any given interval [Tj, Tj+L). The standard deviation of additive noise in the detector output signal s(t) is equal to one histogram bin width. One million intervals were collected. A histogram was made of the total energy in each interval. This is plotted with the blue line. The effects of pile-up are clearly evident, particularly around bins 75, 150 and 225. The red trace plots the estimate of the true incident energy spectrum after the data has been processed by the system. Although some noise appears in the estimate, the effects of pile-up have been removed. The estimate is expected to correctly recover the true incident spectrum on average. This result was obtained using an internal filter whose bandwidth was determined automatically from the data. FIG. 3 illustrates the same quantities as FIG. 2 under the same operating conditions, however in this instance the bandwidth of the internal filter has been determined using asymptotic results from the literature. Although the estimated probability density of incident energies has been recovered, the variance is significantly greater compared to FIG. 2. FIG. 8 illustrates the operation of the system on real data. The blue trace plots the probability density of observed energy values, while the red trace plots the estimated true probability density of incident photon energies. There is no black trace as the true probability density is unknown. In this experiment, X-ray fluorescence of a manganese sample was used as a photon source. The photon arrival rate was around 10⁵ photons per second. The interval length was chosen such that the average time between photons corresponded to the length of two intervals. Sufficient data was collected and partitioned to form 5.9×10⁶ intervals. The standard deviation of the additive noise corresponds to 4.7 histogram bins. The estimation process has clearly reduced the pile-up peaks and enhanced the true peaks. FIG. 9 illustrates various quantities obtained during the simulation of the system described in the 2nd embodiment. It is described in section 6.2, Visualization of Internal Quantities. FIGS. 9-12 relate to the 2nd embodiment. FIG. 10 illustrates the observed and true probability density of input photon energies for the experiment from which FIGS. 9-13 were derived. The black trace plots the true probability density. The red trace plots the density expected to be observed when three photons on average arrive during a given interval length. The blue trace plots the actual observed density. Up to tenth-order pile-up can be seen in the observed density. FIG. 10 includes several plots arising from a typical spectroscopic system. The actual incident photon density ('Ideal Density') is plotted with a solid dark line.
An observed histogram obtained by partitioning the time-series data is shown in dark blue. Distortion of the spectrum caused by pulse pile-up is evident. FIG. 13 plots various internal quantities using a logarithmic vertical axis. The dark blue curve that dips in the centre of the plot is |{circumflex over (ϕ)}X|. The green quantity that crosses the plot horizontally is |{circumflex over (ϕ)}Y|. The upper cyan curve that dips in the centre of the plot is |ϕZ|. FIG. 11 illustrates the trajectory of the curve γ in the complex plane. FIG. 12 illustrates internal quantities similar to FIG. 9, however there are some additional signals. The horizontal red trace that is largely noise, and the corresponding black dashed line, represent |ϕϵ|, the magnitude of the characteristic function of the histogram noise. The transparent green plot that forms the 'noisy peak' in the center of the figure is the estimate |ϕϵ|. This quantity was plotted in blue in FIG. 9, and was barely visible as it was obscured by |{circumflex over (ϕ)}X|/(λϕZe−λ), which is not shown in FIG. 12. The horizontal trace with an average value of −3 is a plot of |ϕY|. The cyan trace that begins with a value of zero at bin zero, and dips to a minimum around bin 8000, is |ϕZ|, the magnitude of the characteristic function of the additive Gaussian noise. FIG. 13 relates to the 2nd broad aspect of calculating the count rate. It illustrates internal quantities used in the calculation of {circumflex over (λ)}. The cyan trace that begins with a value of zero at bin zero, and dips to a minimum around bin 8000, is |ϕZ|, the magnitude of the characteristic function of the additive Gaussian noise. The dark blue trace that dips to a minimum in the center of the Figure is |{circumflex over (ϕ)}X|, the estimate of the characteristic function of the observed data. The yellow/green horizontal trace with an average value of −3 is the estimate of |ϕY|.


REFERENCES



  • [1] G. F. Knoll, Radiation Detection and Measurement, 3rd Edition. New York: Wiley, 2000.

  • [2] P. A. B. Scoullar and R. J. Evans, “Maximum likelihood estimation techniques for high rate, high throughput digital pulse processing,” in 2008 IEEE Nuclear Science Symposium Conference Record, pp. 1668-1672, October 2008.

  • [3] M. Haselman, J. Pasko, S. Hauck, T. Lewellen, and R. Miyaoka, “FPGA-based pulse pile-up correction with energy and timing recovery,” IEEE Transactions on Nuclear Science, vol. 59, pp. 1823-1830, October 2012.

  • [4] T. Petrovic, M. Vencelj, M. Lipoglavsek, R. Novak, and D. Savran, “Efficient reduction of piled-up events in gamma-ray spectrometry at high count rates,” IEEE Transactions on Nuclear Science, vol. 61, pp. 584-589, February 2014.

  • [5] B. A. VanDevender, M. P. Dion, J. E. Fast, D. C. Rodriguez, M. S. Taubman, C. D. Wilen, L. S. Wood, and M. E. Wright, "High-purity germanium spectroscopy at rates in excess of 10⁶ events/s," IEEE Transactions on Nuclear Science, vol. 61, pp. 2619-2627, October 2014.

  • [6] Y. Sepulcre, T. Trigano, and Y. Ritov, “Sparse regression algorithm for activity estimation in spectrometry,” IEEE Transactions on Signal Processing, vol. 61, pp. 4347-4359, September 2013.

  • [7] T. Trigano, I. Gildin, and Y. Sepulcre, “Pileup correction algorithm using an iterated sparse reconstruction method,” IEEE Signal Processing Letters, vol. 22, pp. 1392-1395, September 2015.

  • [8] L. Wielopolski and R. P. Gardner, “Prediction of the pulse-height spectral distortion caused by the peak pile-up effect,” Nuclear Instruments and Methods, vol. 133, pp. 303-309, March 1976.

  • [9] N. P. Barradas and M. A. Reis, “Accurate calculation of pileup effects in PIXE spectra from first principles,” X-Ray Spectrometry, vol. 35, pp. 232-237, July 2006.

  • [10] T. Trigano, A. Souloumiac, T. Montagu, F. Roueff, and E. Moulines, “Statistical pileup correction method for HPGe detectors,” IEEE Transactions on Signal Processing, vol. 55, pp. 4871-4881, October 2007.

  • [11] T. Trigano, E. Barat, T. Dautremer, and T. Montagu, “Fast digital filtering of spectrometric data for pile-up correction,” IEEE Signal Processing Letters, vol. 22, pp. 973-977, July 2015.

  • [12] P. Ilhe, E. Moulines, F. Roueff, and A. Souloumiac, “Nonparametric estimation of mark's distribution of an exponential shot-noise process,” Electronic Journal of Statistics, vol. 9, no. 2, pp. 3098-3123, 2015.

  • [13] P. Ilhe, F. Roueff, E. Moulines, and A. Souloumiac, “Nonparametric estimation of a shot-noise process,” in 2016 IEEE Statistical Signal Processing Workshop (SSP), pp. 1-4, June 2016.

  • [14] C. McLean, M. Pauley, and J. H. Manton, “Limitations of decision based pile-up correction algorithms,” in 2018 IEEE Statistical Signal Processing Workshop (SSP), pp. 693-697, June 2018.

  • [15] D. Snyder and M. Miller, Random Point Processes In Time And Space. New York: Springer-Verlag, 2, revised ed., September 2011.

  • [16] B. Buchmann and R. Grübel, “Decompounding: An estimation problem for Poisson random sums,” Annals of Statistics, pp. 1054-1074, 2003.

  • [17] B. van Es, S. Gugushvili, and P. Spreij, “Deconvolution for an atomic distribution,” Electronic Journal of Statistics, vol. 2, pp. 265-297, 2008.

  • [18] S. Gugushvili, Non-Parametric Inference for Partially Observed Levy Processes. PhD, University of Amsterdam, Thomas Stieltjes Institute, 2008.

  • [19] S. Said, C. Lageman, N. Le Bihan, and J. H. Manton, “Decompounding on compact Lie groups,” IEEE Transactions on Information Theory, vol. 56, pp. 2766-2777, June 2010.

  • [20] B. van Es, S. Gugushvili, and P. Spreij, “A kernel type nonparametric density estimator for decompounding,” Bernoulli, vol. 13, pp. 672-694, August 2007.

  • [21] J. Yu, “Empirical characteristic function estimation and its applications,” Econometric Reviews, vol. 23, pp. 93-123, December 2004.


Claims
  • 1. A method of determining a spectrum of energies of individual quanta of radiation received in a radiation detector, the method comprising the steps of: (1) obtaining a time series of digital observations from the radiation detector comprising pulses corresponding to the detection of the individual quanta;(2) computing spectrum sensitive statistics from the detector signal, the spectrum sensitive statistics defining a mapping from a density of amplitudes of the pulses to the spectrum sensitive statistics; and(3) determining the spectrum by estimating the density of amplitudes of the pulses by applying an inversion of the mapping to the spectrum sensitive statistics.
  • 2. The method of claim 1 further comprising basing the spectrum sensitive statistics on a sum of the digital observations over a plurality of time intervals.
  • 3. The method of claim 2 further comprising defining the mapping using an approximate compound Poisson process.
  • 4. The method of claim 3, further comprising augmenting the approximate compound Poisson process by a modelled noise.
  • 5. The method of claim 4, further comprising expressing the mapping as a relation between characteristic functions of the amplitudes, the spectrum sensitive statistics and the modelled noise.
  • 6. The method of claim 5, further comprising computing the characteristic functions of the spectrum sensitive statistics by applying an inverse Fourier transform to a histogram of the sum of the digital observations.
  • 7. The method of claim 5, further comprising computing the characteristic functions of the amplitudes with a low pass filter.
  • 8. The method of claim 2, further comprising selecting each of the plurality of time intervals to encompass zero or more approximately entire clusters of the pulses, and defining the plurality of time intervals to be nonoverlapping and have a constant length L.
  • 9. The method of claim 8, further comprising requiring a maximum value of the detector signal at a beginning and end of each time interval.
  • 10. The method of claim 8, further comprising defining the approximate compound Poisson process as a sum of the amplitudes of the pulses in each time interval.
  • 11. The method of claim 2, further comprising selecting the plurality of intervals to include: a first set of nonoverlapping time intervals of constant length L without regard to entirety of clusters of the pulses; and a second set of nonoverlapping time intervals of constant length L1 less than L also without regard to entirety of clusters of the pulses; wherein L is at least as long as a duration of the pulses.
  • 12. The method of claim 11, further comprising selecting L1 to be less than the duration of the pulses.
  • 13. The method of claim 1, further comprising using a data driven strategy selected to result in a near optimal choice for a kernel parameter which minimizes an integrated square of errors of an estimated probability density function of the energies of the individual quanta of radiation.
  • 14. A method of estimating count rate of individual quanta of radiation received in a radiation detector, the method comprising the steps of: (1) obtaining a time series of digital observations from the radiation detector comprising pulses corresponding to the detection of the individual quanta;(2) computing spectrum sensitive statistics from the detector signal, based on a sum of the digital observations over a plurality of time intervals, the spectrum sensitive statistics defining a mapping from a density of amplitudes of the pulses to the spectrum sensitive statistics using an approximate compound Poisson process, the plurality of time intervals including: a first set of nonoverlapping time intervals of constant length L selected without regard to entirety of clusters of the pulses; and a second set of nonoverlapping time intervals of constant length L1 less than L also selected without regard to entirety of clusters of the pulses; wherein L is at least as long as a duration of the pulses;(3) determining an estimate of a characteristic function of the approximate compound Poisson process using:
  • 15. The method of claim 14, further comprising estimating the count rate by using an optimization routine or other means to fit a curve, estimating a DC offset of a logarithm of the estimate of the characteristic function, or fitting a curve to the logarithm of the estimate of the characteristic function.
Priority Claims (1)
Number Date Country Kind
2019900974 Mar 2019 AU national
PCT Information
Filing Document Filing Date Country Kind
PCT/AU2020/050275 3/23/2020 WO
Publishing Document Publishing Date Country Kind
WO2020/191435 10/1/2020 WO A
US Referenced Citations (8)
Number Name Date Kind
3872287 Koeman Mar 1975 A
6590957 Warburton et al. Jul 2003 B1
20080025385 Barat et al. Jan 2008 A1
20120041700 Scoullar et al. Feb 2012 A1
20140029819 Zeng et al. Jan 2014 A1
20160161390 Greiner Jun 2016 A1
20180204356 Xia et al. Jul 2018 A1
20200170586 Takahashi Jun 2020 A1
Non-Patent Literature Citations (3)
Entry
Gugushvili, Shota. “Decompounding under Gaussian noise.” arXiv preprint arXiv:0711.0719 (2007).
International Search Report received in PCT/AU2020/050275, mailed Jun. 9, 2020.
Extended European Search Report received in EP Application No. 20777486.0, mailed Nov. 18, 2022.
Related Publications (1)
Number Date Country
20220137111 A1 May 2022 US