Device, method, and medium for predicting a probability of an occurrence of a data

Information

  • Patent Grant
  • Patent Number
    6,766,280
  • Date Filed
    Monday, August 26, 2002
  • Date Issued
    Tuesday, July 20, 2004
Abstract
In a Bayes mixture probability density calculator for calculating a Bayes mixture probability density which reduces a logarithmic loss, a modified Bayes mixture probability density is calculated by mixing a traditional Bayes mixture probability density calculated on a given model S with a small part of a Bayes mixture probability density for an exponential fiber bundle on S. Likewise, a prediction probability density calculator is configured by including the Bayes mixture probability density calculator and by using the Jeffreys prior distribution in the traditional Bayes procedure on S.
Description




BACKGROUND OF THE INVENTION




This invention relates to technology for statistical prediction and, in particular, to technology for prediction based on Bayes procedure.




Conventionally, a wide variety of methods have been proposed for statistically predicting data on the basis of a sequence of data generated from an unknown source. Among these methods, the Bayes prediction procedure is widely known and has been described in various textbooks concerned with statistics and related fields.




As a problem to be solved by such statistical prediction, there is the problem of sequentially predicting, by use of an estimation result, the next data item which appears after the data sequence. As regards this problem, it has been proved that a specific Bayes procedure exhibits a very good minimax property by using a particular prior distribution referred to as the Jeffreys prior distribution. Such a specific Bayes procedure will be called the Jeffreys procedure hereinafter. This proof was given by B. Clarke and A. R. Barron in an article published in Journal of Statistical Planning and Inference, 41:37-60, 1994, entitled “Jeffreys prior is asymptotically least favorable under entropy risk”. This procedure is guaranteed to be optimum whenever the probability distribution hypothesis class is a general smooth model class, although some mathematical restrictions are required in a strict sense.




Herein, let logarithmic regret be used as another index. In this case also, it has been proved that the Jeffreys procedure has a minimax property on the assumption that the probability distribution hypothesis class belongs to an exponential family. This proof was given by J. Takeuchi and A. R. Barron in a paper entitled “Asymptotically minimax regret for exponential families”, in Proceedings of the 20th Symposium on Information Theory and Its Applications, pp. 665-668, 1997.




Furthermore, the problem of sequential prediction can be replaced by the problem of providing a joint (or simultaneous) probability distribution of a data sequence, obtained by cumulatively multiplying prediction probability distributions.




These proofs suggest that the Jeffreys procedure can have excellent performance when the performance measure is the logarithmic loss, even if the prediction problem is not sequential.




Thus, it has been proved by Clarke and Barron and by Takeuchi and Barron that the Bayes procedure is effective when the Jeffreys prior distribution is used. However, in the case where the performance measure is the logarithmic regret instead of the redundancy, the Bayes procedure is proved effective only when the model class of the probability distribution is restricted to the exponential family, which is a very special class.




Under the circumstances, assume that the probability distribution model class is a general smooth model class different from the exponential family. In this case, the Jeffreys procedure described in the above document of B. Clarke and A. R. Barron does not guarantee the minimax property. On the contrary, the instant inventors have confirmed that, in this case, the Jeffreys procedure does not have the minimax property.




Furthermore, it often happens that a similar reduction of performance takes place in a general Bayes procedure different from the Jeffreys procedure when estimation is made by using the logarithmic regret in lieu of the redundancy.




SUMMARY OF THE INVENTION




It is an object of this invention to provide a method which is capable of preventing a reduction of performance.




It is a specific object of this invention to provide an improved Jeffreys procedure which can accomplish a minimax property even when the logarithmic regret is used as a performance measure instead of the redundancy.




According to a first embodiment of the invention, there is provided a Bayes mixture density calculator operable in response to a sequence of vectors x^n = (x_1, x_2, . . . , x_n) selected from a vector value set χ to produce a Bayes mixture density on occurrence of the x^n, comprising: a probability density calculator, supplied with a sequence of data x^t and a vector value parameter u, for calculating a probability density p(x^t|u) for the x^t; a Bayes mixture calculator for calculating a first approximation value of a Bayes mixture density p_w(x^n) on the basis of a predetermined prior distribution w(u), in cooperation with the probability density calculator, to produce the first approximation value; an enlarged mixture calculator for calculating a second approximation value of a Bayes mixture m(x^n) on an exponential fiber bundle, in cooperation with the probability density calculator, to produce the second approximation value; and a whole mixture calculator for calculating (1−ε)p_w(x^n) + ε·m(x^n) by mixing the first approximation value of the Bayes mixture density p_w(x^n) with a part of the second approximation value of the Bayes mixture m(x^n) at a ratio of 1−ε:ε, where ε is a value smaller than unity, to produce the calculation result.




According to a second embodiment of the invention, which can be obtained by modifying the first embodiment, there is provided a Jeffreys mixture density calculator operable in response to a sequence of vectors x^n = (x_1, x_2, . . . , x_n) selected from a vector value set χ to produce a Bayes mixture density on occurrence of the x^n, comprising: a probability density calculator responsive to a sequence of data x^t and a vector value parameter u for calculating a probability density p(x^t|u) for the x^t; a Jeffreys mixture calculator for calculating a first approximation value of a Bayes mixture density p_J(x^n) based on a Jeffreys prior distribution w_J(u), in cooperation with the probability density calculator, to produce the first approximation value; an enlarged mixture calculator for calculating a second approximation value of a Bayes mixture m(x^n) on an exponential fiber bundle, in cooperation with the probability density calculator, to produce the second approximation value; and a whole mixture calculator for calculating (1−ε)p_J(x^n) + ε·m(x^n) by mixing the first approximation value of the Bayes mixture density p_J(x^n) with a part of the second approximation value of the Bayes mixture m(x^n) at a ratio of 1−ε:ε, where ε is a value smaller than unity, to produce the calculation result.




Also, when the hypothesis class is a curved exponential family, it is possible to provide a third embodiment of the invention by modifying the first embodiment. According to the third embodiment of the invention, there is provided a Bayes mixture density calculator operable in response to a sequence of vectors x^n = (x_1, x_2, . . . , x_n) selected from a vector value set χ to produce a Bayes mixture density on occurrence of the x^n, comprising: a probability density calculator responsive to a sequence of data x^t and a vector value parameter u for outputting a probability density p(x^t|u) for the x^t on a curved exponential family; a Bayes mixture calculator for calculating a first approximation value of a Bayes mixture density p_w(x^n) on the basis of a predetermined prior distribution w(u), in cooperation with the probability density calculator, to produce the first approximation value; an enlarged mixture calculator for calculating a second approximation value of a Bayes mixture m(x^n) on an exponential family including the curved exponential family, in cooperation with the probability density calculator, to produce the second approximation value; and a whole mixture calculator for calculating (1−ε)p_w(x^n) + ε·m(x^n) by mixing the first approximation value of the Bayes mixture density p_w(x^n) with a part of the second approximation value of the Bayes mixture m(x^n) at a ratio of 1−ε:ε, where ε is a value smaller than unity, to produce the calculation result.




According to a fourth embodiment of the invention, which can be obtained by modifying the third embodiment, there is provided a Jeffreys mixture density calculator operable in response to a sequence of vectors x^n = (x_1, x_2, . . . , x_n) selected from a vector value set χ to produce a Bayes mixture density on occurrence of the x^n, comprising: a probability density calculator responsive to a sequence of data x^t and a vector value parameter u for calculating a probability density p(x^t|u) for the x^t on a curved exponential family; a Jeffreys mixture calculator for calculating a first approximation value of a Bayes mixture density p_J(x^n) based on a Jeffreys prior distribution w_J(u), in cooperation with the probability density calculator, to produce the first approximation value; an enlarged mixture calculator for calculating a second approximation value of a Bayes mixture m(x^n) on an exponential family including the curved exponential family, in cooperation with the probability density calculator, to produce the second approximation value; and a whole mixture calculator for calculating (1−ε)p_J(x^n) + ε·m(x^n) by mixing the first approximation value of the Bayes mixture density p_J(x^n) with a part of the second approximation value of the Bayes mixture m(x^n) at a ratio of 1−ε:ε, where ε is a value smaller than unity, to produce the calculation result.




According to a fifth embodiment of the invention, there is provided a prediction probability density calculator operable in response to a sequence of vectors x^n = (x_1, x_2, . . . , x_n) selected from a vector value set χ and a vector x_{n+1}, to produce a prediction probability density on occurrence of the x_{n+1}, comprising: a joint probability calculator, structured by the Bayes mixture density calculator of the first embodiment of the invention, for calculating modified Bayes mixture densities q^(ε)(x^n) and q^(ε)(x^{n+1}) based on a predetermined prior distribution, to produce first calculation results; and a divider responsive to the first calculation results for calculating a probability density q^(ε)(x^{n+1})/q^(ε)(x^n), to produce a second calculation result with the first calculation results kept intact.




According to a sixth embodiment of the invention, which can be obtained by modifying the fifth embodiment, there is provided a prediction probability density calculator operable in response to a sequence x^n = (x_1, x_2, . . . , x_n) selected from a vector value set χ and a vector x_{n+1}, to produce a prediction probability density on occurrence of the x_{n+1}, comprising: a joint probability calculator, structured by the Jeffreys mixture density calculator of the second embodiment of the invention, for calculating modified Jeffreys mixture densities q^(ε)(x^n) and q^(ε)(x^{n+1}), to produce first calculation results; and a divider responsive to the first calculation results for calculating a probability density q^(ε)(x^{n+1})/q^(ε)(x^n), to produce a second calculation result with the first calculation results kept intact.




Also, when the hypothesis class is a curved exponential family, it is possible to provide a seventh embodiment of the invention by modifying the fifth embodiment. According to the seventh embodiment of the invention, there is provided a prediction probability density calculator operable in response to a sequence of vectors x^n = (x_1, x_2, . . . , x_n) selected from a vector value set χ and a vector x_{n+1}, to produce a prediction probability density on occurrence of the x_{n+1}, comprising: a joint probability calculator, structured by the Bayes mixture density calculator of the third embodiment of the invention, for calculating modified Bayes mixture densities q^(ε)(x^n) and q^(ε)(x^{n+1}) based on a predetermined prior distribution, to produce first calculation results; and a divider responsive to the first calculation results for calculating a probability density q^(ε)(x^{n+1})/q^(ε)(x^n), to produce a second calculation result with the first calculation results kept intact.




According to an eighth embodiment of the invention, which can be obtained by modifying the seventh embodiment, there is provided a prediction probability density calculator operable in response to a sequence of vectors x^n = (x_1, x_2, . . . , x_n) selected from a vector value set χ and a vector x_{n+1}, to produce a prediction probability density on occurrence of the x_{n+1}, comprising: a joint probability calculator, structured by the Jeffreys mixture density calculator of the fourth embodiment of the invention, for calculating modified Jeffreys mixture densities q^(ε)(x^n) and q^(ε)(x^{n+1}), to produce first calculation results; and a divider responsive to the first calculation results for calculating a probability density q^(ε)(x^{n+1})/q^(ε)(x^n), to produce a second calculation result with the first calculation results kept intact.











BRIEF DESCRIPTION OF THE DRAWING





FIG. 1 shows a block diagram for use in describing a method according to a first embodiment of the invention, which is executed by the use of a first modified Bayes mixture distribution calculator;


FIG. 2 shows a block diagram for use in describing a method according to a second embodiment of the invention, which is executed by the use of a first modified Jeffreys mixture distribution calculator;


FIG. 3 shows a block diagram for use in describing a method according to a third embodiment of the invention, which is executed by the use of a second modified Bayes mixture distribution calculator;


FIG. 4 shows a block diagram for use in describing a method according to a fourth embodiment of the invention, which is executed by the use of a second modified Jeffreys mixture distribution calculator;


FIG. 5 shows a prediction probability calculator used in a method according to fifth through eighth embodiments of the invention.











DESCRIPTION OF THE PREFERRED EMBODIMENTS




First, explanation is made about the symbols used in this specification. Let ν be a σ-finite measure on the Borel subsets of the k-dimensional Euclidean space ℝ^k and let χ be the support of ν. For example, it may be assumed that ν(dx) is the Lebesgue measure dx on ℝ^k and that χ is ℝ^k itself (conversely, a more general measure space could be assumed).




Herein, let consideration be made about a procedure which calculates, for each natural number t, a probability density of x_{t+1} in response to a sequence x^t and a value x_{t+1}. Here,

x^t =def (x_1, x_2, . . . , x_t) ∈ χ^t  and  x_{t+1} ∈ χ.

Such a procedure is assumed to be expressed as q(x_{t+1}|x^t). This means that a probability density of x_{t+1} is expressed on condition that x^t is given. In this event, the following equation holds:








∫_χ q(x_{t+1}|x^t) ν(dx_{t+1}) = 1

In the above equation, q(x_{t+1}|x^t) is referred to as the prediction probability distribution for the (t+1)-th data x_{t+1}.






Then, if

q(x^n) =def ∏_{t=0}^{n−1} q(x_{t+1}|x^t)

(assuming that q(x_1) is defined even for t = 0), the following equation holds:










∫_{χ^n} q(x^n) ν(dx^n) = 1

where

ν(dx^n) =def ∏_{t=1}^{n} ν(dx_t).

Therefore, q defines a joint probability distribution on the infinite sequence set χ^∞ (that is, q defines a stochastic process). Given a stochastic process q, a prediction procedure is determined.
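
By way of illustration only (not part of the original disclosure), the following minimal Python sketch shows how a joint probability q(x^n) arises as the cumulative product of prediction probability densities, and that it is normalized as required above. The Laplace rule used here is a hypothetical stand-in for a generic procedure q, and the binary (Bernoulli) alphabet is an assumed example.

```python
import itertools
import numpy as np

def laplace_rule(x_prev):
    """Predictive probability that the next bit is 1, given past bits
    (Laplace's rule of succession; a stand-in for a generic q(x_{t+1}|x^t))."""
    return (np.sum(x_prev) + 1.0) / (len(x_prev) + 2.0)

def joint_probability(x):
    """q(x^n): the cumulative product of q(x_{t+1}|x^t), t = 0, ..., n-1."""
    prob = 1.0
    for t in range(len(x)):
        p1 = laplace_rule(x[:t])
        prob *= p1 if x[t] == 1 else 1.0 - p1
    return prob

# The joint probabilities of all length-n sequences sum to one,
# as the normalization equation above requires.
n = 6
total = sum(joint_probability(np.array(bits))
            for bits in itertools.product([0, 1], repeat=n))
print(total)  # ~1.0
```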




Next, a model class is determined. Let p(x|u) be a probability density of x ∈ χ based on the measure ν, where u is a d-dimensional real-valued parameter. Then, the model class is defined by:

S =def {p(·|u) : u ∈ U}

This class may be referred to as a hypothesis class, where U is a subset of ℝ^d. It is assumed that p(x|u) is twice differentiable with respect to u. When K is a compact set included in U, S(K) is given by:

S(K) =def {p(·|u) : u ∈ K}

Furthermore, a definition is made as follows:

p(x^n|u) =def ∏_{t=1}^{n} p(x_t|u)

That is, for a sequence of data x^n, it is assumed that each element x_t independently follows the same distribution p(·|u) (that is, the elements are i.i.d., an abbreviation for independent and identically distributed). For simplicity, this assumption is introduced in the specification. However, the method according to the invention may easily be extended to the case where the elements x_t are not i.i.d.




A prior distribution is defined as a probability distribution in which the parameter u is regarded as a random variable. It is presumed that the density of a certain prior distribution with respect to the Lebesgue measure du is provided as w(u). Then, p_w is considered as a probability density on χ^n if it is given by:

p_w(x^n) =def ∫ p(x^n|u) w(u) du

The p_w thus obtained is referred to as the Bayes mixture with prior density w.
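
As a concrete illustration of this definition (an illustrative sketch, not part of the patent text), the following Python fragment approximates the Bayes mixture p_w(x^n) for an assumed Bernoulli model with a uniform prior; a grid-based Riemann sum stands in for the integral over u.

```python
import numpy as np

def bernoulli_likelihood(x, u):
    """p(x^n | u) for i.i.d. Bernoulli(u) data (u may be a grid array)."""
    k = np.sum(x)
    return u**k * (1.0 - u)**(len(x) - k)

def bayes_mixture(x, prior_density, grid):
    """p_w(x^n) = ∫ p(x^n|u) w(u) du, approximated by a Riemann sum."""
    du = grid[1] - grid[0]
    return float(np.sum(bernoulli_likelihood(x, grid) * prior_density(grid)) * du)

grid = np.linspace(1e-6, 1.0 - 1e-6, 20001)
uniform = lambda u: np.ones_like(u)        # w(u) = 1 on [0, 1]
x = np.array([1, 0, 1, 1, 0, 1])
# For the uniform prior this equals 1/((n+1) * C(n, k)); here 1/105.
print(bayes_mixture(x, uniform, grid))
```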




Next, the definition of the Jeffreys prior distribution is recalled. The Fisher information of the parameter u is represented as J(u). That is, the (i, j) component of this d-dimensional square matrix is obtained by the following equation:

J_ij(u) = −E_u[ ∂² log p(x|u) / ∂u_i ∂u_j ]

In the above equation, log denotes the natural logarithm and E_u represents the expected value based on p(x|u). The density on K of the Jeffreys prior distribution with respect to the Lebesgue measure du is represented as w_J(u) and is obtained by:

w_J(u) = √(det J(u)) / C_J(K)

where

C_J(K) =def ∫_K √(det J(u)) du,

that is, C_J(K) is a normalization constant.
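
For concreteness (an illustrative sketch under an assumed model, not part of the disclosure), the Jeffreys prior of the Bernoulli model can be computed numerically: its Fisher information is J(u) = 1/(u(1−u)), so w_J(u) is proportional to √(det J(u)) and is normalized by C_J(K) on a compact K.

```python
import numpy as np

# Jeffreys prior for the Bernoulli model on a compact set K = [a, 1-a].
a = 0.01
grid = np.linspace(a, 1.0 - a, 4001)
du = grid[1] - grid[0]

root_det_J = 1.0 / np.sqrt(grid * (1.0 - grid))   # sqrt(det J(u)), d = 1
C_J = np.sum(root_det_J) * du                      # normalization constant C_J(K)
w_J = root_det_J / C_J                             # Jeffreys prior density on K

print(C_J)                # approaches pi as a -> 0 (full-interval normalizer)
print(np.sum(w_J) * du)   # ~1.0: w_J integrates to one on K
```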




Next, the Jeffreys procedure proposed by B. Clarke and A. R. Barron will be explained. Their method responds to inputs x^n and x_{n+1} and produces an output given by:

p_J(x_{n+1}|x^n) =def ∫_K p(x^{n+1}|u) w_J(u) du / ∫_K p(x^n|u) w_J(u) du



Next, the redundancy which they employ as a performance measure is introduced. Let q(x_t|x^{t−1}) represent an output corresponding to an input (x^{t−1}, x_t) obtained by a certain procedure q. Herein, it is assumed that each x_t (t = 1, 2, . . . , n+1) is a random variable following a certain p(·|u) (u ∈ U_c ⊂ U). The redundancy of q for u is determined by:








R_n(q, u) =def Σ_{t=1}^{n} E_u[ log( p(x_t|u) / q(x_t|x^{t−1}) ) ]


This may be referred to as the cumulative Kullback-Leibler divergence. The value is always non-negative, and the smaller it is, the better the performance of q. In particular, this index is often used in the context of data compression. Also, it is noted that the redundancy may be rewritten as follows.








R_n(q, u) = E_u[ log( p(x^n|u) / q(x^n) ) ]

The optimality of the Jeffreys procedure proposed by B. Clarke and A. R. Barron is realized in that the following equation is true:

R_n(p_J, u) = (d/2) log( n/(2πe) ) + log C_J(K) + o(1)

Herein, the value of o(1) approaches zero as n increases. This asymptotic equation holds uniformly for all u ∈ K_0, where K_0 is any compact set included in the interior K° of K (i.e., K_0 ⊂ K°).




Because

sup_{u∈K} R_n(q, u)

is larger than or equal to the above value of R_n(p_J, u) whatever procedure q is used, the asymptotic equation is optimum. That is, the following equation holds:








inf_q sup_{u∈K} R_n(q, u) = (d/2) log( n/(2πe) ) + log C_J(K) + o(1)

In view of the above relationship, p_J may be described as asymptotically minimax for redundancy.




Next, the logarithmic regret is introduced. It is assumed that a sequence of data x^n is given. The logarithmic regret for the data x^n of q with respect to the probability model S is defined by the following equation:







r(q, x^n) =def Σ_{t=1}^{n} log( p(x_t|û(n)) / q(x_t|x^{t−1}) )

Here, û(n) is the maximum likelihood estimate of u on condition that x^n is given. That is, û(n) is defined as follows:

û(n) =def arg max_u p(x^n|u)
As in the case of redundancy, the logarithmic regret can be represented in another way as follows:

r(q, x^n) = log( p(x^n|û(n)) / q(x^n) )
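The following sketch (illustrative only, assuming a Bernoulli model) evaluates this last form of the logarithmic regret, taking q to be the Jeffreys mixture, which for the Bernoulli model is the well-known Krichevsky-Trofimov estimator.

```python
import numpy as np

def kt_log_joint(x):
    """log q(x^n) for the Jeffreys (Krichevsky-Trofimov) mixture on Bernoulli:
    the predictive probability of a 1 after t bits containing `ones` ones
    is (ones + 1/2) / (t + 1)."""
    log_q, ones = 0.0, 0
    for t, bit in enumerate(x):
        p1 = (ones + 0.5) / (t + 1.0)
        log_q += np.log(p1 if bit else 1.0 - p1)
        ones += int(bit)
    return log_q

def log_regret(x, log_q_joint):
    """r(q, x^n) = log p(x^n | u_hat(n)) - log q(x^n), with u_hat(n) = k/n."""
    n, k = len(x), int(np.sum(x))
    u_hat = k / n
    log_p_ml = (k * np.log(u_hat) if k else 0.0) \
             + ((n - k) * np.log(1.0 - u_hat) if n - k else 0.0)
    return log_p_ml - log_q_joint(x)

x = np.array([1, 0, 1, 1, 1, 0, 1, 1])
# For Bernoulli (an exponential family) this grows like (1/2) log n + O(1).
print(log_regret(x, kt_log_joint))
```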
At this point, when S is assumed to be an exponential family, the following equation (1) holds:

r(p_J, x^n) = (d/2) log( n/(2π) ) + log C_J(K) + o(1)    (1)


An exponential family is a model which can be represented by the following equation:

S = {p(x|θ) = exp(θ·x − ψ(θ)) : θ ∈ Θ}

Following the customary notation for exponential families, θ may be used as the parameter instead of u; θ is referred to as the natural parameter or the θ-coordinates of the exponential family. A more detailed description is given in L. Brown, “Fundamentals of Statistical Exponential Families”, Institute of Mathematical Statistics, 1986.




Asymptotic equation (1) holds uniformly for all x^n which satisfy û(n) ∈ K_0. As in the case of redundancy, q has the minimax property for the logarithmic regret if it attains the corresponding infimum of the supremum of r(q, x^n). However, when S does not belong to an exponential family, the above asymptotic equation for the Jeffreys procedure is not true. Instead, it can be proved that the following formula holds:








sup_{x^n: û(n)∈K_0} r(p_J, x^n) > inf_q sup_{x^n: û(n)∈K} r(q, x^n)



Taking the above into consideration, some modifications are required. Here, one solution is explained. First, the empirical Fisher information Ĵ is introduced and is given by:










Ĵ_ij(x|u) =def −∂² log p(x|u) / ∂u_i ∂u_j



Furthermore, a definition is added as follows:

Ĵ(x^n|u) =def (1/n) Σ_{t=1}^{n} Ĵ(x_t|u)

In this case, the following equation holds:

Ĵ_ij(x^n|u) = −(1/n) ∂² log p(x^n|u) / ∂u_i ∂u_j

Using these definitions, the Fisher information satisfies:

J(u) = E_u[ Ĵ(x^n|u) ]

Next, a random variable s is defined by:

s(x|u) =def Ĵ(x|u) − J(u),

where s is representative of a d-dimensional square matrix. As in the definition of Ĵ(x^n|u), s(x^n|u) is defined by:

s(x^n|u) = (1/n) Σ_{t=1}^{n} s(x_t|u)



Let v be representative of a d-dimensional square matrix (not to be confused with the measure ν). In this event, a family of new probability densities is defined by:

p̄(x|u,v) =def p(x|u) exp( v·s(x|u) − ψ(u,v) )

where

v·s(x|u) =def Σ_{ij} v_ij s_ij(x|u)

and

ψ(u,v) =def log ∫ p(x|u) exp( v·s(x|u) ) ν(dx) = log E_u[ exp( v·s(x|u) ) ].

In this case, it is noted that p̄(x^n|u,v) is represented by:

p̄(x^n|u,v) = p(x^n|u) exp( n( v·s(x^n|u) − ψ(u,v) ) )



Next,

V_B =def {v : ∀i, ∀j, |v_ij| ≤ B},

and S is expanded into S̄ on the assumption that B is a certain positive constant and that ψ(u,v) is finite for u ∈ U, v ∈ V_B.








S̄ = {p̄(·|u,v) : u ∈ U, v ∈ V_B}

The S̄ thus obtained by expansion of S is referred to as the exponential fiber bundle for S. In this case, the adjective “exponential” indicates that s(x|u) has the same direction as the exponential curvature of S. A more detailed description is given in “Differential Geometry in Statistical Inference”, Institute of Mathematical Statistics, Chapter 1, 1987.
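
As an illustrative sketch (assuming the Bernoulli model, so that d = 1 and s is a scalar, and χ = {0, 1}, which makes ψ a finite sum; none of these choices are from the patent itself), the enlarged density p̄(x|u,v) can be computed directly from the definitions above:

```python
import numpy as np

def s_stat(x, u):
    """s(x|u) = J_hat(x|u) - J(u) for Bernoulli:
    J_hat(x|u) = x/u^2 + (1-x)/(1-u)^2 and J(u) = 1/(u(1-u))."""
    J_hat = x / u**2 + (1 - x) / (1 - u)**2
    return J_hat - 1.0 / (u * (1.0 - u))

def psi(u, v):
    """psi(u, v) = log E_u[exp(v * s(x|u))]; a finite sum since chi = {0, 1}."""
    return np.log((1 - u) * np.exp(v * s_stat(0, u)) +
                  u * np.exp(v * s_stat(1, u)))

def p_bar(x, u, v):
    """Fiber-bundle density: p_bar(x|u,v) = p(x|u) exp(v*s(x|u) - psi(u,v))."""
    p = u if x == 1 else 1.0 - u
    return p * np.exp(v * s_stat(x, u) - psi(u, v))

# p_bar(.|u,v) is a probability distribution for every (u, v):
u, v = 0.3, 0.05
print(p_bar(0, u, v) + p_bar(1, u, v))   # = 1.0
```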




Let ρ(u) be a prior density on u, and let the mixture density m be defined by:










m(x^n) =def ∫∫ p̄(x^n|u,v) ρ(u) du dv / (2B)^{d²}

Herein, the range of integration for v is V_B. It is also noted that (2B)^{d²} is the Lebesgue volume of V_B.




The whole mixture is constructed by combining m with p_J according to the following equation (2):

q^(ε)(x^n) =def (1 − ε_n) p_J(x^n) + ε_n·m(x^n),    (2)

where 0 < ε_n < ½. For q in the above equation, it is assumed that the value of ε_n decreases with n and that the following inequality (3) holds:

∀n, ε_n ≥ 1/n^l    (3)

In the formula (3), l is representative of a certain positive number. On the basis of these assumptions, it is proved that q^(ε) asymptotically becomes minimax as the value of n increases.




This shows that, when q(x_t|x^{t−1}) is calculated not only by using the mixture for S, as in the general Bayes procedure, but also by slightly combining the mixture m(x^n) for the enlarged class, the calculation brings about a good result with respect to the logarithmic regret.
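
The sketch below (illustrative only; it assumes a Bernoulli hypothesis class, a uniform ρ on a compact interior set, and crude Riemann sums) assembles the whole mixture of equation (2): the Jeffreys mixture p_J, which for Bernoulli takes the Krichevsky-Trofimov form, is combined with the fiber-bundle mixture m at the ratio (1−ε):ε.

```python
import numpy as np

def p_bar(x, u, v):
    """p_bar(x|u,v) for Bernoulli, restated compactly from the earlier sketch."""
    s = lambda b: (b / u**2 + (1 - b) / (1 - u)**2) - 1.0 / (u * (1 - u))
    psi = np.log((1 - u) * np.exp(v * s(0)) + u * np.exp(v * s(1)))
    p = u if x == 1 else 1.0 - u
    return p * np.exp(v * s(x) - psi)

def kt_joint(x):
    """Jeffreys mixture p_J(x^n) for Bernoulli (Krichevsky-Trofimov)."""
    q, ones = 1.0, 0
    for t, b in enumerate(x):
        p1 = (ones + 0.5) / (t + 1.0)
        q *= p1 if b else 1.0 - p1
        ones += int(b)
    return q

def m_mixture(x, B=0.05):
    """m(x^n): mixture of p_bar over u (rho uniform on [0.2, 0.8], a compact
    set in the interior of U) and v in V_B = [-B, B], divided by 2B (d = 1)."""
    us = np.linspace(0.2, 0.8, 61)
    vs = np.linspace(-B, B, 21)
    du, dv = us[1] - us[0], vs[1] - vs[0]
    rho = 1.0 / 0.6
    total = sum(np.prod([p_bar(int(b), u, v) for b in x]) * rho * du * dv
                for u in us for v in vs)
    return total / (2 * B)

def q_eps(x, eps=0.1):
    """Whole mixture of equation (2): q = (1 - eps) * p_J + eps * m."""
    return (1.0 - eps) * kt_joint(x) + eps * m_mixture(x)

print(q_eps(np.array([1, 0, 1, 1, 0, 1])))
```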




When S belongs to the model class referred to as a curved exponential family, the procedure can be further simplified. This is the case where S is a smooth subspace of an exponential family T. More specifically, on the assumption that T is a d̄-dimensional exponential family given by

T = {p(x|θ) = exp(θ·x − ψ(θ)) : θ ∈ Θ ⊂ ℝ^d̄},

S is represented, on the condition that d < d̄, by:

S = {p_c(x|u) = p(x|φ(u)) : u ∈ U ⊂ ℝ^d}

where φ is a smooth function mapping u to θ. For example, if χ is a finite set, any smooth model becomes a curved exponential family. Although the curved exponential family has high generality in comparison with the exponential family, it is not as general as the general smooth model class. A more detailed description is given by Shunichi Amari in “Differential-Geometrical Methods in Statistics”, Lecture Notes in Statistics, Springer-Verlag.




Under these circumstances, S̄ (the exponential fiber bundle of S) is coincident with T to first-order approximation. Therefore, a mixture in the exponential family T, in which S is included, can be used instead of the mixture in the exponential fiber bundle. That is, it can be proved, as above, that q^(ε) becomes minimax on the assumption that m is given by:

m(x^n) = ∫_{Θ′} p(x^n|θ) ρ(θ) dθ

In the above equation, Θ′ represents a set including {θ : θ = φ(u), u ∈ U}, and ρ represents a smooth prior distribution density on Θ′.




In addition, in the case of the curved exponential family, the calculation of the Fisher information J(u) becomes easy. That is, in this case, the expected value can be determined without any integration, by the following equation (4):

J_ij(u) = Σ_{α=1}^{d̄} Σ_{β=1}^{d̄} ( ∂φ_α(u)/∂u_i )( ∂φ_β(u)/∂u_j ) ( ∂²ψ(θ)/∂θ_α∂θ_β )|_{θ=φ(u)}    (4)
This is because the Fisher information of θ in the exponential family T is given by:

∂²ψ(θ) / ∂θ_α ∂θ_β
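To illustrate equation (4) (a sketch under assumed toy choices, not from the patent): take T to be the product of two Bernoulli natural-parameter families and φ(u) = (u, 2u). The Fisher information of the curved family then follows from the Hessian of ψ by the chain rule, with no expectation to evaluate.

```python
import numpy as np

# Toy curved exponential family: T has psi(theta) = log(1+e^theta_1)
# + log(1+e^theta_2) (two independent Bernoulli coordinates), and the
# curve theta = phi(u) = (u, 2u) embeds a one-dimensional S (d=1, d_bar=2).

def psi_hessian(theta):
    """Hessian of psi: diagonal with entries sigma(theta)(1 - sigma(theta)),
    i.e. the Fisher information of T in its natural parameters."""
    sig = 1.0 / (1.0 + np.exp(-theta))
    return np.diag(sig * (1.0 - sig))

def fisher_curved(u):
    """Equation (4): J(u) = Dphi(u)^T . Hess psi(phi(u)) . Dphi(u)."""
    dphi = np.array([1.0, 2.0])            # dphi_alpha/du for phi(u) = (u, 2u)
    H = psi_hessian(np.array([u, 2.0 * u]))
    return dphi @ H @ dphi

print(fisher_curved(0.3))
```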


In the previous description, it has thus far been explained that the logarithmic regret, which is used as the performance measure for the sequential prediction issue, can be minimized by combining the mixture on the exponential fiber bundle with the Bayes mixture. This is advantageous even if the logarithmic loss is used as the performance measure for a non-sequential prediction issue. This is because decreasing the value of the following formula, which concerns the logarithmic loss, results in a decrease of each of its terms, namely log(1/q(x_t|x^{t−1})). The values log(1/q(x_t|x^{t−1})) are referred to as the logarithmic losses. The formula in question is given by:









Σ_{t=1}^{n} log( p(x_t|û(n)) / q(x_t|x^{t−1}) )




It is an object of the present invention to provide a statistical estimation method which is improved by using the above-described techniques.




Herein, description will be schematically made about first through eighth embodiments according to the present invention so as to facilitate understanding of the present invention.




In the first embodiment according to the present invention, calculation is carried out in connection with the modified Bayes mixture probability density. To this end, an output generated from a device which determines the Bayes mixture probability density on S is combined with an output generated from a device which calculates the Bayes mixture probability density on the exponential fiber bundle.




The second embodiment according to the present invention which is basically similar to the first embodiment is featured by using the Jeffreys prior distribution on calculating the Bayes mixture probability density on S.




The third embodiment according to the present invention is operable in a manner similar to the first embodiment except that operation is simplified when S is curved exponential family. Such simplification can be accomplished by utilizing the property of S.




The fourth embodiment according to the present invention which is basically similar to the third embodiment is featured by using the Jeffreys prior distribution in the device which determines the Bayes mixture probability density on S.




The fifth through the eighth embodiments according to the present invention are featured by calculating prediction probability density by the use of the devices according to the first through the fourth embodiments according to the invention, respectively.




Next, description will be made in detail about the first through the eighth embodiments according to the present invention with reference to the accompanying drawings.




Referring to FIG. 1, a device according to the first embodiment of the invention is operated in the following order.

(1) Inputs x^n are provided to and stored into a probability density calculator shown by the block 11 in FIG. 1.




(2) Next, a Bayes mixture calculator shown by the block 12 in FIG. 1 calculates p(x^n|u) for various values of u by the use of the probability density calculator 11 and also calculates approximation values of the Bayes mixture given by

p_w(x^n) = ∫ p(x^n|u) w(u) du

by using the previous calculation results p(x^n|u). Thereafter, the Bayes mixture calculator 12 sends the approximation values to a whole mixture calculator shown by the block 14 in FIG. 1.




(3) An enlarged mixture calculator shown by the block 13 in FIG. 1 calculates p(x^n|u) for various values of u and p(x|u) for various values of both x and u in cooperation with the probability density calculator, and calculates J(u) and Ĵ(x^n|u) for various values of u by the use of the previous calculation results p(x^n|u) and p(x|u). Further, using these results, the enlarged mixture calculator 13 calculates p̄(x^n|u,v) for various values of v and u, and calculates approximation values of the Bayes mixture

m(x^n) = ∫∫ p̄(x^n|u,v) ρ(u) du dv / (2B)^{d²}

by the use of the previous calculation results p̄(x^n|u,v), and sends the approximation values to the whole mixture calculator 14.




(4) The whole mixture calculator 14 calculates the mixture q^(ε)(x^n) = (1−ε)p_w(x^n) + ε·m(x^n) for a predetermined small value of ε on the basis of the values of the two Bayes mixtures which have been stored, and produces the mixture as an output.




Referring to FIG. 2, a device according to the second embodiment of the invention is basically similar in structure to the first embodiment of the invention except that the device illustrated in FIG. 2 utilizes a Jeffreys mixture calculator 22 instead of the Bayes mixture calculator 12 used in FIG. 1. In FIG. 2, the device carries out no calculation of the Bayes mixture

∫ p(x^n|u) w(u) du

but instead calculates the Jeffreys mixture given by

∫ p(x^n|u) w_J(u) du

in accordance with the above operation (2). That is, the Jeffreys mixture calculator 22 calculates p(x^n|u) for various values of u and p(x|u) for various values of x and u in cooperation with the probability density calculator 21, and calculates J(u) for various values of u by using the previous calculation results p(x^n|u) and p(x|u). Subsequently, the Jeffreys mixture calculator 22 further calculates w_J(u) for various values of u by the use of the previous calculation results, to obtain approximation values of

∫ p(x^n|u) w_J(u) du

using w_J(u).




Referring to FIG. 3, a device according to the third embodiment of the invention is successively operated in the order mentioned below.

(1) Inputs x^n are provided to and stored into a probability density calculator shown by the block 31 in FIG. 3.




(2) A Bayes mixture calculator shown by the block 32 in FIG. 3 calculates p_c(x^n|u) for various values of u in cooperation with the probability density calculator 31 and thereafter calculates approximation values of the Bayes mixture

p_w(x^n) = ∫ p_c(x^n|u) w(u) du

by using the previous calculation results p_c(x^n|u). As a result, the approximation values are sent from the Bayes mixture calculator 32 to a storage 34 which is operable as a part of a whole mixture calculator.




(3) An enlarged mixture calculator 33 in FIG. 3 calculates p(x^n|θ) for various values of θ in cooperation with the probability density calculator 31 and calculates approximation values of the Bayes mixture

m(x^n) = ∫_{Θ′} p(x^n|θ) ρ(θ) dθ

by using the previous calculation results. The approximation values are sent from the enlarged mixture calculator 33 to the whole mixture calculator 34 in FIG. 3.




(4) The whole mixture calculator 34 calculates the mixture q^(ε)(x^n) = (1−ε)p_w(x^n) + ε·m(x^n) for a predetermined small value of ε on the basis of the values of the two Bayes mixtures which have been stored, and produces the mixture as an output.




Referring to FIG. 4, a device according to the fourth embodiment of the invention is successively operated in the order mentioned below.

(1) Inputs x^n are provided to and stored into a probability density calculator shown by a block 41 in FIG. 4.




(2) A Jeffreys mixture calculator shown by the block 42 in FIG. 4 calculates p_c(x^n|u) and w_J(u) for various values of u in cooperation with the probability density calculator 41 and a Jeffreys prior distribution calculator 45 (which is designed according to the equation (4)). In addition, the Jeffreys mixture calculator 42 calculates approximation values of the Jeffreys mixture

p_J(x^n) = ∫ p_c(x^n|u) w_J(u) du

by using the previous calculation results p_c(x^n|u) and w_J(u), and sends the approximation values to a whole mixture calculator 44 in FIG. 4.




(3) An enlarged mixture calculator shown by the block 43 in FIG. 4 calculates p(x^n|θ) for various values of θ in cooperation with the probability density calculator 41 and obtains approximation values of the Bayes mixture

m(x^n) = ∫_{Θ′} p(x^n|θ) ρ(θ) dθ

by using the previous calculation results. The approximation values are sent from the enlarged mixture calculator 43 to the whole mixture calculator 44.




(4) The whole mixture calculator 44 calculates the mixture q^(ε)(x^n) = (1−ε)p_J(x^n) + ε·m(x^n) for a predetermined small value of ε on the basis of the values of the two Bayes mixtures which have been stored, and produces the mixture as an output.




Referring to FIG. 5, a device according to each of the fifth through eighth embodiments of the invention includes a joint probability density calculator 51. Herein, it is to be noted that the devices illustrated in FIGS. 1 through 4 may be incorporated as the joint probability density calculator 51 in the devices according to the fifth through the eighth embodiments of the present invention, respectively. The device shown in FIG. 5 is operated in the order mentioned below.

(1) Inputs x^n and x_{n+1} are provided to the joint probability density calculator 51 in FIG. 5.




(2) The joint probability density calculator 51 calculates q(x^n) and q(x^{n+1}) and sends the calculation results to a divider 52 in FIG. 5.




(3) The divider 52 calculates q(x_{n+1}|x^n) = q(x^{n+1})/q(x^n) by using the two joint probabilities sent from the joint probability density calculator 51.
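
A minimal sketch of this divider step follows (illustrative only; it reuses the whole-mixture function q_eps from the earlier sketch as the joint probability calculator of block 51).

```python
import numpy as np
# Assumes q_eps from the preceding whole-mixture sketch is in scope.

def predict_next(x, x_next, q_joint):
    """The divider step: q(x_{n+1} | x^n) = q(x^{n+1}) / q(x^n)."""
    return q_joint(np.append(x, x_next)) / q_joint(x)

x = np.array([1, 0, 1, 1, 0, 1])
p1 = predict_next(x, 1, q_eps)   # prediction probability that x_7 = 1
p0 = predict_next(x, 0, q_eps)
print(p1, p0, p1 + p0)           # p1 + p0 ~ 1 (up to quadrature error in m)
```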




According to the first embodiment of the invention, in regard to the issue of minimizing the logarithmic regret for a general probability model S, it is possible to calculate a more advantageous joint probability distribution as compared with the conventional methods using a traditional Bayes mixture joint probability on S.




Furthermore, according to the second embodiment of the invention, in regard to the issue of minimizing the logarithmic regret for a general probability model S, it is possible to calculate a more advantageous joint probability distribution as compared with the methods using a traditional Jeffreys mixture joint probability on S.




Moreover, the third embodiment of the invention is advantageous in regard to the issue of minimizing the logarithmic regret for a curved exponential family S in that the joint probability distribution is more effectively calculated as compared with the methods using a traditional Bayes mixture joint probability on S.




In addition, the fourth embodiment of the invention is effective in connection with the issue of minimizing the logarithmic regret for a curved exponential family S in that it is possible to calculate a more advantageous joint probability distribution as compared with the conventional methods using a traditional Jeffreys mixture joint probability on S.




Further, each of the fifth through the eighth embodiments of the invention can effectively calculate the prediction probability distribution in regard to a prediction issue using logarithmic loss as performance measure, as compared with the conventional methods. More specifically, the fifth embodiment is more convenient than the conventional methods using traditional Bayes mixture joint probability on probability model S while the sixth embodiment is effective as compared with the conventional methods using traditional Jeffreys mixture joint probability on probability model S. Likewise, the seventh embodiment of the invention is favorable in comparison with the conventional methods using traditional Bayes mixture joint probability on curved exponential family S while the eighth embodiment of the invention is superior to the conventional methods using traditional Jeffreys mixture joint probability on curved exponential family S.



Claims
  • 1. An apparatus for processing a sequence of vector xn=(x1, x2, . . . , xn) selected from a vector value set χ to produce a Bayes mixture density on occurrence of the xn, comprising:a Bayes mixture density calculator having an input that receives a sequence of data xt and a vector value parameter u, and an output, the Bayes mixture density calculator comprising: a first calculator that calculates, from the sequence of data xt received at the input of the Bayes mixture density calculator, a probability density p(xt|u) for the data xt on curved exponential family; a second calculator, coupled to the first calculator, that calculates a first approximation value of a Bayes mixture density pw(xn) on the basis of a prior distribution w(u) predetermined by the first calculator to produce the first approximation value; a third calculator, coupled to the first calculator, that calculates a second approximation value of a Bayes mixture m(xn) on exponential family including curved exponential family in cooperation with the first calculator to produce the second approximation value; and a fourth calculator, coupled to the second and third calculators, that calculates (1−ε)pw(xn)+ε·m(xn) by mixing the first approximation value of the Bayes mixture density pw(xn) with a part of the second approximation value of the Bayes mixture m(xn) at a rate of 1−ε:ε, where ε is a value smaller than unity, to produce the Bayes mixture density on occurrence of the xn for arithmetic coding during data compression, the Bayes mixture density being output from the Bayes mixture density calculator.
  • 2. An apparatus for processing a sequence of vector xn=(x1, x2, . . . , xn) selected from a vector value set χ to produce a Bayes mixture density on occurrence of the xn, comprising:a Jeffreys mixture density calculator having an input that receives a sequence of data xt and a vector value parameter u, and an output, the Jeffreys mixture density calculator comprising: a first calculator that calculates, from the sequence of data xt received at the input of the Jeffreys mixture density calculator, a probability density p(xt|u) for the data xt on curved exponential family; a second calculator, coupled to the first calculator, that calculates a first approximation value of a Bayes mixture density pJ(xn) based on a Jeffreys prior distribution wJ(u) in cooperation with the first calculator to produce the first approximation value; a third calculator, coupled to the first calculator, that calculates a second approximation value of a Bayes mixture m(xn) on exponential family including curved exponential family in cooperation with the first calculator to produce the second approximation value; and a fourth calculator, coupled to the second and third calculators, that calculates (1−ε)pJ(xn)+ε·m(xn) by mixing the first approximation value of the Bayes mixture density pJ(xn) with a part of the second approximation value of the Bayes mixture m(xn) at a ratio of 1−ε:ε, where ε is a value smaller than unity, to produce the Bayes mixture density on occurrence of the xn for arithmetic coding during data compression, the Bayes mixture density being output from the Jeffreys mixture density calculator.
  • 3. A prediction probability density calculator for use in statistically predicting data on the basis of a sequence of vector xn=(x1, x2, . . . , xn) selected from a vector value set χ and xn+1, the prediction probability calculator being for producing a prediction probability density on occurrence of the xn+1, comprising:a joint probability calculator structured by the apparatus claimed in claim 1 for calculating a modified Bayes mixture density q(ε)(xn) and q(ε)(xn+1) based on a predetermined prior distribution to produce first calculation results; and a divider coupled to receive an output of the joint probability calculator and responsive to the calculation results for calculating a probability density q(ε)(xn+1)/q(ε)(xn) to produce as an output a second calculation result with the first calculation results kept intact.
  • 4. A prediction probability density calculator operable in response to a sequence of vector xn=(x1, x2, . . . , xn) selected from a vector value set χ and xn+1 to produce a prediction probability density on occurrence of the xn+1, comprising:a joint probability calculator structured by the apparatus claimed in claim 2 for calculating a modified Jeffreys mixture density q(ε)(xn) and q(ε)(xn+1) to produce first calculation results; and a divider coupled to receive an output of the joint probability calculator and responsive to the calculation results for calculating a probability density q(ε)(xn+1)/q(ε)(xn) to produce as an output a second calculation result with the first calculation results kept intact.
  • 5. A means for processing a sequence of data xn=(x1, x2, . . . , xn) to produce a mixture density on occurrence of the xn, comprising:means for calculating the mixture density having an input that receives the sequence of data xn, and an output, the Bayes mixture density calculating means comprising: means for calculating a first Bayes mixture density on a hypothesis class; means for calculating a second Bayes mixture density on an enlarged hypothesis class; and means for mixing the first Bayes mixture density with the second Bayes mixture density in a predetermined proportion to produce the modified Bayes mixture density, wherein the means for calculating the mixture density outputs the modified Bayes mixture density as the mixture density on occurrence of the xn for arithmetic coding during data compression.
  • 6. A means for processing as claimed in claim 5, wherein the first Bayes mixture density and the second Bayes mixture density are calculated by the use of a predetermined prior distribution.
  • 7. A means for processing as claimed in claim 5, wherein a Jeffreys prior distribution is used to calculate the first Bayes mixture density and second Bayes mixture density.
  • 8. A means for processing as claimed in claim 5, wherein the first Bayes mixture density and the second Bayes mixture density are mixed together at a rate of 1−ε:ε, where ε takes a value smaller than unity.
  • 9. A means for processing as claimed in claim 5, wherein the hypothesis class belongs to the curved exponential family.
  • 10. A means for processing a sequence of data xn=(x1, x2, . . . , xn) and a data xn+1 to produce a prediction probability density on occurrence of the xn+1 comprising:means for calculating a prediction probability density having an input that receives the sequence of data xn and data xn+1, and an output, the means for calculating a prediction probability density comprising: means for calculating first Bayes mixture densities, on a hypothesis class, for the sequence of data xn and a sequence of data xn+1 representing (x1, x2, . . . , xn, xn+1); means for calculating second Bayes mixture densities, on an enlarged hypothesis class, for the sequence of data xn and the sequence of data xn+1; means for mixing the first Bayes mixture densities for the sequence of data xn and the sequence of data xn+1 with the second Bayes mixture densities for the sequence of data xn and the sequence of data xn+1, in a predetermined proportion to produce the modified Bayes mixture densities for the sequence of data xn and the sequence of data xn+1, respectively; and means for dividing the modified Bayes mixture density for the sequence of data xn+1 by the modified Bayes mixture density for the sequence of data xn, wherein the means for calculating a prediction probability density outputs the result as the prediction probability density on occurrence of the xn+1 for arithmetic coding during data compression.
  • 11. A means for processing as claimed in claim 10, wherein the first Bayes mixture densities and the second Bayes mixture densities are calculated by the use of a predetermined prior distribution.
  • 12. A means for processing as claimed in claim 10, wherein the first Bayes mixture densities and the second Bayes mixture densities are calculated by the use of Jeffreys prior distribution.
  • 13. A means for processing as claimed in claim 10, wherein the first Bayes mixture densities and the second Bayes mixture densities are mixed together at a rate of 1−ε:ε, where ε takes a value smaller than unity.
  • 14. A means for processing as claimed in claim 10, wherein the hypothesis class belongs to curved exponential family.
  • 15. A method for processing a sequence of data xn=(x1, x2, . . . , xn) to produce a mixture density on occurrence of the xn, wherein the method comprises:receiving the sequence of data xn; calculating a first Bayes mixture density on a hypothesis class; calculating a second Bayes mixture density on an enlarged hypothesis class; and mixing the first Bayes mixture density with the second Bayes mixture density in a predetermined proportion to produce the mixture density on occurrence of the xn for arithmetic coding during data compression.
  • 16. A method for processing a sequence of data xn=(x1, x2, . . . , xn) and a data xn+1 to produce a prediction probability density on occurrence of the xn+1, wherein the method comprises:receiving the sequence of data xn and data xn+1; and repeating, for each sequence of data xn and each sequence of data xn+1 representing (x1, x2, . . . , xn, xn+1), the following first through third substeps of: (1) calculating a first Bayes mixture density, on a hypothesis class, for the sequence of data xn and the sequence of data xn+1; (2) calculating a second Bayes mixture density, on an enlarged hypothesis class, for the sequence of data xn and the sequence of data xn+1; and (3) mixing the first Bayes mixture densities for the sequence of data xn and the sequence of data xn+1 with the second Bayes mixture densities for the sequence of data xn and the sequence of data xn+1 in a predetermined proportion to produce modified Bayes mixture densities for the sequence of data xn and the sequence of data xn+1; and dividing the modified Bayes mixture density for the sequence of data xn+1 by the modified Bayes mixture density for the sequence of data xn, wherein the result is output as the prediction probability density on occurrence of the xn+1 for arithmetic coding during data compression.
  • 17. A computer readable medium which stores a program for processing a sequence of data xn=(x1, x2, . . . , xn) to produce a mixture density on occurrence of the xn, the program comprising the steps of:receiving the sequence of data xn; calculating a first Bayes mixture density on a hypothesis class; calculating a second Bayes mixture density on an enlarged hypothesis class; and mixing the first Bayes mixture density with the second Bayes mixture density in a predetermined proportion to produce the mixture density on occurrence of the xn for arithmetic coding during data compression.
  • 18. A computer readable medium which stores a program which is for processing a sequence of data xn=(x1, x2, . . . , xn) and a data xn+1 to produce a prediction probability density on occurrence of the xn+1, the program comprising the steps of:receiving the sequence of data xn and xn+1; repeating, for each sequence of data xn and each sequence of data xn+1 representing (x1, x2, . . . , xn, xn+1), the following substeps: (1) calculating a first Bayes mixture density, on a hypothesis class, for the sequence of data xn and the sequence of data xn+1; (2) calculating a second Bayes mixture density, on an enlarged hypothesis class, for the sequence of data xn and the sequence of data xn+1; and (3) mixing the first Bayes mixture densities for the sequence of data xn and the sequence of data xn+1 with the second Bayes mixture densities for the sequence of data xn and the sequence of data xn+1, in a predetermined proportion to produce modified Bayes mixture densities for the sequence of data xn and the sequence of data xn+1; and dividing the modified Bayes mixture density for the sequence of data xn+1 by the modified Bayes mixture density for the sequence of data xn, wherein the result is output as the prediction probability density on occurrence of the xn+1 for arithmetic coding during data compression.
Parent Case Info

This is a divisional of Application Ser. No. 09/099,405, filed Jun. 18, 1998, now U.S. Pat. No. 6,466,894, the disclosure of which is incorporated herein by reference.

US Referenced Citations (28)
Number Name Date Kind
5072452 Brown et al. Dec 1991 A
5113367 Marrian et al. May 1992 A
5276632 Corwin et al. Jan 1994 A
5539704 Doyen et al. Jul 1996 A
5659771 Golding Aug 1997 A
5706391 Yamada et al. Jan 1998 A
5710833 Moghaddam et al. Jan 1998 A
5859891 Hibbard Jan 1999 A
5909190 Lo et al. Jun 1999 A
5924065 Eberman et al. Jul 1999 A
5956702 Matuoka et al. Sep 1999 A
5980096 Thalhammer-Reyero Nov 1999 A
6009452 Horvitz Dec 1999 A
6012058 Fayyad et al. Jan 2000 A
6061610 Boer May 2000 A
6067484 Rowson et al. May 2000 A
6076083 Baker Jun 2000 A
6095982 Richards-Kortum et al. Aug 2000 A
6128587 Sjolander Oct 2000 A
6136541 Gulati Oct 2000 A
6155704 Hunt et al. Dec 2000 A
6161209 Moher Dec 2000 A
6246972 Klimasauskas Jun 2001 B1
6304841 Berger et al. Oct 2001 B1
6336108 Thiesson et al. Jan 2002 B1
6408290 Thiesson et al. Jun 2002 B1
6496816 Thiesson et al. Dec 2002 B1
20030010128 Buell et al. Jan 2003 A1
Foreign Referenced Citations (1)
Number Date Country
8902122 Mar 1991 NL
Non-Patent Literature Citations (6)
Entry
John C. Chao, “Jeffreys Prior Analysis of the Simultaneous Equations Model in the Cases with n + 1 Endogenous Variables” (Jul. 1998).
Frank Kleibergen and Richard Kleijn, “Bayesian Testing in Cointegration Models using the Jeffreys' Prior”.
Csiszar Budapest, “Information Theoretic Methods in Probability and Statistics”.
G. Larry Bretthorst, “An Introduction to Parameter Estimation Using Bayesian Probability Theory”.
Te Sun Han and Kingo Kobayashi, “Mathematics of Information and Coding, Translations of Mathematical Monographs”, volume 203, American Mathematical Society, 2002.
J. Rissanen, “Universal Modeling and Coding” IEEE trans. Information Theory, vol. 27, No. 1, pp. 12-23. 1981.