This invention relates to providing current images of an area being monitored overlaid with location information of a radiation source. More specifically, one embodiment of the invention relates to overlaying location information of a radiation source, based upon data received from detectors, on a current image of an area being monitored from a camera, where the camera and detectors are all mapped onto a single coordinate system.
This section is intended to provide a background or context to the invention that is, inter alia, recited in the claims. The description herein may include concepts that could be pursued, but are not necessarily ones that have been previously conceived or pursued. Therefore, unless otherwise indicated herein, what is described in this section is not prior art to the description and claims in this application and is not admitted to be prior art by inclusion in this section.
The need for accurate radiation surveillance is expanding as the perceived risk of unsecured nuclear materials entering and transiting within the country increases. Tracking systems are required to detect, locate, and track a radiation source. Such a system is described in U.S. Pat. No. 7,465,924.
Current systems for detecting and tracking radioactive sources include a live video image of an area that includes the detected radioactive source. Further, current systems determine the most likely location of a radiation source. What current systems lack is the ability to present the live video image and the most likely location of the source in a way that lets an operator easily determine the actual location of the source within the area being monitored. Current systems allow an operator to see an image of the area being monitored that may contain the most probable location of a source; however, the operator is unable to tell from the video alone where the radiation source is likely located. Collected data from the various detectors and cameras are not integrated together. Because collected data is not combined with image data, current systems require an operator to view data collected from detectors and video images separately from one another, and the combination must be done mentally by the operator. As such, this process is prone to error, and the outcome depends significantly on the mental acuity of the operator.
Current systems are also limited to using the same type of radiation detectors within a single system. Each detector has its own physical connection, means of accessing its data, and data format, and detectors of different types vary in all three. Because of these limitations, current systems are typically built using detectors from the same vendor. This leads to systems that are inflexible, in that detectors and cameras of different types are generally unable to be part of the same system. Current systems are also limited in the number of supported detectors and cameras based upon limitations of a system's computing power.
These prior art systems also tend to be limited with respect to configuring the arrangement of detectors and cameras. Generally, the location of the various detectors and cameras must be known to the system. The locations are determined based upon a physical grid manually set up over the area being monitored or upon calculations specific to the area being monitored. Both approaches are time intensive, error prone, and may be impractical given the area being monitored.
Thus, there is a need for a source tracking system which 1) determines the location of the detectors and cameras in the system, independent of the area being monitored, on a single coordinate system, 2) allows the system to use any type of detector and camera, and 3) integrates information regarding the location of a source with image data from the area being monitored. These capabilities need to be provided in a way that maximizes the amount of data that the system can process.
The present invention relates to systems and methods for 1) determining the location of the detectors and cameras, independent of the area being monitored, on a single coordinate system, 2) allowing the system to use any type of detector and camera, and 3) integrating collected data regarding the location of a source with image data from the area being monitored. The present invention provides these capabilities while maximizing the amount of data that the system can process. In various embodiments, one or more of the cameras is selectively moveable in order to track a radioactive source moving within the area being monitored. For example, a camera may be configured to tilt and/or pan in response to the movements of the source. Thus, the depiction of the likely location of the source can be substantially maintained at or near the center of a visual display image. In other embodiments, one or more of the cameras is substantially fixed but a moveable electronic visual indicia, for example a crosshair, is generated that tracks a moving radioactive source within the visual display image. In various embodiments, a combination of moveable and fixed cameras may be utilized.
In one embodiment, the present invention relates to a radioactive source tracking system. In this embodiment, the system can include one or more detectors, one or more cameras, a unified data collection system, processors, and means of communication for the components of the system. These components can be installed in an area to monitor for radiation sources, to provide near real-time images of the area containing the radiation source, and to provide a graphical overlay of location information on those images.
In another embodiment, location information includes the most probable location of a radiation source within the area being monitored. In another embodiment, the location information is the most probable location of a source, along with the confidence that the source will be located in a region of space surrounding the most probable location. In yet another embodiment, the location information is the probability that a source is located at any given point within the monitored area, thus allowing the monitoring of multiple sources contained within the monitored area.
In yet another embodiment, the system includes a number of radiation detectors and a number of cameras. With each detector and camera generating data concerning the current state of the monitored area, the amount of data that requires processing is large. The system must be able to present location information in a timely manner, that is, the information must be timely enough to aid in the locating and recovery of a detected radiation source. To maximize the amount of data processed, the system provides for a unified data collection system for the detectors and cameras.
In still another embodiment, the system includes radiation detectors of various types. The system, therefore, can take advantage of inventories of detectors of various types. This allows systems to be easily set up, installed, modified, repaired, and expanded.
These and other objects, advantages, and features of the invention, together with the organization and manner of operation thereof, will become apparent from the following detailed description when taken in conjunction with the accompanying drawings, wherein like elements have like numerals throughout the several drawings described below.
(From the brief description of the drawings:) a diagram depicting various elements used to map the location of a source using a substantially fixed camera.
The present invention relates to providing current images of a monitored area overlaid with location information of a radiation source. In general, the principal components of the present invention are detectors, video cameras, unified data collection, image and video output, and information overlaid on near real-time images. In one embodiment, the functional capabilities of the present invention include mapping of each detector and each camera onto a single coordinate system, without requiring the manual creation of a grid over the monitored space. Further capabilities include receiving information from the various detectors, determining location information of a radiation source, and overlaying the location information on a near real-time image of the area being monitored. The near real-time image contains the most probable location of the source within the area being monitored.
In one embodiment, the degree of confidence that the source will be located in a region of space surrounding the most probable location is determined. This confidence is overlaid on the images, containing the most probable location, from any of the system cameras. In another embodiment, the probability of the source being located at any point within the monitored area is calculated. This probability is overlaid on near real-time images from the system cameras.
Once the system detects a source, the location information of the source is calculated at step 116. The location information is relative to the single coordinate system. There are various ways, known in the art, of detecting and calculating the location of a radiation source. Examples include using the Sequential Analysis Test for detection and the Maximum Likelihood algorithms for location as disclosed in U.S. Pat. No. 7,465,924. In one embodiment, the location information contains the most probable location of a source. In another exemplary embodiment, the location information contains the most probable location of a source and the degree of confidence that the source will be located in a region of space surrounding the most probable location. In another exemplary embodiment, the location information includes the probability that a source is located at any given point within the area being monitored. The next step 118 is to map the image from one or more cameras into the single coordinate system. Then the location information is overlaid onto each mapped image at step 120. Finally, the overlaid image is displayed at step 122. In another embodiment, the location information is instead mapped at step 118 into the coordinate system of the camera's image; the location information is then overlaid onto the camera's image at step 120 and the image is displayed at step 122.
In an Inter-Detector coordinate system it is useful to designate one reference location 520 as the origin of a coordinate system. A second detector is chosen such that the positive x axis passes through the second detector 522. Unit vectors 540 are then defined for the y and z axes. Note that the orientation of this coordinate system does not depend on any feature of the area being monitored, although in the end features of the area being monitored may be referenced in the single coordinate system if needed. An example would be when the source position needs to be tied back to Geographic Information System (GIS) coordinates. In this case, building GIS coordinates might be a convenient frame of reference for locating the detector coordinate system.
When there are three or more detectors lying in a single horizontal plane, knowledge of the distances between each detector pair provides sufficient information to allow solving for the relative position of each detector. The distance between detectors can be determined using a measuring tape or, in a more advanced setting, obtained in an automated fashion through the use of receiver/transmitter pairs on the detectors. The relative positions solved for are the x and y coordinates of each detector in a Cartesian coordinate system. This coordinate system is aligned such that one reference location 520 lies at the origin and a second detector 522 lies along one of the coordinate axes. The number of unknowns to be solved for is 2m−3, where m is the number of detectors. There is a unique solution when m=3. For m>3, the problem is overdetermined. The redundancy provides the opportunity to first detect any gross errors in measured inter-detector distances. If none exist, then the redundancy can be used to minimize the effect of routine measurement error on the calculated values of the detector coordinates.
Finding the third detector point: The interior angle P at the first detector, in the triangle formed by the first three detectors, is obtained from the law of cosines
p² = a² + b² − 2ab cos(P)  (1)
where p is the distance from the second detector to the third detector, a is the distance from the first detector to the third detector, and b is the distance from the first detector to the second detector.
To find the position of the third detector in relation to the first detector, which is the (0,0) point on the coordinate plane, the program or system creates a right triangle using the first and third detectors. The equation
x₃ = a cos(P)  (3)
describes the relationships of the parts of this triangle, where x₃ is the distance on the x axis from detector one to detector three.
The equation
y₃ = a sin(P)  (4)
describes the relationships of the parts of this triangle, where y₃ is the distance on the y axis from detector one to detector three. The position of the third detector on the x,y coordinate plane is therefore (x₃, y₃).
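By way of a non-limiting illustration, the following Python sketch applies Equations (1), (3), and (4) to place the first three detectors from measured pairwise distances; the function and variable names are illustrative only and are not part of the claimed system.

```python
import math

def place_first_three(d12, d13, d23):
    """Place detectors 1-3 on the coordinate plane from pairwise distances.

    Detector 1 is the origin and detector 2 lies on the positive x axis.
    The law of cosines (Eq. 1) gives the interior angle P at detector 1,
    and Eqs. (3) and (4) give the position of detector 3.
    """
    cos_p = (d12**2 + d13**2 - d23**2) / (2.0 * d12 * d13)
    p = math.acos(max(-1.0, min(1.0, cos_p)))  # clamp against rounding error

    det1 = (0.0, 0.0)
    det2 = (d12, 0.0)                      # second detector on the +x axis
    det3 = (d13 * math.cos(p),             # Eq. (3): x3 = a cos(P)
            d13 * math.sin(p))             # Eq. (4): y3 = a sin(P)
    return det1, det2, det3

# Example with a 3-4-5 right triangle of detectors.
print(place_first_three(3.0, 4.0, 5.0))
# ((0.0, 0.0), (3.0, 0.0), (0.0, 4.0)) up to rounding
```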
Finding any other detector point: Next, all remaining detector locations are determined at step 222. The process that is used to locate the detectors on the coordinate plane is the same for each detector. Essentially, the following process is used to locate each of the remaining detectors until each detector has been properly located on the single coordinate system. While iterating through the list of remaining unmapped detectors, the current detector whose location is being solved for is referred to as detector i.
The process starts by creating three different triangles using detectors one, two, three, and i at step 224. It then solves for the interior angle of the first detector in each of these triangles at step 226. The law of cosines is used in calculating this interior angle for the various triangles. For the triangle composed of the first detector, second detector and detector i the equation used is:
pᵢ² = aᵢ² + b² − 2aᵢb cos(P1ᵢ)  (5)
This equation is rearranged to
P1ᵢ = cos⁻¹((aᵢ² + b² − pᵢ²)/(2aᵢb))  (6)
where P1ᵢ is the interior angle at the first detector, pᵢ is the distance from the second detector to detector i, aᵢ is the distance from the first detector to detector i, and b is the distance from the first detector to the second detector.
For the triangle composed of detectors one, three, and i the law of cosines equation is:
pᵢ² = aᵢ² + b² − 2aᵢb cos(P2ᵢ)  (7)
This equation can then be rearranged to the following equation.
P2ᵢ = cos⁻¹((aᵢ² + b² − pᵢ²)/(2aᵢb))  (8)
P2ᵢ in this equation is the interior angle at the first detector, pᵢ is the distance from the third detector to detector i, aᵢ is the distance from the first detector to detector i, and b is the distance from the first detector to the third detector.
Finally, for the third triangle composed of detectors one, two, and three the law of cosines equation is
p² = a² + b² − 2ab cos(P3)  (9)
This equation is then rewritten as
P3 = cos⁻¹((a² + b² − p²)/(2ab))  (10)
where P3 is the interior angle at the first detector, p is the distance from the second detector to the third detector, a is the distance from the first detector to the third detector, and b is the distance from the first detector to the second detector.
The angle P1ᵢ is the angle that is used to find the location of detector i on the coordinate grid. However, when the positions of three points are known and only relational data comparing those points to a fourth point is known, there exists a dual solution: P1ᵢ can be either negative or positive. Several steps are performed to find the correct value of P1ᵢ. If
P1ᵢ ≤ P3 + P2ᵢ + 0.1 and P1ᵢ ≥ P3 + P2ᵢ − 0.1  (11)
or if
P1ᵢ ≤ P3 − P2ᵢ + 0.1 and P1ᵢ ≥ P3 − P2ᵢ − 0.1  (12)
where the tolerance 0.1 plays the role of the infinitesimal ξ.
However, if
P2ᵢ ≤ P1ᵢ + P3 + 0.1 and P2ᵢ ≥ P1ᵢ + P3 − 0.1  (13)
or if
P2ᵢ ≤ 2π − (P1ᵢ + P3 + 0.1) and P2ᵢ ≥ 2π − (P1ᵢ + P3 − 0.1)  (14)
To find the position of the i detector in relation to the first detector which is the (0,0) point on our coordinate plane, the program or system creates a right triangle using the first, second, and i detectors at step 228. The equation
xᵢ = aᵢ cos(P1ᵢ)  (15)
describes the relationships of the parts of this triangle, where xᵢ is the distance on the x axis from detector one to detector i, P1ᵢ is the interior angle of detector one in the triangle composed of the first, second, and i detectors, and aᵢ is the distance from the first to the i detector.
The equation
yᵢ = aᵢ sin(P1ᵢ)  (16)
describes the relationships of the parts of this triangle, where yᵢ is the distance on the y axis from detector one to detector i, P1ᵢ is the interior angle of detector one in the triangle composed of the first, second, and i detectors, and aᵢ is the distance from the first to the i detector. The location of the i detector is set equal to (xᵢ, yᵢ) at step 230. The process used to find detector i repeats until the locations of all detectors are known.
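The following Python sketch, again purely illustrative, places a remaining detector i from its distances to the first three detectors. Rather than the angle comparisons of Equations (11)-(14), it resolves the dual solution by checking which candidate reproduces the measured distance to the third detector, which is a mathematically equivalent test.

```python
import math

def law_of_cosines_angle(a, b, p):
    """Interior angle opposite side p in a triangle with sides a, b, p."""
    cos_val = (a**2 + b**2 - p**2) / (2.0 * a * b)
    return math.acos(max(-1.0, min(1.0, cos_val)))

def place_detector_i(det3, d1i, d2i, d3i, d12):
    """Place detector i given distances to detectors 1, 2, and 3.

    Detector 1 is at the origin and detector 2 on the +x axis, so the
    magnitude of angle P1 at detector 1 follows from Eq. (6) and the
    candidate positions from Eqs. (15) and (16).  The sign ambiguity is
    resolved by keeping the candidate whose distance to detector 3 best
    matches the measured distance d3i.
    """
    p1 = law_of_cosines_angle(d1i, d12, d2i)                     # Eq. (6)
    candidates = [(d1i * math.cos(p1),  d1i * math.sin(p1)),     # Eqs. (15), (16)
                  (d1i * math.cos(p1), -d1i * math.sin(p1))]
    x3, y3 = det3
    return min(candidates,
               key=lambda c: abs(math.hypot(c[0] - x3, c[1] - y3) - d3i))

# Example: detectors 1, 2, 3 at (0,0), (3,0), (0,4); detector i truly at (3,4).
print(place_detector_i((0.0, 4.0), d1i=5.0, d2i=4.0, d3i=3.0, d12=3.0))
# approximately (3.0, 4.0)
```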
To be effective at monitoring a radiation source that exists in Inter-Detector space, the cameras need to be able to find their own positions in that space. Quite often, manual methods of measuring (tape measures, range finders, etc.) will be hard to use or completely unusable because of the location of the camera, so an automated method is needed. To accurately find the (x, y, z) position of a camera in Inter-Detector space, the (x, y, z) positions of three reference locations, each of which may be a detector if so elected, must be known, and the camera must be aimed at each reference location in turn with its pan and tilt recorded.
After the locations of at least three reference locations are determined, the location of the cameras within the Inter-Detector coordinate system is determined.
Next, the camera is moved such that it points to the reference location 520 at step 314. The state of the camera's tilt and pan is measured at step 316. The camera is then pointed at a second reference location at step 318. After the camera is pointed at the second reference location, the camera's tilt and pan are measured at step 320. This process is repeated a third time by pointing the camera at a third reference location at step 322 and measuring the camera's tilt and pan at step 324.
After measuring the camera's tilt and pan, the (x, y, z) coordinates of the camera are determined at step 326. The (x, y, z) position of the camera can be solved with two equations (i being the index of the reference location the camera is currently pointed at). The first equation, in which θ = θr + θc, represents the relationship between the camera pan and reference location i:
sin(θr + θc)·√((xd-i − xc)² + (yd-i − yc)²) = xd-i − xc  (18)
where θr 516 is the angle from the reference venue to the camera zero pan angle as previously defined, θc 512 is the camera's pan angle, xd-i is the x-axis value of reference location i, xc is the x coordinate of the camera, yd-i is the y-axis value of reference location i, and yc is the y coordinate of the camera. The second equation, Equation (20), relates the camera tilt to reference location i in terms of Zd-i′, where ψ 514 is the tilt angle of the camera, xd-i is the x-axis value of reference location i, yd-i is the y-axis value of reference location i, zd-i is the z-axis value of reference location i, xc is the x coordinate of the camera, yc is the y coordinate of the camera, and zc is the z coordinate of the camera.
To find the x, y, z of the camera, the pan and tilt measurements from when the camera was pointed at each of the three separate reference locations are used. Equations (18) and (20) apply to each location, resulting in six non-linear equations and four unknowns (xc, yc, zc, θr). The unknowns can be solved for by trying all candidate values of xc, yc, zc, and θr over a venue-bound range. Whichever candidate yields the residual closest to zero is taken as correct, and that set of x, y, and z values is the position of the camera.
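A minimal sketch of this brute-force search is given below. Because Equation (20) is not reproduced above, the tilt relation used here (the sine of the tilt times the slant range equals the height difference) is an assumption patterned on Equation (18), and the grid-search interface is illustrative only. In practice the candidate ranges would span the venue at the desired resolution, and the search could be refined around the best coarse solution.

```python
import math
from itertools import product

def camera_position(refs, pans, tilts, xs, ys, zs, thetas):
    """Brute-force search for the camera position from three sightings.

    refs   : three known (x, y, z) reference locations
    pans   : measured camera pan angle toward each reference (radians)
    tilts  : measured camera tilt angle toward each reference (radians)
    xs, ys, zs, thetas : candidate values to try for xc, yc, zc, theta_r
    """
    def residual(xc, yc, zc, theta_r):
        err = 0.0
        for (xd, yd, zd), pan, tilt in zip(refs, pans, tilts):
            dx, dy, dz = xd - xc, yd - yc, zd - zc
            horiz = math.hypot(dx, dy)
            slant = math.sqrt(dx * dx + dy * dy + dz * dz)
            # Equation (18): pan relates the x offset to the horizontal range.
            err += (math.sin(theta_r + pan) * horiz - dx) ** 2
            # Assumed stand-in for Equation (20): tilt relates dz to the range.
            err += (math.sin(tilt) * slant - dz) ** 2
        return err

    # Try every candidate combination; keep the one with the smallest residual.
    return min(product(xs, ys, zs, thetas), key=lambda c: residual(*c))
```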
The unified data collection interface 450 collects data from various detectors 410, 412, and 414. In another embodiment the unified data collection interface 450 collects data from video cameras instead of detectors. In yet another embodiment the unified data collection interface 450 collects data from both radiation detectors and video cameras. Interfaces 420, 422, 424 are used to physically interface with each detector. In one embodiment of the system, the devices use USB or serial cables to connect with the interfaces. The interfaces are also coupled to the unified data collection interface 450. The interfaces are therefore used to physically connect the detectors with the system. In one embodiment, the interfaces use Ethernet to connect with the unified data collection system 450.
Allowing access to the data contained within the devices requires that the system be able to communicate with the devices. Specifically, this requires that the system be able to address each detector and camera separately. This is accomplished by using a communication protocol 430 that encapsulates the protocols of the various detectors. In one embodiment, TCP/IP is used as the communication protocol for the Ethernet network.
To provide access to unified data from various detectors, the data must be converted into a standard detector format. This is accomplished by using data converters 440, 442, 444. These data converters take output data of a specific format and convert the data to the standard detector format. In one embodiment, to maximize the amount of data that can be processed in real-time, each data converter is implemented in software. Specifically, each converter is implemented in its own thread in a multi-threaded process. This allows the data processing to be done in parallel. One skilled in the art would recognize that the converters could be spread across multiple processors in order to process more data. This standardized data is then made available to the location determination component 460.
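As one possible illustration of this converter-per-thread arrangement, the sketch below runs a vendor-specific parser in its own thread and feeds a shared queue of standardized records. The payload format, field names, and detector identifier are hypothetical and stand in for whatever formats the actual devices produce.

```python
import threading
import queue

# Standardized detector record used downstream by the location component.
# The field names here are illustrative only.
def to_standard(detector_id, counts, timestamp):
    return {"id": detector_id, "counts": counts, "time": timestamp}

def converter_worker(raw_queue, standard_queue, parse):
    """Run one data converter in its own thread.

    raw_queue      : queue of raw payloads read from one detector interface
    standard_queue : shared queue of standard-format records
    parse          : vendor-specific function turning a raw payload into
                     (detector_id, counts, timestamp)
    """
    while True:
        payload = raw_queue.get()
        if payload is None:          # sentinel: shut this converter down
            break
        standard_queue.put(to_standard(*parse(payload)))

# One thread per detector type, all feeding a single standardized queue.
standard_queue = queue.Queue()
raw_a = queue.Queue()

def parse_vendor_a(payload):
    # Hypothetical comma-separated payload: "id,counts,timestamp"
    det_id, counts, ts = payload.split(",")
    return det_id, int(counts), float(ts)

t = threading.Thread(target=converter_worker,
                     args=(raw_a, standard_queue, parse_vendor_a),
                     daemon=True)
t.start()
raw_a.put("det-07,142,1284566400.0")
print(standard_queue.get())  # {'id': 'det-07', 'counts': 142, 'time': 1284566400.0}
raw_a.put(None)
```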
The location information of a radiation source is then determined based upon the standardized data. Once a radiation source is located, current images from the cameras are mapped into the single coordinate system in the image mapping component 470. Finally, the location information of a radiation source is overlaid on the mapped current images in the image overlaying component 480 and the images are displayed 490.
Given an (x,y) coordinate of a radiation source within the Inter-Detector space, a camera can automatically provide an image of that area. This is done by determining the necessary pan and tilt required of the camera to locate the radiation source. The necessary pan and tilt to center the camera on the source can be solved as follows. The position of the source relative to the camera can be found by solving
Xs′ = xs − xc, Ys′ = ys − yc, Zs′ = zs − zc  (21)
The combined pan angle of the camera and angle from the reference venue can be solved from sin(θ) = Xs′/√(Xs′² + Ys′²), in which θ is the combined pan angle of the camera and angle from the reference venue, Xs′ is the x distance between the camera and source, and Ys′ is the y distance between the camera and source.
The presence of the inverse sine function in the above expression requires special attention. For a given value of the inverse sine function there can be two angles that correspond to this value. A logic test is needed to select the appropriate angle. The selection is based on the sign of Xs′ and of Ys′. The approach is to locate the source to within one of four quadrants. Within that quadrant there is a unique relationship between the inverse sine and the angle θ. It is useful to consider the problem in terms of
For the four points:
A: Xs′ > 0 and Ys′ > 0 => 0 < θ < 90 => θ = angle
B: Xs′ > 0 and Ys′ < 0 => 90 < θ < 180 => θ = 180 − angle
C: Xs′ < 0 and Ys′ < 0 => 180 < θ < 270 => θ = angle − 180
D: Xs′ < 0 and Ys′ > 0 => −90 < θ < 0 => θ = −angle
Now that the combined angle θ has been uniquely identified, the camera pan angle necessary to center the camera on the source is computed as
θc = θ − θr  (24)
where θr is the angle from the reference venue to the camera zero pan angle, θc is the camera's pan angle, and θ is these two angles combined.
The necessary tilt angle to center the camera on the source is given by ψ = sin⁻¹(Zs′/√(Xs′² + Ys′² + Zs′²)), in which ψ is the camera's tilt angle, Xs′ is the x distance between the camera and source, Ys′ is the y distance between the camera and source, and Zs′ is the z distance between the camera and source.
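The pan and tilt computation can be summarized in the following illustrative Python sketch. It substitutes the two-argument arctangent for the inverse-sine quadrant tests above, which yields an equivalent angle up to the choice of branch, and its tilt formula is the assumed form given above rather than an equation reproduced from the text.

```python
import math

def pan_tilt_to_center(camera, source, theta_r):
    """Pan and tilt needed to center the camera on a source position.

    camera, source : (x, y, z) positions in the Inter-Detector system
    theta_r        : angle from the reference venue to the camera zero pan
    Returns (theta_c, psi).
    """
    xs_, ys_, zs_ = (s - c for s, c in zip(source, camera))     # Eq. (21)
    # Combined angle theta: sine tracks the x offset and cosine the y offset,
    # matching the structure of Eqs. (27a)-(27b); atan2 handles the quadrant.
    theta = math.atan2(xs_, ys_)
    theta_c = theta - theta_r                                    # Eq. (24)
    # Assumed tilt relation: sin(psi) = z offset over the slant range.
    psi = math.asin(zs_ / math.sqrt(xs_**2 + ys_**2 + zs_**2))
    return theta_c, psi

# Example: camera at the origin, source 10 m out in x and y and 3 m up.
print(pan_tilt_to_center((0.0, 0.0, 0.0), (10.0, 10.0, 3.0), theta_r=0.1))
```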
In various embodiments, one or more of the cameras may be configured to selectively move by panning and/or tilting to track the movement of the radioactive source. As the camera moves, its position is updated as described above. The movement of the camera may be automated such that the camera tracks the movements of the radioactive source based on the determination of the most likely location of the source. As such, the video display image is updated in near real-time to track the source, and the most likely location of the source is continuously depicted substantially near the center of the video image as the camera moves accordingly.
In another exemplary embodiment, the coordinates of the image are transformed into the single coordinate system. The location information is then mapped onto the image, without transforming the location information.
In still other embodiments, one or more of the cameras is substantially fixed such that the camera is not configured to automatically move to track a moving radioactive source. A number of substantially fixed cameras may be utilized, each of which may cover selected portions of an area being monitored. Additionally, a combination of selectively moveable and substantially fixed cameras may be utilized in various embodiments. When a substantially fixed camera is utilized, a visual indicia is generated and depicted within the video display image. The visual indicia is electronically generated and may include a crosshair, point, circle, rectangle, coloration, or other reticle that indicates the determined most likely position of the radioactive source.
In a preferred embodiment utilizing a fixed camera, the source position is mapped onto the video display image even if the camera is not pointing directly at the source. The source location with respect to a coordinate system, for example a building space coordinate system, is mapped to one or more corresponding pixels of the video display image to visually indicate the most likely location of the radioactive source. The source position mapping is updated so that the depicted location of the source moves about the video display image in near real-time in response to movement of the radioactive source.
The mapping process may be accomplished by defining an infinite plane that both contains the radioactive source and is parallel to the camera imaging plane. The camera location is given by c⃗ = (xc, yc, zc) and the source location by a⃗ = (xs, ys, zs). A line L normal to the plane runs from c⃗ to a point b⃗ in the plane. L is decomposed into its vector components Lx, Ly, Lz, which are given by
Lx = L cos(ψ) sin(θr + θc)  (27a)
Ly = L cos(ψ) cos(θr + θc)  (27b)
Lz = L sin(ψ)  (27c)
where θr is the camera's reference pan computed when the camera was set up, θc is the camera's internal pan, and ψ is the camera's internal tilt. From these components, a unit vector n⃗ is formed.
This vector is normal to the camera imaging plane. The infinite plane that is normal to this vector and intersects the source position is given by
n⃗ · ((x, y, z) − a⃗) = 0  (29)
which simplifies to
cos(ψ) sin(θr + θc)(x − xs) + cos(ψ) cos(θr + θc)(y − ys) + sin(ψ)(z − zs) = 0  (30)
It is necessary to determine the point b⃗ at which the line n⃗t + c⃗ intersects the plane of Equation 29, where t is a scale factor determining length. To determine the value of t corresponding to the point along the line n⃗t + c⃗ that intersects the plane, the coordinates for the line are substituted into Equation 29 to obtain
t = n⃗ · (a⃗ − c⃗)  (31)
The value of t substituted back into the line yields the point b⃗ = n⃗t + c⃗. A depiction of the camera in relation to the coordinate system and the source is illustrated in the accompanying drawings.
The field of view of the camera is also determined. Only a certain portion of the plane lies in the camera's field of view (FOV) rectangle. To find the four corners of the camera's field of view that lie on the plane defined by Equation 29, it is necessary to know the relation between the camera's zoom value, the distance between the camera and the plane, and the size of the field of view rectangle. At 0× magnification, w = r(vw/cd), where w is the width of the FOV rectangle, r is the distance from the camera to the plane, and vw is the width of the camera's viewable area at a particular calibration distance cd to a surface. Since r = |c⃗ − b⃗|,
w = |c⃗ − b⃗|(vw/cd).  (32)
For example, for a particular camera, w = r(vw/cd) = r(88/92). The fraction 88/92 was arrived at by experiment. The camera was placed 92 inches away from a surface and the corners of the camera's viewable area were marked on the surface. The width of this area was 88 inches. Using these two pieces of information, the width of the FOV rectangle can be determined by knowing the distance from the camera to the FOV plane, as given by Equation 32. It will be appreciated that the fraction may be different for other cameras.
Under variable magnification,
where s is the magnification level of the camera. By way of example, for a particular camera s varies between 0× and 35× magnification but it will be appreciated that cameras of different magnification levels may be used.
The aspect ratio for the camera is also determined; it is given by the pixel aspect ratio of the camera, where pxh is the number of pixels along the horizontal axis of the camera and pxv is the number of pixels along the vertical axis of the camera. This means that
h = w(pxv/pxh)  (34)
where h is the upright-edge length of the FOV rectangle. For example, for a particular camera the aspect ratio is 1.47:1, from a 704×480 image (704/480 = 1.47:1 aspect ratio) specified by the manufacturer, so h = w(1/1.47). Again, it is contemplated that cameras of different aspect ratios may be utilized. Then, the vector
h⃗ = k̂ × n⃗  (35)
is defined, where k̂ is the upward-pointing unit vector and h⃗ is parallel to the lower edge of the FOV or, equivalently, the ground, assuming the ground is not inclined. Using h⃗, the two midpoints are found by
Then, another vector is constructed
where v⃗ is a vector of length h/2 and parallel to the upward edges of the FOV rectangle. Corner points can then be found by
r⃗a = m⃗1 ± v⃗ and r⃗b = m⃗2 ± v⃗, or in expanded form  (39)
r⃗1 = m⃗1 + v⃗,  (39a)
r⃗2 = m⃗1 − v⃗,  (39b)
r⃗3 = m⃗2 + v⃗, and  (39c)
r⃗4 = m⃗2 − v⃗.  (39d)
Once the location of the source on the plane and the corners of the plane are determined, the location of the source can be scaled to appear at the correct position on the visual display image. This is best understood by a view looking directly at the FOV rectangle, which corresponds to the display screen a user views. Before the display coordinates are calculated, though, it is necessary to determine whether the source position is in the FOV rectangle. The source position is not above or below the FOV rectangle if
az ≥ r2z, az ≥ r4z, az ≤ r1z, and az ≤ r3z  (40)
are all true. A vector z⃗ travels from the upper left-hand corner to the source location a⃗ and is given by z⃗ = a⃗ − r⃗1. A line of length xfov is projected onto the line running from r⃗1 to r⃗3, that is, r⃗3 − r⃗1. The projection is given by
If both of the resulting conditions are true, then the source lies in the FOV rectangle.
The magnitude xfov gives the x location of the source on the user's screen after it is scaled to match the pixel density of the screen. In a particular embodiment,
will produce the x coordinate in pixels of the source on the user's screen, with the scale factor being
and in the particular camera example described above, where the user's screen is 704 pixels wide, pxh = 704. To find the y coordinate on the FOV rectangle, the following is used:
|y⃗| = √(|z⃗|² − xfov²)  (44)
and the result is scaled to match the display screen so that in pixels
The 0,0 coordinate for this x,y will be the upper left hand corner of the screen.
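The fixed-camera mapping just described can be illustrated with the following Python sketch. The steps whose equations are not reproduced in the text (the FOV midpoints, the upward vector v⃗, the projection onto the top edge, and the final pixel scaling) are filled in here as assumptions consistent with the surrounding description, and the default parameters simply reuse the example camera values (88/92 and 704×480) mentioned above.

```python
import numpy as np

def source_pixel(camera, source, theta_r, theta_c, psi,
                 vw_over_cd=88/92, pxh=704, pxv=480):
    """Map a source position onto fixed-camera display pixels at 0x zoom.

    Returns (px, py) with (0, 0) at the upper-left corner of the screen,
    or None if the source falls outside the field of view.
    """
    c = np.asarray(camera, float)
    a = np.asarray(source, float)
    # Unit vector normal to the camera imaging plane, Eqs. (27a)-(27c).
    n = np.array([np.cos(psi) * np.sin(theta_r + theta_c),
                  np.cos(psi) * np.cos(theta_r + theta_c),
                  np.sin(psi)])
    t = np.dot(n, a - c)                      # Eq. (31): where the line meets the plane
    b = c + n * t                             # point b on the source plane
    w = np.linalg.norm(c - b) * vw_over_cd    # Eq. (32): FOV width
    h = w * (pxv / pxh)                       # Eq. (34): FOV height
    h_dir = np.cross([0.0, 0.0, 1.0], n)      # Eq. (35): along the lower edge
    h_dir /= np.linalg.norm(h_dir)
    up = np.cross(n, h_dir)                   # assumed upward edge direction
    m1 = b + (w / 2) * h_dir                  # assumed left and right midpoints
    m2 = b - (w / 2) * h_dir
    v = (h / 2) * up
    r1, r3 = m1 + v, m2 + v                   # Eqs. (39a), (39c): top corners
    z_vec = a - r1                            # vector from the upper-left corner
    x_fov = np.dot(z_vec, r3 - r1) / np.linalg.norm(r3 - r1)
    if not (0.0 <= x_fov <= w):
        return None                           # outside the FOV horizontally
    y_fov = np.sqrt(max(np.dot(z_vec, z_vec) - x_fov**2, 0.0))   # Eq. (44)
    if y_fov > h:
        return None                           # outside the FOV vertically
    return float(x_fov / w * pxh), float(y_fov / h * pxv)        # screen pixels

# Camera at the origin looking along +y; source 2 m right, 10 m out, 1 m down.
print(source_pixel((0, 0, 0), (2, 10, -1), theta_r=0.0, theta_c=0.0, psi=0.0))
# roughly (499, 314)
```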
The foregoing description of embodiments of the present invention has been presented for purposes of illustration and description. It is not intended to be exhaustive or to limit the present invention to the precise form disclosed, and modifications and variations are possible in light of the above teachings or may be acquired from practice of the present invention. The embodiments were chosen and described in order to explain the principles of the present invention and its practical application to enable one skilled in the art to utilize the present invention in various embodiments, and with various modifications, as are suited to the particular use contemplated.
The present application claims priority to U.S. Provisional Patent Application No. 61/242,700, filed Sep. 15, 2009, the contents of which are incorporated herein by reference in their entirety.
The United States Government has rights in this invention pursuant to Contract No. DE-AC02-06CH11357 between the United States Government and the UChicago Argonne, LLC, representing Argonne National Laboratory.