This disclosure relates to methods and systems for creating (i.e., designing or designing and producing) free space reflective optical surfaces for use in head-mounted displays. More generally, the disclosure relates to methods and systems for creating free space optical surfaces for the display of imagery from a light-emitting display device held in close proximity to a user's eye.
The reflective optical surfaces are referred to herein as “free space” surfaces because the surface's local spatial positions, local surface curvatures, and local surface orientations are not tied to a particular substrate, such as the x-y plane, but rather, during the surface's design, are determined using fundamental optical principles (e.g., the Fermat and Hero least time principle) applied in three dimensional space.
A head-mounted display such as a helmet-mounted display or eyeglass-mounted display (abbreviated herein as a “HMD”) is a display device worn on the head of an individual that has one or more small display devices located near one eye or, more commonly, both eyes of the user.
Some HMDs display only simulated (computer-generated) images, as opposed to real-world images, and accordingly are often referred to as “virtual reality” or immersive HMDs. Other HMDs superimpose (combine) a simulated image upon a non-simulated, real-world image. The combination of non-simulated and simulated images allows the HMD user to view the world through, for example, a visor or eyepiece on which additional data relevant to the task to be performed is superimposed onto the forward field of view (FOV) of the user. This superposition is sometimes referred to as “augmented reality” or “mixed reality.”
Combining a non-simulated, real-world view with a simulated image can be achieved using a partially-reflective/partially-transmissive optical surface (a “beam splitter”) where the surface's reflectivity is used to display the simulated image as a virtual image (in the optical sense) and the surface's transmissivity is used to allow the user to view the real world directly (referred to as an “optical see-through system”). Combining a real-world view with a simulated image can also be done electronically by accepting video of a real world view from a camera and mixing it electronically with a simulated image using a combiner (referred to as a “video see-through system”). The combined image can then be presented to the user as a virtual image (in the optical sense) by means of a reflective optical surface, which in this case need not have transmissive properties.
From the foregoing, it can be seen that reflective optical surfaces can be used in HMDs which provide the user with: (i) a combination of a simulated image and a non-simulated, real world image, (ii) a combination of a simulated image and a video image of the real world, or (iii) purely simulated images. (The last case is often referred to as an "immersive" system.) In each of these cases, the reflective optical surface produces a virtual image (in the optical sense) that is viewed by the user. Historically, such reflective optical surfaces have been part of optical systems whose exit pupils have substantially limited not only the dynamic field of view available to the user, but also the static field of view. Specifically, to see the image produced by the optical system, the user needed to align his/her eye with the optical system's exit pupil and keep it so aligned, and even then, the image visible to the user would not cover the user's entire static field of view. In other words, the prior optical systems used in HMDs that have employed reflective optical surfaces have been part of pupil-forming systems and thus have been exit-pupil-limited.
The reason the systems have been so limited is the fundamental fact that the human field of view is remarkably large. Thus, the static field of view of a human eye, including both the eye's foveal and peripheral vision, is on the order of ˜150° in the horizontal direction and on the order of ˜130° in the vertical direction. (For the purposes of this disclosure, 150 degrees will be used as the straight ahead static field of view of a nominal human eye.) Well-corrected optical systems having exit pupils capable of accommodating such a large static field of view are few and far between, and when they exist, they are expensive and bulky.
Moreover, the operational field of view of the human eye (dynamic field of view) is even larger since the eye can rotate about its center of rotation, i.e., the human brain can aim the human eye's foveal+peripheral field of view in different directions by changing the eye's direction of gaze. For a nominal eye, the vertical range of motion is on the order of ˜40° up and ˜60° down and the horizontal range of motion is on the order of ±˜50° from straight ahead. For an exit pupil of the size produced by the types of optical systems previously used in HMDs, even a small rotation of the eye would substantially reduce what overlap there was between the eye's static field of view and the exit pupil and larger rotations would make the image disappear completely. Although theoretically possible, an exit pupil that would move in synchrony with the user's eye is impractical and would be prohibitively expensive.
In view of these properties of the human eye, there are three fields of view which are relevant in terms of providing an optical system which allows a user to view an image generated by an image display system in the same manner as he/she would view the natural world. The smallest of the three fields of view is that defined by the user's ability to rotate his/her eye and thus scan his/her fovea over the outside world. The maximum rotation is on the order of ±50° from straight ahead, so this field of view (the foveal dynamic field of view) is approximately 100°. The middle of the three fields of view is the straight ahead static field of view and includes both the user's foveal and peripheral vision. As discussed above, this field of view (the foveal+peripheral static field of view) is on the order of 150°. The largest of the three fields of view is that defined by the user's ability to rotate his/her eye and thus scan his/her foveal plus his/her peripheral vision over the outside world. Based on a maximum rotation on the order of ±50° and a foveal+peripheral static field of view on the order of 150°, this largest field of view (the foveal+peripheral dynamic field of view) is on the order of 200°. This increasing scale of fields of view from at least 100 degrees to at least 150 degrees and then to at least 200 degrees provides corresponding benefits to the user in terms of his/her ability to view images generated by an image display system in an intuitive and natural manner.
There thus exists a need for a reflective optical surface for use in a HMD that has improved compatibility with the field of view, both static and dynamic, of the human eye. There also exists a need for a reflective optical surface which can be used to provide virtual images (in the optical sense) to a human eye in a HMD without the limitations imposed by an external exit pupil. The present disclosure provides methods and systems for creating such surfaces.
In the remainder of this disclosure and in the claims, the phrase “virtual image” is used in its optical sense, i.e., a virtual image is an image that is perceived to be coming from a particular place where in fact the light being perceived does not originate at that place.
Throughout this disclosure, the following phrases/terms shall have the following meanings/scope:
In accordance with an aspect, a computer-based method is disclosed for designing a free space reflective optical surface for use in a head-mounted display, the surface reflecting a virtual image of a display surface for viewing at a preselected spatial location by a user's eye. The method comprises using one or more computers to perform the steps of: (a) representing the display surface by a plurality of display objects; (b) representing the free space reflective optical surface by a plurality of surface elements, each surface element being characterized by (i) a spatial location relative to the display surface, a nominal user's eye, and the preselected spatial location of the virtual image, (ii) a normal, and (iii) a radius of curvature; (c) associating each display object with at least one surface element in the direction of which a virtual image of the display object at the preselected spatial location will be displayed to the nominal user's eye, each surface element being associated with a single display object; and (d) for each surface element: (1) defining an initial spatial location of the element; (2) calculating an initial direction of the element's normal using the element's initial spatial location, the location of the display object with which the element is associated, and the location of a center of rotation of the nominal user's eye, so that light from the display object that reflects off of the element will pass through said center of rotation; (3) calculating an initial radius of curvature for the element so that the virtual image of the display object is at the preselected spatial location; and (4) calculating a final spatial location of the element, a final direction of the element's normal, and a final radius of curvature for the element and a set of surrounding elements by iteratively adjusting the spatial locations of the elements until an error function satisfies a predetermined criterion.
In another aspect, the aforementioned method may be part of a system comprising a processor and a memory unit coupled to the processor, wherein the memory unit stores a computer program which includes programming instructions for performing the aforementioned method.
In another aspect, a computer program embodied in a tangible computer-readable medium may be provided for performing the aforementioned method.
In another aspect, a computer-based method is provided for designing a free space reflective optical surface for use in a head-mounted display, the surface reflecting a virtual image of a display surface for viewing by a user's eye. The method comprises using one or more computers to perform the steps of: (a) representing the display surface by a plurality of display objects; (b) representing the free space reflective optical surface by a plurality of surface elements; and (c) iteratively calculating at least a spatial location and a normal for each surface element of the plurality of surface elements which will cause a virtual image of each display object to be displayed to a nominal user's eye in a desired direction of gaze of the eye for that display object.
In another aspect, the aforementioned method disclosed in the paragraph above may be part of a system comprising a processor and a memory unit coupled to the processor, wherein the memory unit stores a computer program which includes programming instructions for performing the aforementioned method.
In another aspect, a computer program embodied in a tangible computer-readable medium may be provided for performing the aforementioned method disclosed two paragraphs above.
Reference will be made below in detail to embodiments, which are illustrated in the accompanying drawings. Wherever possible, the same reference numerals used throughout the drawings refer to the same or like parts. The accompanying drawings are included to provide a further understanding of the invention, and are incorporated in and constitute a part of this specification. It is to be understood that the various features of the embodiments disclosed in this specification and in the drawings can be used in any and all combinations.
In order for a human to focus on an object that is closer than approximately 25 centimeters, it is generally necessary to adjust the optical properties of the light that is being emitted from the object. One way to adjust the optical properties of the light so that it is focusable by the human eye is to collimate the light, resulting in light that has parallel rays and a flat wavefront. The wavefront of light leaving a point source has a spherical shape, and its curvature can be defined by a property called vergence, or V. Vergence is measured in diopters [D], where the amount of vergence is determined by the distance, in meters, from the source of light. So, if the observer is a distance s [m] from a point source, the vergence is:

V = −1/s [D] Eq. (1)

That is, the vergence is equal to the inverse of the distance s from the point source, and has units [D] for diopters. It is negative since that is the standard sign convention indicating that the light rays are diverging.
In general, humans cannot adapt their eyes to focus on things closer than 25 centimeters; this distance is sometimes called the "near point." Thus the vergence at the adaptation limit, Va, is:

Va = −1/0.25 = −4 [D] Eq. (2)
Accordingly, if the vergence is more divergent than −4 D, such as when a non-optically-corrected object is closer than 25 cm, the eye cannot focus on it.
One of the goals of certain embodiments of the free-space reflective optical surfaces disclosed herein is that all of the light entering the eye after reflection from the surface has a negative vergence closer to zero than Va. Because the eye also cannot focus light having a vergence greater than zero, another goal of those embodiments is that the distance to the virtual image of the object not go beyond infinity, meaning that the vergence must not be greater than zero:

V≦0 [D].
In order for the virtual image of the object to appear at a point farther away than 25 cm, the vergence to be achieved needs to be set to the negative inverse of the desired distance. For example, for a distance of 20 [m], the vergence of the light waves entering the eye is:
V=−1/20=−0.05 [D].
And, for a distance of 50 [m] the vergence is:
V=−1/50=−0.02 [D].
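As a concrete check of these relationships, the vergence arithmetic above can be captured in a few lines of code. This is a minimal sketch with illustrative names (it is not part of the disclosed method); distances are in meters:

# Minimal sketch of the vergence relationships discussed above.
NEAR_POINT_M = 0.25                          # nominal near point of the human eye
V_ADAPTATION_LIMIT_D = -1.0 / NEAR_POINT_M   # Va = -4 [D], Eq. (2)

def vergence(distance_m: float) -> float:
    """Vergence in diopters of light from a point source at distance_m, Eq. (1)."""
    return -1.0 / distance_m

def eye_can_focus(v_diopters: float) -> bool:
    """The eye can focus light whose vergence lies between Va (-4 D) and 0 D."""
    return V_ADAPTATION_LIMIT_D <= v_diopters <= 0.0

print(vergence(20.0))                 # -0.05 [D], matching the 20 m example
print(vergence(50.0))                 # -0.02 [D], matching the 50 m example
print(eye_can_focus(vergence(0.10)))  # False: 10 cm is inside the near point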
If a display is 25 cm away from the eye, the vergence of the display is:

V = −1/0.25 = −4 [D]

and the eye can focus on it. If, as in the accompanying figure, the overall distance s from the display to the eye is divided at the reflector, then:

s = sP + sR Eq. (3)

where sP and sR indicate the lengths of line segments P and R in that figure, i.e., the distance from the display to the reflector and the distance from the reflector to the eye, respectively.
Assume that it is desired to have the virtual images seem to be 50 [m] away from the center of the eye; then the vergence of the light entering the eye must be:

V = −1/50 = −0.02 [D].
To accomplish this, the reflective surface must converge the light falling on it from the display as it directs the light into the eye. The amount of converging power, P, that the reflective surface must provide will vary depending on the distance from the display to the surface and, to a lesser degree, on the distance from the eye to the surface.
From the geometry of the accompanying figure, the distance l′ from the reflector to the virtual image is:

l′ = W − sR Eq. (4)

where W is the desired distance from the eye to the virtual image.
The vergence associated with the distance l′ is:

L′ = −1/l′ Eq. (5)
From the Gaussian mirror equation:

L′ = L + P Eq. (6)
where L is the vergence associated with the distance l from the display to the reflector, which is:

L = 1/l Eq. (7)
For completeness, the lateral magnification of the image is:

m = L/L′ Eq. (8)
An example of the calculation is as follows, where a concave mirror of focal length 35 mm is assumed to be placed 30 mm from the eye, and a display distance of 34.976 mm is calculated as the distance l which will produce a virtual image that will appear to be 50 meters from the user's eye. The example uses Mathcad nomenclature.
fl := 35 mm
fl = 0.035 m
Radius := 2·fl = 0.07 m
P := 1/fl = 28.571 m⁻¹
W := 50 m (desired distance to the virtual image)
sr := 30 mm (distance from eye to reflector)
elp := W − sr = 49.97 m
Lp := −1/elp = −0.0200120072 m⁻¹
L := Lp − P = −28.591 m⁻¹
el := 1/L = −0.035 m (note, el = sp)
el = −34.976 mm
m := L/Lp = 1.42871429×10³
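For readers working outside Mathcad, the same worked example can be reproduced with a short script. The following is a direct transcription of the listing above (variable names mirror the Mathcad nomenclature), not an independent implementation:

# Python transcription of the Mathcad example above (SI units).
fl = 0.035                 # focal length of the concave mirror [m]
P = 1.0 / fl               # mirror power, Eq. (12): ~28.571 [D]
radius = 2.0 * fl          # mirror radius of curvature, Eq. (13): 0.07 [m]

W = 50.0                   # desired distance to the virtual image [m]
sr = 0.030                 # distance from eye to reflector [m]

elp = W - sr               # l' = W - sR, Eq. (4): 49.97 [m]
Lp = -1.0 / elp            # L' = -1/l', Eq. (5): ~-0.02 [D]
L = Lp - P                 # from the Gaussian mirror equation, Eq. (6): ~-28.591 [D]
el = 1.0 / L               # l = 1/L, Eq. (7); note el = sp
m = L / Lp                 # lateral magnification, Eq. (8)

print(f"display distance el = {el*1000:.3f} mm")   # -34.976 mm
print(f"lateral magnification m = {m:.1f}")        # ~1428.7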
Instead of calculating the location of the display given the power of the reflector, the location of the eye, and the location of the virtual image, the above analysis can be used to calculate the power of the reflector, given the distance to the display and the eye, and the desired distance to the virtual image. From Eq. (6) it can be seen that:
P = L′ − L Eq. (9)
Substituting in for L′ and L from Eq. (5) and Eq. (7) gives:

P = −1/(W − sR) − 1/l Eq. (10)
Since l = sP, Eq. (10) becomes:

P = −1/(W − sR) − 1/sP = (sR − sP − W)/(sP(W − sR)) Eq. (11)
As an example, if the desired image distance, W, is 50 [m], the reflector is 40 mm from the eye, and the display is 40 mm from the reflector, the reflector power needs to be P=24.98 [D], i.e., [0.04−(−0.04)−50]/[−0.04(50−0.04)]. Note that sp is negative.
Thus, for a given orientation of the display, distance to the surface of the reflector, and distance from the reflector to the eye, the correct reflector power can be determined. For a concave spherical reflector, the power is:

P = 1/f Eq. (12)
where f is the focal length in meters [m]. In the spherical mirror, the focal length is related to the radius of curvature, r, as
r = 2f Eq. (13)
and hence:

r = 2/P Eq. (14)
So, to obtain the desired power as calculated from Eq. (11), it is necessary to ensure that the surface has the curvature specified by the radius calculation of Eq. (14).
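The forward calculation of Eqs. (11) and (14) can likewise be sketched in a few lines, reproducing the 24.98 [D] example above (function and variable names are illustrative):

# Sketch of Eqs. (11) and (14): reflector power and radius of curvature
# from the display distance sp, eye distance sr, and image distance W.
def reflector_power(W: float, sr: float, sp: float) -> float:
    """Eq. (11): P = (sr - sp - W) / (sp * (W - sr)); sp is negative by convention."""
    return (sr - sp - W) / (sp * (W - sr))

def radius_of_curvature(P: float) -> float:
    """Eq. (14): r = 2/P for a concave spherical reflector."""
    return 2.0 / P

P = reflector_power(W=50.0, sr=0.040, sp=-0.040)
print(f"P = {P:.2f} D")                              # 24.98 D, matching the example
print(f"r = {radius_of_curvature(P)*1000:.1f} mm")   # ~80.1 mm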
If the display were a simple point source, the reflector requirements could be satisfied by a spherical concave reflector. The display, however, is generally a planar device with a grid of light-emitting picture elements, or pixels, whose geometry deviates from what can be accomplished with a sphere. Also, as discussed above, it is desirable to spread the light out over a larger area to obtain a wider field of view, e.g., a field of view capable of taking advantage of the wide field of view of the human eye (static and/or dynamic).
In accordance with the present disclosure, these challenges are met by dividing the reflective surface into a plurality (e.g., thousands) of surface elements 23 and adapting (adjusting) their positions, orientations, and curvatures to obtain the desired reflector properties. Triangularly-shaped surface elements have been found to work successfully in the optimization, although other shapes can be used if desired. An example of a subset of surface elements 23 is shown in the accompanying figure.
The display surface, which may be flat or curved, is also divided into a plurality of pieces, referred to herein as "display objects" or "virtual pixels." There may be just a few display objects (even a single large virtual pixel is theoretically possible) or thousands of virtual pixels distributed across the display surface (the typical case).
In a computer system (see below), a display surface composed of display objects is created, an eye center is created, and an initial mesh of reflective surface elements is created. The reflective surface elements are then all pointed, i.e., their normals are oriented, to reflect light into the eye in accordance with the Fermat/Hero principle (described later), such that the first derivative of the optical path length between a display object and the center of rotation of the eye has a zero at the point on the reflector surface in whose direction the display object is to be seen when the user looks towards the reflector surface.
The radii of curvature and spatial locations of the surface elements are then calculated as follows. First, for each core surface element corresponding to a particular display object of the display, the radius of curvature of the surface element needed to place the virtual image of that display object at the desired distance from the front of a nominal user's eye is calculated using the analysis set forth above. Then, the surrounding surface elements are checked to determine if they are in the right place in agreement with a superimposed sphere whose center lies on the normal to the core surface element (see below). If not, some or all of the surrounding surface elements are moved towards their correct positions for the display object (virtual pixel) and core surface element being considered. The process then moves on to other display object/core surface element combinations until all combinations have been updated. As discussed below, an error function is then calculated and a determination made if further iterations are needed.
As shown in the accompanying figure, the overall quality of the reflective surface is assessed by means of an error function (Eq. (15)) obtained by summing individual errors over all of the surface elements, where the individual errors, ε, are calculated as the difference between the center of the surface element under consideration, such as surface element u in that figure, and the surface of the superimposed sphere associated with the core surface element (see below).
Note that the center of the sphere used to move the surface elements and calculate errors preferably should be located so that the radius of curvature of the core surface element is parallel with (or, more specifically, lies along) the normal that is needed to provide the Fermat/Hero reflection. This point, called the optPoint, lies at a radius of:

Raxis = 2/P = 2·sp(W − sr)/(sr − sp − W) Eq. (16)

for a current distance from the eye, sr, and a current display object (virtual pixel) distance, sp. Note that sp is a negative number by optical convention since the light from the display is reflected by the mirror. This radius is used by first locating the line that bisects the vectors from the current surface element's centroid to (a) the virtual pixel and (b) the center of rotation of the eye. The distance Raxis is then traversed along this line in order to place a point, the optPoint, to use as the center of a sphere for error checking and for iterative correction of surface elements at each surface element. The method of determining and using Raxis is illustrated in the accompanying figures.
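A minimal sketch of this construction in vector form follows. The choice of the normalized sum of the two unit vectors as the bisecting direction, as well as all identifier names and the example coordinates, are assumptions rather than part of the disclosure:

import numpy as np

def opt_point(centroid, pixel, eye_center, W):
    """Place the optPoint for one surface element per the construction above.

    centroid   -- 3D centroid of the surface element
    pixel      -- 3D location of the associated display object (virtual pixel)
    eye_center -- 3D center of rotation of the nominal user's eye
    W          -- desired distance to the virtual image [m]
    """
    to_pixel = pixel - centroid
    to_eye = eye_center - centroid
    sp = -np.linalg.norm(to_pixel)     # negative by optical convention (reflected light)
    sr = np.linalg.norm(to_eye)

    # Direction bisecting the two vectors (assumed: normalized sum of unit vectors).
    bisector = to_pixel / -sp + to_eye / sr
    bisector /= np.linalg.norm(bisector)

    # Eq. (16): Raxis = 2/P with P from Eq. (11).
    P = (sr - sp - W) / (sp * (W - sr))
    r_axis = 2.0 / P
    return centroid + r_axis * bisector   # center of the error-checking sphere

# Example with illustrative coordinates (meters):
c = np.array([0.0, 0.0, 0.0])
print(opt_point(c, pixel=np.array([0.0, 0.03, -0.02]),
                eye_center=np.array([0.0, -0.01, 0.035]), W=50.0))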
The desired final configuration of the reflective surface is obtained by slowly moving the surface elements towards the optimal surface which, in the case illustrated in the accompanying figure, is the surface for which the error function of Eq. (15) is minimized.
If there were only one display object (virtual pixel), the zero-error surface would be a sphere of the correct radius to provide enough diopter power so that the image of the display object appeared to be at the desired distance, W, of Eq. (4). The eyepoint distance, sR, can be included in the calculation, but makes very little difference when the collimation is such that the virtual image is expected to appear at 50 meters in front of the observer. It is included in the actual calculations, but only affects the fourth significant digit of the reflective surface calculations when W = 50 [m].
Also, if there were only one or a few virtual pixels, and there were a pupil to look through, and the wavefronts of light could reasonably be expected to be contained in an area near to and around an optical axis passing through the optical instrument, then the system could be analyzed with paraxial techniques, such as might be used with a telescope or a camera lens. Here, though, as discussed above, there is no real, non-biological optical axis, and the errors are detected and the system characterized with the techniques disclosed herein. Although classical and other techniques for measuring performance, e.g., the system's Modulation Transfer Function (MTF), can be included in the error function, the errors of performance over the whole field of view are summed and reduced using an error function which includes errors of the type illustrated in Eq. (15). The magnitude of the total error that can be tolerated will, of course, depend on the particular application of the HMD and can readily be set by persons skilled in the art based on the present disclosure and the specifications which the HMD images need to satisfy.
It should be noted that the eye can deal with about 0.5 D of misfocus, and that tolerance can also be used as part of the error calculation, e.g., when a mean estimate of the radius of curvature of the reflective surface at each reflection point is determined after an optimization cycle has taken place. To enable a smooth transition of the view all across the field of view, the reflective surface elements may be smoothly transitioned from one to another. For example, the smoothing may be performed by using Non-Uniform Rational B-Spline (NURBS) technology for splined surfaces, thus creating a smooth transition across the entire reflective optical surface.
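As a sketch of how such smoothing might be implemented, the following fits a smooth bivariate B-spline (a special case of NURBS with uniform weights) through illustrative element-center data using SciPy; a production implementation might instead use full NURBS tooling. All data and parameter values here are assumptions for illustration:

import numpy as np
from scipy.interpolate import SmoothBivariateSpline

# Illustrative scattered element-center data: (x, y) positions and surface height z.
rng = np.random.default_rng(0)
x, y = rng.uniform(-1, 1, 200), rng.uniform(-1, 1, 200)
z = 0.05 * (x**2 + y**2)          # stand-in for the optimized element heights

# Fit a smooth bicubic B-spline through the scattered element centers.
spline = SmoothBivariateSpline(x, y, z, kx=3, ky=3, s=1e-4)

# Evaluate the smoothed surface on a regular grid, e.g., for export or rendering.
xg = np.linspace(-0.9, 0.9, 50)
yg = np.linspace(-0.9, 0.9, 50)
zg = spline(xg, yg)               # 50 x 50 array of smoothed heights
print(zg.shape)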
The error function discussed above provides one metric by which improvement in, and performance of, the surface qualities can be determined. To improve the surface's reflection qualities, the individual surface elements are moved in accordance with the error they contribute. This is illustrated in the accompanying figures.
There are three patterns of surface elements: (1) those having all nine triangles present; (2) those having one set of three triangles missing, as occurs at an edge; and (3) those having five triangles missing, as occurs at a corner. In each case, the central element cs is present; the difference is in the number of surrounding surface elements. In the edge and corner cases, a weighting is used to increase the surface element's adaptation amount so that it is more commensurate with the adaptation that occurs when a surface element is fully surrounded by other surface elements.
In particular, at a corner, instead of being surrounded by eight surface elements, a given surface element is surrounded by only three surface elements, as shown in the accompanying figure.
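The disclosure does not specify the exact form of this weighting; as one labeled assumption, a sketch in which the adaptation step is scaled in inverse proportion to the number of surrounding elements would look like this:

FULL_NEIGHBOR_COUNT = 8   # interior element: surrounded by eight elements

def adaptation_weight(num_neighbors: int) -> float:
    """Assumed scale factor applied to an element's adaptation amount.

    Interior elements (8 neighbors) get weight 1; edge elements (5 neighbors)
    and corner elements (3 neighbors) get proportionally larger steps so their
    total adaptation is commensurate with fully surrounded elements.
    """
    return FULL_NEIGHBOR_COUNT / num_neighbors

print(adaptation_weight(8))  # 1.0   (interior)
print(adaptation_weight(5))  # 1.6   (edge)
print(adaptation_weight(3))  # ~2.67 (corner)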
As the surface elements are moved, the surface curvature is controlled to obtain the correct power across the field of view with the changing distances between the surface elements, the display objects and the nominal user's eye. The normals of the surface elements are also adapted to ensure that the pointing angles of the areas of the display (the display objects) are correct.
It is important to be able to stretch the field of view across a wide angle to permit the user to see more information in their peripheral vision, and to be able to scan the display in a more natural manner.
The location on a mirror at which an image will appear has been understood since the time of Fermat and Hero of Alexandria, and from additional follow-on work the image has been shown to lie at the points where the length of the optical path reaches a stationary value, i.e., a maximum or a minimum. Such points can be found by finding the zeros of the first derivative of the optical path length. For ease of presentation, it is assumed that the entire optical path is in air, it being understood that persons skilled in the art can readily adapt the disclosed methods to cases where all or part of the optical path is composed of one or more different optical materials. For instance, a circle of radius r centered at [x, y] = [0, 0] has the following equation:

x² + y² = r² Eq. (17)

Solving for x yields:

x = ±√(r² − y²) Eq. (18)
Assuming a point source (S) having the coordinates [xS, yS] and a view point (V) having the coordinates [xV, yV] are in space around a spherical reflector in air, represented in this analysis as a circle 33 in the accompanying figure, the total optical path length from S to V via a reflection point [x, y] on the circle is:

len(y) = √((xS − x)² + (yS − y)²) + √((xV − x)² + (yV − y)²) Eq. (19)

where x = ±√(r² − y²) from Eq. (18).
Taking the first partial derivative of Eq. (19) with respect to y for the positive radical, x = +√(r² − y²), gives:

∂len/∂y = [y(xS − x)/x − (yS − y)]/√((xS − x)² + (yS − y)²) + [y(xV − x)/x − (yV − y)]/√((xV − x)² + (yV − y)²) Eq. (20)

and using the negative radical term, x = −√(r² − y²), in the same expression gives Eq. (21).
The concept may be tested with some representative values. A pair of points is chosen in the accompanying figure:
V=[20, −50]
S=[40, 40]
and a circle 33 of radius 100 is centered at the origin 43.
The two points predicted from Eqs. (20) and (21) for y and Eq. (18) for x, using positive for the x value associated with the y value of Eq. (20), and negative for the x value associated with the y value of Eq. (21) are:
Q1=[−98.31, 18.276]
Q2=[97.685, 21.392]
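These values are straightforward to verify numerically. The sketch below scans each radical branch of Eq. (19) for sign changes of a finite-difference derivative of the path length (a brute-force stand-in for Eqs. (20) and (21)); names and grid resolution are illustrative:

import numpy as np

r = 100.0
S = np.array([40.0, 40.0])    # point source
V = np.array([20.0, -50.0])   # view point

def path_len(y, sign):
    """Optical path length S -> reflection point on the circle -> V, per Eq. (19)."""
    x = sign * np.sqrt(r**2 - y**2)
    return np.hypot(S[0] - x, S[1] - y) + np.hypot(V[0] - x, V[1] - y)

for sign, label in ((+1.0, "positive radical"), (-1.0, "negative radical")):
    y = np.linspace(-99.99, 99.99, 200001)
    plen = path_len(y, sign)
    dlen = np.gradient(plen, y)                    # numerical stand-in for Eqs. (20)/(21)
    for i in np.nonzero(np.diff(np.sign(dlen)))[0]:
        yz = y[i]
        xz = sign * np.sqrt(r**2 - yz**2)
        print(f"{label}: stationary point near [{xz:.3f}, {yz:.3f}]")
# Expected: stationary points near [97.685, 21.392] and [-98.310, 18.276],
# matching Q2 and Q1 above.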
These points are plotted in the accompanying figure.
For instance, there is an additional line 31 from the center 43 of the circle 33 to the edge of the circle in that figure.
This is further illustrated in the accompanying figure, which plots the partial derivative of the optical path length for the configuration involving line 45 discussed below.
Over the range of y=[−200 . . . 100] this partial derivative only has one zero, at y=−63.4828, corresponding to an x value of 77.49, which is at the expected location where the line touches the circular curve.
It is important to note that line 45 is not a tangent line to circle 33; it has a different slope, with a normal that bisects the vectors from A2 to the eyepoint V and to the display object at S. This is how the present disclosure is able to place images of individual virtual pixels or regions of a display in different areas of the viewing region, and how it serves to expand the field of view: the slopes of the core surface elements are iteratively adjusted, and the errors of the surface elements are checked and their locations adjusted, until the errors are optically acceptable.
Returning to the flow charts of the accompanying figures, the overall iterative procedure can now be summarized.
In accordance with a specific embodiment of the procedures set forth in these figures, the iterative process uses a series of “on” surface elements. For a given “on” element, only the surrounding elements are adjusted at an iteration, following which, the system moves to the next element (the next “on” element) and adjusts its surrounding elements and so forth. The element that the system is “on” is not changed, only its neighbors are changed to better fit the surface of a sphere that touches the “on” element, and is centered at the “optPoint” for the “on” element. There is only one adjustment to each neighbor made at an iteration, and then the process moves on to the next “on” element and all its neighbors are adjusted once, until all surface elements have been made the “on” element. Then the global error is calculated, and if it is not low enough, the process repeats. The process does not repetitively adjust neighbor elements for one “on” element before moving on. Rather, the process makes one small adjustment to each neighbor as needed at each iteration, and then moves on to the next “on” element and its set of neighbors. The result of this iterative adjusting of the spatial locations of the surface elements is a final spatial location for each surface element, a final direction of each element's normal, and a final radius of curvature for each element and a set of surrounding elements. The final locations, normals, and radii of curvature are outputted, e.g., stored in memory, when the error function satisfies a predetermined criterion, e.g., when the error function is smaller than a predetermined value.
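The following toy sketch illustrates the structure of this loop. The sphere data, the nudging rule, and all names are illustrative assumptions standing in for the optPoint spheres and the Fermat/Hero pointing and curvature calculations described earlier:

import numpy as np

def nudge_toward_sphere(pos, center, radius, step=0.1):
    """Move a neighbor's center one small step toward the "on" element's sphere."""
    d = pos - center
    target = center + d / np.linalg.norm(d) * radius   # closest point on the sphere
    return pos + step * (target - pos)                 # one adjustment only

def global_error(positions, spheres):
    """Eq. (15)-style error: each element's distance from its sphere, summed."""
    return sum(abs(np.linalg.norm(p - c) - r)
               for p, (c, r) in zip(positions, spheres))

def optimize(positions, neighbors, spheres, tol=1e-3, max_iters=1000):
    for _ in range(max_iters):
        for i in range(len(positions)):       # element i is the "on" element
            center, radius = spheres[i]       # its optPoint sphere
            for j in neighbors[i]:            # adjust only the neighbors, once each
                positions[j] = nudge_toward_sphere(positions[j], center, radius)
        if global_error(positions, spheres) < tol:
            break                             # predetermined criterion satisfied
    return positions

# Toy usage: three elements in a line, each a neighbor of the adjacent ones,
# all sharing the same target sphere for simplicity.
pos = [np.array([0.0, 0.0, 1.1]), np.array([0.1, 0.0, 0.9]), np.array([0.2, 0.0, 1.05])]
nbrs = {0: [1], 1: [0, 2], 2: [1]}
sph = [(np.zeros(3), 1.0)] * 3
print(global_error(optimize(pos, nbrs, sph), sph))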
Applications of reflective optical surfaces designed in accordance with the methods disclosed herein are set forth in commonly assigned and co-pending U.S. patent application Ser. Nos. 13/211,372 and 13/211,365, both filed Aug. 17, 2011, in the names of G. Harrison, D. Smith, and G. Wiese, and of D. Smith, G. Wiese, G. Cuddihy, and G. Harrison, respectively, entitled "Head-Mounted Display Apparatus Employing One or More Reflective Optical Surfaces" and "Head-Mounted Display Apparatus Employing One or More Fresnel Lenses," respectively, the contents of both of which are incorporated herein by reference.
The mathematical techniques discussed above, including those set forth in the flow charts of the accompanying figures, can be encoded in a variety of programming environments and/or programming languages, now known or subsequently developed, for execution on computer systems of the type described above.
Once designed, the reflective optical surfaces disclosed herein can be produced, e.g., manufactured in quantity, using a variety of techniques and a variety of materials now known or subsequently developed. For example, the surfaces can be made from plastic materials which have been metalized to be suitably reflective. Polished plastic or glass materials can also be used. For "augmented reality" applications, the reflective optical surfaces can be constructed from a transmissive material with embedded small reflectors, thus reflecting a portion of an incident wavefront while allowing transmission of light through the material.
For prototype parts, an acrylic plastic (e.g., Plexiglas) may be used, with the part being formed by diamond turning. For production parts, either acrylic or polycarbonate may, for example, be used, with the part being formed by, for example, injection molding. The reflective optical surface may be described as a detailed Computer Aided Drafting (CAD) description or as a non-uniform rational B-spline (NURBS) surface, which can be converted into a CAD description. Having a CAD file may allow the device to be made using 3D printing, where the CAD description results in a 3D object directly, without requiring machining.
A variety of modifications that do not depart from the scope and spirit of the invention will be evident to persons of ordinary skill in the art from the foregoing disclosure. For example, although reflective optical surfaces which provide the user with a large field of view, e.g., a field of view greater than or equal to 100°, or greater than or equal to 150°, or greater than or equal to 200°, constitute an advantageous embodiment of the invention, the methods and systems disclosed herein can also be used to create reflective surfaces having smaller fields of view. Furthermore, in various embodiments, reflective optical surfaces designed in accordance with the computer-based methods disclosed herein may provide a user with a full foveal dynamic field of view, a full foveal+peripheral static field of view, or a full foveal+peripheral dynamic field of view.
Similarly, although the invention has been illustrated for systems in which the light emitted from the display has not been collimated before it reaches the reflective surface, it is equally applicable to light that has been partially or fully collimated, e.g., by optical elements located between the display and the reflective surface. In such cases, the radii of curvature of the core surface elements will be adjusted to take account of the collimation of the light incident on the elements.
While this disclosure has been described with reference to various embodiments, it will be understood by those skilled in the art that various changes, omissions, and/or additions may be made, and equivalents may be substituted for elements thereof, without departing from the spirit and scope of the embodiments. In addition, many modifications may be made to adapt a particular situation or material to the teachings of the embodiments without departing from the scope thereof. Therefore, it is intended that the embodiments not be limited to the particular embodiments disclosed as the best mode contemplated for carrying out this invention, but include all embodiments falling within the scope of the appended claims. Moreover, unless specifically stated, any use of the terms first, second, etc., does not denote any order or importance; rather, the terms first, second, etc., are used to distinguish one element from another.
This application claims priority under 35 U.S.C. 119(e) to U.S. Provisional Application No. 61/405,440 (entitled HEAD-MOUNTED DISPLAY, filed Oct. 21, 2010), U.S. Provisional Application No. 61/417,325 (entitled CURVED-STACKED FRESNEL ARCHITECTURE, filed Nov. 26, 2010), U.S. Provisional Application No. 61/417,326 (entitled CURVED-BEAM SPLITTER ARCHITECTURE, filed Nov. 26, 2010), U.S. Provisional Application No. 61/417,327 (entitled COMBINED ARCHITECTURE OF FRESNEL LENSE AND FLAT BEAM SPLITTER, filed Nov. 26, 2010), U.S. Provisional Application No. 61/417,328 (entitled COMBINED ARCHITECTURE OF FRESNEL LENSE AND CURVED BEAM SPLITTER, filed Nov. 26, 2010), and U.S. Provisional Application No. 61/427,530 (entitled CURVED MIRROR FOR HEAD MOUNTED DISPLAY, filed Dec. 28, 2010), which are incorporated herein in their entireties by reference.