The present disclosure relates generally to computational microscopy and, more specifically, to systems and methods for generating an image under different illumination conditions.
Today's commercial microscopes rely on expensive and delicate optical lenses and typically need additional hardware to share and process acquired images. Moreover, for scanning optical microscopy, additional expensive equipment, such as accurate mechanics and scientific cameras, is required. A new generation of microscope technology, known as computational microscopy, has begun to emerge; it makes use of advanced image-processing algorithms (usually with hardware modifications) to overcome limitations of conventional microscopes. A computational microscope can, in some cases, produce high-resolution digital images of samples without using expensive optical lenses. In addition, a computational microscope may open the door to additional capabilities based on computer vision, sharing of data, etc.
Disclosed systems and methods relate to the field of computational microscopy. Certain disclosed embodiments are directed to systems and methods for focusing a microscope using images acquired under a plurality of illumination conditions. The disclosed embodiments also include systems and methods for acquiring images under a plurality of illumination conditions to generate a high-resolution image of a sample.
Consistent with disclosed embodiments, a microscope for constructing an image of a sample using image information acquired under a plurality of different illumination conditions is provided. The microscope may include at least one image capture device configured to capture at a first image resolution images of a sample. The microscope may further include a lens with a first numerical aperture. The microscope may also include an illumination assembly including at least one light source configured to illuminate the sample, wherein a maximal incidence angle of illumination represents a second numerical aperture which is at least 1.5 times the first numerical aperture. The microscope may further include at least one controller programmed to: cause the illumination assembly to illuminate the sample at a series of different illumination conditions; acquire from the at least one image capture device a plurality of images of the sample, wherein the plurality of images includes at least one image for each illumination condition; determine, from the at least one image, image data of the sample for each illumination condition; and generate, from the determined image data for each illumination condition and in a non-iterative process, a reconstructed image of the sample, the reconstructed image having a second image resolution higher than the first image resolution.
Also consistent with disclosed embodiments, a microscope for constructing an image of a sample using image information acquired under a plurality of different illumination conditions is provided. The microscope may include at least one image capture device configured to capture, at a first image resolution, images of a sample. The microscope may also include a lens with a first numerical aperture. The microscope may further include an illumination assembly including at least one light source configured to illuminate the sample, wherein a maximal incidence angle of illumination represents a second numerical aperture which is at least 1.5 times the first numerical aperture. The microscope may further include at least one controller programmed to: cause the illumination assembly to illuminate the sample at a series of different illumination angles; acquire from the at least one image capture device a plurality of images of the sample, wherein the plurality of images includes at least one image for each illumination angle; determine, from the at least one image, image data of the sample for each illumination angle, wherein the image data includes phase information of the sample under each illumination condition; and generate, from the image data for the series of different illumination angles, a reconstructed image of the sample, the reconstructed image having a second image resolution higher than the first image resolution.
Consistent with the disclosed embodiments, a method is provided for constructing an image of a sample using image information acquired under a plurality of different illumination conditions. The method may include illuminating a sample at a series of different illumination conditions, wherein an illumination of the sample is at an incidence angle representing a numerical aperture which is at least 1.5 times a numerical aperture associated with an image capture device; acquiring, from the image capture device, a plurality of images of the sample captured at a first image resolution, wherein the plurality of images includes at least one image for each illumination condition; determining, from the at least one image, image data of the sample for each illumination condition, wherein the image data includes phase information of the sample under each illumination condition; and generating, from the determined image data for each illumination condition and in a non-iterative process, a reconstructed image of the sample, the reconstructed image having a second image resolution higher than the first image resolution.
Additionally, a non-transitory computer-readable storage medium may store program instructions that, when executed by at least one controller, perform any of the methods described herein.
The foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the claims.
The accompanying drawings, which are incorporated in and constitute a part of this disclosure, illustrate various disclosed embodiments. In the drawings:
The following detailed description refers to the accompanying drawings. Wherever possible, the same reference numbers are used in the drawings and the following description to refer to the same or similar parts. While several illustrative embodiments are described herein, modifications, adaptations and other implementations are possible. For example, substitutions, additions or modifications may be made to the components illustrated in the drawings, and the illustrative methods described herein may be modified by substituting, reordering, removing, or adding steps to the disclosed methods. Accordingly, the following detailed description is not limited to the disclosed embodiments and examples. Instead, the proper scope is defined by the appended claims.
Disclosed embodiments provide microscopes and methods that use one or more cameras to provide high-resolution images of a sample which may be located on a stage. In various embodiments, the microscope may use images of the sample captured under a plurality of illumination conditions. For example, the plurality of illumination conditions may include different illumination angles. In one aspect of the disclosure, the microscope may identify, in the captured images, multiple occurrences of the sample corresponding to the plurality of illumination conditions. The microscope may estimate a shift between the occurrences and determine a degree to which the microscope is out of focus. This aspect of the disclosure is described in detail with reference to
Image capture device 102 may be used to capture images of sample 114. In this specification, the term “image capture device” includes a device that records the optical signals entering a lens as an image or a sequence of images. The optical signals may be in the near-infrared, infrared, visible, and ultraviolet spectrums. Examples of an image capture device include a CCD camera, a CMOS camera, a photo sensor array, a video camera, a mobile phone equipped with a camera, etc. Some embodiments may include only a single image capture device 102, while other embodiments may include two, three, or even four or more image capture devices 102. In some embodiments, image capture device 102 may be configured to capture images in a defined field-of-view (FOV). Also, when microscope 100 includes several image capture devices 102, image capture devices 102 may have overlap areas in their respective FOVs. Image capture device 102 may have one or more image sensors (not shown in
In some embodiments, microscope 100 includes focus actuator 104. The term “focus actuator” refers to any device capable of converting input signals into physical motion for adjusting the relative distance between sample 114 and image capture device 102. Various focus actuators may be used, including, for example, linear motors, electrostrictive actuators, electrostatic motors, capacitive motors, voice coil actuators, magnetostrictive actuators, etc. In some embodiments, focus actuator 104 may include an analog position feedback sensor and/or a digital position feedback element. Focus actuator 104 is configured to receive instructions from controller 106 in order to make light beams converge to form a clear and sharply defined image of sample 114. In the example illustrated in
Microscope 100 may also include controller 106 for controlling the operation of microscope 100 according to the disclosed embodiments. Controller 106 may comprise various types of devices for performing logic operations on one or more inputs of image data and other data according to stored or accessible software instructions providing desired functionality. For example, controller 106 may include a central processing unit (CPU), support circuits, digital signal processors, integrated circuits, cache memory, or any other types of devices for image processing and analysis such as graphic processing units (GPUs). The CPU may comprise any number of microcontrollers or microprocessors configured to process the imagery from the image sensors. For example, the CPU may include any type of single- or multi-core processor, mobile device microcontroller, etc. Various processors may be used, including, for example, processors available from manufacturers such as Intel®, AMD®, etc. and may include various architectures (e.g., x86 processor, ARM®, etc.). The support circuits may be any number of circuits generally well known in the art, including cache, power supply, clock and input-output circuits.
In some embodiments, controller 106 may be associated with memory 108 used for storing software that, when executed by controller 106, controls the operation of microscope 100. In addition, memory 108 may also store electronic data associated with operation of microscope 100 such as, for example, captured or generated images of sample 114. In one instance, memory 108 may be integrated into the controller 106. In another instance, memory 108 may be separated from the controller 106. Specifically, memory 108 may refer to multiple structures or computer-readable storage mediums located at controller 106 or at a remote location, such as a cloud server. Memory 108 may comprise any number of random access memories, read only memories, flash memories, disk drives, optical storage, tape storage, removable storage and other types of storage.
Microscope 100 may include illumination assembly 110. The term “illumination assembly” refers to any device or system capable of projecting light to illuminate sample 114. Illumination assembly 110 may include any number of light sources, such as light emitting diodes (LEDs), lasers, and lamps configured to emit light. In one embodiment, illumination assembly 110 may include only a single light source. Alternatively, illumination assembly 110 may include four, sixteen, or even more than a hundred light sources organized in an array or a matrix. In some embodiments, illumination assembly 110 may use one or more light sources located at a surface parallel to sample 114 to illuminate sample 114. In other embodiments, illumination assembly 110 may use one or more light sources located at a surface perpendicular or at an angle to sample 114.
In addition, illumination assembly 110 may be configured to illuminate sample 114 in a series of different illumination conditions. In one example, illumination assembly 110 may include a plurality of light sources arranged in different illumination angles, such as a two-dimensional arrangement of light sources. In this case, the different illumination conditions may include different illumination angles. For example,
Consistent with disclosed embodiments, microscope 100 may include, be connected with, or in communication with (e.g., over a network or wirelessly, e.g., via Bluetooth) user interface 112. The term “user interface” refers to any device suitable for presenting a magnified image of sample 114 or any device suitable for receiving inputs from one or more users of microscope 100.
Microscope 100 may also include or be connected to stage 116. Stage 116 includes any horizontal rigid surface where sample 114 may be mounted for examination. Stage 116 may include a mechanical connector for retaining a slide containing sample 114 in a fixed position. The mechanical connector may use one or more of the following: a mount, an attaching member, a holding arm, a clamp, a clip, an adjustable frame, a locking mechanism, a spring or any combination thereof. In some embodiments, stage 116 may include a translucent portion or an opening for allowing light to illuminate sample 114. For example, light transmitted from illumination assembly 110 may pass through sample 114 and towards image capture device 102. In some embodiments, stage 116 and/or sample 114 may be moved using motors or manual controls in the XY plane to enable imaging of multiple areas of the sample.
As shown in
When sample 114 is located at focal-plane 204, the image projected from lens 202 is completely focused. The term “focal-plane” is used herein to describe a plane that is perpendicular to the optical axis of lens 202 and passes through the lens's focal point. The distance between focal-plane 204 and the center of lens 202 is called the focal length and is represented by D1. In some cases, sample 114 may not be completely flat, and there may be small differences between focal-plane 204 and various regions of sample 114. Accordingly, the distance between focal-plane 204 and sample 114 or a region of interest (ROI) of sample 114 is marked as D2. The distance D2 corresponds to the degree to which an image of sample 114, or an image of an ROI of sample 114, is out of focus. For example, distance D2 may be between 0 and about 3 mm. In some embodiments, D2 may be greater than 3 mm. When distance D2 equals zero, the image of sample 114 (or the image of the ROI of sample 114) is completely focused. In contrast, when D2 has a value other than zero, the image of sample 114 (or the image of the ROI of sample 114) is out of focus.
As discussed above, D2 is the distance between focal-plane 204 and sample 114, and it corresponds to the degree to which sample 114 is out of focus. In one example, D2 may have a value of 50 micrometers. Focus actuator 104 is configured to change distance D2 by converting input signals from controller 106 into physical motion. In some embodiments, in order to focus the image of sample 114, focus actuator 104 may move image capture device 102. In this example, to focus the image of sample 114, focus actuator 104 may move image capture device 102 up by 50 micrometers. In other embodiments, in order to focus the image of sample 114, focus actuator 104 may move stage 116 down. Therefore, in this example, instead of moving image capture device 102 up by 50 micrometers, focus actuator 104 may move stage 116 down by 50 micrometers.
In some embodiments, controller 106 may be configured to identify the relative positions of the two (or more) representations using at least one common image feature of sample 114. As used herein, the term “image feature” refers to an identifiable element in a digital image, such as a line, a point, a spot, an edge, a region of similar brightness, a similar shape, an area of the image, etc., or other distinguishing characteristic of the pixels that comprise the image of sample 114. The changes between the two (or more) representations may be distinguishable with the naked eye and/or with the aid of image analysis algorithms that include feature detection or that use a region of interest, which may be part or all of the image, as the input features, such as the Marr-Hildreth algorithm, the scale-invariant feature transform (SIFT) algorithm, the speeded-up robust features (SURF) algorithm, the digital image correlation (DIC) algorithm, cross-correlation, etc. As shown in
After identifying multiple occurrences of at least one image feature of sample 114 associated with a plurality of illumination conditions, controller 106 may estimate an amount of shift between the occurrences. In
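As one possible sketch of such a shift estimate, the cross-correlation approach mentioned above can be computed efficiently as a phase correlation in the Fourier domain. The following is an illustrative implementation, not the disclosed embodiment; the function name is a convenience, and it assumes two equally sized grayscale images related by an integer pixel shift.

```python
import numpy as np

def estimate_shift(img_a, img_b):
    """Estimate the integer (row, col) shift such that img_b is
    approximately img_a shifted by that amount, using phase
    correlation (an FFT-based form of cross-correlation)."""
    Fa = np.fft.fft2(img_a)
    Fb = np.fft.fft2(img_b)
    cross = Fb * np.conj(Fa)
    cross /= np.abs(cross) + 1e-12        # keep only phase information
    corr = np.fft.ifft2(cross).real       # sharp peak at the shift
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Fold peaks past the midpoint back to negative shifts.
    return tuple(p - s if p > s // 2 else p for p, s in zip(peak, corr.shape))
```

For sub-pixel accuracy, a real system would typically interpolate around the correlation peak; this sketch returns whole-pixel shifts only.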
In one embodiment, after estimating shift D3 between first representation 300 and second representation 302, controller 106 may determine distance D2 using the distance L between the illumination sources, the distance Z between the illumination-source plane and the current focal plane, and shift D3. In one example, the distance D2 may be calculated using the following linear equation:
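The equation itself is not reproduced in this excerpt. Purely as an illustrative sketch, and not the claimed formula, a similar-triangles relation between the quantities named above (shift D3, source spacing L, source-to-focal-plane distance Z) could take the following form; the function name and the exact proportionality are assumptions.

```python
def estimate_defocus(d3, l, z):
    """Illustrative similar-triangles estimate of defocus distance D2.

    d3: measured shift between the two representations
    l:  spacing between the two illumination sources
    z:  distance from the illumination-source plane to the focal plane

    Assumes the shift grows linearly with defocus, i.e. D3/D2 ~ L/Z.
    This is a hypothetical stand-in for the (elided) linear equation.
    """
    return d3 * z / l
```

A relation of this shape is linear in D3, consistent with the text's description of a "linear equation", but the actual coefficients depend on the microscope geometry.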
In order for controller 106 to reduce the distance between sample 114 and focal-plane 204, controller 106 may also determine the direction of the required adjustment. For example, in some cases focal-plane 204 may be below sample 114 (as illustrated in
In some embodiments, controller 106 may determine that the quality of the image is not sufficient. For example, the level of sharpness associated with an image of sample 114 may be below a predefined threshold. The level of sharpness may vary due to, for example, unintentional movement of microscope 100, a change of the ROI of sample 114, and more. To improve the quality of the image, controller 106 may refocus microscope 100. In addition, controller 106 may determine a plurality of shift values that correspond with a plurality of portions of a field of view of image capture device 102 to determine three-dimensional information. The three-dimensional information may include tilt information between microscope 100 and sample 114, a 3D shape of the object, and/or a field curvature of lens 202. Controller 106 may use the tilt information when reconstructing the image of sample 114 to improve the sharpness of the image of sample 114. Additional examples regarding the reconstruction of the image of sample 114 are provided below with reference to
At step 402, controller 106 may cause illumination assembly 110 to illuminate sample 114 under a first illumination condition. At step 404, controller 106 may acquire, from image capture device 102, a first image of sample 114 illuminated under the first illumination condition. In some embodiments, controller 106 may cause illumination assembly 110 to illuminate sample 114 using a single light source located within a numerical aperture of image capture device 102. Alternatively, controller 106 may cause illumination assembly 110 to illuminate sample 114 using a plurality of light sources located within the numerical aperture of image capture device 102.
At step 406, controller 106 may cause illumination assembly 110 to illuminate sample 114 under a second illumination condition different from the first illumination condition. Next, at step 408, controller 106 may acquire, from image capture device 102, a second image of sample 114 illuminated under the second illumination condition. In some embodiments, the illumination conditions may include at least one of: different illumination angles, different illumination patterns, different wavelengths, or a combination thereof. For example, the illumination conditions may include a first illumination angle and a second illumination angle symmetrically located with respect to an optical axis of image capture device 102. Alternatively, the illumination conditions may include a first illumination angle and a second illumination angle asymmetrically located with respect to an optical axis of image capture device 102. Alternatively, the illumination conditions may include a first illumination angle and a second illumination angle within the numerical aperture of image capture device 102. In the example depicted in
At step 410, controller 106 may determine an amount of shift D3 between one or more image features present in the first image of sample 114 and a corresponding one or more image features present in the second image of sample 114. In some embodiments, controller 106 may determine a plurality of shift values based on multiple image features and calculate an overall shift associated with shift D3. For example, the overall shift may be a mean, a median, or a mode of the plurality of shift values. In other embodiments, controller 106 may determine a size of the distance change based on a magnitude of shift D3. In addition, controller 106 may also determine a direction of the distance change based on a direction of shift D3, or by purposely introducing a known separation between the sample and the focal plane. As discussed above, in some cases, focal-plane 204 may be below sample 114 (as illustrated in
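As a brief sketch of combining per-feature shift values into one overall shift, a median (one of the options named above) could be used because it is robust to a few mismatched features; the function name and array layout are illustrative assumptions.

```python
import numpy as np

def overall_shift(shifts):
    """Combine per-feature (row, col) shift estimates into one
    overall shift using the per-axis median. A mean or mode could
    be substituted, per the alternatives described above."""
    arr = np.asarray(shifts, dtype=float)   # shape: (n_features, 2)
    return tuple(np.median(arr, axis=0))
```

In this toy example, one badly matched feature (the outlier) does not disturb the result, which a plain mean would not guarantee.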
At step 412, controller 106 may, where the amount of determined shift D3 is non-zero, cause focus actuator 104 to change distance D2 between sample 114 and focal-plane 204. As discussed above, focus actuator 104 may move image capture device 102 to adjust distance D2 between sample 114 and focal-plane 204, or move stage 116 to adjust the distance between sample 114 and focal-plane 204. In some embodiments, controller 106 may cause focus actuator 104 to reduce the distance between sample 114 and focal-plane 204 to substantially zero, for example, as illustrated in
In some embodiments, controller 106 may repeat steps 402 to 410 to determine an amount of a new shift after adjusting distance D2 between sample 114 and focal-plane 204. If the amount of the new shift is still non-zero, or above a predefined threshold, controller 106 may cause focus actuator 104 to change distance D2 between sample 114 and focal-plane 204 again. In some embodiments, controller 106 may readjust distance D2 between sample 114 and focal-plane 204 until shift D3 is substantially zero or below the predefined threshold. When the amount of the new shift is below a predetermined threshold, controller 106 may store the amount of determined shift for future focus compensation calculations. After completing process 400, microscope 100 is completely focused. Thereafter, and according to another aspect of the disclosure, microscope 100 may acquire a plurality of focused images to generate a high-resolution image of sample 114. As shown in
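The repeat-until-focused cycle described above can be sketched as a simple control loop. Here the two callables stand in for the acquire-and-compare steps and for focus actuator 104, and the toy simulation (shift proportional to defocus, with a hypothetical gain) is an assumption for illustration only.

```python
def autofocus(measure_shift, move_focus, gain, threshold=0.5, max_iters=20):
    """Repeat the measure/adjust cycle of steps 402-410 until the
    measured shift D3 falls below a threshold. `gain` converts a
    shift into a focus correction, cf. the linear D2(D3) relation."""
    for _ in range(max_iters):
        d3 = measure_shift()
        if abs(d3) < threshold:
            break
        move_focus(-gain * d3)      # drive distance D2 toward zero
    return measure_shift()          # residual shift, stored for later use

# Toy stand-in for the microscope: shift is proportional to defocus.
state = {"defocus": 50.0}  # micrometers between sample and focal-plane

def measure_shift():
    return state["defocus"] / 10.0  # hypothetical D3-vs-D2 proportionality

def move_focus(delta):
    state["defocus"] += delta       # actuator moves stage or camera

residual = autofocus(measure_shift, move_focus, gain=10.0)
```

With a correctly calibrated gain the loop converges in very few iterations; with an uncalibrated gain it still converges, only more slowly, which is why the text allows repeated readjustment.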
In some embodiments, controller 106 may use the determined distance D2 to perform calculations for computational correction of focus along with physical motion of stage 116 or without causing stage 116 to move. Furthermore, in some embodiments, stage 116 and/or sample 114 may be moved using motors or manual controls in the XY plane to enable imaging of multiple areas of sample 114.
There are several known methods in the field of computational image processing for producing a high-resolution image of a sample from a set of low-resolution images. One of these methods is, for example, ptychography. These methods may use an iterative process to compute the high-resolution image, in which the reconstructed image in each iteration is compared to the high-resolution image from the previous iteration, and the difference between them serves as the convergence condition. The present disclosure describes microscopes and methods for producing a high-resolution image from a set of low-resolution images taken under different illumination conditions, but does not require iterations as used by the known methods. Therefore, the disclosed microscopes and methods enable decreasing the computation time needed to reconstruct the high-resolution image.
Consistent with the present disclosure, controller 106 may acquire images at a first image resolution and generate a reconstructed image of sample 114 having a second (enhanced) image resolution. The term “image resolution” is a measure of the degree to which the image represents the fine details of sample 114. For example, the quality of a digital image may also be related to the number of pixels and the range of brightness values available for each pixel. In some embodiments, generating the reconstructed image of sample 114 is based on images having an image resolution lower than the enhanced image resolution. The enhanced image resolution may have at least 2 times, 5 times, 10 times, or 100 times more pixels than the lower image resolution images. For example, the first image resolution of the captured images may be referred to hereinafter as low-resolution and may have a value between 2 megapixels and 25 megapixels, between 10 megapixels and 20 megapixels, or about 15 megapixels, whereas the second image resolution of the reconstructed image may be referred to hereinafter as high-resolution and may have a value higher than 40 megapixels, higher than 100 megapixels, higher than 500 megapixels, or higher than 1000 megapixels.
At step 504, controller 106 may determine image data of sample 114 associated with each illumination condition. For example, controller 106 may apply a Fourier transform on images acquired from image capture device 102 to obtain Fourier transformed images. The Fourier transform is an image processing tool which is used to decompose an image into its sine and cosine components. The input of the transformation may be an image in the normal image space (also known as real-plane), while the output of the transformation may be a representation of the image in the frequency domain (also known as a Fourier-plane). Consistent with the present disclosure, the output of a transformation, such as the Fourier transform, is also referred to as “image data.” Alternatively, controller 106 may use other transformations, such as a Laplace transform, a Z transform, a Gelfand transform, or a Wavelet transform. In order to rapidly and efficiently convert the captured images into images in the Fourier-plane, controller 106 may use a Fast Fourier Transform (FFT) algorithm to compute the Discrete Fourier Transform (DFT) by factorizing the DFT matrix into a product of sparse (mostly zero) factors.
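The real-plane to Fourier-plane conversion described in step 504 can be sketched in a few lines with numpy, whose FFT routines implement the FFT/DFT factorization mentioned above; the random array is a stand-in for a captured low-resolution image.

```python
import numpy as np

# A low-resolution capture in the real plane (random stand-in data).
rng = np.random.default_rng(0)
image = rng.random((128, 128))

# Forward FFT: real plane -> Fourier plane (the "image data" above),
# with the zero-frequency component shifted to the center.
image_data = np.fft.fftshift(np.fft.fft2(image))

# The inverse transform returns to the real plane (up to float error).
restored = np.fft.ifft2(np.fft.ifftshift(image_data)).real
```

The round trip is lossless up to floating-point error, which is what makes the Fourier plane a safe intermediate representation for the aggregation in the next step.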
At step 506, controller 106 may aggregate the image data determined from images captured under a plurality of illumination conditions to form a combined complex image. One way for controller 106 to aggregate the image data is by locating overlapping regions of the image data in the Fourier-plane. Another way for controller 106 to aggregate the image data is by determining the intensity and phase of the acquired low-resolution images per illumination condition. In this way, the image data corresponding to the different illumination conditions does not necessarily include overlapping regions. By eliminating or reducing the amount of overlap needed, this method has a great advantage in reducing the number of illumination conditions needed to reconstruct an image with a certain resolution, and therefore increasing the acquisition speed of the image information.
At step 508, controller 106 may generate a reconstructed high-resolution image of sample 114. For example, controller 106 may apply the inverse Fourier transform to obtain the reconstructed image. In one embodiment, depicted in
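Under the simplifying assumption that a complex field (intensity and phase) has already been determined for each illumination condition, the non-iterative aggregation of steps 504-508 can be sketched as follows: each illumination angle contributes a differently offset region of the sample's Fourier plane, each point of the combined spectrum takes its value from a single illumination condition, and one inverse transform yields the reconstructed image. The function signature and the offset convention are illustrative assumptions, not the disclosed implementation.

```python
import numpy as np

def reconstruct(patches, offsets, shape):
    """Non-iteratively stitch per-illumination Fourier-plane image data
    into one combined complex spectrum, then inverse-transform it.

    patches: complex 2-D arrays (DC-centered Fourier-plane data), one
             per illumination condition
    offsets: per-patch (row, col) offset of the patch center in the
             combined Fourier plane, set by the illumination angle
    shape:   shape of the combined high-resolution Fourier plane
    """
    combined = np.zeros(shape, dtype=complex)
    filled = np.zeros(shape, dtype=bool)
    cr, cc = shape[0] // 2, shape[1] // 2
    for patch, (dr, dc) in zip(patches, offsets):
        pr, pc = patch.shape
        region = (slice(cr + dr - pr // 2, cr + dr - pr // 2 + pr),
                  slice(cc + dc - pc // 2, cc + dc - pc // 2 + pc))
        new = ~filled[region]            # one illumination condition per point
        combined[region][new] = patch[new]
        filled[region] = True
    # Back to the real plane: the reconstructed (complex) image.
    return np.fft.ifft2(np.fft.ifftshift(combined))
```

Because every Fourier-plane point is written once and never revisited, there is no convergence loop, which is the computational advantage over iterative ptychographic reconstruction described above.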
The present disclosure provides several ways to determine the phase information under each illumination condition. According to one embodiment that may be implemented in the configuration of
According to another embodiment that may be implemented in the configuration of
According to another embodiment that may be implemented in the configurations of
According to yet another embodiment that may be implemented in the configurations of
In one embodiment, controller 106 may determine phase information under each illumination condition independently.
The example process of
At step 1006, controller 106 may determine, from the at least one image, image data of sample 114 for each illumination condition. In some embodiments, in order to determine the image data of sample 114 for each illumination condition, controller 106 may transform the at least one image from real space to Fourier space, aggregate the image data of the sample in Fourier space to form a combined complex image, and transform the combined complex image back to real space to generate the reconstructed image of sample 114. Consistent with some embodiments, determining image data of sample 114 for each illumination condition may include determining phase information of sample 114 under each illumination condition independently. As discussed above with reference to
In a first embodiment, controller 106 may acquire, from image capture device 102, a group of first images from different focal planes for each illumination condition and determine, from the group of first images, phase information under each illumination condition independently. In a second embodiment, controller 106 may acquire, from first image sensor 200A, a first image for each illumination condition; acquire, from second image sensor 200B, a second image different from the first image for each illumination condition; and combine information from the first image and the second image to determine phase information under each illumination condition independently. In a third embodiment, controller 106 may identify, for each illumination condition, an interference pattern between the first and second light beams and determine, from the interference pattern, phase information associated with each illumination condition independently. In a fourth embodiment, controller 106 may acquire, for each illumination condition, a first image from first image sensor 200A, and a second image from second image sensor 200B, wherein the second image is modulated differently from the first image; and combine information from the first image and the second image to determine phase information under each illumination condition.
At step 1008, controller 106 may generate, from the determined image data for each illumination condition, a reconstructed image of sample 114, the reconstructed image having a second image resolution higher than the first image resolution. In some embodiments, controller 106 may generate the reconstructed image in a non-iterative process. The term “generate a reconstructed image in a non-iterative process” refers to a process in which the reconstructed image is not compared to the acquired images, nor are the acquired images compared to themselves. The non-iterative process may include using image data associated with a single illumination condition for each point in the combined complex image, as depicted in
After controller 106 generates the reconstructed image of sample 114, it may cause the reconstructed image to be shown on a display (step 1010) or identify at least one element of sample 114 in the reconstructed image (step 1012). In some embodiments, controller 106 may confirm the quality of the reconstructed image before using it. For example, controller 106 may generate the reconstructed image using a first set of constructing parameters and determine that the reconstructed image is not of a desired quality. In one example, the determination that the reconstructed image is not of the desired quality is based on a level of sharpness of the reconstructed image or parts of it, or a comparison with expected or known results based on prior knowledge. Thereafter, controller 106 may generate a second reconstructed image using a second set of constructing parameters. In addition, controller 106 may acquire another set of images of sample 114 after changing the focus of microscope 100, as described above with reference to
The foregoing description has been presented for purposes of illustration. It is not exhaustive and is not limited to the precise forms or embodiments disclosed. Modifications and adaptations will be apparent to those skilled in the art from consideration of the specification and practice of the disclosed embodiments. Additionally, although aspects of the disclosed embodiments are described as being stored in memory, one skilled in the art will appreciate that these aspects can also be stored on other types of computer readable media, such as secondary storage devices; for example, hard disks, floppy disks, CD ROM, other forms of RAM or ROM, USB media, DVD, or other optical drive media.
Computer programs based on the written description and disclosed methods are within the skill of an experienced developer. The various programs or program modules can be created using any of the techniques known to one skilled in the art or can be designed in connection with existing software. For example, program sections or program modules can be designed in or by means of .Net Framework, .Net Compact Framework (and related languages, such as Visual Basic, C, etc.), Java, C++, Objective-C, Python, Matlab, CUDA, HTML, HTML/AJAX combinations, XML, or HTML with included Java applets. One or more of such software sections or modules can be integrated into a computer system or existing e-mail or browser software.
Moreover, while illustrative embodiments have been described herein, the scope of the present disclosure includes any and all embodiments having equivalent elements, modifications, omissions, combinations (e.g., of aspects across various embodiments), adaptations and/or alterations as would be appreciated by those skilled in the art based on the present disclosure. The limitations in the claims are to be interpreted broadly based on the language employed in the claims and not limited to examples described in the present specification or during the prosecution of the application. The examples are to be construed as non-exclusive. Furthermore, the steps of the disclosed routines may be modified in any manner, including by reordering steps and/or inserting or deleting steps. It is intended, therefore, that the specification and examples be considered as illustrative only, with a true scope and spirit being indicated by the following claims and their full scope of equivalents.
This application claims the benefit of priority of United States Provisional Patent Application No. 62/253,723, filed on Nov. 11, 2015; United States Provisional Patent Application No. 62/253,726, filed on Nov. 11, 2015; and United States Provisional Patent Application No. 62/253,734, filed on Nov. 11, 2015. All of the foregoing applications are incorporated herein by reference in their entirety.
Filing Document: PCT/IB2016/001725; Filing Date: Nov. 10, 2016; Country: WO