This document generally relates to improving the acquisition of images by digital image capture devices to reduce or eliminate the effects of glare.
The increased use of mobile devices such as cell phones and tablets has spurred the capture of samples of secure credentials to support remote identity assertion. While the intention is for a user to authenticate an identity over the Internet, rising rates of identity theft and fraud, especially in the online context, can impede the development of technologies for online transactions. In this context, verifying one's identity with a mobile capture of a secure credential such as a driver's license, passport, or ID card calls for a high-quality sample. However, lighting artifacts such as glare and shadows may hinder document authentication efforts. Improvements to digital image capture devices and processes that reduce or eliminate reflections or glare on imaged documents are therefore desirable.
This specification relates to improvements to image capture devices (e.g., digital cameras) to prevent, reduce, or eliminate lighting artifacts such as glare or shadows from images captured by the device. Implementations of the present disclosure are generally directed to systems, devices, and methods for user interfaces that guide a user to manipulate a document in a manner that reduces glare or shadows in captured images. The proposed capture techniques seek to reduce the frequency with which the user's capture session results in samples unfit for the required validation operations. To effectively mitigate or reduce the deleterious effects of lighting artifacts (e.g., glare or shadows) during document capture using a mobile device, some implementations incorporate the use of a transformation of the capture preview window during the capture session. In some implementations, the user interfaces can be used to guide a user to manipulate a document in a manner that improves the detectability of document security features in images of the document.
In general, innovative aspects of the subject matter described in this specification can be embodied in methods that include the actions of obtaining, by a computing device in real-time from an image capture device, a video stream that includes images of a document. The computing device applies an artificial transformation to subsequent images of the video stream to provide transformed images of the document, where the transformed images depict an artificial transformation of the document in the subsequent images such that, in the transformed images, the document appears as if captured from a point of view relative to the image capture device that is different from an actual point of view depicted in the subsequent images before the artificial transformation is applied. The computing device provides a transformed video stream that includes the transformed images for display in an image preview window, thereby prompting a user to move the document with respect to the image capture device. Other implementations of this aspect include corresponding systems, apparatus, and computer programs, configured to perform the actions of the methods, encoded on computer storage devices. These and other implementations can each optionally include one or more of the following features.
In some implementations, the artificial transformation to the subsequent images of the video stream is applied in response to detecting a lighting artifact in at least one image of the video stream.
In some implementations, the artificial transformation causes the document, in a first set of the transformed images, to appear translated away from a center of the image preview window in a first direction, and the artificial transformation causes the document in a second set of the transformed images captured after the user has moved the document in a second direction, opposite to the first direction, to appear substantially un-translated within the image preview window.
In some implementations, the artificial transformation causes the document, in a first set of the transformed images, to appear tilted in a first direction, and the artificial transformation causes the document in a second set of the transformed images captured after the user has tilted the document in a second direction, opposite to the first direction, to appear substantially un-tilted within the image preview window.
In some implementations, the artificial transformation causes the document, in a first set of the transformed images, to appear rotated in a first direction, and the artificial transformation causes the document in a second set of the transformed images captured after the user has rotated the document in a second direction, opposite to the first direction, to appear substantially un-rotated within the image preview window.
Some implementations include sending at least one of the images of the document from the video stream to a server, and receiving, from the server, a response indicating an authenticity of the document.
In some implementations, providing the transformed images includes providing, for display in the image preview window, the transformed video stream that includes the transformed images overlaid with a graphical image capture guide.
Some implementations include capturing at least one of the images of the document from the video stream when the document as depicted in a corresponding at least one of the transformed images substantially aligns with the graphical image capture guide.
Some implementations include, in response to detecting a security feature on the document in one or more of the images of the video stream as the user moves the document relative to the image capture device, capturing at least one of the images of the document from the video stream, and sending the at least one of the images to a server.
Some implementations include identifying, from the images of the document in the video stream, movement of the document relative to the image capture device in response to providing the transformed video stream for display in the image preview window, and in response to identifying the movement of the document, providing, to a server, data confirming liveness of the images of the document in the video stream.
These and other implementations can each provide one or more advantages. In some examples, implementations of the present disclosure improve the operation of image capture devices by, for example, removing glare from images captured by the image capture device. Implementations may provide processes for reducing or eliminating glare from images of documents captured by digital image capture devices.
The details of one or more implementations of the subject matter described in this specification are set forth in the accompanying drawings and the description below. Other features, aspects, and advantages of the subject matter will become apparent from the description, the drawings, and the claims.
Like reference symbols in the various drawings indicate like elements.
A document may be imaged to generate a digitized copy of the document. For example, a document may be imaged by taking a photo of a document with a capture device. A digital capture device may include a mobile computing device with a camera, e.g., a phone with a camera, a tablet with a camera, a standalone digital camera, or some other device that can capture a digital image of a view.
However, acquisition of a digitized copy of a document with a photo or video capture device may be corrupted by the presence of lighting artifacts such as glare or shadows. For example, glare on a document may refer to brightness that obscures underlying characteristics of the document. Glare may make data extraction from a digital copy difficult or impossible. For example, glare shown on a portion of a document may make it impossible to perform optical character recognition on portions of the image that show glare.
Mild glare effects, or simple overexposure, may be corrected to some extent with image post processing operations to improve contrast in the affected areas. However, glare that completely obscures all underlying features may not be recoverable in this way. For example, if a large portion of an image of a document is pure white because of glare on the document, it may be impossible to extract features from that portion of the document.
Glare may be especially problematic in imaging identification documents such as driver's licenses, passports, or other documents with a reflective or semi-reflective surface. These identification documents may be used to verify ages, prove driving privileges, access a secure area, cash a check, and so on, so correctly extracting all features of the document is important. Additionally, identification documents are often targets for counterfeiting and fraud so correctly extracting embedded security features may be important to prove the authenticity of the documents. Furthermore, identification documents are frequently laminated which may make the documents more reflective and more prone to showing glare than unlaminated documents.
Identification documents (“ID documents”) are broadly defined to include, for example, credit cards, bank cards, phone cards, passports, driver's licenses, network access cards, employee badges, debit cards, security cards, visas, immigration documentation, national ID cards, citizenship cards, permanent resident cards (e.g., green cards), Medicare cards, Medicaid cards, social security cards, security badges, certificates, identification cards or documents, voter registration cards, police ID cards, military ID cards, border crossing cards, legal instruments, security clearance badges and cards, gun permits, gift certificates or cards, membership cards or badges, etc. Also, the terms “document,” “card,” “badge” and “documentation” are used interchangeably throughout this patent application.
Glare detection through image processing can be used to advise a user during capture that an image is corrupt and to prompt the user to correct the capture setup to remove the glare. Correcting a capture setup may include manipulating the position of the document relative to the image capture device. For example, to reduce reflections on the surface of the document, the document may be moved within the field of view (FOV) of the capture device, tilted, rotated, or a combination thereof. In some examples, rotation of the document can be considered to include out-of-plane rotation (e.g., tilting) as well as in-plane rotation.
Accordingly, a user can be prompted to manipulate a document in a manner that reduces glare by artificially distorting preview images of the document shown in a user interface preview window. Such distortions may prompt the user to adjust the position of the document relative to the lens of a digital camera to compensate for the artificial distortion, thereby moving the document in a manner that reduces or eliminates the reflections causing glare in the digital images.
The document that is imaged may be an ID document, as described above. The digital image of the ID document with reduced glare can be used to authenticate the ID document or identity of a person that presents the ID document. For example, embedded visual security features can be extracted from the final digital image of the ID document and used to authenticate the ID document, or an image of the person extracted from the final digital image of the ID document can be compared to an image of a person captured at the time of authentication.
The user computing device 102 is configured to display an image preview window when a user 106 activates the camera on the computing device 102 to capture an image of an ID document 104.
As illustrated in
Such reflections (or shadows) can be reduced or eliminated by manipulating the position of the ID document 104, the orientation of the ID document 104, or both relative to the camera. The user computing device 102 can be configured to artificially transform the actual real-time image of the ID document 104 obtained from the camera and present the transformed image as the preview image 204 in the image preview window. The artificial transformation depicted in the preview image is intended to prompt the user 106 to manipulate the ID document 104 within the camera's FOV in a way that corrects the apparent distortion of the document as portrayed in the preview image 204. For example, the real-time image can be distorted by the artificial transformation in a way that prompts the user to manipulate the ID document 104 within the camera's FOV to redirect the reflections away from the camera's lens, thereby reducing, shifting, or eliminating the apparent glare 210 in images of the ID document 104.
For example, the user computing device 102 can apply one or more image processing filters to the real-time image of the document in order to create the artificial transformation in the preview image. More specifically, one or more spatial filters can be applied to the pixels of the real-time image (e.g., each image in a video stream) to create a particular artificial transformation. For example, a skew filter may compress pixels closer to one side of a digital image to make the preview image 204 of the ID document 104 appear as if the document is tilted in one direction, thereby prompting the user 106 to tilt the ID document 104 in the opposite direction. As another example, an image cropping filter may remove pixels on one or more sides of a digital image to make the ID document 104 in the preview image 204 appear as if it is off-center in the camera's FOV, thereby prompting the user 106 to move the ID document 104 towards the perceived center.
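The cropping filter described above can be sketched as follows. This is a minimal illustration, not the claimed implementation: the function name, the positive/negative sign convention for the shift, and the representation of a grayscale frame as a list of pixel rows are all assumptions made for the example.

```python
def crop_translate(image, shift):
    # Simulate an off-center document: drop `shift` columns from one
    # side of each row and pad the opposite side with black (0) pixels,
    # so the content appears pushed toward one edge of the preview.
    out = []
    for row in image:
        if shift >= 0:
            # Pad on the left, drop on the right: content shifts right.
            out.append([0] * shift + row[:len(row) - shift])
        else:
            # Pad on the right, drop on the left: content shifts left.
            out.append(row[-shift:] + [0] * (-shift))
    return out

frame = [[1, 2, 3, 4],
         [5, 6, 7, 8]]
shifted = crop_translate(frame, 2)
# Each row now reads [0, 0, 1, 2] / [0, 0, 5, 6]: the document content
# appears pushed toward the right edge of the preview window.
```

In a real preview pipeline the same per-frame operation would run on camera-resolution images, but the prompting effect is the same: the user sees an off-center document and moves the physical card to re-center it.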
In addition to the exemplary artificial translation described above, other artificial transformations of the preview image are possible such as, but not limited to, image rotation and image scaling. In some examples, artificially rotating the document can be considered to include out-of-plane rotation (e.g., tilting) as well as in-plane rotation. In some implementations, multiple artificial transformations can be applied to generate the preview image. For example, a series of transformation filters can be applied to the real-time image to create the appearance of the ID document 104 being both translated and tilted in order to prompt the user to move and tilt the document with respect to the camera.
In some implementations, the user computing device 102 can be configured to detect lighting artifacts (e.g., glare or shadows) in the images of the ID document 104 and apply the artificial transformation in response to detecting the glare. For example, the user computing device 102 can detect lighting artifacts using image processing techniques such as edge or contrast detection. The user computing device 102 can then begin applying an artificial transformation to the real-time image of the ID document 104 in response to detecting glare or shadows. In some implementations, the user computing device 102 can use characteristics of the detected glare or shadow (e.g., location on the ID document, intensity, size, etc.) to identify an appropriate type of artificial transformation to apply to the real-time images in order to prompt a user to appropriately manipulate the ID document 104 to reduce the glare. For example, the user computing device 102 can include a set of rules that map various lighting artifact characteristics to different types of artificial transformations.
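A rule set mapping artifact characteristics to transformation types, as described above, could be sketched along the following lines. The specific keys, thresholds, and transformation labels below are illustrative assumptions, not values from this disclosure.

```python
def choose_transformation(artifact):
    # Toy rule set: pick an artificial transformation based on where a
    # detected glare spot sits on the document and how large it is.
    # `center` is a normalized (x, y) position on the document face;
    # `size` is the fraction of the document area affected.
    x, y = artifact["center"]
    size = artifact["size"]
    if size > 0.5:
        return "translate"   # large glare: move the document in the FOV
    if y < 0.33:
        return "tilt_down"   # glare near the top edge: tilt top away
    if y > 0.67:
        return "tilt_up"     # glare near the bottom edge: tilt bottom away
    return "rotate"          # central glare: prompt an in-plane rotation

choose_transformation({"center": (0.5, 0.1), "size": 0.2})
# → "tilt_down"
```

A production implementation would likely combine several artifact attributes (intensity, shape, history across frames), but a table-driven selector of this general shape matches the rule mapping described above.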
In some implementations, the user computing device 102 can automatically capture a still image of the ID document 104 when the user manipulates the ID document appropriately. For example, the user computing device 102 can capture a still image when the orientation of the artificially transformed preview image of the document approximately matches the image capture guide 206. For example, the user computing device 102 can use edge detection techniques to determine when the outline of the ID document 104 in the preview image approximately matches the orientation depicted by the image capture guide 206.
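One simple way to decide that the document outline "approximately matches" the capture guide is an intersection-over-union test between the detected document bounding box and the guide's box. The functions, box format, and 0.9 threshold below are assumptions for illustration only.

```python
def iou(box_a, box_b):
    # Boxes are (left, top, right, bottom) in pixel coordinates.
    left = max(box_a[0], box_b[0])
    top = max(box_a[1], box_b[1])
    right = min(box_a[2], box_b[2])
    bottom = min(box_a[3], box_b[3])
    inter = max(0, right - left) * max(0, bottom - top)
    area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
    union = area(box_a) + area(box_b) - inter
    return inter / union if union else 0.0

def aligned(document_box, guide_box, threshold=0.9):
    # Trigger auto-capture when the detected document outline overlaps
    # the on-screen capture guide closely enough.
    return iou(document_box, guide_box) >= threshold

aligned((10, 10, 110, 70), (10, 10, 110, 70))  # perfect overlap → True
aligned((0, 0, 100, 60), (40, 10, 140, 70))    # IoU ≈ 0.33 → False
```

Edge-detection output is rarely an axis-aligned box, so a real system might instead compare the four detected document corners against the guide's corners, but the thresholded-overlap idea is the same.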
In some examples, an indicator, a message, a graphic, an animation, or a combination thereof can be displayed in the image preview window 202 to explain the action being solicited to correct for the artificial transformation. For example, a message or graphic can be displayed to explain the required action to align the artificially transformed image of the ID document with the image capture guide 206. A message may be displayed to inform a user to tilt the ID document 104 if the preview image is transformed to illustrate a perspective change to the ID document.
In some implementations, an artificial transformation can be applied as a default operation during image capture to prompt acquisition of multiple still images with variations in the ID document 104 presentation. Specifically, implementations can use a single frame for each capture and then use a stitching process to bind together components from multiple frames. In some implementations, multiple still images of the ID document 104 can be captured while a user moves the ID document 104 in response to the artificial transformation, and the user computing device 102 selects the best (or the best few) images in total to submit for authentication without performing image stitching.
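The stitching idea above, binding together components from multiple frames, can be sketched with a naive per-pixel rule: since glare shows up as near-white pixels, taking the darkest value at each position across aligned frames discards blown-out regions in favor of frames where that spot was readable. The function and the list-of-rows grayscale representation are illustrative assumptions; real stitching would first register the frames to one another.

```python
def stitch_least_glare(frames):
    # Combine same-size, pre-aligned grayscale frames by keeping, at
    # each pixel position, the darkest value seen across all frames.
    # Specular glare saturates toward white, so the darkest sample is
    # the one least likely to be blown out.
    rows, cols = len(frames[0]), len(frames[0][0])
    return [[min(frame[r][c] for frame in frames) for c in range(cols)]
            for r in range(rows)]

a = [[255, 40], [42, 255]]   # glare in two corners
b = [[41, 255], [255, 39]]   # glare in the other two corners
stitch_least_glare([a, b])   # → [[41, 40], [42, 39]]
```

Note that a minimum filter of this kind would also deepen shadows, which is one reason an implementation might instead, as described above, simply submit the best few whole frames without stitching.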
Once one or more still images have been captured, authentication operations may be performed. For example, referring back to
In some implementations, the application of artificial transformations to produce artificial appearances of the ID in the image preview window of a user computing device 102 can be used as a liveness detection feature for document authentication. For example, two or more different artificial transformations can be applied to the real-time image at different times to prompt the user to move the document in various ways as discussed above. The "liveness" of the document images can be detected by capturing several still images as the user presumably moves the document. The liveness of the images can be verified by detecting the different orientations of the ID document 104 depicted in the images. That is, the images will capture a "live" user's movement of the ID document 104. Either the images capturing the movement, or data indicating a determination of "liveness," can be sent to the authentication server 108 for evaluation during document authentication as proof of "liveness."
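As a minimal sketch of this liveness check, suppose the document-edge angle is measured in each captured still image; a static reproduction replayed to the camera yields near-identical angles, while a live user following the transformation prompts produces spread. The function name and the 5-degree threshold are assumptions for the example.

```python
def shows_liveness(angles_deg, min_spread=5.0):
    # Angles of the detected document edge across successive captures.
    # A flat sequence suggests a static image; variation suggests a
    # live user moving the document in response to the prompts.
    return max(angles_deg) - min(angles_deg) >= min_spread

shows_liveness([0.2, 0.1, 0.3])    # → False: the document never moved
shows_liveness([0.0, 6.5, 12.1])   # → True: the user tilted the document
```

A fuller check could also verify that the observed movement matches the direction the artificial transformation solicited, which is harder to replay.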
In some implementations, the user computing device 102 can be configured to detect a document security feature in the images of the ID document 104 and apply the artificial transformation in response to detecting the security feature. Document security features can include, but are not limited to, a hologram, watermark, laser engraving, embossing, or a combination thereof. For example, the user computing device 102 can detect a security feature on the ID document 104 using image processing techniques such as edge detection, contrast detection, or object recognition. The user computing device 102 can then begin applying an artificial transformation to the real-time image of the ID document 104 in response to detecting the security feature. For example, some security features may be enhanced by light reflections, so the artificial transformations can be used to prompt the user 106 to manipulate the ID document 104 in a manner that accentuates the security feature. The user computing device 102 can capture one or more still images of the ID document 104 in a position that accentuates the security feature to aid with the authentication of the ID document 104.
The process 800 includes obtaining a real-time video stream of a document (810). For example, computing device 102 can obtain a video stream from an image capture device (e.g., a camera) that is coupled to the computing device 102. The video stream can include a series of images of a document (e.g., an identification document). For example, the image capture device can capture images at a predefined frame rate (e.g., 15-120 fps).
The process 800 includes detecting lighting artifacts and/or document security feature(s) in images of the document (820). For example, the computing device 102 can detect lighting artifacts in the images of the video stream using image processing techniques such as edge or contrast detection. For example, the computing device 102 can detect the shape of the glare by detecting sharp differences in contrast between nearby pixels. As another example, the computing device 102 can detect a security feature on the ID document 104 using image processing techniques such as edge detection, contrast detection, or object recognition. Document security features can include, but are not limited to, a hologram, watermark, laser engraving, embossing, or a combination thereof.
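The glare-detection step can be sketched with a cheap saturation test: flag a frame when an unusually large fraction of its pixels is near the top of the brightness range. The thresholds below are illustrative assumptions; as noted above, real detectors would also use edge or contrast analysis to localize the artifact.

```python
def detect_glare(image, bright=250, min_fraction=0.02):
    # `image` is a grayscale frame as a list of rows of 0-255 values.
    # Count near-saturated pixels as a proxy for specular glare and
    # flag the frame if they exceed a small fraction of the image.
    total = sum(len(row) for row in image)
    saturated = sum(1 for row in image for p in row if p >= bright)
    return saturated / total >= min_fraction

frame = [[120, 130, 255, 255],
         [118, 255, 255, 124]]
detect_glare(frame)   # → True: 4 of 8 pixels are saturated
```

In step 830 this boolean (or the location of the saturated region) would drive the choice of artificial transformation to apply.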
The process 800 includes applying a transformation to images in the video stream to cause the document to appear as if it was captured from a point of view that is different from the actual point of view depicted in the images of the video stream (830). For example, the computing device 102 can apply an image processing filter to images of the video stream that causes the document in the images to appear distorted (e.g., translated, skewed, rotated, etc.). The transformation can alter the document in the image in such a way as to cause the user to manipulate the position of the document relative to the lens of the image capture device in a manner that attempts to correct the apparent distortion. In so doing, the transformation seeks to prompt the user to move the document in a manner that also changes the reflected light on the document that produces the glare, in an attempt to reduce or eliminate the glare. In some implementations (e.g., in which a security feature is detected), a particular transformation can be selected that prompts the user to manipulate the document in a manner that increases glare on part of the document so as to amplify an effect of the security feature.
The process 800 includes providing the transformed images as a transformed video stream for display in an image preview window (840). For example, the computing device 102 presents the transformed images of the document, rather than the actual captured images, for display in an image preview window. Consequently, while the document may actually be held square to the lens of the image capture device, the preview window will display the transformed image of the document, giving the appearance that the document is being held in a different orientation relative to the lens of the image capture device.
The process 800 includes capturing at least one of the images from the video stream (850). For example, computing device 102 can capture one of the original (e.g., non-transformed) images of the video stream. For example, computing device 102 can capture the non-transformed image after the user has moved the document to compensate for the transformation applied to the images and displayed in a preview window.
The process 800 includes sending at least one of the images from the video stream to a document authentication system (860). For example, the computing device 102 can send one or more of the actual (e.g., un-transformed) images to a document authentication server to have the authenticity of the document verified. The authentication server can then provide authentication data to the computing device 102 that indicates whether the document in the image(s) is authentic or fraudulent.
In some implementations, step 820, step 860, or both are optional. For example, process 800 can be performed before a lighting artifact is detected or without performing a lighting artifact detection step. As another example, process 800 can be performed without sending an image to a document authentication server. For example, the computing device 102 can store one or more images from the video stream in local memory or send the images to a data storage server (e.g., a cloud server).
The computing device 900 includes a processor 902, a memory 904, a storage device 906, a high-speed interface 908 connecting to the memory 904 and multiple high-speed expansion ports 910, and a low-speed interface 912 connecting to a low-speed expansion port 914 and the storage device 906. In some examples, the computing device 900 includes a camera 926. Each of the processor 902, the memory 904, the storage device 906, the high-speed interface 908, the high-speed expansion ports 910, and the low-speed interface 912, are interconnected using various busses, and can be mounted on a common motherboard or in other manners as appropriate. The processor 902 can process instructions for execution within the computing device 900, including instructions stored in the memory 904 or on the storage device 906 to display graphical information for a graphical user interface (GUI) on an external input/output device, such as a display 916 coupled to the high-speed interface 908. In other implementations, multiple processors and/or multiple buses can be used, as appropriate, along with multiple memories and types of memory. Also, multiple computing devices can be connected, with each device providing portions of the necessary operations (e.g., as a server bank, a group of blade servers, or a multi-processor system).
The memory 904 stores information within the computing device 900. In some implementations, the memory 904 is a volatile memory unit or units. In some implementations, the memory 904 is a non-volatile memory unit or units. The memory 904 can also be another form of computer-readable medium, such as a magnetic or optical disk.
The storage device 906 is capable of providing mass storage for the computing device 900. In some implementations, the storage device 906 can be or contain a computer-readable medium, such as a floppy disk device, a hard disk device, an optical disk device, a tape device, a flash memory or other similar solid state memory device, or an array of devices, including devices in a storage area network or other configurations. Instructions can be stored in an information carrier. The instructions, when executed by one or more processing devices (for example, processor 902) perform one or more methods, such as those described above. The instructions can also be stored by one or more storage devices such as computer- or machine-readable mediums (for example, the memory 904, the storage device 906, or memory on the processor 902).
The high-speed interface 908 manages bandwidth-intensive operations for the computing device 900, while the low-speed interface 912 manages lower bandwidth-intensive operations. Such allocation of functions is an example only. In some implementations, the high-speed interface 908 is coupled to the memory 904, the display 916 (e.g., through a graphics processor or accelerator), and to the high-speed expansion ports 910, which may accept various expansion cards (not shown). In the implementation, the low-speed interface 912 is coupled to the storage device 906 and the low-speed expansion port 914. The low-speed expansion port 914, which may include various communication ports (e.g., USB, Bluetooth, Ethernet, wireless Ethernet) can be coupled to one or more input/output devices, such as a keyboard, a pointing device, a scanner, a camera (e.g., a web camera), or a networking device such as a switch or router, e.g., through a network adapter.
The computing device 900 can be implemented in a number of different forms, as shown in the figure. For example, it can be implemented in a personal computer such as a laptop computer 920. It can also be implemented as a tablet computer 922 or a desktop computer 924. Alternatively, components from the computing device 900 can be combined with other components in a mobile device, such as a mobile computing device 950. Each type of such devices can contain one or more of the computing device 900 and the mobile computing device 950, and an entire system can be made up of multiple computing devices communicating with each other.
The mobile computing device 950 includes a processor 952, a memory 964, an input/output device such as a display 954, a communication interface 966, a transceiver 968, and a camera 976, among other components. The mobile computing device 950 can also be provided with a storage device, such as a micro-drive or other device, to provide additional storage. Each of the processor 952, the memory 964, the display 954, the communication interface 966, and the transceiver 968, are interconnected using various buses, and several of the components can be mounted on a common motherboard or in other manners as appropriate.
The processor 952 can execute instructions within the mobile computing device 950, including instructions stored in the memory 964. The processor 952 can be implemented as a chipset of chips that include separate and multiple analog and digital processors. The processor 952 can provide, for example, for coordination of the other components of the mobile computing device 950, such as control of user interfaces, applications run by the mobile computing device 950, and wireless communication by the mobile computing device 950.
The processor 952 can communicate with a user through a control interface 958 and a display interface 956 coupled to the display 954. The display 954 can be, for example, a TFT (Thin-Film-Transistor Liquid Crystal Display) display or an OLED (Organic Light Emitting Diode) display, or other appropriate display technology. The display interface 956 can include appropriate circuitry for driving the display 954 to present graphical and other information to a user. The control interface 958 can receive commands from a user and convert them for submission to the processor 952. In addition, an external interface 962 can provide communication with the processor 952, so as to enable near area communication of the mobile computing device 950 with other devices. The external interface 962 can provide, for example, for wired communication in some implementations, or for wireless communication in other implementations, and multiple interfaces can also be used.
The memory 964 stores information within the mobile computing device 950. The memory 964 can be implemented as one or more of a computer-readable medium or media, a volatile memory unit or units, or a non-volatile memory unit or units. An expansion memory 974 can also be provided and connected to the mobile computing device 950 through an expansion interface 972, which may include, for example, a SIMM (Single In-Line Memory Module) card interface. The expansion memory 974 may provide extra storage space for the mobile computing device 950, or may also store applications or other information for the mobile computing device 950. Specifically, the expansion memory 974 can include instructions to carry out or supplement the processes described above, and can include secure information also. Thus, for example, the expansion memory 974 can be provided as a security module for the mobile computing device 950, and can be programmed with instructions that permit secure use of the mobile computing device 950. In addition, secure applications can be provided via the SIMM cards, along with additional information, such as placing identifying information on the SIMM card in a non-hackable manner.
The memory can include, for example, flash memory and/or NVRAM memory (non-volatile random access memory), as discussed below. In some implementations, instructions are stored in an information carrier such that the instructions, when executed by one or more processing devices (for example, processor 952), perform one or more methods, such as those described above. The instructions can also be stored by one or more storage devices, such as one or more computer- or machine-readable mediums (for example, the memory 964, the expansion memory 974, or memory on the processor 952). In some implementations, the instructions can be received in a propagated signal, for example, over the transceiver 968 or the external interface 962.
The mobile computing device 950 can communicate wirelessly through the communication interface 966, which can include digital signal processing circuitry where necessary. The communication interface 966 can provide for communications under various modes or protocols, such as GSM voice calls (Global System for Mobile communications), SMS (Short Message Service), EMS (Enhanced Messaging Service), or MMS messaging (Multimedia Messaging Service), CDMA (Code Division Multiple Access), TDMA (Time Division Multiple Access), PDC (Personal Digital Cellular), WCDMA (Wideband Code Division Multiple Access), CDMA2000, or GPRS (General Packet Radio Service), among others. Such communication can occur, for example, through the transceiver 968 using a radio frequency. In addition, short-range communication can occur, such as using a Bluetooth, WiFi, or other such transceiver (not shown). In addition, a GPS (Global Positioning System) receiver module 970 can provide additional navigation- and location-related wireless data to the mobile computing device 950, which can be used as appropriate by applications running on the mobile computing device 950.
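Purely as an illustrative sketch (selection among such protocols is performed by baseband firmware, not application code), choosing a mutually supported protocol from the families listed above can be modeled as a preference-ordered lookup; all names here are hypothetical:

```python
# Toy sketch: pick the most capable protocol supported by both the
# device and the network, from a preference-ordered list (a subset of
# the protocol families named in the text).
PREFERRED = ["WCDMA", "CDMA2000", "GPRS", "GSM"]

def select_protocol(device_supported, network_available):
    """Return the first mutually supported protocol in preference order, or None."""
    device = set(device_supported)
    network = set(network_available)
    for proto in PREFERRED:
        if proto in device and proto in network:
            return proto
    return None

print(select_protocol(["GSM", "GPRS", "WCDMA"], ["GPRS", "GSM"]))  # GPRS
```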
The mobile computing device 950 can also communicate audibly using an audio codec 960, which can receive spoken information from a user and convert it to usable digital information. The audio codec 960 can likewise generate audible sound for a user, such as through a speaker, e.g., in a handset of the mobile computing device 950. Such sound can include sound from voice telephone calls, may include recorded sound (e.g., voice messages, music files, etc.) and may also include sound generated by applications operating on the mobile computing device 950.
The mobile computing device 950 can be implemented in a number of different forms, as shown in the figure. For example, it can be implemented as a cellular telephone 980. The mobile computing device 950 can also be implemented as part of a smart-phone 982, tablet computer, personal digital assistant, or other similar mobile device.
While this specification contains many specifics, these should not be construed as limitations on the scope of the disclosure or of what may be claimed, but rather as descriptions of features specific to particular implementations. Certain features that are described in this specification in the context of separate implementations may also be implemented in combination in a single implementation. Conversely, various features that are described in the context of a single implementation may also be implemented in multiple implementations separately or in any suitable sub-combination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination may in some examples be excised from the combination, and the claimed combination may be directed to a sub-combination or variation of a sub-combination.
Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing may be advantageous. Moreover, the separation of various system components in the implementations described above should not be understood as requiring such separation in all implementations, and it should be understood that the described program components and systems may generally be integrated together in a single software product or packaged into multiple software products.
A number of implementations have been described. Nevertheless, it will be understood that various modifications may be made without departing from the spirit and scope of the disclosure. For example, various forms of the flows shown above may be used, with steps re-ordered, added, or removed. Accordingly, other implementations are within the scope of the following claims.
Although the present disclosure is described in the context of capturing images of documents, and specifically ID documents, the techniques, systems, and devices described herein can be applicable in other contexts as well. For example, the techniques, systems, and devices described herein may be used for capturing digital images of other types of documents, such as bank checks, printed photographs, etc.
This application claims the benefit of the filing date of U.S. Provisional Application No. 62/611,993, filed on Dec. 29, 2017, the contents of which are incorporated herein by reference in their entirety.
Number | Date | Country
---|---|---
62611993 | Dec 2017 | US