Coordinated illumination and image signal capture for enhanced signal detection

Information

  • Patent Grant
  • Patent Number
    11,386,281
  • Date Filed
    Monday, July 13, 2020
  • Date Issued
    Tuesday, July 12, 2022
Abstract
Signal detection and recognition employs coordinated illumination and capture of images to facilitate extraction of a signal of interest. Pulsed illumination of different colors facilitates extraction of signals from color channels, as well as improved signal to noise ratio by combining signals of different color channels. The successive pulsing of different color illumination appears white to the user, yet facilitates signal detection, even for lower cost monochrome sensors, as in barcode scanning and other automatic identification equipment.
Description
TECHNICAL FIELD

The invention relates to image signal capture and processing, as this processing is used in conjunction with associated image-based signal encoding and decoding, image recognition and object recognition.


BACKGROUND AND SUMMARY

Conventional barcode scanners use single-color illumination from a red light emitting diode (LED). This minimizes the cost of the scanner but limits the range of colors that can be used in the printed barcode and detected with the scanner. A similar problem occurs when using the scanner to read digital watermarks.


While conventional barcode scanners typically use red illumination, newer barcode scanners are increasingly moving to white LEDs as the illumination for their sensors, as opposed to the traditional red LED illumination. The rationale behind this change is that red illumination can be more stressful on the eyes when used for long periods. Red is also more distracting because it does not blend in with the natural ambient light in the room. However, to save on costs, scanner manufacturers retain a monochrome sensor on their scanning devices. The combination of a monochrome sensor and white-only illumination means that the scanner can detect only luminance changes, as opposed to the chrominance changes (such as changes in the blue color direction of the chrominance plane) that can be detected when using red illumination.


Thus, when implementing image signal coding in scanning hardware with white illumination and a monochrome sensor, the use of color to convey data signals is more limited, as the monochrome sensor will only capture luminance changes under white illumination. For digital watermarking applications where imperceptibility is important, digital watermarks encoded by modulating luminance tend not to be acceptable as they can be more visible than those encoded by modulating chrominance. Where imperceptibility of the data signal encoded in the image is more important, it is preferred to encode the data signal by modulating one or more colors in a chrominance plane. Even where limited to encoding in luminance, color values in an image can be modulated so as to impart a signal in luminance. These color values have luminance and chrominance components, and luminance is modulated by scaling a color vector to increase or decrease its luminance component. See, for example, U.S. Pat. No. 6,590,996, where color values of image signals are adaptively modulated to have reduced visibility yet yield detectable modulation of luminance.


By increasing the number of color LEDs used in image capture, a greater range of printed colors can be used in printed barcodes or digital watermarks. This has the added benefit of enabling encoding of auxiliary data signals in a chrominance plane. In particular, to reduce visibility of digital watermarks in host images, digital watermark encoding is performed in one or more colors within a chrominance plane (also called color planes, color channels or color directions). For example, one such approach modulates a host image primarily in the cyan channel to greatly reduce the visibility of the embedded watermark. Further, encoding signals in multiple chrominance channels provides additional benefits in imperceptibility, robustness and detection. An example is encoding out-of-phase signals in at least two chrominance channels. In CMYK printing, for example, changes for encoding digital signals are introduced in the cyan and magenta ink channels, and these changes are detected in the red and green channels. Cover image content is reduced by subtracting the chrominance channels in a detector. See U.S. Pat. No. 8,199,969, and US Patent Application Publication 20100150434, which are hereby incorporated by reference. In these types of techniques, the use of color LEDs in the scanner enables the watermark signals to be extracted and combined from two or more chrominance channels.


The addition of illumination in other wavelengths enables scanners to be used to read still further types of signals. For example, a near infra-red (NIR) LED could be added to read signals encoded in the K channel in objects printed with CMYK printers. This allows the scanning equipment to exploit out-of-phase encoding in which one of the signals is encoded in the K channel and an out-of-phase signal is encoded in an opposite direction by scaling luminance of the CMY channels to offset the change in luminance in the K channel. This out-of-phase encoding reduces visibility as the luminance changes encoded in the K channel are offset by the luminance changes in the CMY channels. CMY inks are transparent to NIR, so the digital watermark is read from the K channel by capturing the image under illumination of the NIR LED. See, for example, U.S. Pat. Nos. 6,721,440 and 6,763,123, which are hereby incorporated by reference.


Scanners that use white illumination and a monochrome sensor normally will not be able to detect signals encoded in these other channels. Instead, only encoding in luminance is detectable. This may be suitable for some applications. However, where the data signaling is preferably implemented to minimize visible changes to the host image, luminance watermarking tends to be inferior to chrominance watermarking. From the standpoint of the sensitivity of the human visual system, changes to certain colors in chrominance channels are less noticeable to humans than changes in luminance.


In order to detect chrominance signals with white illumination, manufacturers need to update their scanners with a color sensor or some other means to separate color components of the captured image. For lower cost scanners, a full color video sensor adds cost to the scanner and triples the bandwidth of data from the sensor to the detector (e.g., every frame typically consists of 3 or more components (such as RGB), as opposed to a single component in monochrome sensors).


To provide a broader range of signal capture, one solution is to have a series of different wavelength light sources (e.g., LEDs) that are synchronized to the capture of frames by a monochrome sensor. This allows frames to be illuminated by a single wavelength. For some types of image based data codes, like digital watermarks that are repeated across the surface of a printed object, it is sufficient to illuminate a chrominance based watermark signal for a portion of a frame, as the data signal is fully recoverable from a portion of the frame. If these light sources are flashed quickly enough, they give the user the illusion of white illumination. When the combination of different wavelength light sources is flashed fast enough (e.g., 200 Hz or more), the illumination appears white with no visible flashing or blinking perceived by the user. This type of controlled lighting can be used in combination with a monochrome sensor and yet capture chrominance information to detect or recognize signals in chrominance channels. As long as acquisition time can be kept short, the periods for illuminating the sources of different wavelengths can be configured to sync to multiples of the video rate. Various examples of configurations of lighting and capture are provided below.
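

As a rough illustration of these timing relationships, the following sketch (a simplification with assumed parameter names, not a description of any particular scanner) relates the color-switching rate to the video frame rate and checks whether the cycling is fast enough to appear white:

    WHITE_THRESHOLD_HZ = 200.0  # switching rate at which cycling colors appear white to a viewer

    def illumination_slots(switch_rate_hz: float, frame_rate_hz: float):
        """Relate the color-switching rate to video frame timing.

        Returns the duration of each single-color illumination slot, how many
        slots fall within one frame period, and whether the switching is fast
        enough that the mixed colors appear white rather than flickering.
        """
        slot_s = 1.0 / switch_rate_hz
        slots_per_frame = switch_rate_hz / frame_rate_hz
        appears_white = switch_rate_hz >= WHITE_THRESHOLD_HZ
        return slot_s, slots_per_frame, appears_white

    # Example: 240 Hz switching with 60 frames per second gives 4 color slots per frame.
    print(illumination_slots(240.0, 60.0))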


While the above discussion primarily provides examples of digital watermark and barcode signaling, the techniques can be applied to other forms of image based coding and scanning of such coding from objects. Further, the techniques also apply to signal recognition in visual media, such as pattern recognition, computer vision, image recognition and video recognition. Various objects, such as goods or packaging for them, may be created so as to be composed of color combinations and/or include various patterns that constitute signals for which these techniques offer enhanced recognition capability. Objects can, for example, be discriminated from background clutter. Likewise, logos can be discriminated from other package or label image content.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is a diagram illustrating pulsed illumination and capture of image signals in a color sensor enabling decoding or recognition of signals from chrominance channels.



FIG. 2 is a diagram illustrating pulsed illumination and capture of image signals in a monochrome sensor enabling decoding or recognition of signals from chrominance channels.



FIG. 3 is another diagram illustrating pulsed illumination and capture of image signals, where illumination in different colors are interleaved in time and coordinated with image capture timing.



FIG. 4 is a diagram illustrating a device configuration for coordinated illumination and image capture under control of a processor.



FIG. 5 is a diagram illustrating a device configuration for coordinated illumination and image capture in which light source and sensor communicate signals to each other.



FIG. 6 is a flow diagram illustrating a method for processing of image signals to prepare for data extraction or recognition operations, after coordinated illumination and capture of the image signals.





DETAILED DESCRIPTION


FIG. 1 is a diagram illustrating pulsed illumination and capture of image signals in a sensor to enable decoding of signals from chrominance channels. In this example, pulsed red and green colored illumination is synchronized with a video color sensor. For example, the top two waveforms illustrate pulsed red and green illumination, respectively. This allows red and green color planes to be read out from the video color sensor.


When a package is moved past the capture device, the captured image has low motion blur since a short illumination exposure time is used. The red frame and green frame are synchronized in time, so a signal encoded in chrominance can be detected by subtracting the two color planes. See U.S. Pat. No. 8,199,969, and US Patent Application Publication 20100150434, incorporated above. See also, US Patent Application Publications 20110212717 and 20130195273, and application Ser. No. 13/888,939 by John Lord et al., describing various arrangements for capturing images under different illumination and combining the images to enhance signal detection and recognition, including for the specific case of out-of-phase signals disclosed in 20100150434.


This maximizes the watermark signal and minimizes interference due to the cover image.

wmConv = redFrame − greenFrame   (equation 1)


A barcode could be detected by adding the color planes as follows:

barcodeConv = redFrame + greenFrame   (equation 2)
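

The two combinations above can be expressed compactly in array form. The following sketch (using numpy, with the assumption that the red and green frames are captured as aligned arrays of equal size) implements equations 1 and 2:

    import numpy as np

    def combine_frames(red_frame, green_frame):
        """Combine aligned red and green frames captured under pulsed illumination.

        Subtraction (equation 1) boosts an out-of-phase chrominance watermark
        and suppresses the cover image; addition (equation 2) boosts a signal,
        such as a barcode, that is common to both channels.
        """
        red = red_frame.astype(np.float32)
        green = green_frame.astype(np.float32)
        wm_conv = red - green       # equation 1
        barcode_conv = red + green  # equation 2
        return wm_conv, barcode_conv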


Different variants of this basic design are possible, to get the best compromise between sampling rate and cost using color. To minimize cost and increase sampling rate, a monochrome sensor could be used instead of a color sensor as shown in FIG. 2.



FIG. 2 is a diagram illustrating pulsed illumination and capture of image signals in a monochrome sensor enabling decoding of signals from chrominance channels. One approach to getting color capability with a monochrome sensor is to capture distinct image signals during separate time periods when different color sources are illuminated. This generates image signals comprising at least partial frames, each in a different color of the illumination. A signal detector then reads the data signal encoded in these distinct images (e.g., using techniques of US 20100150434). For example, the monochrome sensor of FIG. 2 captures a red frame followed by a green frame of an object in which a digital watermark is encoded, out-of-phase, in two chrominance channels. The pulse shown with a short dashed line represents red illumination, for example, while the pulse shown with a longer dashed line represents green illumination. If the frames are aligned, then the two planes can be subtracted to reduce image interference and increase signal strength, as in equation 1.


A number of other combinations are possible. The following provides some additional examples.


Method 2A: 3 LEDs—Red, Blue, Green

In this embodiment, the imaging device has red, blue and green LEDs and a controller configured to flash (i.e., turn on and off) LEDs fast enough to appear white and capture one frame at each color. We have observed that flashing at 200 Hz is fast enough to appear white, and thus is satisfactory to avoid annoying the user of the device. This imaging device operation provides image signals from which a detector decodes signals encoded in chrominance channels better than images provided by scanners with just red illumination. The reason for this is that the signal encoded in the object in any color channel is detectable in the resulting image captured of the object under this illumination. For instance, this approach allows the digital watermark signal to be measured in magenta and yellow ink as well as cyan.
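

A minimal control-loop sketch of Method 2A follows. The leds.set and camera.trigger calls are hypothetical placeholders for the device's actual LED driver and sensor interfaces; the point is only the timing relationship, in which the three colors are cycled quickly (about 200 Hz or more) while one frame is captured under each color:

    import time

    COLOR_CYCLE = ("red", "green", "blue")
    SWITCH_RATE_HZ = 200.0           # per-color switching rate; fast enough to appear white
    SLOT_S = 1.0 / SWITCH_RATE_HZ    # time allotted to each color

    def capture_one_frame_per_color(leds, camera):
        """Flash R, G and B LEDs in succession and capture one frame under each.

        `leds.set(color, on)` and `camera.trigger()` stand in for the real
        driver calls; exposure is assumed to fit within a single color slot.
        """
        frames = {}
        for color in COLOR_CYCLE:
            leds.set(color, True)             # only this color illuminates the object
            frames[color] = camera.trigger()  # expose a frame within the color slot
            leds.set(color, False)
            time.sleep(SLOT_S)                # keep the overall cycle near the switch rate
        return frames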


Method 2B: 3 LEDs—Red, Blue, Green

In this approach, the controller of the imaging device is configured to turn off blue and green for a short period, leaving red on, while an image is captured. This makes the illumination appear as constant white to the user. It has almost identical performance to red-only detection.


Method 2C: 2 LEDs—One White, One Red

In this approach, the controller of the imaging device is configured to turn off the white illumination only when a frame is to be captured by the sensor. This makes the illumination appear as constant white to the user. It has almost identical performance to red-only illumination.


Another example is to interleave the color illumination on a shorter time scale as shown in FIG. 3. FIG. 3 shows pulsed color illumination and capture with a monochrome sensor. The red illumination and capture are illustrated with single-hatching, and the green illumination and capture are illustrated with cross-hatching. This approach allows the red and green planes to be aligned, since they are captured at the same time while using the monochrome sensor.


In this example, the interleaving of two color LEDs at a high temporal frequency minimizes the spatial shift of a moving object captured by the sensor. The implementer should preferably choose two of the three colors available in the print with maximum magnitude signal but opposite polarity. For example, red and green planes are subtracted to reduce interference due to the cover image. Red and green LEDs will illuminate an object so that a digital watermark detector extracts the digital watermark in the red and green color planes captured of an object. This digital watermark is embedded in tweaks of cyan and magenta ink, respectively. The magenta tweak is made in the opposite direction to red to minimize visibility of the digital watermark in the printed image. Cyan ink absorbs red illumination, magenta ink absorbs green illumination, and yellow ink absorbs blue illumination. Thus, in each of these pairs of illumination and colors, changes made to encode a signal in the color are readable in images captured with the illumination that the corresponding ink color absorbs.


Ink and printing technology is just one way of applying image information to an object. The same concept applies to other materials bearing pigments, and methods for constructing objects composed of these materials, such as molding, deposition, etching, engraving, laminating layers, etc.


This concept may be extended in various combinations using the absorbance, reflectance and transmittance properties of certain colors and materials in response to illumination sources. By coordinating illumination, filtering, and sensors that operate in particular wavelengths (frequency bands), the image device is configured to detect signals in particular wavelengths (frequency bands), and not others. Objects are constructed to include materials that absorb or reflect light in wavelengths in which signals are conveyed. The concept also applies to fluorescence of materials in response to radiation in certain wavelengths. In this case, illumination sources are selected to cause emission of light bearing signals in wavelengths captured by the sensor. In all of these embodiments, these signals may or may not be visible to the human under normal, ambient lighting conditions, yet are visible to the image device through coordinated capture and combination of channels, as appropriate, to discriminate the signal being sought.


Coordinated capture under these types of configurations also enables the discrimination of signals. In particular, post processing of the image channels amplifies desired signals and removes or suppresses unwanted signals. As explained, subtraction of different image frames under varying capture conditions of the same target reduces highly correlated components and amplifies the out-of-phase components. For example, highly correlated components in video frames captured with object scanners are the background components or static components, whereas a moving foreground object is retained. In addition to lighting paired with color channels, the motion of some elements in relation to the lack of motion of other elements in the image provides another signal discriminator. The weighted addition and subtraction of signal elements under these varying capture conditions discriminates desired from un-desired signals.
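

As one concrete illustration of such post processing (a simplified sketch, not the specific detector used in any incorporated reference), a running average of prior frames can model the static, highly correlated background, and subtracting it from each new frame retains the moving foreground:

    import numpy as np

    def suppress_static_background(frames, alpha=0.1):
        """Remove highly correlated (static) content from a sequence of frames.

        A running average estimates the static background; the absolute
        difference from each new frame keeps moving foreground content, which
        can then be passed to barcode, watermark or recognition stages.
        """
        background = frames[0].astype(np.float32)
        foregrounds = []
        for frame in frames[1:]:
            frame = frame.astype(np.float32)
            foregrounds.append(np.abs(frame - background))
            background = (1.0 - alpha) * background + alpha * frame
        return foregrounds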


Addition of different frames under varying conditions amplifies common components. The selection of the varying conditions provides a means to limit each of the separate frames to capture of particular channels of information, thereby filtering out unwanted image signals. Then, the combination of these remaining channels amplifies the desired signals, such as bar codes, watermarks or objects for object recognition operations.


Within the broader realm of reflectance, absorbance and transmittance, another attribute to exploit in coordinated capture is the transparency of materials to certain wavelengths of light, including infrared or UV wavelengths. Transparency of material to radiation in a range of wavelengths is measured in terms of its transmittance of that radiation. Materials with higher transmittance to radiation are transparent to it. Some inks or plastics are IR transparent, and others not (e.g., ABS plastic is completely IR transparent, even black ABS plastic). Also, certain colored inks are IR transparent. Thus, in various embodiments, a barcode or digital watermark signal is printed or otherwise formed in a material layer hidden underneath a printed layer (such as a layer of an object printed in black ink underneath an overlaid image printed with colored pigmented inks or materials in 3D printing), or inside the casing or plastic packaging of an object.


This technique of placing signals within layers of an object works well with 3D printing. To facilitate reading, the layer with an image based code signal is printed in a flat layer. One or more layers may then be applied over that layer, such as packaging, protective seals, or layers from a 3D printer providing more complex (non-flat) structure. The signal bearing layer is read by illumination through other layers, even where those layers add complex surface structure, packaging, or additional layers of printed information.


In the latter case, the ability to separately image layers of 2D codes in the depth direction relative to the viewpoint of a reader provides a means of encoding and reading 3D codes, based on separately imaging layers with corresponding capture conditions for each layer.


A signal bearing layer in such a 3D configuration may be a metal pattern or alternative plastic layer under a surface finish which is transparent to the wavelength of radiation (light or any electromagnetic) used to read that layer. A hidden inner pattern, particularly on a flat inner layer, allows for object orientation and pose determination of an object, even where that object has a complex surface pattern bearing other types of information and imagery. The flat layer provides a reference plane, and the pattern on that reference plane enables image processing of images captured of the pattern to compute the object orientation and pose relative to the camera. The reference pattern may be a registration grid or more sophisticated digital watermark synchronization pattern as disclosed in U.S. Pat. No. 6,590,996, incorporated above. Pattern matching and detection techniques may be used to discern the orientation of the reference pattern.


This type of reference pattern facilitates a variety of functions. One function is as a reference pattern for a user interface control device, where a camera captures images of the reference pattern. Image processing in a programmed device or dedicated circuitry calculates the position and orientation of the reference pattern to capture gestural movement of the camera relative to the object, or the object relative to a fixed camera, from frames of images captured by a camera during the movement. A related function is for proximity sensing, as the reference pattern may be used to compute distance of a camera from the object. This reference pattern in the hidden layer may be captured using IR illumination and a camera or stereoscopic cameras, for example.


These types of functions for determining object position, distance and object identity, and related metadata in a machine readable data code layer, may be used in a variety of machine vision applications. Some examples include robotics, where robots navigate around, and manipulate, objects with such hidden identification layers. Additional applications include game console controllers, mobile phone-to-object communication and interaction, interaction between wearable computing devices and objects, and low cost implementation of the Internet of Things in which hidden layer codes of 3D objects provide links to network services through the use of ubiquitous mobile devices with cameras (like smartphones, tablet PCs, wearable computers, etc.).


An IR barcode or watermark scanner can read the hidden layer within an object without any data embedding process required. The scanner illuminates the target with IR illumination, captures images, and extracts the desired signal from those images, optionally employing circuitry or software to produce a weighted combination or subtraction of frames to discriminate the target signal from unwanted signals.


In other embodiments, a pulsed light source cycles R, G, B and then RGB together. The net effect is white to the user, but with image sensor synchronization, each color is extracted separately from the stream of image frames obtained from the scanner.


Similarly, in related embodiments, the pulsed illumination is generated with partial power and full power for each color, with, for example, LED powers alternating between 1.0*R+0.5*G then 0.5*R+1.0*G.


Subtracting the frames still yields color information:

2*(1.0*R + 0.5*G) − (0.5*R + 1.0*G) = 1.5*R
−(1.0*R + 0.5*G) + 2*(0.5*R + 1.0*G) = 1.5*G
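

A short numeric check of these weighted combinations, with small arrays standing in for whole frames, confirms that the pure red and green responses are recovered:

    import numpy as np

    # Toy per-pixel reflectance values for the red and green channels.
    R = np.array([0.8, 0.2, 0.5])
    G = np.array([0.1, 0.9, 0.4])

    frame_a = 1.0 * R + 0.5 * G   # captured under 1.0*R + 0.5*G illumination
    frame_b = 0.5 * R + 1.0 * G   # captured under 0.5*R + 1.0*G illumination

    red_only = 2 * frame_a - frame_b      # equals 1.5 * R
    green_only = -frame_a + 2 * frame_b   # equals 1.5 * G

    assert np.allclose(red_only, 1.5 * R)
    assert np.allclose(green_only, 1.5 * G)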


In other embodiments, NIR and IR illumination sources are added and modulated in addition to a steady white light source. A typical silicon CMOS or CCD monochrome camera (or color camera) is sensitive to NIR/IR, thus allowing IR responsive materials to be detected in the image signals produced by the camera. In one arrangement, the IR blocking filter is removed from, or not installed in front of, the camera for this application.


Also, IR can often be observed in the blue and red channels of images captured with low-cost color CMOS cameras. In particular, IR energy is captured as blue and/or red in the sensor and manifested in the blue and/or red channels. For instance, a black object that is providing IR energy to the sensor will appear black to humans, but appear purple in the image of the object captured by the sensor, in which the IR energy is captured, at least in part, by the blue and red sensors. This IR response of "normal" color cameras also proves useful for extraction of IR illuminated image information without needing special camera devices (i.e., sensors designed particularly for IR). The effect is due to the IR-blocking filter not extending far enough to longer wavelengths, and the individual pixel R, G, and B bandpass filters not having IR blocking response (or having limited blocking).


By exploiting these attributes of certain color sensor arrangements, one can construct embodiments of devices and associated image processing circuitry and software that can capture and detect signals in IR layers without a dedicated IR sensor. The above example of a layer of an IR illuminated material that is behind another layer with high IR transmittance can be implemented using pulsed IR illumination. Alternating frames, captured with and without IR illumination, may be subtracted to discriminate the IR based signal.
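

A minimal sketch of that last step, assuming alternating frames have already been captured with the IR source pulsed on and then off:

    import numpy as np

    def isolate_ir_signal(frame_ir_on, frame_ir_off):
        """Subtract an IR-off frame from an IR-on frame of the same scene.

        Visible and ambient content common to both frames largely cancels,
        leaving the response of the IR illuminated layer for detection.
        """
        return frame_ir_on.astype(np.float32) - frame_ir_off.astype(np.float32)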


Some Si CMOS phone cameras have photo-response outside of the normal visible RGB range. One example is that IR/NIR is not filtered well by the red and blue pixel filters, even when there is the usual IR cut filter in front of the camera die. For example, taking a picture of lit charcoal shows a very strong blue and red signal even though to the Human Visual System (HVS) it appears almost black. As another example, a BBQ lighting chimney, which again is not visibly emitting photons, appears purple due to the IR/NIR energy being captured in the red and blue sensors of the phone camera. This energy is not being filtered by the IR/NIR filtering.


This characteristic is useful for IR/NIR signal capture with suitably selected camera devices, including general purpose phone cameras or object scanners, like barcode scanning equipment. The effect varies by make of Si imaging sensor (and thus devices that include them). Some cameras have better bandpass IR-cut filters, others less-so. The latter enable IR signal reading with the color sensors as described above.


Similarly, the same approaches may be implemented in cameras with some UV bleed through the pixel filters to other color channels. Si sensors typically have poorer response at UV and short UV. However, there are extended UV photodiodes, such as those by Hamamatsu, which are direct photon capture, not using phosphors. Such illumination sources may be used in combination with photon capture sensors to detect UV signals.


Table 1 below provides some examples of imaging device configurations to illustrate approaches compatible with detecting signals in chrominance or IR/UV channels (where corresponding illumination sources are integrated in the design as described above). In the case of white light composed of a mixture of various colors, the imaging device is configured for chroma signal reading using a color filter or a color sensor that enables separation of color channels from the light. Monochrome sensors cannot capture images in separate color channels in this case.


In the case of a single color illumination, like red in barcode scanning devices, the chroma signal may be read from the image captured by a monochrome sensor, as it is a red color plane image.


The selective illumination of different colors enables chroma signal compatibility for either color or monochrome sensors. In the latter case, color planes are segmented temporally, enabling chroma signal reading from these temporally segmented image signals. The temporally segmented images may correspond to frames, portions of frames (e.g., rows), or combinations of frames or sub-parts of frames (e.g., rows illuminated with similar illumination aggregated from different frames) etc. as explained further below, in the case of rolling shutter implementations.












TABLE 1

Illumination            Filter   Sensor         Chroma Signal Compatibility
White                   Red      Monochrome     Yes
White                   None     Color Sensor   Yes
Red                     None     Monochrome     Yes
Pulsed Colors           None     Monochrome     Yes, with control to capture image of object
(temporally separate)                           illuminated under color or colors
                                                corresponding to chroma signal










FIGS. 4-5 are diagrams of imaging devices illustrating configurations for controlling illumination and image capture. There are a variety of ways to implement a controller for coordinating illumination and capture, and we highlight a few examples of controller configurations here. FIG. 4 is a diagram illustrating a configuration of a device 300 in which the controller may be implemented as instructions executing in a processor 302, such as a CPU or other processor (e.g., DSP, GPU, etc.). Under control of software or firmware instructions, the programmed processor communicates instructions to an image sensor 304 and illumination source 306. The CPU, for example, issues instructions to the camera and illumination source, causing them to turn on and off. In one approach, the processor is programmed to instruct the illumination source to turn on/off different color LEDs as the image sensor captures frames.



FIG. 5 is a diagram illustrating a configuration of another device in which there are direct connections carrying control signals from illumination source to image sensor, and from image sensor to illumination source. These control signals enable the device to turn on/off different illumination sources in coordination with image capture. The control signals may be one-way or two-way, and can originate from either component. For example, the illumination source can provide control signals indicating when it is activating different color LEDs, and the image sensor sets its capture timing to capture image frames or rows of a frame corresponding to the time periods when different color LEDs are illuminated. As another alternative, the image sensor provides control signals indicating its capture timing (e.g., the beginning/end of frame capture), and the illumination source turns on/off different color LEDs so that their illumination periods coincide with that capture timing. Each of the components can also send direct signals to the other, telling it to initiate capture or turn on/off LEDs.


A controller in the configuration of FIG. 5 may also include a programmed processor, as described for FIG. 4, for providing similar control instructions. It has the advantage that the processor can take advantage of the lower level circuitry between the illumination and sensor for carrying out the synchronization signaling without delays in communication between the programmed processor and the lighting and image sensor components.


Alternatively, a controller in a configuration of FIG. 5 may be implemented in digital logic circuitry, such as FPGA, ASIC, etc.


The controller may be integrated into a signal detector or implemented as separate circuitry, software instructions, or a combination.


The signal detector is designed to detect, extract and where applicable, read data from, the particular signal being sought, such as a barcode, digital watermark, pattern or other signal recognition feature set. Like the controller, the signal detector also may be implemented in digital logic circuitry, in software executing on one or more processors, or some combination. Various types of detectors are known and described in the documents incorporated by reference. Thus, we do not elaborate further on their implementation.


The above approaches are illustrated with examples of LEDs. Various types of LEDs may be used. LEDs are most common and offer lower cost options, and can be switched on/off at speeds needed to implement the above approaches. However, other types, such as OLEDs, may be used as well. It is also possible to implement these approaches with other types of light sources, like lasers, multi-spectral lighting, etc.


The image sensors, likewise, can be implemented using a variety of alternative image sensor components. The sensors, for example, may be implemented with 2-dimensional (2D) CCD or CMOS monochrome or color image sensor chips. Also, the above approaches employ video sensors. Timing parameters that impact capture of image frames include the frame rate and shutter speed. The frame rate is the rate at which the sensor produces time-distinct images called frames. The shutter speed is the length of time that a camera's shutter is open.


The above approaches accommodate rolling shutter operation. A rolling shutter is a method of image acquisition in which each frame is recorded by scanning across a 2D frame horizontally or vertically. A rolling shutter is in contrast to a global shutter in which the entire frame is exposed in the same time window. For example, in a rolling shutter, as the 2D image sensor array is being exposed to light reflected from an object (e.g., light from the device's light source), the rolling shutter effect causes light sensed at rows (or columns) of the sensor array to be recorded over different time periods. For example, in the above approaches, if different color light sources are switched over the frame exposure period, different rows receive light from the different illumination sources. This rolling shutter effect can be accommodated by coordinating image capture with a frame rate and shutter control so that a sufficient number of rows of any frame have been illuminated by the desired light source or sources.
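

The sketch below illustrates the bookkeeping involved, under the simplifying assumptions (for illustration only) that rows are read out at evenly spaced times within the frame period and that each row's exposure is short relative to a color slot. It assigns each row to the color that was on when that row was exposed, so that rows illuminated by the same source can later be aggregated:

    def assign_rows_to_colors(num_rows, frame_period_s, led_schedule):
        """Map sensor rows to the LED color active during their exposure.

        `led_schedule` is a list of (start_time_s, color) switch events within
        one frame period, sorted by start time. Rows are assumed to be exposed
        at evenly spaced times across the frame (the rolling shutter effect).
        """
        assignments = []
        for row in range(num_rows):
            t = row * frame_period_s / num_rows   # approximate exposure time of this row
            color = led_schedule[0][1]
            for start_s, c in led_schedule:
                if t >= start_s:
                    color = c
            assignments.append(color)
        return assignments

    # Example: red for the first half of a 1/30 s frame, green for the second half.
    rows = assign_rows_to_colors(8, 1 / 30, [(0.0, "red"), (0.5 / 30, "green")])
    # rows == ['red', 'red', 'red', 'red', 'green', 'green', 'green', 'green']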


For more information on illumination and capture in rolling shutter modes and associated signal detectors, please see our application Ser. No. 13/888,939 and US Patent Application Publication 20130195273, which are incorporated by reference above. These documents describe operating LEDs (or other light sources) of differing wavelength ranges in coordination with exposure intervals of sub-parts of video image frames (e.g., rows of each frame exposed under different illumination).


The generic nature of the configurations of FIGS. 4-5 is intentional, as it is anticipated that the coordinated illumination and image capture techniques described in and incorporated into this document will be implemented in a variety of imaging devices or multi-purpose devices with imaging capability. In the former category, examples of imaging devices include barcode scanners (either fixed or handheld), cameras, document scanners, wearable cameras, video equipment, etc. In the latter category, examples of multipurpose devices with imaging capability include smartphones/camera phones, tablet PCs, PCs in various form factors (desktop, laptop, etc.), wearable computers, etc.


Re-capping methods described above, FIG. 6 is a flow diagram illustrating a method for processing of image signals to prepare for data extraction or recognition operations, after coordinated illumination and capture of the image signals. The data extraction and recognition operations use image signals captured of an object in different wavelength ranges (e.g., color channels). The frames or parts of frames captured under different illumination, for example, are obtained (400). These frames may be obtained through a software interface (API) to the image sensors, or may be obtained by digital processing circuitry in communication with the image sensors.


Digital logic circuitry, a software application executing in a programmable processor, or some combination of both, then combines these image frames or frame parts using a function that improves signal detection recovery by boosting the signal of interest relative to the other image content (402). Images of an object captured under similar illumination may be added to increase signal to noise ratio. As described above, out-of-phase signal encoding techniques make opposite adjustments to at least two color channels. By subtracting these color channels, the out-of-phase signals constructively combine, while the other image content is reduced, increasing the signal to noise ratio. The operation represented in equation 1 is an example of this type of function. Where the same signal is present in different color channels, the color channels may be combined to boost the signal, as noted in the example of the barcode above in equation 2.


This type of signal combination is not always required where signal recovery can be achieved successfully from a single color channel. For example, chroma based digital watermarks may be fully decoded from a single color channel, even if out-of-phase encoding is used to reduce visibility of the digital watermark. The out-of-phase relationship of the signals causes them to cancel each other from the perspective of the human visual system.


After these operations, signal extraction or recognition operations proceed (404). These operations include, but are not limited to, digital watermark detection and data extraction (also called watermark detecting or reading), barcode reading, pattern recognition, feature extraction and matching for object or image recognition, etc.


Additional signal gain can be achieved by pulsing light onto an object and capturing frames under illumination, and in the absence of illumination pertinent to the signal being sought. Unwanted background clutter can be removed by subtracting frames. Absence of illumination pertinent to a signal of interest may include a configuration where there is no supplemental illumination provided by the capture device, or where illumination is applied, but in a color that the signal of interest does not have.


The above examples primarily cite printed objects as the target of signal detection. Yet the techniques apply to a wide range of objects, and to the layering of materials within an object. Such layering provides an effective means of marking objects at layers below the outer surface of an object. As such, the techniques may be used to include machine readable signals within packaging, documents (including payment and identity tokens or credentials), consumer goods, etc. 3D printing methods may be used to produce objects with internal layers bearing machine readable codes.


The above techniques also apply to signal detection or recognition of images captured from a display device, such as a video display, TV, monitor or display screen of a mobile phone or tablet PC. In some configurations, the display device may provide controlled illumination to facilitate detection and recognition of signals and objects displayed on the display screen.


Concluding Remarks


Having described and illustrated the principles of the technology with reference to specific implementations, it will be recognized that the technology can be implemented in many other, different, forms. To provide a comprehensive disclosure without unduly lengthening the specification, applicants incorporate by reference the patents and patent applications referenced above.


The methods, processes, and systems described above may be implemented in hardware, software or a combination of hardware and software. For example, the signal processing operations for encoding and associated detector for decoding image based data signals may be implemented as instructions stored in a memory and executed in a programmable computer (including both software and firmware instructions), implemented as digital logic circuitry in a special purpose digital circuit, or combination of instructions executed in one or more processors and digital logic circuit modules. The methods and processes described above may be implemented in programs executed from a system's memory (a computer readable medium, such as an electronic, optical or magnetic storage device). The methods, instructions and circuitry operate on electronic signals, or signals in other electromagnetic forms. These signals further represent physical signals like image signals captured in image sensors, audio captured in audio sensors, as well as other physical signal types captured in sensors for that type. These electromagnetic signal representations are transformed to different states as detailed above to detect signal attributes, perform pattern recognition and matching, encode and decode digital data signals, calculate relative attributes of source signals from different sources, etc.


The particular combinations of elements and features in the above-detailed embodiments are exemplary only; the interchanging and substitution of these teachings with other teachings in this and the incorporated-by-reference patents/applications are also contemplated.

Claims
  • 1. A method for machine readable code detection in images comprising: illuminating an object with pulsed LED illumination, the pulsed LED illumination comprising illumination with plural different LED light sources; obtaining parts of image frames captured under illumination of the plural different LED light sources; and with plural processors, performing digital watermark detection on the parts of image frames, each part corresponding to an image signal captured under different illumination, wherein the digital watermark signal detection comprises detection of a synchronization pattern followed by extraction of data from a part of an image frame based on detection of the synchronization pattern.
  • 2. The method of claim 1 wherein the parts of image frames are captured under pulsed illumination in which two or more different color light sources are selectively illuminated.
  • 3. The method of claim 1 wherein the pulsed LED illumination comprises pulsing of different color light sources in succession at a rate sufficient for the light source illumination to appear white to a user.
  • 4. The method of claim 2 wherein images captured under different color light sources are combined to boost out-of-phase signals in distinct chrominance channels.
  • 5. The method of claim 3 wherein pulsing comprises selectively turning off light sources while a first light source remains on to capture an image frame under illumination of the first light source.
  • 6. The method of claim 3 wherein the plural different LED light sources include a white light LED and another LED of a first color, and pulsing comprises selectively turning off the white light LED to capture an image frame under illumination of the first color.
  • 7. A method for machine readable code detection from an image comprising: obtaining parts of image frames captured under different wavelength illumination, wherein the different wavelength illumination comprises pulsing of different light sources, including pulsing an infrared light source and visible color light source; from the image signals, separating the parts of image frames for detection operations on plural processors, the parts of image frames corresponding to a channel of illumination corresponding to one of the different light sources, and with the plural processors, performing digital watermark detection on the parts of image frames, wherein the digital watermark signal detection comprises detection of a synchronization pattern followed by extraction of data from a part of an image frame based on detection of the synchronization pattern.
  • 8. The method of claim 7 wherein a layer of an object illuminated by the infrared (IR) light source is captured and the signal detection is performed on a signal conveyed in an IR channel.
  • 9. The method of claim 8 wherein the IR channel is captured with a visible color sensor.
  • 10. The method of claim 8 wherein the layer is hidden under a layer of material that is transparent to IR illumination.
  • 11. The method of claim 10 wherein the layer comprises a flat layer hidden under a non-flat layer of a 3D object, and provides a reference signal for determining orientation of the 3D object.
  • 12. The method of claim 11 wherein the flat layer provides an image based code signal conveying variable encoded information.
  • 13. An imaging device comprising: an illumination source for providing pulsed LED illumination of different wavelength illumination, the pulsed LED illumination comprising illumination with plural different LED light sources; an image sensor for capturing image signals in coordination with the pulsed LED illumination; a controller for coordinating the pulsed LED illumination and corresponding image capture of parts of image frames under illumination of the plural different LED light sources; and plural processors in communication with the image sensor for obtaining image signals captured under different wavelength illumination, the plural processors are configured to obtain the parts of image frames and are configured to perform digital watermark detection on the parts of image frames, each part corresponding to an image signal captured under different wavelength illumination, wherein the digital watermark signal detection comprises detection of a synchronization pattern followed by extraction of data from a part of an image frame based on detection of the synchronization pattern.
  • 14. The imaging device of claim 13 wherein the controller and the plural processors are integrated in a signal detector device.
  • 15. The imaging device of claim 14 wherein the signal detector device comprises a memory on which is stored instructions executed on the plural processors.
  • 16. The method of claim 1 wherein the parts of image frames are captured under illumination of the plural different LED light sources, including a visible color LED and an infrared LED, and obtaining information about the object from detection based on a part of an image frame under visible color LED illumination and a part of an image frame under infrared LED illumination.
  • 17. The method of claim 1 wherein the parts of image frames comprise rows of pixels from an image frame captured by a camera under illumination from one color of illumination.
  • 18. The method of claim 7 wherein the parts of image frames comprise rows of pixels from an image frame captured by a camera under illumination from one color of illumination.
  • 19. The imaging device of claim 13 wherein the parts of image frames are captured under illumination of the plural different LED light sources, including a visible color LED and infrared LED, and the plural processors are configured to obtain information about an object from detection based on a part of an image frame under visible color LED illumination and a part of an image frame under infrared LED illumination.
  • 20. The imaging device of claim 13 wherein the parts of image frames comprise rows of pixels from an image frame captured by a camera under illumination from one color of illumination.
  • 21. An imaging device comprising: means for generating pulsed illumination of different wavelength illumination; an image sensor for capturing image signals in coordination with generated pulsed illumination; means for coordinating the generated pulsed illumination and corresponding image capture by said image sensor by capturing parts of image frames under illumination of the different wavelength illumination; and means for processing image signals captured under different wavelength illumination, said means for processing the image signals operable to obtain the parts of image frames; and means for detecting digital watermarking from the parts of image frames, each part of the parts of image frames corresponding to an image signal captured under different wavelength illumination, wherein said means for detecting digital watermarking is operable for synchronization of a part of an image frame and extraction of data from the part of an image frame based on the synchronization.
  • 22. The imaging device of claim 21 in which said means for coordinating, said means for processing and said means for detecting digital watermarking are integrated in a signal detector device.
  • 23. The imaging device of claim 22 wherein said means for detecting digital watermarking comprises plural processors and memory on which is stored instructions for execution on said plural processors.
  • 24. The imaging device of claim 21 wherein the parts of image frames are captured under the different wavelength illumination, including visible color illumination and infrared illumination, and said means for detecting digital watermarking is operable to obtain encoded information about an object from detection based on a part of an image frame under the visible color illumination and a part of an image frame under infrared illumination.
  • 25. The imaging device of claim 21 wherein the parts of image frames comprise rows of pixels from an image frame captured by said image sensor under illumination from one wavelength of illumination.
RELATED APPLICATION DATA

This application is a continuation of U.S. patent application Ser. No. 16/291,366, filed Mar. 4, 2019 (now U.S. Pat. No. 10,713,456), which is a continuation of U.S. patent application Ser. No. 15/687,153, filed Aug. 25, 2017 (now U.S. Pat. No. 10,223,560), which is a continuation of U.S. patent application Ser. No. 13/964,014, filed Aug. 9, 2013 (now U.S. Pat. No. 9,749,607), which is a continuation in part of U.S. patent application Ser. No. 13/011,618, filed Jan. 21, 2011 (now U.S. Pat. No. 8,805,110), which is a continuation of PCT application PCT/US09/54358, filed Aug. 19, 2009 (published as WO2010022185). Application PCT/US09/54358 claims priority benefit to 61/226,195, filed 16 Jul. 2009. Application Ser. No. 13/964,014 is also a continuation in part of U.S. patent application Ser. No. 13/888,939, filed May 7, 2013 (now U.S. Pat. No. 9,008,315), which is a continuation-in-part of application Ser. No. 13/745,270, filed Jan. 18, 2013 (now U.S. Pat. No. 8,879,735). These applications are hereby incorporated by reference.

US Referenced Citations (336)
Number Name Date Kind
3628271 Carrell Dec 1971 A
5206490 Petigrew Apr 1993 A
5383995 Phillips Jan 1995 A
5396559 McGrew Mar 1995 A
5444779 Daniele Aug 1995 A
5453605 Hecht Sep 1995 A
5481377 Udagawa Jan 1996 A
5492222 Weaver Feb 1996 A
5521372 Hecht May 1996 A
5542971 Auslander Aug 1996 A
5576532 Hecht Nov 1996 A
5636292 Rhoads Jun 1997 A
5745604 Rhoads Apr 1998 A
5752152 Gasper May 1998 A
5790703 Wang Aug 1998 A
5832119 Rhoads Nov 1998 A
5843564 Gasper Dec 1998 A
5859920 Daly Jan 1999 A
5862260 Rhoads Jan 1999 A
5919730 Gasper Jul 1999 A
5998609 Aoki Dec 1999 A
6011857 Sowell Jan 2000 A
6069696 McQueen May 2000 A
6076738 Bloomberg Jun 2000 A
6122392 Rhoads Sep 2000 A
6122403 Rhoads Sep 2000 A
6149719 Houle Nov 2000 A
6168081 Urano Jan 2001 B1
6177683 Kolesar Jan 2001 B1
6246778 Moore Jun 2001 B1
6345104 Rhoads Feb 2002 B1
6361916 Chen Mar 2002 B1
6363366 Henty Mar 2002 B1
6373965 Liang Apr 2002 B1
6441380 Lawandy Aug 2002 B1
6449377 Rhoads Sep 2002 B1
6456729 Moore Sep 2002 B1
6466961 Miller Oct 2002 B1
6522767 Moskowitz Feb 2003 B1
6567532 Honsinger May 2003 B1
6567534 Rhoads May 2003 B1
6590996 Reed Jul 2003 B1
6603864 Matsunoshita Aug 2003 B1
6614914 Rhoads Sep 2003 B1
6625297 Bradley Sep 2003 B1
6683966 Tian Jan 2004 B1
6692031 McGrew Feb 2004 B2
6694041 Brunk Feb 2004 B1
6698860 Berns Mar 2004 B2
6706460 Williams Mar 2004 B1
6718046 Reed Apr 2004 B2
6721440 Reed Apr 2004 B2
6760464 Brunk Jul 2004 B2
6763123 Reed Jul 2004 B2
6775391 Hosaka Aug 2004 B2
6775394 Yu Aug 2004 B2
6786397 Silverbrook Sep 2004 B2
6804377 Reed Oct 2004 B2
6829063 Allebach Dec 2004 B1
6839450 Yen Jan 2005 B2
6912674 Trelewicz Jun 2005 B2
6940993 Jones Sep 2005 B2
6947571 Rhoads Sep 2005 B1
6948068 Lawandy Sep 2005 B2
6961442 Hannigan Nov 2005 B2
6987861 Rhoads Jan 2006 B2
6993152 Patterson Jan 2006 B2
6995859 Silverbrook Feb 2006 B1
6996252 Reed Feb 2006 B2
7072490 Stach Jul 2006 B2
7076082 Sharma Jul 2006 B2
7114657 Auslander Oct 2006 B2
7127112 Sharma Oct 2006 B2
7152021 Alattar Dec 2006 B2
7184569 Lawandy Feb 2007 B2
7213757 Jones May 2007 B2
7218750 Hiraishi May 2007 B1
7231061 Bradley Jun 2007 B2
7280672 Powell Oct 2007 B2
7319990 Henty Jan 2008 B1
7321667 Stach Jan 2008 B2
7340076 Stach Mar 2008 B2
7352878 Reed Apr 2008 B2
7364085 Jones Apr 2008 B2
7393119 Lebens Jul 2008 B2
7412072 Sharma Aug 2008 B2
7420663 Wang Sep 2008 B2
7529385 Lawandy May 2009 B2
7532741 Stach May 2009 B2
7536553 Auslander May 2009 B2
7555139 Rhoads Jun 2009 B2
7559983 Starling Jul 2009 B2
7667766 Lee Feb 2010 B2
7684088 Jordan Mar 2010 B2
7721879 Weaver May 2010 B2
7738673 Reed Jun 2010 B2
7757952 Tuschel Jul 2010 B2
7800785 Bala Sep 2010 B2
7831062 Stach Nov 2010 B2
7856143 Abe Dec 2010 B2
7892338 Degott Feb 2011 B2
7926730 Auslander Apr 2011 B2
7938331 Brock May 2011 B2
7963450 Lawandy Jun 2011 B2
7965862 Jordan Jun 2011 B2
7986807 Stach Jul 2011 B2
7995911 Butterworth Aug 2011 B2
8009893 Rhoads Aug 2011 B2
8027509 Reed Sep 2011 B2
8064100 Braun Nov 2011 B2
8144368 Rodriguez Mar 2012 B2
8157293 Bhatt Apr 2012 B2
8159657 Degott Apr 2012 B2
8180174 Di May 2012 B2
8194919 Rodriguez Jun 2012 B2
8223380 Lapstun Jul 2012 B2
8224018 Rhoads Jul 2012 B2
8227637 Cohen Jul 2012 B2
8284279 Park Oct 2012 B2
8301893 Brundage Oct 2012 B2
8345315 Sagan Jan 2013 B2
8358089 Hsia Jan 2013 B2
8360323 Widzinski Jan 2013 B2
8364031 Geffert Jan 2013 B2
8385971 Rhoads Feb 2013 B2
8412577 Rodriguez Apr 2013 B2
8515121 Stach Aug 2013 B2
8593696 Picard Nov 2013 B2
8620021 Knudson Dec 2013 B2
8675987 Agarwala Mar 2014 B2
8687839 Sharma Apr 2014 B2
8699089 Eschbach Apr 2014 B2
8730527 Chapman May 2014 B2
8805110 Rodriguez Aug 2014 B2
8840029 Lawandy Sep 2014 B2
8867782 Kurtz Oct 2014 B2
8879735 Lord Nov 2014 B2
8888207 Furness Nov 2014 B2
8913299 Picard Dec 2014 B2
8947744 Kurtz Feb 2015 B2
9008315 Lord Apr 2015 B2
9013501 Scheibe Apr 2015 B2
9055239 Tehranchi Jun 2015 B2
9064228 Woerz Jun 2015 B2
9070132 Durst Jun 2015 B1
9087376 Rodriguez Jul 2015 B2
9179033 Reed Nov 2015 B2
9269022 Rhoads Feb 2016 B2
9275428 Chapman Mar 2016 B2
9319557 Chapman Apr 2016 B2
9380186 Reed Jun 2016 B2
9400951 Yoshida Jul 2016 B2
9401001 Reed Jul 2016 B2
9449357 Lyons Sep 2016 B1
9562998 Edmonds Feb 2017 B2
9593982 Rhoads Mar 2017 B2
9635378 Holub Apr 2017 B2
9658373 Downing May 2017 B2
9690967 Brundage Jun 2017 B1
9692984 Lord Jun 2017 B2
9727941 Falkenstern Aug 2017 B1
9747656 Stach Aug 2017 B2
9749607 Boles Aug 2017 B2
9754341 Falkenstern Sep 2017 B2
9847976 Lord Dec 2017 B2
10204253 Long Feb 2019 B1
10223560 Boles Mar 2019 B2
10304151 Falkenstern May 2019 B2
10424038 Holub Sep 2019 B2
10455112 Falkenstern Oct 2019 B2
10594689 Weaver Mar 2020 B1
20010037455 Lawandy Nov 2001 A1
20020001080 Miller Jan 2002 A1
20020012461 MacKinnon Jan 2002 A1
20020054356 Kurita May 2002 A1
20020080396 Silverbrook Jun 2002 A1
20020085736 Kalker Jul 2002 A1
20020121590 Yoshida Sep 2002 A1
20020136429 Stach Sep 2002 A1
20020147910 Brundage Oct 2002 A1
20020169962 Brundage Nov 2002 A1
20030005304 Lawandy Jan 2003 A1
20030012548 Levy Jan 2003 A1
20030012569 Lowe Jan 2003 A1
20030021437 Hersch Jan 2003 A1
20030039376 Stach Feb 2003 A1
20030053654 Patterson Mar 2003 A1
20030063319 Umeda Apr 2003 A1
20030083098 Yamazaki May 2003 A1
20030116747 Lem Jun 2003 A1
20030156733 Zeller Aug 2003 A1
20030174863 Brundage Sep 2003 A1
20040023397 Vig Feb 2004 A1
20040032972 Stach Feb 2004 A1
20040037448 Brundage Feb 2004 A1
20040046032 Urano Mar 2004 A1
20040146177 Kalker Jul 2004 A1
20040149830 Allen Aug 2004 A1
20040197816 Empedocles Oct 2004 A1
20040239528 Luscombe Dec 2004 A1
20040263911 Rodriguez Dec 2004 A1
20050030416 Kametani Feb 2005 A1
20050030533 Treado Feb 2005 A1
20050127176 Dickinson Jun 2005 A1
20050156048 Reed Jul 2005 A1
20060008112 Reed Jan 2006 A1
20060017957 Degott Jan 2006 A1
20060022059 Juds Feb 2006 A1
20060078159 Hamatake Apr 2006 A1
20060115110 Rodriguez Jun 2006 A1
20060133061 Maeda Jun 2006 A1
20060147082 Jordan Jul 2006 A1
20060161788 Turpin Jul 2006 A1
20060165311 Watson Jul 2006 A1
20060198551 Abe Sep 2006 A1
20060202028 Rowe Sep 2006 A1
20060251408 Konno Nov 2006 A1
20070102920 Bi May 2007 A1
20070108284 Pankow May 2007 A1
20070143232 Auslander Jun 2007 A1
20070152032 Tuschel Jul 2007 A1
20070152056 Tuschel Jul 2007 A1
20070192872 Rhoads Aug 2007 A1
20070210164 Conlon Sep 2007 A1
20070217689 Yang Sep 2007 A1
20070221732 Tuschel Sep 2007 A1
20070262154 Zazzu Nov 2007 A1
20070262579 Bala Nov 2007 A1
20070268481 Raskar Nov 2007 A1
20080101657 Durkin May 2008 A1
20080112590 Stach May 2008 A1
20080112596 Rhoads May 2008 A1
20080133389 Schowengerdt Jun 2008 A1
20080149820 Jordan Jun 2008 A1
20080159615 Rudaz Jul 2008 A1
20080164689 Jordan Jul 2008 A1
20080177185 Nakao Jul 2008 A1
20080277626 Yang Nov 2008 A1
20080297644 Farchtchian Dec 2008 A1
20090040022 Finkenzeller Feb 2009 A1
20090059299 Yoshida Mar 2009 A1
20090067695 Komiya Mar 2009 A1
20090086506 Okumura Apr 2009 A1
20090112101 Furness Apr 2009 A1
20090129592 Swiegers May 2009 A1
20090158318 Levy Jun 2009 A1
20090243493 Bergquist Oct 2009 A1
20090266877 Vonwiller Oct 2009 A1
20100025476 Widzinski Feb 2010 A1
20100042004 Dhawan Feb 2010 A1
20100048242 Rhoads Feb 2010 A1
20100062194 Sun Mar 2010 A1
20100073504 Park Mar 2010 A1
20100119108 Rhoads May 2010 A1
20100142003 Braun Jun 2010 A1
20100150396 Reed Jun 2010 A1
20100150434 Reed Jun 2010 A1
20100200658 Olmstead Aug 2010 A1
20100208240 Schowengerdt Aug 2010 A1
20100317399 Rodriguez Dec 2010 A1
20110007092 Ihara Jan 2011 A1
20110007935 Reed Jan 2011 A1
20110008606 Sun Jan 2011 A1
20110037873 Hu Feb 2011 A1
20110051989 Gao Mar 2011 A1
20110085209 Man Apr 2011 A1
20110091066 Alattar Apr 2011 A1
20110098029 Rhoads Apr 2011 A1
20110110555 Stach May 2011 A1
20110111210 Matsunami May 2011 A1
20110123185 Clark May 2011 A1
20110127331 Zhao Jun 2011 A1
20110212717 Rhoads Sep 2011 A1
20110214044 Davis Sep 2011 A1
20110249051 Chretien Oct 2011 A1
20110249332 Merrill Oct 2011 A1
20110255163 Merrill Oct 2011 A1
20110304705 Kantor Dec 2011 A1
20120014557 Reed Jan 2012 A1
20120065313 Demartin Mar 2012 A1
20120074220 Rodriguez Mar 2012 A1
20120078989 Sharma Mar 2012 A1
20120133954 Takabayashi May 2012 A1
20120205435 Woerz Aug 2012 A1
20120214515 Davis Aug 2012 A1
20120218608 Maltz Aug 2012 A1
20120224743 Rodriguez Sep 2012 A1
20120229467 Czerwinski Sep 2012 A1
20120243009 Chapman Sep 2012 A1
20120243797 Di Sep 2012 A1
20120275642 Aller Nov 2012 A1
20120311623 Davis Dec 2012 A1
20120321759 Marinkovich Dec 2012 A1
20130001313 Denniston, Jr. Jan 2013 A1
20130114876 Rudaz May 2013 A1
20130126618 Gao May 2013 A1
20130195273 Lord Aug 2013 A1
20130223673 Davis Aug 2013 A1
20130259297 Knudson Oct 2013 A1
20130260727 Knudson Oct 2013 A1
20130286443 Massicot Oct 2013 A1
20130308045 Rhoads Nov 2013 A1
20130329006 Boles Dec 2013 A1
20130335783 Kurtz Dec 2013 A1
20140022603 Eschbach Jan 2014 A1
20140052555 MacIntosh Feb 2014 A1
20140057676 Lord Feb 2014 A1
20140084069 Mizukoshi Mar 2014 A1
20140085534 Bergquist Mar 2014 A1
20140108020 Sharma Apr 2014 A1
20140245463 Suryanarayanan Aug 2014 A1
20140293091 Rhoads Oct 2014 A1
20140325656 Sallam Oct 2014 A1
20140339296 McAdams Nov 2014 A1
20150002928 Kiyoto Jan 2015 A1
20150071485 Rhoads Mar 2015 A1
20150153284 Naya Jun 2015 A1
20150156369 Reed Jun 2015 A1
20150168620 Hakuta Jun 2015 A1
20150187039 Reed Jul 2015 A1
20150286873 Davis Oct 2015 A1
20150317923 Edmonds Nov 2015 A1
20160000141 Nappi Jan 2016 A1
20160180207 Rodriguez Jun 2016 A1
20160196630 Blesser Jul 2016 A1
20160217546 Ryu Jul 2016 A1
20160217547 Stach Jul 2016 A1
20160225116 Tehranchi Aug 2016 A1
20160267620 Calhoon Sep 2016 A1
20160275639 Holub Sep 2016 A1
20160291207 Yasuda Oct 2016 A1
20170024840 Holub Jan 2017 A1
20170024845 Filler Jan 2017 A1
20170230533 Holub Aug 2017 A1
20190266369 Boles Aug 2019 A1
20190378235 Kamath Dec 2019 A1
Foreign Referenced Citations (18)
Number Date Country
0638614 Feb 1995 EP
1367810 Dec 2003 EP
1370062 Dec 2003 EP
3016062 May 2016 EP
2905185 Feb 2008 FR
2017073696 Apr 2017 JP
2006048368 May 2006 WO
2008152922 Dec 2008 WO
2010075357 Jul 2010 WO
2010075363 Jul 2010 WO
2011029845 Mar 2011 WO
2012047340 Apr 2012 WO
2013109934 Jul 2013 WO
2015077493 May 2015 WO
2016153911 Sep 2016 WO
2016153936 Sep 2016 WO
2018111786 Jun 2018 WO
2019165364 Aug 2019 WO
Non-Patent Literature Citations (53)
Entry
Ando, et al, Image Recognition Based Digital Watermarking Technology for Item Retrieval in Convenience Stores, NTT Technical Review, vol. 15, No. 8, Aug. 2017. (2 pages).
Bolle, et al., ‘VeggieVision: A Produce Recognition System’, Proceedings of the Third IEEE Workshop on Applications of Computer Vision, pp. 224-251, 1996.
Caldelli et al., “Geometric-Invariant Robust Watermarking Through Constellation Matching in the Frequency Domain,” IEEE Proc. Int. Conf. on Image Processing, vol. 2, Sep. 2000, pp. 65-68.
Chapter II demand in PCT/US2019/019410 (published as WO2019165364), dated Sep. 23, 2019, including earlier Article 19 amendments. (24 pages).
Cheng, et al., “Colloidal silicon quantum dots: from preparation to the modification of self-assembled monolayers (SAMs) for bio-applications,” Chem. Soc. Rev., 2014, 43, 2680-2700. (21 pgs.).
Chi, et al, Multi-spectral imaging by optimized wide band illumination, International Journal of Computer Vision 86.2-3 (2010), pp. 140-151.
Chu, et al, Halftone QR codes, ACM Transactions on Graphics, vol. 32, No. 6, Nov. 1, 2013, p. 217. (8 pages).
Davis B, Signal rich art: enabling the vision of ubiquitous computing. In Media Watermarking, Security, and Forensics III Feb. 8, 2011 (vol. 7880, p. 788002). International Society for Optics and Photonics. (11 pages).
European Patent Office Communication pursuant to Article 94(3) EPC for Application No. 16769366.2, which is the regional phase of PCT/US2016/022836 (published as WO 2016/153911), dated May 24, 2019, 7 pages.
Everdell, et al, Multispectral imaging of the ocular fundus using LED illumination, European Conference on Biomedical Optics, Optical Society of America, 2009. (6 pages).
Feb. 26, 2018 Response and Claim amendments in European patent application No. 16769366.2, which is the regional phase of PCT/US2016/022836 (published as WO 2016/153911) (8 pages).
Hayes et al., “Generating Steganographic Images Via Adversarial Training”, Proceedings of the 31st annual conference on advances in Neural Information Processing Systems, Mar. 2017, pp. 1951-1960.
International Preliminary Report on Patentability for PCT/US2019/036126, dated May 22, 2020. (8 pages).
International Search Report and Written Opinion dated Nov. 4, 2014 from PCT/US2014/050573. (13 pages).
International Search Report and Written Opinion dated Oct. 24, 2014 from PCT/US2014/041417. (9 pages).
International Search Report and Written Opinion in PCT/US2016/22967 dated Jul. 11, 2016. (17 pgs.) (published as WO2016/153936).
Invitation to Pay Additional Fees including Communication Relating to the Results of the Partial International Search in PCT/US2018/064516, dated Apr. 5, 2019. 17 pages.
J. Collins et al., “Intelligent Material Solutions, Covert Tagging and Serialization Systems”, Proc. IS&T's NIP 29 International Conference on Digital Printing Technologies, pp. 153-157 (2013). (5 pgs.).
Japanese Patent Application JP2017022489, with machine translation, Jan. 26, 2017. (51 pages).
Japanese Patent Application JP2017183948, with machine translation, Oct. 5, 2017. (36 pages).
Katayama, et al, New High-speed Frame Detection Method: Side Trace Algorithm (STA) for i-appli on Cellular Phones to Detect Watermarks, Proceedings of the ACM 3rd International Conference on Mobile and Ubiquitous Multimedia, pp. 109-116, 2004.
Ke et al., “Kernel Target Alignment for Feature Kernel Selection in Universal Steganographic Detection based on Multiple Kernel SVM”, International Symposium on Instrumentation & Measurement, Sensor Network and Automation, Aug. 2012, pp. 222-227.
Kiyoto et al, Development of a Near-Infrared Reflective Film Using Disk-Shaped Nanoparticles, Fujifilm Research and Development Report No. 58-2013, 2013. (4 pgs.).
Konstantinos A Raftopoulos et al., “Region-Based Watermarking for Images,” Mar. 15, 2017, Operations Research, Engineering, and Cyber Security, Springer, pp. 331-343, XP009512871, ISBN: 978-3-319-51498-7.
Lin, et al, Artistic QR code embellishment. Computer Graphics Forum, Oct. 1, 2013, vol. 32, No. 7, pp. 137-146.
Lin, et al, Efficient QR code beautification with high quality visual content, IEEE Transactions on Multimedia, vol. 17, No. 9, Sep. 2015, pp. 1515-1524.
Liu, et al, Line-based cubism-like image—A new type of art image and its application to lossless data hiding, IEEE Transactions on Information Forensics and Security, vol. 7, No. 5, Oct. 2012, pp. 1448-1458.
Machine Translation of JP2017-073696A, generated Aug. 28, 2018. (54 pages).
Nakamura, et al, Fast Watermark Detection Scheme for Camera-Equipped Cellular Phone, Proceedings of the ACM 3rd International Conference on Mobile and Ubiquitous Multimedia, pp. 101-108, 2004.
Nieves, et al., Multispectral synthesis of daylight using a commercial digital CCD camera, Applied Optics 44.27 (2005), pp. 5696-5703.
Notice of Allowance dated Feb. 10, 2017 in U.S. Appl. No. 14/681,832, filed Apr. 8, 2015. (9 pages).
Notice of Allowance dated Nov. 22, 2016 in U.S. Appl. No. 14/946,180, filed Nov. 19, 2015. (6 pages).
Office Action dated Jul. 1, 2016 in U.S. Appl. No. 14/681,832. (16 pages).
Office Action dated Jul. 17, 2013, in U.S. Appl. No. 13/444,521. (15 pages).
Office Action dated Jul. 29, 2016 in U.S. Appl. No. 14/946,180. (12 pages).
Park et al.; Invisible Marker Based Augmented Reality System, SPIE Proc., vol. 5960, 2005, pp. 501-508. (9 pgs.).
Park, et al, “Multispectral Imaging Using Multiplexed Illumination,” in Computer Vision, 2007. ICCV 2007. IEEE 11th International Conference on , vol., No., pp. 1-8.
PCT Patent Application No. PCT/US2016/22967, filed Mar. 17, 2016. (99 pgs.).
Petersen et al., “Upconverting Nanoparticle Security Inks Based on Hansen Solubility Parameters”, Proc. IS&T's NIP 29 International Conference on Digital Printing Technologies, pp. 383-385 (2014). (3 pgs.).
Preston, et al, Enabling hand-crafted visual markers at scale, Proceedings of the 2017 ACM Conference on Designing Interactive Systems, Jun. 10, 2017, pp. 1227-1237.
Puyang, et al, Style Transferring Based Data Hiding for Color Images, International Conference on Cloud Computing and Security, Jun. 8, 2018, pp. 440-449.
R. Steiger et al., “Photochemical Studies on the Lightfastness of Ink-Jet Systems,” Proc. IS&T's NIP 14 conference, pp. 114-117 (1998). (4 pgs.).
Reply to written opinion of IPEA in PCT/US2019/019410 (published as WO2019165364), dated Apr. 7, 2020. (17 pages).
Rongen et al., ‘Digital Image Watermarking by Salient Point Modification Practical Results,’ Proc. SPIE vol. 3657: Security and Watermarking of Multimedia Contents, Jan. 1999, pp. 273-282.
Schockling, et al, Visualization of hyperspectral images, SPIE Defense, Security, and Sensing, 2009. (3 pages).
Silverbrook Research U.S. Appl. No. 61/350,013, filed May 31, 2010. (44 pages).
Simonyan et al, Very Deep Convolutional Networks for Large-Scale Image Recognition, arXiv preprint 1409.1556v6, Apr. 10, 2015. 14 pages.
Ulyanov, et al, Improved texture networks: Maximizing quality and diversity in feed-forward stylization and texture synthesis, Proc. 2017 IEEE Conference on Computer Vision and Pattern Recognition, pp. 6924-6932.
Willis et al., ‘InfraStructs: Fabricating Information Inside Physical Objects for Imaging in the Terahertz Region’, ACM Transactions on Graphics, vol. 32, No. 4, Jul. 1, 2013. (10 pages).
Written opinion by IPEA in PCT/US2019/019410 (published as WO2019165364), dated Feb. 11, 2020. (7 pages).
Yang, et al, ARTcode: Preserve art and code in any image, Proc. 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing, pp. 904-915.
Yousaf et al.; Formulation of an Invisible Infrared Printing Ink, Dyes and Pigments, vol. 27, No. 4, 1995, pp. 297-303. (7 pgs.).
Zhu et al., “Unpaired Image to Image Translation using Cycle-Consistent Adversarial Networks”, 2017 IEEE International Conference on Computer Vision, Mar. 2017, pp. 2242-2251.
Related Publications (1)
Number Date Country
20210042483 A1 Feb 2021 US
Provisional Applications (1)
Number Date Country
61226195 Jul 2009 US
Continuations (4)
Number Date Country
Parent 16291366 Mar 2019 US
Child 16927730 US
Parent 15687153 Aug 2017 US
Child 16291366 US
Parent 13964014 Aug 2013 US
Child 15687153 US
Parent PCT/US2009/054358 Aug 2009 US
Child 13011618 US
Continuation in Parts (3)
Number Date Country
Parent 13888939 May 2013 US
Child 13964014 US
Parent 13745270 Jan 2013 US
Child 13888939 US
Parent 13011618 Jan 2011 US
Child 13964014 US