Gaze correction of multi-view images

Information

  • Patent Grant
  • 10750160
  • Patent Number
    10,750,160
  • Date Filed
    Tuesday, May 7, 2019
  • Date Issued
    Tuesday, August 18, 2020
Abstract
Gaze is corrected by adjusting multi-view images of a head. Image patches containing the left and right eyes of the head are identified and a feature vector is derived from plural local image descriptors of the image patch in at least one image of the multi-view images. A displacement vector field representing a transformation of an image patch is derived, using the derived feature vector to look up reference data comprising reference displacement vector fields associated with possible values of the feature vector produced by machine learning. The multi-view images are adjusted by transforming the image patches containing the left and right eyes of the head in accordance with the derived displacement vector field.
Description
TECHNICAL FIELD

This application relates to the image processing of multi-view images of a head, for example a stereoscopic pair of images of a head, having regard to the perceived gaze of the eyes of the head.


BACKGROUND

In many systems, a stereoscopic pair of images, or more generally multi-view images, of a head may be captured in one device and displayed on a different device for viewing by an observer. One non-limiting example is a system for performing teleconferencing between two telecommunications devices. In that case, each device may capture a stereoscopic pair of images, or more generally multi-view images, of the head of the observer of that device and transmit them to the other device over a telecommunications network for display and viewing by the observer of the other device.


When a stereoscopic pair of images, or more generally multi-view images, of a head is captured and displayed, the gaze of the head in the displayed stereoscopic pair of images, or more generally multi-view images, may not be directed at the observer. This may be caused, for example, by the gaze of the head not being directed at the camera system used to capture the stereoscopic pair of images, for example because the user whose head is imaged is observing a display in the same device as the camera system and the camera system is offset above (or below) that display. In that case, the gaze in the displayed images will be perceived to be downwards (or upwards). The human visual system has evolved high sensitivity to gaze during social interaction, using cues gained from the relative position of the iris and white sclera of other observers. As such, errors in the perceived gaze are disconcerting. For example, in a system for performing teleconferencing, errors in the perceived gaze can create unnatural interactions between the users.


BRIEF SUMMARY

The present disclosure is concerned with an image processing technique for adjusting the stereoscopic pair of images, or more generally multi-view images, of a head to correct the perceived gaze.


According to a first aspect of the present disclosure, there is provided a method of adjusting multi-view images of a head to correct gaze, the method comprising: in each image of the multi-view images, identifying image patches containing the left and right eyes of the head, respectively; in respect of the image patches containing the left eyes of the head in each image of the multi-view images, and also in respect of the image patches containing the right eyes of the head in each image of the multi-view images, performing the steps of: deriving a feature vector from plural local image descriptors of the image patch in at least one image of the multi-view images, and deriving a displacement vector field representing a transformation of an image patch, using the derived feature vector to look up reference data comprising reference displacement vector fields associated with possible values of the feature vector; and adjusting each image of the multi-view images by transforming the image patches containing the left and right eyes of the head in accordance with the derived displacement vector field.


In this method, image patches containing the left and right eyes of the head are identified and transformed. To derive a displacement vector field that represents the transformation, a feature vector is derived from plural local image descriptors of the image patch in at least one image of the multi-view images and used to look up reference data comprising reference displacement vector fields associated with possible values of the feature vector. The form of the feature vector may be derived in advance from the reference data using machine learning. This method allows the gaze to be corrected, thereby reducing the disconcerting effect of incorrect gaze when the multi-view images are subsequently displayed.


Various approaches to deriving and using displacement vector fields are possible as follows.


In a first approach, displacement vector fields may be derived in respect of the image patches in each image of the multi-view images independently. This allows for correction of gaze, but there is a risk that the displacement vector fields in respect of each image may be inconsistent with each other, with the result that conflicting transformations are performed which can distort the stereoscopic effect and/or reduce the quality of the image.


However, the following alternative approaches overcome this problem.


A second possible approach is as follows. In the second approach, the plural local image descriptors used in the method are plural local image descriptors in both images of the multi-view images. In this case, the reference data comprises reference displacement vector fields for each image of the multi-view images, which reference displacement vector fields are associated with possible values of the feature vector. This allows a displacement vector field to be derived from the reference data for each image of the multi-view images. As such, the derived displacement vector fields for each image of the multi-view images are inherently consistent.


A potential downside of this second approach is that it may require the reference data to be derived from stereoscopic or more generally multi-view imagery, which may be inconvenient to derive. However, the following approaches allow the reference data to be derived from monoscopic imagery.


A third possible approach is as follows. In the third approach, the plural local image descriptors are plural local image descriptors in one image of the multi-view images, and the displacement vector fields are derived as follows. A displacement vector field representing a transformation of the image patch in said one image of the multi-view images is derived, using the derived feature vector to look up reference data comprising reference displacement vector fields associated with possible values of the feature vector. Then, a displacement vector field representing a transformation of the image patch in the other multi-view image or images is derived by transforming the derived displacement vector field representing a transformation of the image patch in said one image of the multi-view images in accordance with an estimate of the optical flow between the image patches in the one image and the other multi-view image or images.


Thus, in the third approach, the displacement vector fields derived in respect of each image are consistent, because only one displacement vector field is derived from the reference data, and the other displacement vector field is derived therefrom using a transformation in accordance with an estimate of the optical flow between the image patches in the images of the multi-view images.


A fourth possible approach is as follows. In the fourth approach, the plural local image descriptors are plural local image descriptors in both images of the multi-view images, and the displacement vector fields are derived as follows. An initial displacement vector field representing a notional transformation of a notional image patch in a notional image, having a notional camera location relative to the camera locations of the images of the multi-view images, is derived using the derived feature vector to look up reference data comprising reference displacement vector fields associated with possible values of the feature vector. Then, displacement vector fields representing a transformation of the image patches in each image of the multi-view images are derived by transforming the initial displacement vector field in accordance with an estimate of the optical flows between the notional image patches in the notional images and the image patches in the images of the multi-view images.


Thus, in the fourth approach, the displacement vector fields derived in respect of each image are consistent, because only one displacement vector field is derived from the reference data, this representing a notional transformation of a notional image patch in a notional image having a notional camera location relative to the camera locations of the images of the multi-view images. The respective displacement vector fields used to transform the two images of the multi-view images are derived therefrom using a transformation in accordance with an estimate of the optical flow between the notional image patches in the notional images and the images of the multi-view images.


A fifth possible approach is as follows. In the fifth approach, displacement vector fields in respect of the image patches in each image of the multi-view images are derived, but then a merged displacement vector field is derived therefrom and used to transform the image patches containing both the left and right eyes of the head. In this case, the displacement vector fields for each image are consistent because they are the same.


The merging may be performed in any suitable manner. For example, the merging may be a simple average or may be an average that is weighted by a confidence value associated with each derived displacement vector field. Such a confidence value may be derived during the machine learning.


According to a second aspect of the present disclosure, there is provided an apparatus configured to perform a similar method to the first aspect of the disclosure.





BRIEF DESCRIPTION OF THE DRAWINGS

Non-limitative embodiments are illustrated by way of example in the accompanying figures, in which like reference numbers indicate similar parts, and in which:



FIG. 1 is a schematic perspective view of a device that captures a stereoscopic pair of images,



FIG. 2 is a schematic perspective view of a device that displays the stereoscopic pair of images;



FIG. 3 is a flow chart of a method of adjusting a stereoscopic pair of images;



FIG. 4 is a diagram illustrating the processing of the stereoscopic pair of images in the method of FIG. 3;



FIG. 5 is a flow chart of a step of extracting an image patch;



FIG. 6 and FIG. 7 are flow charts of steps of deriving displacement vector fields according to two alternative approaches;



FIG. 8 and FIG. 9 are flow charts of two alternatives for a step of adjusting an image;



FIG. 10 is a flow chart of a transformation step within the step of adjusting an image in the methods shown in FIG. 8 and FIG. 9; and



FIG. 11 is a diagram of a telecommunications system in which the method may be implemented.





DETAILED DESCRIPTION


FIG. 1 and FIG. 2 illustrate how incorrect gaze is perceived when a stereoscopic pair of images of a head is captured by the device 10 shown in FIG. 1, which will be referred to as the source device 10, and displayed on a different device 20 shown in FIG. 2, which will be referred to as the destination device 20.


The capture device 10 includes a display 11 and a camera system 12 comprising two cameras 13 used to capture the stereoscopic pair of images of the head of a source observer 14. The source observer 14 views the display 11, along line 15. The cameras 13 of the camera system 12 are offset from the display 11, in this case being above the display 11. Thus, the cameras 13 effectively look down on the source observer 14 along line 16.


The display device 20 includes a display 21 which is a stereoscopic display of any known type, for example an autostereoscopic display of any known type. The display 21 displays the stereoscopic pair of images captured by the capture device 10. A destination observer 24 views the display 21. If the destination observer 24 is located in a normal viewing position perpendicular to the center of the display 21, as shown by the hard outline of the destination observer 24, then the gaze of the source observer 14 is perceived by the destination observer 24 to be downwards, rather than looking at the destination observer 24, because the cameras 13 of the source device 10 look down on the source observer 14.


Although the cameras 13 are above the display 11 in this example, the cameras 13 could in general be in any location adjacent to the display 11, and the gaze of the source observer 14 perceived by the destination observer 24 would be correspondingly incorrect.


If the destination observer 24 is located in an offset viewing position, as shown by the dotted outline of the destination observer 24, so that the destination observer 24 views the display 21 along line 26, then the offset of the destination observer 24 creates an additional error in the gaze of the source observer 14 perceived by the destination observer 24. A similar additional error in the perceived gaze of the source observer 14 occurs if the destination observer 24 is located in the normal viewing position along line 25, but the stereoscopic pair of images is displayed on the display 21 in a position offset from the center of the display 21.


A stereoscopic pair of images is an example of multi-view images where there are two images. Although FIG. 1 illustrates an example where the camera system 12 includes two cameras 13 that capture a stereoscopic pair of images, alternatively the camera system may include more than two cameras 13 that capture more than two multi-view images, in which case similar issues of incorrect perceived gaze exist on display.



FIG. 3 illustrates a method of adjusting multi-view images to correct such errors in the perceived gaze. For simplicity, this method will be described with respect to the adjustment of multi-view images comprising a stereoscopic pair of images. The method may be generalized to multi-view images comprising more than two images, simply by performing similar processing on a larger number of images.


The method may be performed in an image processor 30. The image processor 30 may be implemented by a processor executing a suitable computer program or by dedicated hardware or by some combination of software and hardware. Where a computer program is used, the computer program may comprise instructions in any suitable language and may be stored on a computer readable storage medium, which may be of any type, for example: a recording medium which is insertable into a drive of the computing system and which may store information magnetically, optically or opto-magnetically; a fixed recording medium of the computer system such as a hard drive; or a computer memory.


The image processor 30 may be provided in the source device 10, the destination device 20 or in any other device, for example a server on a telecommunications network, which may be suitable in the case that the source device 10 and the destination device 20 communicate over such a telecommunications network.


The stereoscopic pair of images 31 are captured by the camera system 12. Although the camera system 12 is illustrated in FIG. 1 as including two cameras 13, this is not limitative and more generally the camera system 12 may have the following properties.


The camera system comprises a set of cameras 13, with at least two cameras 13. The cameras are typically spaced apart by a distance less than the average human interpupillary distance. In the alternative that the method is applied to more than two multi-view images, there are more than two cameras 13, that is, one camera 13 per image.


The cameras 13 are spatially related to each other and the display 11. The spatial relationship between the cameras 13 themselves and between the cameras 13 and the display 11 is known in advance. Known methods for finding the spatial relationship may be applied, for example a calibration method using a reference image, or a priori specification.


The cameras 13 face in the same direction as the display 11. Thus, when the source observer 14 is viewing the display 11, the cameras 13 face the source observer 14 and the captured stereoscopic pair of images are images of the head of the source observer 14. The cameras in the camera system can have different fields of view.


The camera system 12 may include cameras 13 having different sensing modalities, including visible light and infrared.


The main output of the camera system 12 is a stereoscopic pair of images 31 which are typically video images output at a video rate. The output of the camera system 12 may also include data representing the spatial relationship between the cameras 13 and the display 11, the nature of the sensing modalities and internal parameters of the cameras 13 (for example focal length, optical axis) which may be used for angular localization.


The method performed on the stereoscopic pair of images 31 is as follows. To illustrate the method, reference is also made to FIG. 4 which shows an example of the stereoscopic pair of images 31 at various stages of the method.


In step S1, the stereoscopic pair of images 31 are analyzed to detect the location of the head and in particular the eyes of the source observer 14 within the stereoscopic pair of images 31. This is performed by detecting presence of a head, tracking the head, and localizing the eyes of the head. Step S1 may be performed using a variety of techniques that are known in the art.


One possible technique for detecting the presence of the head is to use Haar feature cascades, for example as disclosed in Viola and Jones, “Rapid Object Detection using a Boosted Cascade of Simple Features”, CVPR 2001, pp 1-9 (incorporated herein by reference).


One possible technique for tracking the head is to use the approach of Active Appearance Models to provide the position of the head of the subject, as well as the location of the eyes, for example as disclosed in Cootes et al., “Active shape models—their training and application”, Computer Vision and Image Understanding, 61(1):38-59, January 1995 and in Cootes et al. “Active appearance models”, IEEE Trans. Pattern Analysis and Machine Intelligence, 23(6):681-685, 2001 (incorporated herein by reference).


In step S1, typically, a set of individual points (“landmarks”) is assigned to regions of the face, typically the eyes, for example corners of the eye, upper and lower lid locations, etc., thereby localizing the eyes.


In step S2, image patches containing the left and right eyes of the head, respectively, are identified in each image 31 of the stereoscopic pair. FIG. 4 shows the identified image patches 32 of the right eye in each image 31 (the image patches for the left eye being omitted in FIG. 4 for clarity).


Step S2 may be performed as shown in FIG. 5, as follows.


In step S2-1, image patches 32 containing the left and right eyes of the head are identified in each image 31 of the stereoscopic pair. This is done by identifying an image patch 32 in each image 31 located around the identified points (“landmarks”) corresponding to features of an eye, as shown for example in FIG. 4.


In step S2-2, the image patches 32 identified in step S2-1 are transformed into a normalized coordinate system, being the same normalized coordinate system as used in the machine learning process which is described further below. The transformation is chosen to align the points (“landmarks”) of the eye within the image patch that were identified in step S1, with predetermined locations in the normalized coordinate system. The transformation may include translation, rotation and scaling, to appropriate extents to achieve that alignment. The output of step S2-2 is identified image patches 33 of the right eye in each image in the normalized coordinate system as shown for example in FIG. 4.
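For illustration, the alignment of step S2-2 can be computed as a least-squares similarity transform (translation, rotation and scaling) between the detected eye landmarks and a set of predetermined canonical positions. The Python sketch below uses the standard Umeyama closed form; the function name and the canonical landmark coordinates are assumptions for illustration, not taken from the patent.

```python
import numpy as np

def normalizing_transform(landmarks, canonical):
    """Least-squares similarity transform aligning detected eye landmarks
    (N x 2 array) with canonical positions (N x 2 array) in the normalized
    coordinate system. Returns a 2x3 matrix usable with an affine warp."""
    src = np.asarray(landmarks, dtype=float)
    dst = np.asarray(canonical, dtype=float)
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_s, dst - mu_d
    cov = dst_c.T @ src_c / len(src)          # 2x2 cross-covariance
    U, S, Vt = np.linalg.svd(cov)
    d = np.sign(np.linalg.det(U) * np.linalg.det(Vt))
    D = np.diag([1.0, d])                     # guard against reflections
    R = U @ D @ Vt                            # rotation
    scale = np.trace(np.diag(S) @ D) / src_c.var(axis=0).sum()
    t = mu_d - scale * (R @ mu_s)             # translation
    M = np.zeros((2, 3))
    M[:, :2] = scale * R
    M[:, 2] = t
    return M
```

The inverse of such a transform is what steps S5-2 and S5-4, described later, would apply to return from the normalized coordinate system to the original image coordinates.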


The following steps are performed separately (a) in respect of the image patches containing the left eyes of the head in each image 31 of the stereoscopic pair, and (b) in respect of the image patches containing the right eyes of the head in each image 31 of the stereoscopic pair. For brevity, the following description will refer merely to image patches and eyes without specifying the left or right eye, but noting the same steps are performed for both left and right eyes.


In step S3, a feature vector 34 is derived from plural local image descriptors of an image patch 33 in at least one image 31 of the stereoscopic pair. Depending on the approach and as described further below, this may be an image patch in a single image 31 of the stereoscopic pair or may be the image patches in both images 31 of the stereoscopic pair. In either case, the local image descriptors are local image descriptors derived in the normalized coordinate system.


The feature vectors 34 are representations of the image patches 33 that are suitable for use in looking up reference data 35 comprising reference displacement vector fields that represent transformations of the image patch and are associated with possible values of the feature vector.


The reference data 35 is obtained and analyzed in advance using a machine learning technique which derives the form of the feature vectors 34 and associates the reference displacement vector fields with the possible values of the feature vector. Accordingly, the machine learning technique will now be described before reverting to the method of FIG. 3.


The training input to the machine learning technique is two sets of images, which may be stereoscopic pairs of images or monoscopic images, as discussed further below. Each set comprises images of the head of the same group of individuals but captured from cameras in different locations relative to the gaze so that the perceived gaze differs as between them.


The first set are input images, being images of each individual with an incorrect gaze where the error is known a priori. In particular, the images in the first set may be captured by at least one camera in a known camera location while the gaze of the individual is in a different known direction. For example, in the case of the source device of FIG. 1, the camera location may be the location of a camera 13 while the gaze of the imaged individual is towards the center of the display 11.


The second set are output images, being images of each individual with correct gaze for a predetermined observer location relative to a display location in which the image is to be displayed. In the simplest case, the observer location is a normal viewing position perpendicular to the center of the display location, for example as shown by the hard outline of the destination observer 24 in the case of the destination device 20 of FIG. 2.


For each image in the two sets, the image is analyzed to detect the location of the head and in particular the eyes using the same technique as used in step S1 described above, and then image patches containing the left and right eyes of the head, respectively, are identified using the same technique as used in step S2 described above. The following steps are then performed separately (a) in respect of the image patches containing the left eyes of the head in each image, and (b) in respect of the image patches containing the right eyes of the head in each image. For brevity, the following description will refer merely to image patches and eyes without specifying the left or right eye, but noting the same steps are performed for both left and right eyes.


Each image patch is transformed into the same normalized coordinate system as used in step S2 described above. As described above, the transformation is chosen to align points (“landmarks”) of the eye with predetermined locations in the normalized coordinate system. The transformation may include translation, rotation and scaling, to appropriate extents to achieve that alignment.


Thus, the image patches of the input and output images of each individual are aligned in the normalized coordinate system.


From an input and output image of each individual, there is derived a displacement vector field that represents the transformation of the image patch in the input image required to obtain the image patch of the output image, for example as follows. Defining positions in the image patches by (x,y), the displacement vector field F is given by

F={u(x,y),v(x,y)}

where u and v define the horizontal and vertical components of the vector at each position (x,y).


The displacement vector field F is chosen so that the image patch of the output image O(x,y) is derived from the image patch of the input image I(x,y) as

O(x,y)=I(x+u(x,y),y+v(x,y))
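A minimal sketch of applying such a field, assuming a single-channel patch stored as a NumPy array; sub-pixel sample positions are bilinearly interpolated, which is an assumption since the patent does not prescribe a particular interpolation scheme, and the helper name is illustrative.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def apply_field(I, u, v):
    """Compute O(x, y) = I(x + u(x, y), y + v(x, y)), with x indexing
    columns and y indexing rows as in the text above."""
    h, w = I.shape
    x, y = np.meshgrid(np.arange(w), np.arange(h))
    # map_coordinates expects sampling positions as (row, column)
    return map_coordinates(I, [y + v, x + u], order=1, mode='nearest')
```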


For image data from more than one camera, the system delivers a displacement vector field for the input image from each camera.


The displacement vector field F for an input and output image of an individual may be derived using a process in which a trial displacement vector field F′={u′,v′} is modified to minimize error, optionally in an iterative process, for example in accordance with:

Σ|O(x,y)−I(x+u′(x,y),y+v′(x,y))|=min!


By way of non-limitative example, the displacement vector field F may be derived as disclosed in Kononenko et al., “Learning To Look Up: Realtime Monocular Gaze Correction Using Machine Learning”, Computer Vision and Pattern Recognition, 2015, pp. 4667-4675 (incorporated herein by reference), wherein the displacement vector field F is referred to as a “flow field”.
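Purely for illustration, a brute-force fit of F is sketched below: for each pixel an integer displacement within a search window is chosen to minimize the local reconstruction error. This is a slow stand-in for the minimization described above, not the method of the cited paper; the helper name and parameter values are assumptions.

```python
import numpy as np

def fit_displacement_field(I, O, max_disp=6, patch=5):
    """Crude block-matching estimate of F = {u, v} such that
    O(x, y) ~= I(x + u(x, y), y + v(x, y))."""
    h, w = I.shape
    r = patch // 2
    Ipad = np.pad(I.astype(float), max_disp + r, mode='edge')
    Opad = np.pad(O.astype(float), r, mode='edge')
    u = np.zeros((h, w))
    v = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            target = Opad[y:y + patch, x:x + patch]
            best_err, best = np.inf, (0, 0)
            for dy in range(-max_disp, max_disp + 1):
                for dx in range(-max_disp, max_disp + 1):
                    cand = Ipad[y + max_disp + dy:y + max_disp + dy + patch,
                                x + max_disp + dx:x + max_disp + dx + patch]
                    err = np.abs(cand - target).sum()
                    if err < best_err:
                        best_err, best = err, (dx, dy)
            u[y, x], v[y, x] = best
    return u, v
```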


A machine learning technique is used to obtain a map from the displacement vector field F of each individual to respective feature vectors derived from plural local image descriptors of the image patch of the input image.


The local descriptors capture relevant information of a local part of the image patch of the input image and the set of descriptors usually form a continuous vectorial output.


The local image descriptors input into the machine learning process are of types expected to provide discrimination between different individuals, although the specific local image descriptors are selected and optimized by the machine learning process itself. In general, the local image descriptors may be of any suitable type, some non-limitative examples which may be applied in any combination being as follows.


The local image descriptors may include values of individual pixels or a linear combination thereof. Such a linear combination may be, for example, a difference between the pixels at two points, a kernel derived within a mask at an arbitrary location, or a difference between two kernels at different locations.


The local image descriptors may include distances of a pixel location from the position of an eye point (“landmark”).


The local image descriptors may include SIFT features (Scale-invariant feature transform features), for example as disclosed in Lowe, “Distinctive Image Features from Scale-Invariant Keypoints”, International Journal of Computer Vision 60 (2), pp 91-110 (incorporated herein by reference).


The local image descriptors may include HOG features (Histogram of Oriented Gradients features), for example as disclosed in Dalal et al. “Histograms of Oriented Gradients for Human Detection”, Computer Vision and Pattern Recognition, 2005, pp. 886-893 (incorporated herein by reference).


The derivation of the feature vector from plural local image descriptors depends on the type of machine learning applied.


In a first type of machine learning technique, the feature vector may comprise features that are values derived from the local image descriptors in a discrete space, being binary values or values discretized into more than two possible values. In this case, the machine learning technique associates a reference displacement vector field F derived from the training input with each possible value of the feature vector in the discrete space, so the reference data 35 is essentially a look-up table. This allows a reference displacement vector field F to be simply selected from the reference data 35 on the basis of the feature vector 34 derived in step S3, as described below.


In the case that the feature vector comprises features that are binary values derived from the local image descriptors, the feature vector has a binary representation. Such binary values may be derived in various ways from the values of descriptors, for example by comparing the value of a descriptor with a threshold, comparing the value of two descriptors, or by comparing the distance of a pixel location from the position of an eye point (“landmark”).


Alternatively, the feature vector may comprise features that are discretized values of the local image descriptors. In this case, more than two discrete values of each feature are possible.
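As an illustration of the discrete (look-up-table) case, the sketch below packs binary features obtained by thresholding pixel differences into an integer index and associates each index with the mean reference displacement vector field of the training patches that produced it. The pixel pairs and thresholds are hypothetical stand-ins for the selections that the machine learning process would make.

```python
import numpy as np

def binary_feature_index(patch, pixel_pairs, thresholds):
    """Pack a binary feature vector into an integer: bit k is 1 when the
    difference between two sampled pixels exceeds a threshold."""
    bits = 0
    for k, ((y0, x0), (y1, x1)) in enumerate(pixel_pairs):
        if float(patch[y0, x0]) - float(patch[y1, x1]) > thresholds[k]:
            bits |= 1 << k
    return bits

def build_lookup_table(train_patches, train_fields, pixel_pairs, thresholds):
    """Associate each feature index with the mean reference displacement
    vector field (shape (2, h, w)) of the patches mapping to that index."""
    buckets = {}
    for patch, field in zip(train_patches, train_fields):
        idx = binary_feature_index(patch, pixel_pairs, thresholds)
        buckets.setdefault(idx, []).append(field)
    return {idx: np.mean(np.stack(fs), axis=0) for idx, fs in buckets.items()}
```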


Any suitable machine learning technique may be applied, for example using a decision tree, a decision forest, a decision fern or an ensemble or combination thereof.


By way of example, a suitable machine learning technique using a feature vector comprising features that are binary values derived by comparing a set of individual pixels or a linear combination thereof against a threshold, is disclosed in Ozuysal et al. “Fast Keypoint Recognition in Ten Lines of Code”, Computer Vision and Pattern Recognition, 2007, pp. 1-8 (incorporated herein by reference).


By way of further example, a suitable machine learning technique using the distance of a pixel location from the position of an eye landmark is disclosed in Kononenko et al., “Learning To Look Up: Realtime Monocular Gaze Correction Using Machine Learning”, Computer Vision and Pattern Recognition, 2015, pp. 4667-4675 (incorporated herein by reference).


By way of further example, a suitable machine learning technique using a random decision forest is disclosed in Ho, “Random Decision Forests”, Proceedings of the 3rd International Conference on Document Analysis and Recognition, Montreal, QC, 14-16 Aug. 1995, pp. 278-282 (incorporated herein by reference).


In a second type of machine learning technique, the feature vector may comprise features that are discrete values of the local image descriptors in a continuous space. In this case, the machine learning technique associates a reference displacement vector field F derived from the training input with possible discrete values of the feature vector in the continuous space. This allows a displacement vector field F to be derived from the reference data 35 by interpolation from the reference displacement vector fields based on the relationship between the feature vector 34 derived in step S3 and the values of the feature vector associated with the reference displacement vector fields.


Any suitable machine learning technique may be applied, for example using support vector regression.


By way of example, a suitable machine learning technique using support vector regression is disclosed in Drucker et al. “Support Vector Regression Machines”, Advances in Neural Information Processing Systems 9, NIPS 1996, 155-161, (incorporated herein by reference). The output of the technique is a continuously varying set of interpolation directions that form part of the reference data 35 and are used in the interpolation.


The machine learning technique, regardless of its type, inherently also derives the form of the feature vectors 34 that is used to derive the reference displacement vector fields F. This is the form of the feature vectors 34 that is derived in step S3.


Optionally, the output of the machine learning technique may be augmented to provide confidence values associated with derivation of a displacement vector field from the reference data 35.


In the case that the feature vector comprises features that are values in a discrete space, a confidence value is derived for each reference displacement vector field.


One example of deriving a confidence value is to keep, for each resulting index (value of the feature vector) in the resulting look-up table, a distribution of the corresponding part of the input image in the training data. In this case, the confidence value may be the amount of training data that ended up with the same index, divided by the total number of training data exemplars.


Another example of deriving a confidence value is to fit a Gaussian to the distribution of input images in the training data in each bin indexed, and to use the trace of the covariance matrix around the mean value as the confidence value.
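A small sketch of both statistics, assuming the training patches have already been grouped by their look-up-table index (the grouping itself would come from the feature-vector computation above); the function name is illustrative, and the covariance is only practical for small patches.

```python
import numpy as np

def bin_confidences(binned_patches, n_total):
    """For each look-up-table index, return the count-based confidence
    (share of the training data in that bin) and the trace of the
    covariance of the flattened patches in the bin."""
    out = {}
    for idx, patches in binned_patches.items():
        X = np.stack([np.asarray(p, dtype=float).ravel() for p in patches])
        count_conf = len(patches) / float(n_total)
        if len(patches) > 1:
            trace = float(np.trace(np.atleast_2d(np.cov(X, rowvar=False))))
        else:
            trace = 0.0
        out[idx] = (count_conf, trace)
    return out
```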


In the case that the feature vector comprises features that are discrete values of the local image descriptors in a continuous space, the confidence values may be derived according to the machine learning method used. For example, when using support vector regression, the confidence values may be the inverse of the maximum distance to the support vectors.


Where used, the confidence values are stored as part of the reference data.


The description now reverts to the method of FIG. 3.


In step S4, at least one displacement vector field 37 representing a transformation of an image patch is derived, using the feature vector 34 derived in step S3 to look up the reference data 35. Due to the derivation of the displacement vector field 37 from the reference data 35, the transformation represented thereby corrects the gaze that will be perceived when the stereoscopic pair of images 31 are displayed.


In the case that the feature vector 34 comprises features that are values in a discrete space and the reference displacement vector fields of the reference data 35 comprise a reference displacement vector field associated with each possible value of the feature vector in the discrete space, then the displacement vector field for the image patch is derived by selecting the reference displacement field associated with the actual value of the derived feature vector 34.


In the case that the feature vector 34 comprises features that are discrete values of the local image descriptors in a continuous space, then the displacement vector field for the image patch is derived by interpolating a displacement vector field from the reference displacement vector fields, based on the relationship between the actual value of the derived feature vector 34 and the values of the feature vectors associated with the reference displacement vector fields. In the case that the machine learning technique was support vector regression, this may be done using the interpolation directions that form part of the reference data 35.
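As a generic illustration of interpolation in a continuous feature space, a kernel-weighted average over the reference entries could be used; this is a stand-in for the support-vector-regression interpolation directions, which the sketch does not reproduce, and the bandwidth parameter is an assumption.

```python
import numpy as np

def interpolate_field(feature, ref_features, ref_fields, bandwidth=1.0):
    """Interpolate a displacement field for `feature` (shape (d,)) from
    reference features (n, d) and reference fields (n, 2, h, w) using
    Gaussian kernel weights on the feature-space distances."""
    d2 = np.sum((np.asarray(ref_features) - np.asarray(feature)) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * bandwidth ** 2))
    w /= w.sum()
    return np.tensordot(w, np.asarray(ref_fields, dtype=float), axes=(0, 0))
```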


Some different approaches to the derivation of the displacement vector field 37 in step S4 will now be described.


In a first approach, in step S4, a displacement vector field 37 is derived in respect of the image patches in each image 31 of the stereoscopic pair independently. This first approach may be applied when the reference data 35 was derived from monoscopic images. This approach provides for correction of gaze, but there is a risk that the displacement vector fields 37 in respect of each image may be inconsistent with each other, with the result that conflicting transformations are subsequently performed which can distort the stereoscopic effect and/or reduce the quality of the image.


Other approaches which overcome this problem are as follows.


In a second possible approach, the plural local image descriptors used in deriving the feature vector 34 in step S3 are plural local image descriptors in both images of the stereoscopic pair. In this case, the reference data 35 similarly comprises pairs of reference displacement vector fields for each image 31 of the stereoscopic image pair, it being the pairs of reference displacement vector fields that are associated with possible values of the feature vector 34.


This second approach allows a pair of displacement vector fields 37 to be derived from the reference data 35, that is, one displacement vector field for each image 31 of the stereoscopic pair. As such, the derived displacement vector fields for each image 31 of the stereoscopic pair are inherently consistent since they are derived together from the consistent pairs of reference displacement vector fields in the reference data 35.


The downside of this second approach is that it requires the reference data 35 to be derived from training input to the machine learning technique that is stereoscopic pairs of images. This does not create any technical difficulty, but may create some practical inconvenience as monoscopic imagery is more commonly available. Accordingly, the following approaches may be applied when the reference data 35 is derived from training input to the machine learning technique that is monoscopic images.


In a third possible approach, a feature vector 34 is derived from plural local image descriptors that are plural local image descriptors derived from one image of the stereoscopic pair. In that case, the displacement vector fields 37 are derived as shown in FIG. 6 as follows.


In step S4-1, a first displacement vector field 37 representing a transformation of the image patch in said one image 31 of the stereoscopic pair (which may be either image 31) is derived. This is done using the derived feature vector 34 to look up the reference data 35.


In step S4-2, a displacement vector field 37 representing a transformation of the image patch in the other image 31 of the stereoscopic pair is derived. This is done by transforming the displacement vector field derived in step S4-1 in accordance with an estimate of the optical flow between the image patches in the images 31 of the stereoscopic pair.


The optical flow represents the effect of the different camera locations as between the images 31 of the stereoscopic pair. Such an optical flow is known in itself and may be estimated using known techniques for example as disclosed in Zach et al., “A Duality Based Approach for Realtime TV-L1 Optical Flow”, Pattern Recognition (Proc. DAGM), 2007, pp. 214-223 (incorporated herein by reference).


By way of example, if the first displacement vector field 37 derived in step S4-1 is for the left image, transforming the input image patch Li into the output image patch Lo (where the subscripts o and i represent the output and input images respectively), and the optical flow to the right image Ro is represented by a displacement vector field G given by

G={s(x,y),t(x,y)}

then the second displacement vector field 37 may be derived in accordance with

Ro(x,y)=Lo(x+s(x,y),y+t(x,y))=Li(x+s+u(x+s,y+t),y+t+v(x+s,y+t))


Thus, in the third approach, the displacement vector fields 37 derived in respect of each image 31 of the stereoscopic pair are consistent, because only one displacement vector field is derived from the reference data 35, and the other displacement vector field is derived therefrom using a transformation which maintains consistency because it is derived in accordance with an estimate of the optical flow between the image patches in the images 31 of the stereoscopic pair.
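A direct evaluation of the formula above, assuming the left input patch, the left-eye field (u, v) and the estimated flow (s, t) are NumPy arrays of the same shape; sub-pixel samples are bilinearly interpolated and the function name is illustrative.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def corrected_right_patch(L_i, u, v, s, t):
    """Compute Ro(x, y) = Li(x + s + u(x+s, y+t), y + t + v(x+s, y+t)),
    i.e. the corrected right patch obtained by transferring the left-eye
    displacement field via the left-to-right optical flow (s, t)."""
    h, w = L_i.shape
    x, y = np.meshgrid(np.arange(w), np.arange(h))
    xs, ys = x + s, y + t                      # flow-displaced positions
    u_at = map_coordinates(u, [ys, xs], order=1, mode='nearest')
    v_at = map_coordinates(v, [ys, xs], order=1, mode='nearest')
    return map_coordinates(L_i, [ys + v_at, xs + u_at], order=1, mode='nearest')
```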


In a fourth possible approach, a feature vector 34 is derived from plural local image descriptors that are plural local image descriptors derived from both images of the stereoscopic pair. In that case, the displacement vector fields 37 are derived as shown in FIG. 7 as follows.


In step S4-3, an initial displacement vector field is derived representing a notional transformation of a notional image patch in a notional image having a notional camera location in a predetermined location relative to the camera locations of the images 31, in this example between the camera locations of the images 31. This may be thought of as a Cyclopean eye. This is done using the derived feature vector 34 to look up the reference data 35, which comprises reference displacement vector fields associated with possible values of the feature vector. This means that the reference data 35 is correspondingly structured, but may still be derived from training input that comprises monoscopic images.


In step S4-4, displacement vector fields 37 representing transformations of the image patches in each image 31 of the stereoscopic pair are derived. This is done by transforming the initial displacement vector field derived in step S4-3 in accordance with an estimate of the optical flow between the notional image patches in the notional images and the image patches in the images 31 of the stereoscopic pair.


The optical flow represents the effect of the different camera locations as between the notional image and the images 31 of the stereoscopic pair. Such an optical flow is known in itself and may be estimated using known techniques for example as disclosed in Zach et al., “A Duality Based Approach for Realtime TV-L1 Optical Flow”, Pattern Recognition (Proc. DAGM), 2007, pp. 214-223 (as cited above and incorporated herein by reference).


By way of example, if the optical flow from the left image L to the right image R is represented by a displacement vector field G given by

G={s(x,y),t(x,y)}

then the transformation deriving the notional image C is given by







C(x,y)=R(x-s(x,y)/2,y-t(x,y)/2)=L(x+s(x,y)/2,y+t(x,y)/2)







Thus, in this example, the initial displacement vector field F derived in step S4-3 for this notional image C is transformed in step S4-4 to derive the flow fields Frc and Flc for the right and left images in accordance with








Frc(x,y)={u(x,y)+s(x,y)/2,v(x,y)+t(x,y)/2}

Flc(x,y)={u(x,y)-s(x,y)/2,v(x,y)-t(x,y)/2}





Thus, in the fourth approach, the displacement vector fields 37 derived in respect of each image 31 of the stereoscopic pair are consistent, because only one displacement vector field is derived from the reference data 35, this representing a notional transformation of a notional image patch in a notional image, and the displacement vector fields for the left and right images are derived therefrom using a transformation which maintains consistency because it is derived in accordance with an estimate of the optical flow between the image patches in the notional image and in the images 31 of the stereoscopic pair.
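Expressed in code, the split of the notional field into the per-view fields is direct; the sketch below assumes the Cyclopean field (u, v) and the left-to-right flow (s, t) are arrays of equal shape, and the function name is illustrative.

```python
def split_cyclopean_field(u, v, s, t):
    """Derive per-view fields from the notional (Cyclopean) field,
    following Frc = {u + s/2, v + t/2} and Flc = {u - s/2, v - t/2}."""
    F_rc = (u + s / 2.0, v + t / 2.0)   # field for the right image
    F_lc = (u - s / 2.0, v - t / 2.0)   # field for the left image
    return F_lc, F_rc
```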


In step S5, each image 31 of the stereoscopic pair is adjusted by transforming the image patches containing the left and right eyes of the head in accordance with the derived displacement vector fields 37. This produces an adjusted stereoscopic pair of images 38 as shown in FIG. 4, in which the gaze has been corrected. In particular, the adjustment may be performed using two alternative methods, as follows.


A first method for performing step S5 is shown in FIG. 8 and performed as follows.


In step S5-1, the image patch is transformed in the normalized coordinate system in accordance with the corresponding displacement vector field 37 in respect of the same image, thereby correcting the gaze. As described above, for a displacement vector field F the transformation of the image patch of the input image I(x,y) provides the output image O(x,y) in accordance with

O(x,y)=I(x+u(x,y),y+v(x,y))


In step S5-2, the transformed image patch output from step S5-1 is transformed out of the normalized coordinate system, back into the original coordinate system of the corresponding image 31. This is done using the inverse transformation from that applied in step S2-2.


In step S5-3, the transformed image patch output from step S5-2 is superimposed on the corresponding image 31. This may be done with a full replacement within an eye region corresponding to the eye itself, and a smoothed transition between the transformed image patch and the original image 31 over a boundary region around the eye region. The width of the boundary region may be of fixed size or a percentage of the size of the image patch in the original image 31.
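A sketch of this superimposition, assuming the warped patch has already been mapped back to the original coordinate system and that `top` and `left` give its position in the full image; the feather width and names are assumptions standing in for the fixed or proportional boundary width mentioned above.

```python
import numpy as np

def superimpose_patch(image, warped_patch, top, left, feather=4):
    """Paste the gaze-corrected patch into the image: full replacement in
    the interior, linear blend with the original over a boundary region."""
    h, w = warped_patch.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w]
    # Distance of each pixel from the patch border, clipped to the feather width
    dist = np.minimum.reduce([yy, h - 1 - yy, xx, w - 1 - xx])
    alpha = np.clip(dist / float(feather), 0.0, 1.0)
    if warped_patch.ndim == 3:
        alpha = alpha[..., None]
    region = image[top:top + h, left:left + w].astype(float)
    blended = alpha * warped_patch + (1.0 - alpha) * region
    image[top:top + h, left:left + w] = blended.astype(image.dtype)
    return image
```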


A second method for performing step S5 is shown in FIG. 9 and performed as follows.


In this second, alternative method, the transformation back into the coordinate system of the corresponding image 31 occurs before the transformation of the image patch in accordance with the transformed displacement vector field F.


In step S5-4, the displacement vector field F is transformed out of the normalized coordinate system, back into the original coordinate system of the corresponding image 31. This is done using the inverse transformation from that applied in step S2-2.


In step S5-5, the image patch 32 in the coordinate system of the image 31 is transformed in accordance with the displacement vector field F that has been transformed into the same coordinate system in step S5-4. As described above, for a displacement vector field F the transformation of the image patch of the input image I(x,y) provides the output image O(x,y) in accordance with

O(x,y)=I(x+u(x,y),y+v(x,y))

but this is now performed in the coordinate system of the original image 31.


Step S5-6 is the same as S5-3. Thus, in step S5-6, the transformed image patch output from step S5-5 is superimposed on the corresponding image 31. This may be done with a full replacement within an eye region corresponding to the eye itself, and a smoothed transition between the transformed image patch and the original image 31 over a boundary region around the eye region. The width of the boundary region may be of fixed size or a percentage of the size of the image patch in the original image 31.


The displacement vector fields 37 used in step S5 will now be discussed.


One option is that the displacement vector fields 37 derived in step S4 in respect of the left and right images are used directly in step S5. That is, the image patch in respect of each image 31 of the stereoscopic pair is transformed in accordance with the displacement vector field 37 in respect of that image 31. This is appropriate if the displacement vector fields 37 are sufficiently accurate, for example because they have been derived from reference data 35 that is itself derived from stereoscopic imagery in accordance with the second approach described above.


An alternative option in accordance with a fifth approach is that a merged displacement vector field 39 is derived and used. This may be applied in combination with any of the first to fourth approaches discussed above. In this case, step S5 additionally includes step S5-a as shown in FIG. 10 which is performed before step S5-1 in the first method of FIG. 8 or before step S5-4 in the second method of FIG. 9. In step S5-a, a merged displacement vector field 39 is derived from the displacement vector fields 37 derived in step S4 in respect of the image patches in each image 31 of the stereoscopic pair.


The rest of step S5 is then performed using the merged displacement vector field 39 in respect of each image 31. That is, in the first method of FIG. 8, the image patch 33 in respect of each image 31 of the stereoscopic pair is transformed in step S5-1 in accordance with the merged displacement vector field 39. Similarly, in the second method of FIG. 9, in step S5-4 the merged displacement vector field 39 is transformed and in step S5-5 the image patch 33 in respect of each image 31 of the stereoscopic pair is transformed in accordance with that merged displacement vector field 39.


In this case, the displacement vector fields for each image are consistent because they are the same.


The merging in step S5-a may be performed in any suitable manner.


In one example, the merging in step S5-a may be a simple average of the displacement vector fields 37 derived in step S4.


In another example, the merging in step S5-a may be an average that is weighted by a confidence value associated with each derived displacement vector field 37. In this case, confidence values form part of the reference data 35, in the manner described above, and in step S4 the confidence values are derived from the reference data 35, together with the derived displacement vector field 37.


By way of example, denoting the derived displacement vector fields 37 as Fi, the merged displacement vector field 39 as Favg, and the confidence values as ai, the merged displacement vector field 39 may be derived as







Favg=(Σi aiFi)/(Σi ai)
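A short NumPy sketch of this confidence-weighted merge, assuming each per-view field is stored as an array of shape (2, h, w); with equal confidence values it reduces to the simple average mentioned earlier, and the function name is illustrative.

```python
import numpy as np

def merge_fields(fields, confidences):
    """Favg = sum(a_i * F_i) / sum(a_i) over the per-view fields."""
    F = np.stack([np.asarray(f, dtype=float) for f in fields])  # (n, 2, h, w)
    a = np.asarray(confidences, dtype=float)
    return np.tensordot(a, F, axes=(0, 0)) / a.sum()
```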







In the example described above, gaze is corrected for a destination observer 24 in an observer location that is a normal viewing position perpendicular to the center of the display location, for example as shown by the hard outline of the destination observer 24 in the case of the destination device 20 of FIG. 2. This is sufficient in many situations. However, there will now be described an optional modification which allows gaze to be corrected for a destination observer 24 in a different observer location, for example as shown by the dotted outline of the destination observer 24 in the case of the destination device 20 of FIG. 2.


In this case, the method further comprises using location data 40 representing the observer location relative to a display location of the stereoscopic pair of images 31. This location data 40 may be derived in the destination device 20, for example as described below. In that case, if the method is not performed in the destination device 20, then the location data 40 is transmitted to the device in which the method is performed.


The relative observer location may take into account the location of the observer with respect to the display 21. This may be determined using a camera system in the destination device 20 and an appropriate head tracking module to detect the location of the destination observer 24.


The relative observer location may assume that the image is displayed centrally on the display 21. Alternatively, the relative observer location may take into account both the location of the observer with respect to the display 21 and the location of the image displayed on the display 21. In this case, the location of the image displayed on the display 21 may be derived from the display geometry (for example, the position and area of a display window and the size of the display 21).


To account for different observer locations, the reference data 35 comprises plural sets of reference displacement vector fields, each set being associated with a different observer location. This is achieved by the training input to the machine learning technique comprising plural second sets of output images, each second set being images of each individual with correct gaze for a respective, predetermined observer location relative to a display location in which the image is to be displayed. Thus, in step S4, the displacement vector fields 37 are derived by looking up the set of reference displacement vector fields associated with the observer location represented by the location data.


As described above, the method may be implemented in an image processor 30 provided in various different devices. By way of non-limitative example, there will now be described a particular implementation in a telecommunications system which is shown in FIG. 11 and arranged as follows.


In this implementation, the source device 10 and the destination device 20 communicate over a telecommunications network 50. For communication over the telecommunications network 50, the source device 10 includes a telecommunications interface 17 and the destination device 20 includes a telecommunications interface 27.


In this implementation, the image processor 30 is provided in the source device 10 and is provided with the stereoscopic pair of images directly from the camera system 12. The telecommunications interface 17 is arranged to transmit the adjusted stereoscopic pair of images 38 over the telecommunications network 50 to the destination device 20 for display thereon.


The destination device 20 includes an image display module 28 that controls the display 21. The adjusted stereoscopic pair of images 38 are received in the destination device 20 by the telecommunications interface 27 and supplied to the image display module 28 which causes them to be displayed on the display 21.


The following elements of the destination device 20 are optionally included in the case that the method corrects gaze for a destination observer 24 in an observer location other than a normal viewing position perpendicular to the center of the display location. In this case, the destination device 20 includes a camera system 23 and an observer location module 29. The camera system 23 captures an image of the destination observer 24. The observer location module 29 derives the location data 40, and includes a head tracking module that uses the output of the camera system 23 to detect the location of the destination observer 24. Where the relative observer location also takes into account the location of the image displayed on the display 21, the observer location module 29 obtains the location of the image displayed on the display 21 from the image display module 28. The telecommunications interface 27 is arranged to transmit the location data 40 over the telecommunications network 50 to the source device 10 for use thereby.


Although the above description refers to a method applied to images supplied from a source device 10 to a destination device 20, the method may equally be applied to images supplied in the opposite direction from the destination device 20 to the source device 10, in which case the destination device 20 effectively becomes the “source device” and the source device 10 effectively becomes the “destination device”. Where images are supplied bi-directionally, the labels “source” and “destination” may be applied to both devices, depending on the direction of communication being considered.


While various embodiments in accordance with the principles disclosed herein have been described above, it should be understood that they have been presented by way of example only, and not limitation. Thus, the breadth and scope of this disclosure should not be limited by any of the above-described exemplary embodiments, but should be defined only in accordance with any claims and their equivalents issuing from this disclosure. Furthermore, the above advantages and features are provided in described embodiments, but shall not limit the application of such issued claims to processes and structures accomplishing any or all of the above advantages.


Additionally, the section headings herein are provided for consistency with the suggestions under 37 CFR 1.77 or otherwise to provide organizational cues. These headings shall not limit or characterize the embodiment(s) set out in any claims that may issue from this disclosure. Specifically and by way of example, although the headings refer to a “Technical Field,” the claims should not be limited by the language chosen under this heading to describe the so-called field. Further, a description of a technology in the “Background” is not to be construed as an admission that certain technology is prior art to any embodiment(s) in this disclosure. Neither is the “Summary” to be considered as a characterization of the embodiment(s) set forth in issued claims. Furthermore, any reference in this disclosure to “invention” in the singular should not be used to argue that there is only a single point of novelty in this disclosure. Multiple embodiments may be set forth according to the limitations of the multiple claims issuing from this disclosure, and such claims accordingly define the embodiment(s), and their equivalents, that are protected thereby. In all instances, the scope of such claims shall be considered on their own merits in light of this disclosure, but should not be constrained by the headings set forth herein.

Claims
  • 1. A method of adjusting multi-view images of a head to correct gaze, the method comprising capturing multi-view images with an electronic device having a camera system, the camera system being located in a position that is offset from a display of the electronic device; in each image, identifying image patches containing the left and right eyes of the head, respectively; in respect of the image patches containing the left eyes of the head in each image of the multi-view images, and also in respect of the image patches containing the right eyes of the head in each image of the multi-view images, performing the steps of: deriving a feature vector from plural local image descriptors of the image patch in at least one image of the multi-view images; and deriving a displacement vector field representing a transformation of an image patch, using the derived feature vector to look up reference data comprising reference displacement vector fields associated with possible values of the feature vector; adjusting each image of the multi-view images by transforming the image patches containing the left and right eyes of the head in accordance with the derived displacement vector field; and presenting the adjusted image on the display of the electronic device.
  • 2. A method according to claim 1, wherein said plural local image descriptors are plural local image descriptors in each image of the multi-view images, and said reference data comprises pairs of reference displacement vector fields for each image of the multi-view images, which pairs of reference displacement vector fields are associated with possible values of the feature vector.
  • 3. A method according to claim 1, wherein said plural local image descriptors are plural local image descriptors in one image of the multi-view images, said step of deriving displacement vector fields comprises: deriving a displacement vector field representing a transformation of the image patch in said one image of the multi-view images, using the derived feature vector to look up reference data comprising reference displacement vector fields associated with possible values of the feature vector; and deriving a displacement vector field representing a transformation of the image patch in the other multi-view image or images by transforming the derived displacement vector field representing a transformation of the image patch in said one image of the multi-view images in accordance with an estimate of the optical flow between the image patches in said one image of the multi-view images and the other multi-view image or images.
  • 4. A method according to claim 1, wherein said plural local image descriptors are plural local image descriptors in both images of the stereoscopic pair, said step of deriving displacement vector fields comprises: deriving an initial displacement vector field representing a notional transformation of a notional image patch in a notional image having a notional camera location relative to the camera locations of the multi-view images, using the derived feature vector to look up reference data comprising reference displacement vector fields associated with possible values of the feature vector; and deriving displacement vector fields representing a transformation of the image patches in each image of the multi-view images by transforming the initial displacement vector field in accordance with an estimate of the optical flows between the notional image patches in the notional images and the image patches in the images of the multi-view images.
  • 5. A method according to claim 4, wherein the multi-view images are a stereoscopic pair of images and the notional camera location is between the camera locations of the images of the stereoscopic pair.
  • 6. A method according to claim 1, wherein the step of deriving a displacement vector field comprises deriving displacement vector fields in respect of the image patches in each image of the multi-view images, and the step of transforming the image patches containing the left and right eyes of the head is performed in accordance with the displacement vector fields derived in respect of the image patches in each image of the multi-view images.
  • 7. A method according to claim 1, wherein the step of deriving a displacement vector field comprises deriving displacement vector fields in respect of the image patches in each image of the multi-view images, and further deriving a merged displacement vector field from the displacement vector fields derived in respect of the image patches in each image of the multi-view images, and the step of transforming the image patches containing the left and right eyes of the head is performed in accordance with the merged displacement vector field.
  • 8. A method according to claim 7, wherein the reference displacement vector fields are further associated with confidence values, the step of deriving displacement vector fields in respect of the image patches in each image of the multi-view images further comprises deriving a confidence value associated with each derived displacement vector field, and the merged displacement vector field is an average of the displacement vector fields derived in respect of the image patches in each image of the multi-view images weighted by the derived confidence values.
  • 9. A method according to claim 1, wherein the method uses location data representing an observer location relative to a display location of the multi-view images, said reference data comprises plural sets of reference displacement vector fields associated with possible values of the feature vector, which sets are associated with different observer locations, and said step of deriving displacement vector fields representing a transformation of an image patch is performed using the derived feature vector to look up the set of reference displacement vector fields associated with the observer location represented by the location data.
  • 10. A method according to claim 1, wherein the local image descriptors are local image descriptors derived in a normalized coordinate system, and the reference and derived displacement vector fields are displacement vector fields in the same normalized coordinate system.
  • 11. A method according to claim 1, wherein the feature vector comprises features that are values derived from the local image descriptors in a discrete space, the reference displacement vector fields comprise a reference displacement vector field associated with each possible value of the feature vector in the discrete space, and the step of deriving a displacement vector field for the image patch comprises selecting the reference displacement field associated with the actual value of the derived feature vector.
  • 12. A method according to claim 1, wherein the feature vector comprises features that are discrete values of the local image descriptors in a continuous space, and the step of deriving a displacement vector field for the image patch comprises interpolating a displacement vector field from the reference displacement vector fields based on the relationship between the actual value of the derived feature vector and the values of the feature vector associated with the reference displacement vector fields.
  • 13. A method according to claim 1, wherein the multi-view images are a stereoscopic pair of images.
  • 14. A non-transitory computer readable storage medium storing a computer program capable of execution by a processor and arranged on execution to perform or cause the processor to perform a method according to claim 1.
  • 15. A device comprising: a display; a camera system configured to capture multi-view images of a head, the camera system being located in a position offset from the display; an image processor arranged to process the multi-view images of a head by: in each image, identifying image patches containing the left and right eyes of the head, respectively; in respect of the image patches containing the left eyes of the head in each image of the multi-view images, and also in respect of the image patches containing the right eyes of the head in each image of the multi-view images, performing the steps of: deriving a feature vector from plural local image descriptors of the image patch in at least one image of the multi-view images; and deriving a displacement vector field representing a transformation of an image patch, using the derived feature vector to look up reference data comprising reference displacement vector fields associated with possible values of the feature vector; adjusting each image of the multi-view images by transforming the image patches containing the left and right eyes of the head in accordance with the derived displacement vector field; and presenting the adjusted image on the display.
  • 16. A device according to claim 15, further comprising a telecommunications interface arranged to transmit the adjusted images over a telecommunications network to a destination device for display thereon.
  • 17. A device according to claim 15, wherein the multi-view images are a stereoscopic pair of images.
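The method steps recited in claim 1, together with the discrete-feature-space variant of claim 11, describe a per-eye lookup-and-warp pipeline: plural local image descriptors of an eye patch are quantised into a feature vector, the feature vector is used to look up a reference displacement vector field produced by machine learning, and the patch is transformed in accordance with that field. The following Python sketch is purely illustrative and is not taken from the specification; the helper names (descriptor_fn, thresholds, reference_table) and the thresholding scheme used to discretise the descriptors are assumptions made for the example.

    import numpy as np
    from scipy.ndimage import map_coordinates

    def derive_feature_vector(patch, descriptor_fn, thresholds):
        """Quantise plural local image descriptors of a patch into a discrete feature vector.

        descriptor_fn is a hypothetical callable returning a 1-D array of local image
        descriptors for the patch; thresholds binarise each descriptor so that the
        resulting tuple can index a finite lookup table (the claim 11 variant).
        """
        descriptors = np.asarray(descriptor_fn(patch), dtype=float)
        return tuple((descriptors > np.asarray(thresholds, dtype=float)).astype(int))

    def look_up_displacement_field(feature_vector, reference_table):
        """Look up the reference displacement vector field associated with the feature vector.

        reference_table is assumed to map each possible discrete feature vector to an
        (H, W, 2) array of per-pixel (dy, dx) displacements produced offline by
        machine learning.
        """
        return reference_table[feature_vector]

    def warp_patch(patch, displacement_field):
        """Transform an eye patch in accordance with a displacement vector field."""
        h, w = patch.shape[:2]
        ys, xs = np.mgrid[0:h, 0:w].astype(float)
        sample_y = ys + displacement_field[..., 0]
        sample_x = xs + displacement_field[..., 1]
        if patch.ndim == 2:
            return map_coordinates(patch, [sample_y, sample_x], order=1, mode="nearest")
        # Warp each colour channel with the same displacement field.
        return np.stack(
            [map_coordinates(patch[..., c], [sample_y, sample_x], order=1, mode="nearest")
             for c in range(patch.shape[2])],
            axis=-1,
        )

In the claimed method this flow is performed once for the image patches containing the left eyes and once for those containing the right eyes, in each image of the multi-view images, after which the warped patches are written back and the adjusted images are presented on the display.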
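Claims 7 and 8 add that the displacement vector fields derived for the image patches in each image may be combined into a merged displacement vector field, with claim 8 specifying a confidence-weighted average. The sketch below is again illustrative only; the shapes of the inputs are assumptions, and the zero-weight fallback is not part of the claims.

    import numpy as np

    def merge_displacement_fields(fields, confidences):
        """Merge per-image displacement vector fields into a single field.

        fields is assumed to be a list of (H, W, 2) displacement vector fields, one per
        image of the multi-view images, and confidences a matching list of scalar
        confidence values looked up alongside the reference fields (claim 8).
        Returns the confidence-weighted average of the fields.
        """
        weights = np.asarray(confidences, dtype=float)
        if weights.sum() == 0:
            # Degenerate case: fall back to an unweighted average.
            weights = np.ones_like(weights)
        weights = weights / weights.sum()
        stacked = np.stack(fields, axis=0)             # (N, H, W, 2)
        return np.tensordot(weights, stacked, axes=1)  # (H, W, 2)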
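Claim 12 covers the continuous-feature-space variant, in which a displacement vector field is interpolated from the reference displacement vector fields according to the relationship between the derived feature vector and the feature vectors associated with the reference fields. The claim does not fix a particular interpolation scheme; the sketch below uses inverse-distance weighting purely as one plausible illustration, and all names and shapes are assumptions.

    import numpy as np

    def interpolate_displacement_field(feature_vector, reference_features, reference_fields, eps=1e-8):
        """Interpolate a displacement vector field from reference fields (claim 12 variant).

        feature_vector: (D,) derived feature vector in a continuous space.
        reference_features: (N, D) feature vectors associated with the reference fields.
        reference_fields: (N, H, W, 2) reference displacement vector fields.
        """
        f = np.asarray(feature_vector, dtype=float)
        refs = np.asarray(reference_features, dtype=float)
        distances = np.linalg.norm(refs - f, axis=1)
        weights = 1.0 / (distances + eps)   # inverse-distance weights
        weights = weights / weights.sum()
        return np.tensordot(weights, np.asarray(reference_fields, dtype=float), axes=1)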
US Referenced Citations (344)
Number Name Date Kind
1128979 Hess Feb 1915 A
1970311 Ives Aug 1934 A
2133121 Stearns Oct 1938 A
2247969 Lemuel Jul 1941 A
2480178 Zinberg Aug 1949 A
2810905 Barlow Oct 1957 A
3409351 Winnek Nov 1968 A
3715154 Bestenreiner Feb 1973 A
4057323 Ward Nov 1977 A
4528617 Blackington Jul 1985 A
4542958 Young Sep 1985 A
4804253 Stewart Feb 1989 A
4807978 Grinberg et al. Feb 1989 A
4829365 Eichenlaub May 1989 A
4914553 Hamada et al. Apr 1990 A
5278608 Taylor et al. Jan 1994 A
5347644 Sedlmayr Sep 1994 A
5349419 Taguchi et al. Sep 1994 A
5459592 Shibatani et al. Oct 1995 A
5466926 Sasano et al. Nov 1995 A
5510831 Mayhew Apr 1996 A
5528720 Winston et al. Jun 1996 A
5581402 Taylor Dec 1996 A
5588526 Fantone et al. Dec 1996 A
5697006 Taguchi et al. Dec 1997 A
5703667 Ochiai Dec 1997 A
5727107 Umemoto et al. Mar 1998 A
5771066 Barnea Jun 1998 A
5796451 Kim Aug 1998 A
5808792 Woodgate et al. Sep 1998 A
5850580 Taguchi et al. Dec 1998 A
5875055 Morishima et al. Feb 1999 A
5896225 Chikazawa Apr 1999 A
5903388 Sedlmayr May 1999 A
5933276 Magee Aug 1999 A
5956001 Sumida et al. Sep 1999 A
5959664 Woodgate Sep 1999 A
5959702 Goodman Sep 1999 A
5969850 Harrold et al. Oct 1999 A
5971559 Ishikawa et al. Oct 1999 A
6008484 Woodgate et al. Dec 1999 A
6014164 Woodgate et al. Jan 2000 A
6023315 Harrold et al. Feb 2000 A
6044196 Winston et al. Mar 2000 A
6055013 Woodgate et al. Apr 2000 A
6061179 Inoguchi et al. May 2000 A
6061489 Ezra et al. May 2000 A
6064424 Berkel et al. May 2000 A
6075557 Holliman et al. Jun 2000 A
6094216 Taniguchi et al. Jul 2000 A
6108059 Yang Aug 2000 A
6118584 Berkel et al. Sep 2000 A
6128054 Schwarzenberger Oct 2000 A
6144118 Cahill et al. Nov 2000 A
6172723 Inoue et al. Jan 2001 B1
6199995 Umemoto et al. Mar 2001 B1
6219113 Takahara Apr 2001 B1
6224214 Martin et al. May 2001 B1
6232592 Sugiyama May 2001 B1
6256447 Laine Jul 2001 B1
6262786 Perlo et al. Jul 2001 B1
6283858 Hayes, Jr. et al. Sep 2001 B1
6295109 Kubo et al. Sep 2001 B1
6302541 Grossmann Oct 2001 B1
6305813 Lekson et al. Oct 2001 B1
6335999 Winston et al. Jan 2002 B1
6373637 Gulick et al. Apr 2002 B1
6377295 Woodgate et al. Apr 2002 B1
6422713 Fohl et al. Jul 2002 B1
6456340 Margulis Sep 2002 B1
6464365 Gunn et al. Oct 2002 B1
6476850 Erbey Nov 2002 B1
6481849 Martin et al. Nov 2002 B2
6654156 Crossland et al. Nov 2003 B1
6663254 Ohsumi Dec 2003 B2
6724452 Takeda et al. Apr 2004 B1
6731355 Miyashita May 2004 B2
6736512 Balogh May 2004 B2
6801243 Berkel Oct 2004 B1
6816158 Lemelson et al. Nov 2004 B1
6825985 Brown et al. Nov 2004 B2
6847354 Vranish Jan 2005 B2
6847488 Travis Jan 2005 B2
6859240 Brown et al. Feb 2005 B1
6867828 Taira et al. Mar 2005 B2
6870671 Travis Mar 2005 B2
6883919 Travis Apr 2005 B2
7052168 Epstein et al. May 2006 B2
7058252 Woodgate et al. Jun 2006 B2
7073933 Gotoh et al. Jul 2006 B2
7091931 Yoon Aug 2006 B2
7101048 Travis Sep 2006 B2
7136031 Lee et al. Nov 2006 B2
7215391 Kuan et al. May 2007 B2
7215415 Maehara et al. May 2007 B2
7215475 Woodgate et al. May 2007 B2
7239293 Perlin et al. Jul 2007 B2
7365908 Dolgoff Apr 2008 B2
7375886 Lipton et al. May 2008 B2
7410286 Travis Aug 2008 B2
7430358 Qi et al. Sep 2008 B2
7492346 Manabe et al. Feb 2009 B2
7528893 Schultz et al. May 2009 B2
7545429 Travis Jun 2009 B2
7587117 Winston et al. Sep 2009 B2
7614777 Koganezawa et al. Nov 2009 B2
7660047 Travis et al. Feb 2010 B1
7750981 Shestak et al. Jul 2010 B2
7750982 Nelson et al. Jul 2010 B2
7771102 Iwasaki Aug 2010 B2
7944428 Travis May 2011 B2
7970246 Travis et al. Jun 2011 B2
7976208 Travis Jul 2011 B2
3016475 Travis Sep 2011 A1
8216405 Emerton et al. Jul 2012 B2
8223296 Lee et al. Jul 2012 B2
8251562 Kuramitsu et al. Aug 2012 B2
8325295 Sugita et al. Dec 2012 B2
8354806 Travis et al. Jan 2013 B2
8477261 Travis et al. Jul 2013 B2
8502253 Min Aug 2013 B2
8534901 Panagotacos et al. Sep 2013 B2
8556491 Lee Oct 2013 B2
8651725 Ie et al. Feb 2014 B2
8714804 Kim et al. May 2014 B2
8752995 Park Jun 2014 B2
8942434 Karakotsios et al. Jan 2015 B1
9197884 Lee et al. Nov 2015 B2
9224060 Ramaswamy Dec 2015 B1
9224248 Ye et al. Dec 2015 B2
9350980 Robinson et al. May 2016 B2
9378574 Kim et al. Jun 2016 B2
9740282 McInerny Aug 2017 B1
9872007 Woodgate et al. Jan 2018 B2
9986812 Yamanashi et al. Jun 2018 B2
20010001566 Moseley et al. May 2001 A1
20010050686 Allen Dec 2001 A1
20020013691 Warnes Jan 2002 A1
20020018299 Daniell Feb 2002 A1
20020113246 Nagai et al. Aug 2002 A1
20020113866 Taniguchi et al. Aug 2002 A1
20030046839 Oda et al. Mar 2003 A1
20030117790 Lee et al. Jun 2003 A1
20030133191 Morita et al. Jul 2003 A1
20030137738 Ozawa et al. Jul 2003 A1
20030137821 Gotoh et al. Jul 2003 A1
20030197779 Zhang et al. Oct 2003 A1
20030218672 Zhang et al. Nov 2003 A1
20040008877 Leppard et al. Jan 2004 A1
20040021809 Sumiyoshi et al. Feb 2004 A1
20040042233 Suzuki et al. Mar 2004 A1
20040046709 Yoshino Mar 2004 A1
20040105264 Spero Jun 2004 A1
20040108971 Waldern et al. Jun 2004 A1
20040109303 Olczak Jun 2004 A1
20040135741 Tomisawa et al. Jul 2004 A1
20040170011 Kim et al. Sep 2004 A1
20040263968 Kobayashi et al. Dec 2004 A1
20040263969 Lipton et al. Dec 2004 A1
20050007753 Hees et al. Jan 2005 A1
20050053274 Mayer et al. Mar 2005 A1
20050094295 Yamashita et al. May 2005 A1
20050104878 Kaye et al. May 2005 A1
20050110980 Maehara et al. May 2005 A1
20050135116 Epstein et al. Jun 2005 A1
20050174768 Conner Aug 2005 A1
20050180167 Hoelen et al. Aug 2005 A1
20050190345 Dubin et al. Sep 2005 A1
20050237488 Yamasaki et al. Oct 2005 A1
20050254127 Evans et al. Nov 2005 A1
20050264717 Chien et al. Dec 2005 A1
20050274956 Bhat Dec 2005 A1
20050276071 Sasagawa et al. Dec 2005 A1
20050280637 Ikeda et al. Dec 2005 A1
20060012845 Edwards Jan 2006 A1
20060056166 Yeo et al. Mar 2006 A1
20060114664 Sakata et al. Jun 2006 A1
20060132423 Travis Jun 2006 A1
20060139447 Unkrich Jun 2006 A1
20060158729 Vissenberg et al. Jul 2006 A1
20060176912 Anikitchev Aug 2006 A1
20060203200 Koide Sep 2006 A1
20060215129 Alasaarela et al. Sep 2006 A1
20060221642 Daiku Oct 2006 A1
20060227427 Dolgoff Oct 2006 A1
20060244918 Cossairt et al. Nov 2006 A1
20060250580 Silverstein et al. Nov 2006 A1
20060262376 Mather et al. Nov 2006 A1
20060269213 Hwang et al. Nov 2006 A1
20060284974 Lipton et al. Dec 2006 A1
20060291053 Robinson et al. Dec 2006 A1
20060291243 Niioka et al. Dec 2006 A1
20070008406 Shestak et al. Jan 2007 A1
20070013624 Bourhill Jan 2007 A1
20070019882 Tanaka et al. Jan 2007 A1
20070025680 Winston et al. Feb 2007 A1
20070035706 Margulis Feb 2007 A1
20070035829 Woodgate et al. Feb 2007 A1
20070035964 Olczak Feb 2007 A1
20070081110 Lee Apr 2007 A1
20070085105 Beeson et al. Apr 2007 A1
20070109401 Lipton et al. May 2007 A1
20070115551 Spilman et al. May 2007 A1
20070115552 Robinson et al. May 2007 A1
20070153160 Lee et al. Jul 2007 A1
20070183466 Son et al. Aug 2007 A1
20070188667 Schwerdtner Aug 2007 A1
20070189701 Chakmakjian et al. Aug 2007 A1
20070223252 Lee et al. Sep 2007 A1
20070244606 Zhang et al. Oct 2007 A1
20080079662 Saishu et al. Apr 2008 A1
20080084519 Brigham et al. Apr 2008 A1
20080086289 Brott Apr 2008 A1
20080128728 Nemchuk et al. Jun 2008 A1
20080225205 Travis Sep 2008 A1
20080259012 Fergason Oct 2008 A1
20080291359 Miyashita Nov 2008 A1
20080297431 Yuuki et al. Dec 2008 A1
20080297459 Sugimoto et al. Dec 2008 A1
20080304282 Mi et al. Dec 2008 A1
20080316768 Travis Dec 2008 A1
20090014700 Metcalf et al. Jan 2009 A1
20090016057 Rinko Jan 2009 A1
20090040426 Mather et al. Feb 2009 A1
20090067156 Bonnett et al. Mar 2009 A1
20090135623 Kunimochi May 2009 A1
20090140656 Kohashikawa et al. Jun 2009 A1
20090160757 Robinson Jun 2009 A1
20090167651 Benitez et al. Jul 2009 A1
20090174700 Daiku Jul 2009 A1
20090190072 Nagata et al. Jul 2009 A1
20090190079 Saitoh Jul 2009 A1
20090225380 Schwerdtner et al. Sep 2009 A1
20090278936 Pastoor et al. Nov 2009 A1
20090290203 Schwerdtner Nov 2009 A1
20100034987 Fujii et al. Feb 2010 A1
20100040280 McKnight Feb 2010 A1
20100053771 Travis et al. Mar 2010 A1
20100091093 Robinson Apr 2010 A1
20100091254 Travis et al. Apr 2010 A1
20100165598 Chen et al. Jul 2010 A1
20100177387 Travis et al. Jul 2010 A1
20100182542 Nakamoto et al. Jul 2010 A1
20100188438 Kang Jul 2010 A1
20100188602 Feng Jul 2010 A1
20100214135 Bathiche et al. Aug 2010 A1
20100220260 Sugita et al. Sep 2010 A1
20100231498 Large et al. Sep 2010 A1
20100277575 Ismael et al. Nov 2010 A1
20100278480 Vasylyev Nov 2010 A1
20100289870 Leister Nov 2010 A1
20100295920 McGowan Nov 2010 A1
20100295930 Ezhov Nov 2010 A1
20100300608 Emerton et al. Dec 2010 A1
20100302135 Larson et al. Dec 2010 A1
20100309296 Harrold et al. Dec 2010 A1
20100321953 Coleman et al. Dec 2010 A1
20110013417 Saccomanno et al. Jan 2011 A1
20110019112 Dolgoff Jan 2011 A1
20110032483 Hruska et al. Feb 2011 A1
20110032724 Kinoshita Feb 2011 A1
20110043142 Travis et al. Feb 2011 A1
20110043501 Daniel Feb 2011 A1
20110044056 Travis et al. Feb 2011 A1
20110044579 Travis et al. Feb 2011 A1
20110051237 Hasegawa et al. Mar 2011 A1
20110187293 Travis Aug 2011 A1
20110187635 Lee et al. Aug 2011 A1
20110188120 Tabirian et al. Aug 2011 A1
20110199460 Gallagher Aug 2011 A1
20110216266 Travis Sep 2011 A1
20110221998 Adachi et al. Sep 2011 A1
20110228183 Hamagishi Sep 2011 A1
20110235359 Liu et al. Sep 2011 A1
20110242150 Song et al. Oct 2011 A1
20110242277 Do et al. Oct 2011 A1
20110242298 Bathiche et al. Oct 2011 A1
20110255303 Nichol et al. Oct 2011 A1
20110285927 Schultz et al. Nov 2011 A1
20110292321 Travis et al. Dec 2011 A1
20110310232 Wilson et al. Dec 2011 A1
20120002136 Nagata et al. Jan 2012 A1
20120002295 Dobschal et al. Jan 2012 A1
20120008067 Mun et al. Jan 2012 A1
20120013720 Kadowaki et al. Jan 2012 A1
20120062991 Mich et al. Mar 2012 A1
20120063166 Panagotacos et al. Mar 2012 A1
20120075285 Oyagi et al. Mar 2012 A1
20120081920 Ie et al. Apr 2012 A1
20120086776 Lo Apr 2012 A1
20120105486 Lankford et al. May 2012 A1
20120106193 Kim et al. May 2012 A1
20120114201 Luisi et al. May 2012 A1
20120127573 Robinson et al. May 2012 A1
20120154450 Aho et al. Jun 2012 A1
20120162966 Kim et al. Jun 2012 A1
20120169838 Sekine Jul 2012 A1
20120206050 Spero Aug 2012 A1
20120219180 Mehra Aug 2012 A1
20120223956 Saito et al. Sep 2012 A1
20120236133 Gallagher Sep 2012 A1
20120236484 Miyake Sep 2012 A1
20120243204 Robinson Sep 2012 A1
20120243261 Yamamoto et al. Sep 2012 A1
20120293721 Ueyama Nov 2012 A1
20120299913 Robinson et al. Nov 2012 A1
20120314145 Robinson Dec 2012 A1
20120319928 Rhodes Dec 2012 A1
20130070046 Wolf et al. Mar 2013 A1
20130076853 Diao Mar 2013 A1
20130101253 Popovich et al. Apr 2013 A1
20130107340 Wong et al. May 2013 A1
20130127861 Gollier May 2013 A1
20130135588 Popovich et al. May 2013 A1
20130156265 Hennessy Jun 2013 A1
20130163659 Sites Jun 2013 A1
20130169701 Whitehead et al. Jul 2013 A1
20130294684 Lipton et al. Nov 2013 A1
20130307831 Robinson et al. Nov 2013 A1
20130307946 Robinson et al. Nov 2013 A1
20130321599 Harrold et al. Dec 2013 A1
20130328866 Woodgate et al. Dec 2013 A1
20130335821 Robinson et al. Dec 2013 A1
20140002586 Nourbakhsh Jan 2014 A1
20140009508 Woodgate et al. Jan 2014 A1
20140016871 Son et al. Jan 2014 A1
20140022619 Woodgate et al. Jan 2014 A1
20140036361 Woodgate et al. Feb 2014 A1
20140043323 Sumi Feb 2014 A1
20140126238 Kao et al. May 2014 A1
20140240828 Robinson et al. Aug 2014 A1
20140267584 Atzpadin et al. Sep 2014 A1
20140340728 Taheri Nov 2014 A1
20140368602 Woodgate et al. Dec 2014 A1
20150077526 Kim Mar 2015 A1
20150269737 Lam Sep 2015 A1
20150339512 Son et al. Nov 2015 A1
20160125227 Soare et al. May 2016 A1
20160196465 Wu et al. Jul 2016 A1
20160219258 Woodgate et al. Jul 2016 A1
20170134720 Park May 2017 A1
20170195662 Sommerlade et al. Jul 2017 A1
20170364149 Lu et al. Dec 2017 A1
20180035886 Courtemanche et al. Feb 2018 A1
Foreign Referenced Citations (91)
Number Date Country
1142869 Feb 1997 CN
1377453 Oct 2002 CN
1454329 Nov 2003 CN
1466005 Jan 2004 CN
1487332 Apr 2004 CN
1696788 Nov 2005 CN
1823292 Aug 2006 CN
1826553 Aug 2006 CN
1866112 Nov 2006 CN
2872404 Feb 2007 CN
1307481 Mar 2007 CN
101029975 Sep 2007 CN
101049028 Oct 2007 CN
200983052 Nov 2007 CN
101114080 Jan 2008 CN
101142823 Mar 2008 CN
100449353 Jan 2009 CN
101364004 Feb 2009 CN
101598863 Dec 2009 CN
100591141 Feb 2010 CN
101660689 Mar 2010 CN
102147079 Aug 2011 CN
202486493 Oct 2012 CN
1910399 May 2013 CN
0653891 May 1995 EP
0721131 Jul 1996 EP
0830984 Mar 1998 EP
0833183 Apr 1998 EP
0860729 Aug 1998 EP
0939273 Sep 1999 EP
0656555 Mar 2003 EP
2003394 Dec 2008 EP
1394593 Jun 2010 EP
2451180 May 2012 EP
1634119 Aug 2012 EP
2405542 Feb 2005 GB
H08211334 Aug 1996 JP
H08237691 Sep 1996 JP
H08254617 Oct 1996 JP
H08070475 Dec 1996 JP
H08340556 Dec 1996 JP
2000048618 Feb 2000 JP
2000200049 Jul 2000 JP
2001093321 Apr 2001 JP
2001281456 Oct 2001 JP
2002049004 Feb 2002 JP
2003215349 Jul 2003 JP
2003215705 Jul 2003 JP
2004319364 Nov 2004 JP
2005116266 Apr 2005 JP
2005135844 May 2005 JP
2005183030 Jul 2005 JP
2005259361 Sep 2005 JP
2006004877 Jan 2006 JP
2006031941 Feb 2006 JP
2006310269 Nov 2006 JP
2007-109255 Apr 2007 JP
H3968742 Aug 2007 JP
2007273288 Oct 2007 JP
2007286652 Nov 2007 JP
2008204874 Sep 2008 JP
2010160527 Jul 2010 JP
20110216281 Oct 2011 JP
2013015619 Jan 2013 JP
2013502693 Jan 2013 JP
2013540083 Oct 2013 JP
20030064258 Jul 2003 KR
20090932304 Dec 2009 KR
20110006773 Jan 2011 KR
20110017918 Feb 2011 KR
20110067534 Jun 2011 KR
20120048301 May 2012 KR
20120049890 May 2012 KR
20130002646 Jan 2013 KR
20140139730 Dec 2014 KR
2005028780 Sep 2005 TW
1994006249 Apr 1994 WO
1995020811 Aug 1995 WO
1995027915 Oct 1995 WO
1998021620 May 1998 WO
1999011074 Mar 1999 WO
2001027528 Apr 2001 WO
2001061241 Aug 2001 WO
2001079923 Oct 2001 WO
2011020962 Feb 2011 WO
2011022342 Feb 2011 WO
2011068907 Jun 2011 WO
2011148366 Dec 2011 WO
2011149739 Dec 2011 WO
2012158574 Nov 2012 WO
2016132148 Aug 2016 WO
Non-Patent Literature Citations (134)
Entry
3M™ ePrivacy Filter software professional version; http://www.cdw.com/shop/products/3M-ePrivacy-Filter-software-professional-version/3239412.aspx?cm_mmc=ShoppingFeeds-_-ChannelIntelligence-_-Software-_-3239412_3MT%20ePrivacy%20Filter%20software%20professional%20version_3MF-EPFPRO&cpncode=37-7582919&srccode=cii_10191459#PO; Copyright 2007-2016.
AU-2011329639 Australia Patent Examination Report No. 1 dated Mar. 6, 2014.
AU-2013262869 Australian Office Action of Australian Patent Office dated Feb. 22, 2016.
AU-2015258258 Australian Office Action of Australian Patent Office dated Jun. 9, 2016.
Bahadur, “Liquid crystals applications and uses,” World Scientific, vol. 1, pp. 178 (1990).
CA-2817044 Canadian office action dated Jul. 14, 2016.
CN-201180065590.0 Office first action dated Dec. 31, 2014.
CN-201180065590.0 Office second action dated Oct. 21, 2015.
CN-201180065590.0 Office Third action dated Jun. 6, 2016.
CN-201280034488.9 2d Office Action from the State Intellectual Property Office of P.R. China dated Mar. 22, 2016.
CN-201280034488.9 1st Office Action from the State Intellectual Property Office of P.R. China dated Jun. 11, 2015.
CN-201380026045.X Chinese First Office Action of Chinese Patent Office dated Aug. 29, 2016.
CN-201380026046.4 Chinese 1st Office Action of the State Intellectual Property Office of P.R. China dated Oct. 24, 2016.
CN-201380026047.9 Chinese 1st Office Action of the State Intellectual Property Office of P.R. dated Dec. 18, 2015.
CN-201380026047.9 Chinese 2d Office Action of the State Intellectual Property Office of P.R. dated Jul. 12, 2016.
CN-201380026050.0 Chinese 1st Office Action of the State Intellectual Property Office of P.R. dated Jun. 3, 2016.
CN-201380026058.7 Chinese 1st Office Action of the State Intellectual Property Office of P.R. China dated Nov. 2, 2016.
CN-201380026059.1 Chinese 1st Office Action of the State Intellectual Property Office of P.R. dated Apr. 25, 2016.
CN-201380026076.5 Office first action dated May 11, 2016.
CN-201380049451.8 Chinese Office Action of the State Intellectual Property Office of P.R. dated Apr. 5, 2016.
CN-201380063047.6 Chinese Office Action of the State Intellectual Property Office of P.R. China dated Oct. 9, 2016.
CN-201380063055.0 Chinese 1st Office Action of the State Intellectual Property Office of P.R. dated Jun. 23, 2016.
CN-201480023023.2 Office action dated Aug. 12, 2016.
EP-07864751.8 European Search Report dated Jun. 1, 2012.
EP-07864751.8 Supplementary European Search Report dated May 29, 2015.
EP-09817048.3 European Search Report dated Apr. 29, 2016.
EP-11842021.5 Office Action dated Dec. 17, 2014.
EP-11842021.5 Office Action dated Oct. 2, 2015.
EP-11842021.5 Office Action dated Sep. 2, 2016.
EP-13758536.0 European Extended Search Report of European Patent Office dated Feb. 4, 2016.
EP-13790013.0 European Extended Search Report of European Patent Office dated Jan. 26, 2016.
EP-13790141.9 European Extended Search Report of European Patent Office dated Feb. 11, 2016.
EP-13790195.5 European Extended Search Report of European Patent Office dated Mar. 2, 2016.
EP-13790267.2 European Extended Search Report of European Patent Office dated Feb. 25, 2016.
EP-13790274.8 European Extended Search Report of European Patent Office dated Feb. 8, 2016.
EP-13790775.4 European Extended Search Report of European Patent Office dated Oct. 9, 2015.
EP-13790775.4 Office Action dated Aug. 29, 2016.
EP-13790809.1 European Extended Search Report of European Patent Office dated Feb. 16, 2016.
EP-13790942.0 European Extended Search Report of European Patent Office dated May 23, 2016.
EP-13791332.3 European Extended Search Report of European Patent Office dated Feb. 1, 2016.
EP-13791437.0 European Extended Search Report of European Patent Office dated Oct. 14, 2015.
EP-13791437.0 European first office action dated Aug. 30, 2016.
EP-13822472.0 European Extended Search Report of European Patent Office dated Mar. 2, 2016.
EP-13843659.7 European Extended Search Report of European Patent Office dated May 10, 2016.
EP-13844510.1 European Extended Search Report of European Patent Office dated May 13, 2016.
EP-13865893.5 European Extended Search Report of European Patent Office dated Oct. 6, 2016.
EP-14754859.8 European Extended Search Report of European Patent Office dated Oct. 14, 2016.
EP-16150248.9 European Extended Search Report of European Patent Office dated Jun. 16, 2016.
Ian Sexton et al: "Stereoscopic and autostereoscopic display-systems", IEEE Signal Processing Magazine, May 1, 1999 (May 1, 1999), pp. 85-99, XP055305471, Retrieved from the Internet: URL:http://ieeexplore.ieee.org/iel5/79/16655/00768575.pdf [retrieved on Sep. 26, 2016].
JP-2009538527 Reasons for rejection dated Jul. 17, 2012 with translation.
JP-200980150139.1 1st Office Action dated Feb. 11, 2014.
JP-200980150139.1 2d Office Action dated Apr. 5, 2015.
JP-2013540083 Notice of reasons for rejection dated Jun. 30, 2015.
JP-2013540083 Notice of reasons for rejection with translation dated Jun. 21, 2016.
Kalantar, et al. “Backlight Unit With Double Surface Light Emission,” J. Soc. Inf. Display, vol. 12, Issue 4, pp. 379-387 (Dec. 2004).
KR-20117010839 1st Office action (translated) dated Aug. 28, 2015.
KR-20117010839 2d Office action (translated) dated Apr. 28, 2016.
KR-20137015775 Office action (translated) dated Oct. 18, 2016.
Languy et al., “Performance comparison of four kinds of flat nonimaging Fresnel lenses made of polycarbonates and polymethyl methacrylate for concentrated photovoltaics”, Optics Letters, 36, pp. 2743-2745.
Lipton, “Stereographics: Developers' Handbook”, Stereographic Developers Handbook, Jan. 1, 1997, XP002239311, p. 42-49.
Marjanovic, M.,“Interlace, Interleave, and Field Dominance,” http://www.mir.com/DMG/interl.html, pp. 1-5 (2001).
PCT/US2007/85475 International preliminary report on patentability dated May 26, 2009.
PCT/US2007/85475 International search report and written opinion dated Apr. 10, 2008.
PCT/US2009/060686 international preliminary report on patentability dated Apr. 19, 2011.
PCT/US2009/060686 international search report and written opinion of international searching authority dated Dec. 10, 2009.
PCT/US2011/061511 International Preliminary Report on Patentability dated May 21, 2013.
PCT/US2011/061511 International search report and written opinion of international searching authority dated Jun. 29, 2012.
PCT/US2012/037677 International search report and written opinion of international searching authority dated Jun. 29, 2012.
PCT/US2012/042279 International search report and written opinion of international searching authority dated Feb. 26, 2013.
PCT/US2012/052189 International search report and written opinion of the international searching authority dated Jan. 29, 2013.
PCT/US2013/041192 International search report and written opinion of international searching authority dated Aug. 28, 2013.
PCT/US2013/041228 International search report and written opinion of international searching authority dated Aug. 23, 2013.
PCT/US2013/041235 International search report and written opinion of international searching authority dated Aug. 23, 2013.
PCT/US2013/041237 International search report and written opinion of international searching authority dated May 15, 2013.
PCT/US2013/041548 International search report and written opinion of international searching authority dated Aug. 27, 2013
PCT/US2013/041619 International search report and written opinion of international searching authority dated Aug. 27, 2013.
PCT/US2013/041655 International search report and written opinion of international searching authority dated Aug. 27, 2013.
PCT/US2013/041683 International search report and written opinion of international searching authority dated Aug. 27, 2013.
PCT/US2013/041697 International search report and written opinion of international searching authority dated Aug. 23, 2013.
PCT/US2013/041703 International search report and written opinion of international searching authority dated Aug. 27, 2013.
PCT/US2013/049969 International search report and written opinion of international searching authority dated Oct. 23, 2013.
PCT/US2013/063125 International search report and written opinion of international searching authority dated Jan. 20, 2014.
PCT/US2013/063133 International search report and written opinion of international searching authority dated Jan. 20, 2014.
PCT/US2013/077288 International search report and written opinion of international searching authority dated Apr. 18, 2014.
PCT/US2014/017779 International search report and written opinion of international searching authority dated May 28, 2014.
PCT/US2014/042721 International search report and written opinion of international searching authority dated Oct. 10, 2014.
PCT/US2014/057860 International Preliminary Report on Patentability dated Apr. 5, 2016.
PCT/US2014/057860 International search report and written opinion of international searching authority dated Jan. 5, 2015.
PCT/US2014/060312 International search report and written opinion of international searching authority dated Jan. 19, 2015.
PCT/US2014/060368 International search report and written opinion of international searching authority dated Jan. 14, 2015.
PCT/US2014/065020 International search report and written opinion of international searching authority dated May 21, 2015.
PCT/US2015/000327 International search report and written opinion of international searching authority dated Apr. 25, 2016.
PCT/US2015/021583 International search report and written opinion of international searching authority dated Sep. 10, 2015.
PCT/US2015/038024 International search report and written opinion of international searching authority dated Dec. 30, 2015.
PCT/US2016/027297 International search report and written opinion of international searching authority dated Jul. 26, 2017.
PCT/US2016/027350 International search report and written opinion of the international searching authority dated Jul. 25, 2016.
PCT/US2016/034418 International search report and written opinion of the international searching authority dated Sep. 7, 2016.
Robinson et al., U.S. Appl. No. 14/751,878 entitled “Directional privacy display” filed Jun. 26, 2015. The application is available to Examiner on the USPTO database and has not been filed herewith.
Robinson et al., U.S. Appl. No. 15/097,750 entitled “Wide angle imaging directional backlights” filed Apr. 13, 2016. The application is available to Examiner on the USPTO database and has not been filed herewith.
Robinson et al., U.S. Appl. No. 15/098,084 entitled “Wide angle imaging directional backlights” filed Apr. 13, 2016. The application is available to Examiner on the USPTO database and has not been filed herewith.
International Search Report and Written Opinion dated Oct. 16, 2018 in International Patent Application No. PCT/US18/45648.
Saffari et al., “On-line Random Forests”, 3rd IEEE ICCV Workshop, On-line Computer Vision, 2009.
Sahoo et al., “Online Deep Learning: Learning Deep Neural Networks on the Fly”, School of Information Systems, Singapore Management University (https://arxiv.org/abs/1711.03705), 2017, pp. 1-9.
Yang, “Multi-scale recognition with DAG-CNNs”, ICCV 2015.
Robinson et al., U.S. Appl. No. 15/165,960 entitled “Wide Angle Imaging Directional Backlights” filed May 26, 2016. The application is available to Examiner on the USPTO database and has not been filed herewith.
Robinson et al., U.S. Appl. No. 15/290,543 entitled “Wide angle imaging directional backlights” filed Oct. 11, 2016. The application is available to Examiner on the USPTO database and has not been filed herewith.
Robinson, U.S. Appl. No. 13/300,293 entitled “Directional flat illuminators” filed Nov. 18, 2011. The application is available to Examiner on the USPTO database and has not been filed herewith.
RU-2013122560 First office action dated Jan. 1, 2014.
RU-2013122560 Second office action dated Apr. 10, 2015.
Tabiryan et al., “The Promise of Diffractive Waveplates,” Optics and Photonics News, vol. 21, Issue 3, pp. 40-45 (Mar. 2010).
Travis, et al. “Backlight for view-sequential autostereo 3D”, Microsoft E&DD Applied Sciences, (date unknown), 25 pages.
Travis, et al. “Collimated light from a waveguide for a display,” Optics Express, vol. 17, No. 22, pp. 19714-19719 (2009).
Williams S P et al., “New Computational Control Techniques and Increased Understanding for Stereo 3-D Displays”, Proceedings of SPIE, SPIE, US, vol. 1256, Jan. 1, 1990, XP000565512, p. 75, 77, 79.
Viola and Jones, “Rapid Object Detection using a Boosted Cascade of Simple Features”, CVPR 2001, p. 1-9.
Cootes et al., “Active shape models—their training and application” Computer Vision and Image Understanding 61 (1):38-59 Jan. 1995.
Cootes et al., “Active appearance models”, IEEE Trans. Pattern Analysis and Machine Intelligence, 23(6):681-685, 2001.
Dalal et al., “Histogram of Oriented Gradients for Human Detection”, Computer Vision and Pattern Recognition, pp. 886-893, 2005.
Lowe, “Distinctive Image Features from Scale-Invariant Keypoints”, International Journal of Computer Vision 60 (2), pp. 91-110.
Kononenko et al., “Learning to Look Up: Realtime Monocular Gaze Correction Using Machine Learning”, Computer Vision and Pattern Recognition, pp. 4667-4675, 2015.
Ozuysal et al., “Fast Keypoint Recognition in Ten Lines of Code”, Computer Vision and Pattern Recognition, pp. 1-8, 2007.
Ho, “Random Decision Forests”, Proceedings of the 3rd International Conference on Document Analysis and Recognition, Montreal, QC, pp. 278-282, Aug. 14-16, 1995.
Drucker et al., “Support Vector regression Machines”, Advances in Neural Information Processing Systems 9, pp. 155-161, NIPS 1996.
Zach et al., “A Duality Based Approach for Realtime TV-L1 Optical Flow”, Pattern Recognition (Proc. DAGM), 2007, pp. 214-223.
PCT/US2017/012203 International search report and written opinion of international searching authority dated Apr. 18, 2017.
PCT/RU2016/000118 International search report and written opinion of international searching authority dated Aug. 25, 2016.
PCT/RU2016/000118 International Preliminary Report on Patentability dated Sep. 26, 2017.
Ganin, et al., “DeepWarp: Photorealistic Image Resynthesis for Gaze Manipulation”, Jul. 25, 2016 (Jul. 25, 2016), XP055295123, Retrieved from the Internet: URL:http://arxiv.org/pdf/1607.07215v2.pdf [retrieved on Jan. 10, 2018].
Giger, et al., “Gaze Correction with a Single Webcam”, published in: Proceedings of IEEE ICME 2014 (Chengdu, China, Jul. 14-18, 2014).
Xiong, et al., “Supervised descent method and its applications to face alignment”, In Computer Vision Pattern Recognition (CVPR), 2013 IEEE Conference on, pp. 532-539. IEEE, 2013.
Smith, et al., Gaze locking: passive eye contact detection for human-object interaction. In Proceedings of the 26th annual ACM symposium on User interface software and technology, pp. 271-280. ACM, 2013.
Ren, et al., Face alignment at 3000 fps via regressing local binary features. In CVPR, pp. 1685-1692, 2014.
Yip, “Face and Eye Rectification in Video Conference Using Artificial Neural Network”, IEEE International Conference on Multimedia and Expo, 2005. ICME 2005. Amsterdam, The Netherlands, Jul. 6-8, 2005, IEEE, Piscataway, NJ, USA, Jul. 6, 2005 (Jul. 6, 2005), pp. 690-693, XP010844250, DOI: 10.1109/ICME.2005.1521517, ISBN: 978-0-7803-9331-8, the whole document.
EP-17736268.8 European Extended Search Report of European Patent Office dated Jul. 12, 2019.
Related Publications (1)
Number Date Country
20200021794 A1 Jan 2020 US
Provisional Applications (1)
Number Date Country
62274897 Jan 2016 US
Continuations (1)
Number Date Country
Parent 15397951 Jan 2017 US
Child 16405415 US