Aspects of embodiments of the present disclosure are generally related to training data generation systems and methods of using the same.
In many areas of automation, such as robotics, sensors are used to determine the physical relationship of objects in the real world. For example, robotic systems often use sensing systems to measure the locations of various physical objects in order to, for example, grasp an object that may arrive at a variety of orientations, reorient the object into a desired position, and connect the object to another object. The position and orientation of an object with respect to a reference coordinate system may be referred to as a “pose” and, in a three-dimensional coordinate system, generally includes six degrees of freedom—rotation around three axes and translation along the three axes. Recently, statistical models, such as machine learning models, have increasingly been used to control the operation of such automation systems.
Statistical models (e.g., machine learning models) are generally trained using large amounts of data. In the field of computer vision, the training data generally includes labeled images, which are used to train deep learning models, such as convolutional neural networks, to perform computer vision tasks such as image classification, instance segmentation, and pose estimation. However, manually collecting photographs of various scenes and labeling the photographs is time consuming and expensive. Some techniques for augmenting these training data sets include generating synthetic training data. For example, three-dimensional (3-D) computer graphics rendering engines (e.g., scanline rendering engines and ray tracing rendering engines) are capable of generating photorealistic two-dimensional (2-D) images of virtual environments containing arrangements of 3-D models of objects, and these images can be used for training deep learning models.
However, synthetic data may not capture the full range of nuances and complexities that exist in the real world. As such, systems trained with synthetic data do not always perform well when presented with real-world data.
The above information disclosed in this Background section is only for enhancement of understanding of the present disclosure, and therefore it may contain information that does not form the prior art that is already known to a person of ordinary skill in the art.
Aspects of embodiments of the present disclosure relate to the generation of data for training machine learning models. In particular, aspects of embodiments of the present disclosure relate to generating a large quantity of real-world data for training machine learning models to perform computer vision tasks on input images that are captured based on visible light in a scene or on imaging modalities other than images of the intensity of visible light in a scene.
Aspects of embodiments of the present disclosure relate to systems and methods for generating and using visual datasets for training computer vision models including object pose detection models.
Aspects of embodiments of the present disclosure relate to particular techniques for generating labeled data based on the pose of a single object or the poses of multiple objects in a cluttered bin of those objects.
According to some embodiments of the present invention, there is provided a data capture stage including: a frame at least partially surrounding a target object; a rotation device within the frame and configured to selectively rotate the target object; a plurality of cameras coupled to the frame and configured to capture images of the target object from different angles; a sensor coupled to the frame and configured to sense mapping data corresponding to the target object; and an augmentation data generator configured to control a rotation of the rotation device, to control operations of the plurality of cameras and the sensor, and to generate training data based on the images and the mapping data.
In some embodiments, the data capture stage further includes a plurality of light sources coupled to the frame and configured to illuminate the target object with different colors and from different angles.
In some embodiments, the augmentation data generator is configured to control a color intensity of a light source of the plurality of light sources.
In some embodiments, a light source of the plurality of light sources is moveably coupled to the frame, and the augmentation data generator is further configured to control a position of the light source.
In some embodiments, the plurality of cameras includes: a color camera configured to capture red-green-blue (RGB) images of the target object; and a polarization camera configured to capture polarization images of the target object.
In some embodiments, the sensor includes at least one of an infrared sensor, an ultraviolet sensor, and a depth sensor, and the depth sensor includes at least one of a light detection and ranging (LIDAR) sensor and a stereo camera system.
In some embodiments, the plurality of cameras and the sensor are moveably coupled to the frame, and the augmentation data generator is further configured to control poses of the cameras and the sensor.
In some embodiments, the plurality of cameras and the sensor are angled to face the target object, and the rotation device is configured to rotate the target object with a 0.001 degree precision.
In some embodiments, the augmentation data generator is configured to: rotate the rotation device by a first angle; initiate capture of first images by the cameras and initiate sensing of first mapping data by the sensor; rotate the rotation device by a second angle; initiate capture of second images by the cameras and initiate sensing of second mapping data by the sensor; and generate the training data based on the first and second images and the first and second mapping data.
In some embodiments, the augmentation data generator is configured to: initiate capture of a first image of the target object and a second image of the target object by the plurality of cameras; receive a first pose of the target object corresponding to a first viewpoint of the first image; project the first pose onto the second image to generate a second pose of the target object corresponding to a second viewpoint of the second image; generate a second label associated with the second image, the second label including the second pose; and generate the training data based on the second image and the second label.
According to some embodiments of the present invention, there is provided a method of capturing training data in a data capture stage, the method including: initiating capture of a first image of a target object by a first camera of the data capture stage; initiating capture of a second image of the target object by a second camera of the data capture stage; receiving a first pose of the target object corresponding to a first viewpoint of the first image; projecting the first pose onto the second image to generate a second pose of the target object corresponding to a second viewpoint of the second image; generating a second label associated with the second image, the second label including the second pose; and generating the training data based on the second image and the second label.
In some embodiments, the first viewpoint is different from the second viewpoint, and the first image is a red-green-blue (RGB) image and the second image is an RGB image or a polarization image.
In some embodiments, the first and second poses include six-degree-of-freedom (6DoF) poses of the target object.
In some embodiments, the second label further includes at least one of a computer-aided design (CAD) model of the target object posed according to the second pose and key points corresponding to the CAD model.
In some embodiments, the method further includes: generating a first label associated with the first image, the first label including the first pose, wherein the generating the training data is further based on the first image and the first label.
In some embodiments, the method further includes: initiating rotation of a rotation device of the data capture stage on which the target object is placed; initiating capture of a third image of the target object, the third image corresponding to a viewpoint different from the first and second viewpoints; projecting the first pose onto the third image to generate a third pose of the target object corresponding to a third viewpoint of the third image; generating a third label associated with the third image, the third label including the third pose; and generating the training data further based on the third image and the third label.
In some embodiments, the method further includes: initiating a first lighting condition via a plurality of light sources of the data capture stage, wherein initiating the capture of the first image is in response to the initiating the first lighting condition; and initiating a second lighting condition via the plurality of light sources of the data capture stage, wherein initiating the capture of the second image is in response to the initiating the second lighting condition.
In some embodiments, the method further includes: initiating sensing of mapping data of the target object by a sensor of the data capture stage, wherein the mapping data includes one of a heat map corresponding to the target object, a depth map corresponding to the target object, and an ultraviolet map corresponding to the target object.
In some embodiments, the method further includes: projecting the first pose onto the mapping data to generate a third pose of the target object corresponding to a third viewpoint of the mapping data; generating a third label associated with the mapping data, the third label including the third pose; and generating the training data further based on the mapping data and the third label.
In some embodiments, the method further includes: providing the first and second images, a computer-aided design (CAD) model of the target object, and calibration information of the first and second cameras to a user device, wherein receiving the first pose of the target object includes receiving the first pose from the user device.
The accompanying drawings, together with the specification, illustrate example embodiments of the present disclosure, and, together with the description, serve to explain the principles of the present disclosure.
The detailed description set forth below is intended as a description of example embodiments of a system and method for generating a large set of training data from a single object or an arrangement of the same objects, provided in accordance with the present disclosure, and is not intended to represent the only forms in which the present disclosure may be constructed or utilized. The description sets forth the features of the present disclosure in connection with the illustrated embodiments. It is to be understood, however, that the same or equivalent functions and structures may be accomplished by different embodiments that are also intended to be encompassed within the scope of the disclosure. As denoted elsewhere herein, like element numbers are intended to indicate like elements or features.
Automation systems that manipulate objects, such as tools, partially assembled products, components, and the like from a particular manufacturing environment, rely on pose estimations (i.e., six-degree-of-freedom (6DoF) pose estimations) of the objects based on some visual input. Training statistical models that perform 6DoF pose estimation often involves a great deal of training data. Sometimes this data is in the form of real images with ground truth pose estimates. However, oftentimes, the amount of available labeled data is too small to properly train a pose estimator. Further, manually collecting different images of typical objects and scenes in the manufacturing environment and labeling these images based on their ground truth values is generally a time-consuming and expensive task.
Accordingly, aspects of the present disclosure are directed to systems and methods for amplifying data, that is, taking one real scene with an arrangement of identical or substantially identical objects (with labeled 6DoF poses) and then applying data augmentation techniques that can consistently be applied to the 6DoF labels. Because at least one data sample needs a ground truth label, aspects of embodiments of the present disclosure provide a multi-view pose annotation technique for labeling a 6DoF pose from multiple viewpoints. In some embodiments, an augmentation data generator uses a scene with 6DoF labels to generate labeled variants of the scene at different viewpoints, exposure conditions, and optical properties (such as color, polarimetry, etc.).
Aspects of embodiments of the present disclosure relate to systems and methods for generating data for training machine learning models to perform computer vision tasks on images captured using standard modalities, such as color or monochrome cameras configured to capture images based on the intensity of visible light, as well as non-standard modalities, such as polarization images captured based on polarized light (e.g., images captured with a polarizing filter or polarization filter in an optical path of the camera for capturing circularly and/or linearly polarized light), invisible light (e.g., light in the infrared or ultraviolet (UV) ranges), and combinations thereof (e.g., polarized infrared light). However, embodiments of the present disclosure are not limited thereto and may be applied to other multi-spectral imaging techniques.
According to various embodiments of the present disclosure, the model training system 7 and/or the augmentation data generator 220 are implemented using one or more electronic circuits configured to perform various operations as described in more detail below. Types of electronic circuits may include a central processing unit (CPU), a graphics processing unit (GPU), an artificial intelligence (AI) accelerator (e.g., a vector processor, which may include vector arithmetic logic units configured to efficiently perform operations common to neural networks, such as dot products and softmax), a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), a digital signal processor (DSP), or the like. For example, in some circumstances, aspects of embodiments of the present disclosure are implemented in program instructions that are stored in a non-volatile computer readable memory and that, when executed by the electronic circuit (e.g., a CPU, a GPU, an AI accelerator, or combinations thereof), perform the operations described herein to capture and augment training data based on input from a plurality of cameras and sensors. The operations performed by the model training system 7 and the augmentation data generator 220 may be performed by a single electronic circuit (e.g., a single CPU, a single GPU, or the like) or may be allocated between multiple electronic circuits (e.g., multiple GPUs or a CPU in conjunction with a GPU). The multiple electronic circuits may be local to one another (e.g., located on a same die, located within a same package, or located within a same embedded device or computer system) and/or may be remote from one another (e.g., in communication over a network such as a local personal area network such as Bluetooth®, over a local area network such as a local wired and/or wireless network, and/or over a wide area network such as the internet, such as a case where some operations are performed locally and other operations are performed on a server hosted by a cloud computing service). One or more electronic circuits operating to implement the model training system 7 and the augmentation data generator 220 may be referred to herein as a computer or a computer system, which may include memory storing instructions that, when executed by the one or more electronic circuits, implement the systems and methods described herein.
For context, a polarization camera 10 has a lens 12 with a field of view, where the lens 12 and the polarization camera 10 are oriented such that the field of view encompasses the scene 1. The lens 12 is configured to direct light (e.g., focus light) from the scene 1 onto a light sensitive medium such as an image sensor 14 (e.g., a complementary metal oxide semiconductor (CMOS) image sensor or charge-coupled device (CCD) image sensor).
The polarization camera 10 further includes a polarizer or polarizing filter or polarization mask 16 placed in the optical path between the scene 1 and the image sensor 14. According to some embodiments of the present disclosure, the polarizer or polarization mask 16 is configured to enable the polarization camera 10 to capture images of the scene 1 with the polarizer set at various specified angles (e.g., at 45° rotations or at 60° rotations or at non-uniformly spaced rotations).
As one example, the polarization mask 16 may be implemented as a polarization mosaic aligned with the pixel grid of the image sensor 14 in a manner similar to a red-green-blue (RGB) color filter mosaic of a color camera, such that different adjacent pixels receive light through polarizing filters set at different angles of linear polarization (e.g., at 0°, 45°, 90°, and 135°).
While the above description relates to some possible implementations of a polarization camera using a polarization mosaic, embodiments of the present disclosure are not limited thereto and encompass other types of polarization cameras that are capable of capturing images at multiple different polarizations. For example, the polarization mask 16 may have fewer than or more than four different polarizations, or may have polarizations at different angles (e.g., at angles of polarization of 0°, 60°, and 120° or at angles of polarization of 0°, 30°, 60°, 90°, 120°, and 150°). As another example, the polarization mask 16 may be implemented using an electronically controlled polarization mask, such as an electro-optic modulator (e.g., which may include a liquid crystal layer), where the polarization angles of the individual pixels of the mask may be independently controlled, such that different portions of the image sensor 14 receive light having different polarizations. As another example, the electro-optic modulator may be configured to transmit light of different linear polarizations when capturing different frames, e.g., such that the camera captures images with the entirety of the polarization mask sequentially set to different linear polarizer angles (e.g., sequentially set to 0 degrees, 45 degrees, 90 degrees, and 135 degrees). As another example, the polarization mask 16 may include a polarizing filter that rotates mechanically, such that different polarization raw frames are captured by the polarization camera 10 with the polarizing filter mechanically rotated with respect to the lens 12 to transmit light at different angles of polarization to the image sensor 14.
A polarization camera may also refer to an array of multiple cameras having substantially parallel optical axes, such that each of the cameras captures images of a scene from substantially the same pose. The optical path of each camera of the array includes a polarizing filter, where the polarizing filters have different angles of polarization. For example, a two-by-two (2×2) array of four cameras may include one camera having a polarizing filter set at an angle of 0°, a second camera having a polarizing filter set at an angle of 45°, a third camera having a polarizing filter set at an angle of 90°, and a fourth camera having a polarizing filter set at an angle of 135°.
As a result, the polarization camera captures multiple input images 18 (or polarization raw frames) of the scene 1, where each of the polarization raw frames 18 corresponds to an image taken behind a polarization filter or polarizer at a different angle of polarization ϕpol (e.g., 0 degrees, 45 degrees, 90 degrees, or 135 degrees). Each of the polarization raw frames is captured from substantially the same pose with respect to the scene 1 (e.g., the images captured with the polarization filter at 0 degrees, 45 degrees, 90 degrees, or 135 degrees are all captured by a same polarization camera located at a same location and orientation), as opposed to capturing the polarization raw frames from disparate locations and orientations with respect to the scene. The polarization camera 10 may be configured to detect light in a variety of different portions of the electromagnetic spectrum, such as the human-visible portion of the electromagnetic spectrum, red, green, and blue portions of the human-visible spectrum, as well as invisible portions of the electromagnetic spectrum such as infrared and ultraviolet.
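By way of a non-limiting illustration, the following sketch (Python with NumPy; an assumption for illustration only, not part of the disclosure) shows how four such polarization raw frames, captured at 0°, 45°, 90°, and 135°, may be combined into the standard linear Stokes parameters and the derived degree and angle of linear polarization:

```python
import numpy as np

def linear_stokes(i0, i45, i90, i135):
    """Combine four polarization raw frames (float arrays captured at
    0, 45, 90, and 135 degrees of polarization) into linear Stokes
    parameters and derived polarization quantities."""
    s0 = (i0 + i45 + i90 + i135) / 2.0  # total intensity (I0+I90 = I45+I135 = S0)
    s1 = i0 - i90                        # 0/90-degree difference
    s2 = i45 - i135                      # 45/135-degree difference
    dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-8)  # degree of linear polarization
    aolp = 0.5 * np.arctan2(s2, s1)      # angle of linear polarization (radians)
    return s0, s1, s2, dolp, aolp
```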
The augmentation data generator 220 of the data capture stage 200 utilizes the input images 18, in addition to other input data (e.g., RGB images, sensed map data, etc.), to generate training data 101 for a multitude of 2-D and 3-D vision tasks.
In some embodiments, the data capture stage 200 includes a frame 202, a rotation device (e.g., a turntable) 204, a plurality of cameras 206, a plurality of sensors 208, a plurality of light sources 210, and an augmentation data generator 220, which are used to collect training data based on a target object 300 from an arrangement 302 of target objects 300. The object 300 may be opaque, shiny metallic, transparent/semi-transparent, or the like.
The frame 202 may include a plurality of skeletal struts that create an enclosure. The frame 202 may have a spherical shape.
In some examples, different light sources 210 may be capable of producing light of different colors (e.g., red, green, and blue). Each of the light sources 210 may be moveably coupled to the frame 202 (e.g., coupled to the skeletal struts of the frame 202) and include one or more light emitting diodes (LEDs) with adjustable brightness/intensity. In some embodiments, the controller 222 is configured to control the on/off status, the brightness, and the position (e.g., vertical position) of the light sources 210 within the frame 202. As such, the data capture stage 200 is capable of creating many unique lighting conditions that may be desired for illuminating the target object 300, such as a particular lighting condition suitable for measuring reflectance. In some examples, the data capture stage 200 is capable of reproducing lighting conditions that mimic those of a manufacturing environment for which the data capture stage 200 is generating training data.
The cameras 206 may include RGB cameras (also referred to as visible-light or color cameras) capable of capturing color images (e.g., RGB images) in the visible range of light and polarization cameras 10 that are capable of capturing polarized light, as described above. The cameras 206 may be connected to the frame 202 at different points around the target object 300 with overlapping fields of view. This allows the cameras 206 to capture images of the target object 300 from different but overlapping viewpoints.
In some embodiments, the various individual cameras 206 are registered with one another by determining their relative poses (or relative positions and orientations) by capturing multiple images of a calibration target, such as a checkerboard pattern, an ArUco target (see, e.g., Garrido-Jurado, Sergio, et al. “Automatic generation and detection of highly reliable fiducial markers under occlusion.” Pattern Recognition 47.6 (2014): 390-402), or a ChArUco target (see, e.g., An, Gwon Hwan, et al. “Charuco board-based omnidirectional camera calibration method.” Electronics 7.12 (2018): 421). In particular, the process of calibrating the cameras may include computing intrinsic matrices characterizing the internal parameters of each camera 206 (e.g., matrices characterizing the focal length, image sensor format, and principal point of the camera) and extrinsic matrices characterizing the pose of each camera with respect to world coordinates (e.g., matrices for performing transformations between camera coordinate space and world or scene coordinate space). Different cameras 206 may have image sensors with different sensor formats (e.g., aspect ratios) and/or different resolutions without limitation, and the computed intrinsic and extrinsic parameters of the individual cameras enable the augmentation data generator 220 to map different portions of the different images from the various cameras 206 to a same coordinate space (where possible, such as where the fields of view overlap).
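As a rough sketch of such registration (assuming OpenCV and a checkerboard target; the pattern size and square size below are placeholder values, and the helper names are hypothetical), the relative pose of two intrinsically calibrated cameras may be recovered from a shared view of the calibration target:

```python
import cv2
import numpy as np

PATTERN = (9, 6)   # inner-corner count of the checkerboard (assumed)
SQUARE = 0.025     # square size in meters (assumed)

def board_pose(gray, K, dist):
    """4x4 board-to-camera transform from one camera's view of the target."""
    ok, corners = cv2.findChessboardCorners(gray, PATTERN)
    assert ok, "checkerboard not found"
    objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE
    _, rvec, tvec = cv2.solvePnP(objp, corners, K, dist)
    T = np.eye(4)
    T[:3, :3], _ = cv2.Rodrigues(rvec)  # rotation vector -> rotation matrix
    T[:3, 3] = tvec.ravel()
    return T

def relative_pose(gray1, K1, d1, gray2, K2, d2):
    """Camera-1-to-camera-2 transform from a simultaneous shared view."""
    T1 = board_pose(gray1, K1, d1)  # board in camera 1's frame
    T2 = board_pose(gray2, K2, d2)  # board in camera 2's frame
    return T2 @ np.linalg.inv(T1)   # cam1 point -> board -> cam2
```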
The sensors 208 may capture different parameters of a plenoptic function (such as polarization information, infrared and UV data, etc.). In some embodiments, the sensors 208 include at least one of an infrared sensor configured to capture the temperature variations over the target object 300, a depth sensor configured to capture depth information of the target object 300, and the like. According to some examples, the depth sensor may be a LIDAR (light detection and ranging) sensor that emits pulses of light and determines depth based on time-of-flight measurements, or may be a stereo camera configuration (e.g., a stereo polarization camera system) that computes depth estimates based on parallax shifts between stereo pairs of images (e.g., stereo pairs of RGB or polarization images). While the disclosure herein provides examples of depth and infrared sensors, embodiments of the present disclosure are not limited thereto, and the data capture stage 200 may include any suitable type of sensor, such as a sensor for capturing UV texture maps. The sensors 208 may be calibrated by using the same checkerboard discussed above and/or any other suitable object (such as a cube or sphere) to estimate the extrinsic parameters of the sensors (see, e.g., Eung-su Kim, et al. “Extrinsic Calibration between Camera and LiDAR Sensors by Matching Multiple 3D Planes.” Sensors 2020, 20, 52).
According to some embodiments, the controller 222 controls the operations of the cameras 206 and sensors 208 and, for example, may be capable of syncing the capture time of the cameras 206.
The rotation device 204 includes a table top (e.g., a flat table top) that is configured to rotate about a vertical axis (e.g., about an axis orthogonal to the table top) with high precision (e.g., with a precision of 0.001 degrees). Because the cameras 206 and the sensors 208 are substantially stationary during image capture and the sensing of mapping data, the rotatable rotation device 204 allows the target object to be imaged and sensed from almost any arbitrary angle.
According to some embodiments, in addition to calibrating the relative position and orientation of the cameras 206, the data capture stage 200 calibrates the cameras 206 for different rotations of the rotation device 204. With the calibration target on the tabletop, the cameras 206 capture a first set of images, then the controller 222 rotates the rotation device 204 by a set amount (e.g., 1 degree) and the cameras 206 capture a second set of images. The first and second set of images allow the augmentation data generator 220 to calculate the transformation matrix for the set rotation. Using this, the augmentation data generator 220 can generate the transformation matrix for any rotation. For example, it can be shown that:
Transformation Matrix at X degrees = (Transformation Matrix at 1 degree)^X
The same transformation matrix may be utilized to calibrate the sensors for different angles.
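A minimal sketch of this relationship (Python with NumPy, for illustration only): if T1 is the 4×4 rigid transform induced by a 1-degree turn of the rotation device, the transform for an integer X-degree turn is T1 composed with itself X times.

```python
import numpy as np

def transform_at(T1: np.ndarray, x_degrees: int) -> np.ndarray:
    """Transform for an X-degree turn as the X-th power of the 1-degree transform."""
    return np.linalg.matrix_power(T1, x_degrees)

# Example: a 1-degree rotation about the turntable's vertical (z) axis.
theta = np.deg2rad(1.0)
T1 = np.array([
    [np.cos(theta), -np.sin(theta), 0.0, 0.0],
    [np.sin(theta),  np.cos(theta), 0.0, 0.0],
    [0.0,            0.0,           1.0, 0.0],
    [0.0,            0.0,           0.0, 1.0],
])
T90 = transform_at(T1, 90)  # close to an exact 90-degree rotation
```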
As will be discussed further below, the augmentation data generator 220 utilizes the rotation transformations to automatically generate a plethora of labeled training data from a single scene.
In some embodiments, the augmentation data generator 220 includes a pose estimator 224 configured to compute or estimate a pose of the target objects 300 based on information captured by the cameras 10 and, in some examples, based on the mapping data sensed by the sensors 208. However, embodiments of the present disclosure are not limited thereto, and the pose estimator 224 may exist within the user device 230, or may not be utilized.
In particular, a “pose” refers to the position and orientation of an object with respect to a reference coordinate system. For example, a reference coordinate system may be defined with a camera 206 at the origin, where the direction along the optical axis of the camera 206 (e.g., a direction through the center of its field of view) is defined as the z-axis of the coordinate system, and the x and y axes are defined to be perpendicular to one another and perpendicular to the z-axis. (Embodiments of the present disclosure are not limited to this particular coordinate system, and a person having ordinary skill in the art would understand that poses can be mathematically transformed to equivalent representations in different coordinate systems.)
Generally, in a three-dimensional coordinate system, an object has six degrees of freedom—rotation around three axes (e.g., rotation around x-, y-, and z-axes) and translation along the three axes (e.g., translation along x-, y-, and z-axes). As such, an object's “pose” may be defined as its six degrees of freedom (6DoF) pose.
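For illustration, one common concrete encoding of such a 6DoF pose (an assumption for this sketch, not mandated by the disclosure) packs the three rotations and three translations into a single 4×4 homogeneous transform:

```python
import numpy as np
from scipy.spatial.transform import Rotation

def pose_to_matrix(rx, ry, rz, tx, ty, tz):
    """Build a 4x4 object-to-camera transform from six degrees of freedom
    (rotation angles rx, ry, rz in radians; translations tx, ty, tz)."""
    T = np.eye(4)
    T[:3, :3] = Rotation.from_euler("xyz", [rx, ry, rz]).as_matrix()
    T[:3, 3] = [tx, ty, tz]
    return T
```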
In some embodiments, it is assumed that a three-dimensional (3-D) model or computer aided design (CAD) model representing a canonical or ideal version of the target object 300 is available. For example, in some embodiments of the present disclosure, the target object 300 is an individual instance of manufactured components that have a substantially uniform appearance from one component to the next. Examples of such manufactured components include screws, bolts, nuts, connectors, and springs, as well as specialty parts such as electronic circuit components (e.g., packaged integrated circuits, light emitting diodes, switches, resistors, and the like), laboratory supplies (e.g., test tubes, PCR tubes, bottles, caps, lids, pipette tips, sample plates, and the like), and manufactured parts (e.g., handles, switch caps, light bulbs, and the like). Accordingly, in these circumstances, a CAD model defining the ideal or canonical shape of the target object 300 may be used to define a coordinate system for the object (e.g., the coordinate system used in the representation of the CAD model).
In some examples, the pose estimator 224 utilizes at least one of the images (e.g., RGB images) of the target object 300 in the overlapping and different fields of view of the cameras 206 and the CAD model of the target object 300 to generate an initial pose estimate of the object 300. In embodiments in which the pose estimator 224 is within the augmentation data generator 220, this initial pose estimate is provided to the user device 230. In other examples, the user device 230 houses the pose estimator 224 and utilizes the initial pose estimate. The user device 230 renders projections of the CAD model with the initial estimated pose onto the different images. The projection may be performed by using object-to-camera and camera-to-world transforms to rotate the CAD model to a particular pose and then render the 3-D model onto a 2-D image of the object 300. The user device 230 allows a user to manually manipulate and refine the object pose to create a refined pose that is more accurate than the initial pose estimate generated by the pose estimator 224. This refined pose is used by the augmentation data generator 220 to create ground truth poses or labels for a plethora of images and data captured by the cameras 206 and sensors 208.
The augmentation data generator 220 may receive images provided by first to fourth cameras 206 that capture different views of the object 300 from different viewpoints (e.g., first to fourth viewpoints).
According to some examples, the tool 232 allows a user to label the pose of an object 300 within a scene (e.g., an object in the top right corner of a cluttered bin) relative to a viewpoint (e.g., a top left viewpoint).
According to some embodiments, the augmentation data generator 220 receives this refined 6DoF pose with respect to a particular viewpoint and uses it as a ground truth pose (or label) for that viewpoint. Using the transformation matrices from the camera calibrations, the augmentation data generator 220 can project the 6DoF pose onto the image captured by each camera 206 and use the projected 6DoF pose as the label or ground truth pose for the image. Further, by using the sensor calibration information, the augmentation data generator 220 can generate projections of the refined 6DoF pose onto the sensed data (e.g., a heat map or depth map) and save the projections as labels for the sensed data.
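A minimal sketch of this projection step (assuming each camera's extrinsic calibration is available as a 4×4 camera-to-world matrix; Python/NumPy, for illustration only):

```python
import numpy as np

def project_pose(T_obj_in_a: np.ndarray,
                 T_a_to_world: np.ndarray,
                 T_b_to_world: np.ndarray) -> np.ndarray:
    """Re-express an object pose labeled in camera A's frame in camera B's
    frame, using the two cameras' extrinsic (camera-to-world) matrices."""
    T_obj_in_world = T_a_to_world @ T_obj_in_a           # lift label to world frame
    return np.linalg.inv(T_b_to_world) @ T_obj_in_world  # drop into camera B's frame
```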
In some embodiments, the controller 222 of the augmentation data generator 220 rotates the rotation device 204 to certain angles and creates various lighting conditions via the light sources 210. At each angle of rotation and lighting condition, the cameras 206 and sensors 208 capture the scene. For example, the cameras 206 may capture RGB and polarization images, the infrared sensors may detect the temperature variations across the target object 300 and generate a heat map, and the depth sensors may detect depth variations across the object 300 and generate a depth map accordingly. Because the rotation transformation is known through the rotation calibration, the augmentation data generator 220 can reproject the refined pose received from the user device 230 onto each of the captured images and sensed data at each rotation angle and lighting condition and use the reprojected pose data at each instance as the data label. Thus, according to some embodiments, by rotating the rotation device 204 to different angles, creating various lighting environments, and automatically capturing and labeling data at each rotation and lighting condition, the data capture stage 200 can automatically generate a large set of training data 101 from a single scene. In some examples, the rotation device 204 may be rotated without changing the lighting environment, and at each rotation a number of images and sensed data may be captured. Conversely, the rotation device 204 may be maintained at a fixed angle while the lighting condition is changed and the scene is captured under different lighting conditions.
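The overall capture loop might be sketched as follows. The turntable, light-rig, and camera interfaces below are hypothetical placeholders, and the 5-degree rotation step is likewise an assumption; the disclosure does not define these APIs.

```python
import numpy as np

# Hypothetical hardware interfaces (assumptions for illustration only).
class Turntable:
    def rotate_to(self, angle_degrees): ...
class LightRig:
    def set_condition(self, condition): ...
class Camera:
    T_cam_to_world = np.eye(4)  # from the extrinsic calibration
    def capture(self): ...

def generate_samples(turntable, lights, cameras, lighting_conditions,
                     T1, T_obj_in_world_refined):
    """Sweep turntable angles and lighting conditions, reprojecting the
    single user-refined object pose into every captured view as its label."""
    samples = []
    for angle in range(0, 360, 5):                    # 5-degree step (assumed)
        turntable.rotate_to(angle)
        T_rot = np.linalg.matrix_power(T1, angle)     # from rotation calibration
        for condition in lighting_conditions:
            lights.set_condition(condition)
            for cam in cameras:
                image = cam.capture()
                # carry the refined pose through the known rotation, then
                # express it in this camera's frame as the ground truth label
                pose_world = T_rot @ T_obj_in_world_refined
                pose_label = np.linalg.inv(cam.T_cam_to_world) @ pose_world
                samples.append({"image": image,
                                "pose": pose_label,
                                "lighting": condition})
    return samples
```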
Using this method, the augmentation data generator 220 may aggregate a plethora of labeled data, which includes RGB images, polarization images, 3-D point clouds, depth maps, heat maps, UV maps, and/or the like. Each piece of data may be labeled with one or more of a 6DoF pose, a bounding box, key points, and a 3-D CAD model.
Referring to the method of generating training data, the augmentation data generator 220 initiates capture of a first image of the target object 300 by a first camera 206 of the data capture stage 200 and capture of a second image of the target object 300 by a second camera 206 of the data capture stage 200 (S802).
In some examples, the augmentation data generator 220 then provides the first and second images, a CAD model of the target object 300, and calibration information of the first and second cameras 206 to the user device 230.
The augmentation data generator 220 receives a first pose (e.g., a refined pose from the user device 230) of the target object 300 corresponding to a first viewpoint of the first image (S804). The augmentation data generator 220 then projects the first pose onto the second image to generate a second pose of the target object corresponding to a second viewpoint of the second image (S806), which is different from the first viewpoint. The first and second poses may be six-degree-of-freedom (6DoF) poses of the target object 300.
The augmentation data generator 220 generates a first label (e.g., a first ground truth pose) associated with the first image, which includes the first pose (S808), and generates a second label (e.g., a second ground truth pose) associated with the second image, which includes the second pose (S810).
The augmentation data generator 220 then generates the training data based on the first image and its associated first label and based on the second image and the associated second label (S812). The second label may further include at least one of a CAD model of the target object 300 posed according to the second pose and key points corresponding to the CAD model.
Referring further to the method, in some embodiments, the augmentation data generator 220 initiates sensing of mapping data of the target object 300 by a sensor 208 of the data capture stage 200 (S814). The mapping data may include one of a heat map, a depth map, and an ultraviolet map corresponding to the target object 300.
The augmentation data generator 220 projects the first pose onto the mapping data to generate a third pose of the target object corresponding to a third viewpoint of the mapping data (S816). The augmentation data generator 220 then generates a third label associated with the mapping data, the third label including the third pose (S818). The augmentation data generator 220 augments the training data based on the mapping data and the third label (S820).
The operations performed by the constituent components of the augmentation data generator and the user device of the present disclosure may be performed by a “processing circuit” or “processor” that may include any combination of hardware, firmware, and software, employed to process data or digital signals. Processing circuit hardware may include, for example, application specific integrated circuits (ASICs), general purpose or special purpose central processing units (CPUs), digital signal processors (DSPs), graphics processing units (GPUs), and programmable logic devices such as field programmable gate arrays (FPGAs). In a processing circuit, as used herein, each function is performed either by hardware configured, i.e., hard-wired, to perform that function, or by more general-purpose hardware, such as a CPU, configured to execute instructions stored in a non-transitory storage medium. A processing circuit may be fabricated on a single printed wiring board (PWB) or distributed over several interconnected PWBs. A processing circuit may contain other processing circuits; for example, a processing circuit may include two processing circuits, an FPGA and a CPU, interconnected on a PWB.
It will be understood that, although the terms “first”, “second”, “third”, etc., may be used herein to describe various elements, components, regions, layers, and/or sections, these elements, components, regions, layers, and/or sections should not be limited by these terms. These terms are used to distinguish one element, component, region, layer, or section from another element, component, region, layer, or section. Thus, a first element, component, region, layer, or section discussed below could be termed a second element, component, region, layer, or section, without departing from the scope of the inventive concept.
The terminology used herein is for the purpose of describing particular embodiments and is not intended to be limiting of the inventive concept. As used herein, the singular forms “a” and “an” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “include”, “including”, “comprises”, and/or “comprising”, when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items. Further, the use of “may” when describing embodiments of the inventive concept refers to “one or more embodiments of the inventive concept”. Also, the term “exemplary” is intended to refer to an example or illustration.
For the purposes of this disclosure, “at least one of X, Y, and Z” and “at least one selected from the group consisting of X, Y, and Z” may be construed as X only, Y only, Z only, or any combination of two or more of X, Y, and Z, such as, for instance, XYZ, XYY, YZ, and ZZ.
It will be understood that when an element or layer is referred to as being “on”, “connected to”, “coupled to”, or “adjacent” another element or layer, it can be directly on, connected to, coupled to, or adjacent the other element or layer, or one or more intervening elements or layers may be present. When an element or layer is referred to as being “directly on,” “directly connected to”, “directly coupled to”, or “immediately adjacent” another element or layer, there are no intervening elements or layers present.
As used herein, the term “substantially,” “about,” and similar terms are used as terms of approximation and not as terms of degree, and are intended to account for the inherent variations in measured or calculated values that would be recognized by those of ordinary skill in the art.
As used herein, the terms “use”, “using”, and “used” may be considered synonymous with the terms “utilize”, “utilizing”, and “utilized”, respectively.
While the present invention has been described in connection with certain exemplary embodiments, it is to be understood that the invention is not limited to the disclosed embodiments, but, on the contrary, is intended to cover various modifications and equivalent arrangements included within the spirit and scope of the appended claims, and equivalents thereof.
20040218809 | Blake et al. | Nov 2004 | A1 |
20040234873 | Venkataraman | Nov 2004 | A1 |
20040239782 | Equitz et al. | Dec 2004 | A1 |
20040239885 | Jaynes et al. | Dec 2004 | A1 |
20040240052 | Minefuji et al. | Dec 2004 | A1 |
20040251509 | Choi | Dec 2004 | A1 |
20040264806 | Herley | Dec 2004 | A1 |
20050006477 | Patel | Jan 2005 | A1 |
20050007461 | Chou et al. | Jan 2005 | A1 |
20050009313 | Suzuki et al. | Jan 2005 | A1 |
20050010621 | Pinto et al. | Jan 2005 | A1 |
20050012035 | Miller | Jan 2005 | A1 |
20050036778 | DeMonte | Feb 2005 | A1 |
20050047678 | Jones et al. | Mar 2005 | A1 |
20050048690 | Yamamoto | Mar 2005 | A1 |
20050068436 | Fraenkel et al. | Mar 2005 | A1 |
20050083531 | Millerd et al. | Apr 2005 | A1 |
20050084179 | Hanna et al. | Apr 2005 | A1 |
20050111705 | Waupotitsch et al. | May 2005 | A1 |
20050117015 | Cutler | Jun 2005 | A1 |
20050128509 | Tokkonen et al. | Jun 2005 | A1 |
20050128595 | Shimizu | Jun 2005 | A1 |
20050132098 | Sonoda et al. | Jun 2005 | A1 |
20050134698 | Schroeder et al. | Jun 2005 | A1 |
20050134699 | Nagashima | Jun 2005 | A1 |
20050134712 | Gruhlke et al. | Jun 2005 | A1 |
20050147277 | Higaki et al. | Jul 2005 | A1 |
20050151759 | Gonzalez-Banos et al. | Jul 2005 | A1 |
20050168924 | Wu et al. | Aug 2005 | A1 |
20050175257 | Kuroki | Aug 2005 | A1 |
20050185711 | Pfister et al. | Aug 2005 | A1 |
20050203380 | Sauer et al. | Sep 2005 | A1 |
20050205785 | Hornback et al. | Sep 2005 | A1 |
20050219264 | Shum et al. | Oct 2005 | A1 |
20050219363 | Kohler et al. | Oct 2005 | A1 |
20050224843 | Boemler | Oct 2005 | A1 |
20050225654 | Feldman et al. | Oct 2005 | A1 |
20050265633 | Piacentino et al. | Dec 2005 | A1 |
20050275946 | Choo et al. | Dec 2005 | A1 |
20050286612 | Takanashi | Dec 2005 | A1 |
20050286756 | Hong et al. | Dec 2005 | A1 |
20060002635 | Nestares et al. | Jan 2006 | A1 |
20060007331 | Izumi et al. | Jan 2006 | A1 |
20060013318 | Webb et al. | Jan 2006 | A1 |
20060018509 | Miyoshi | Jan 2006 | A1 |
20060023197 | Joel | Feb 2006 | A1 |
20060023314 | Boettiger et al. | Feb 2006 | A1 |
20060028476 | Sobel et al. | Feb 2006 | A1 |
20060029270 | Berestov et al. | Feb 2006 | A1 |
20060029271 | Miyoshi et al. | Feb 2006 | A1 |
20060033005 | Jerdev et al. | Feb 2006 | A1 |
20060034003 | Zalevsky | Feb 2006 | A1 |
20060034531 | Poon et al. | Feb 2006 | A1 |
20060035415 | Wood | Feb 2006 | A1 |
20060038891 | Okutomi et al. | Feb 2006 | A1 |
20060039611 | Rother et al. | Feb 2006 | A1 |
20060046204 | Ono et al. | Mar 2006 | A1 |
20060049930 | Zruya et al. | Mar 2006 | A1 |
20060050980 | Kohashi et al. | Mar 2006 | A1 |
20060054780 | Garrood et al. | Mar 2006 | A1 |
20060054782 | Olsen et al. | Mar 2006 | A1 |
20060055811 | Fritz et al. | Mar 2006 | A1 |
20060069478 | Iwama | Mar 2006 | A1 |
20060072029 | Miyatake et al. | Apr 2006 | A1 |
20060087747 | Ohzawa et al. | Apr 2006 | A1 |
20060098888 | Morishita | May 2006 | A1 |
20060103754 | Wenstrand et al. | May 2006 | A1 |
20060119597 | Oshino | Jun 2006 | A1 |
20060125936 | Gruhlke et al. | Jun 2006 | A1 |
20060138322 | Costello et al. | Jun 2006 | A1 |
20060139475 | Esch et al. | Jun 2006 | A1 |
20060152803 | Provitola | Jul 2006 | A1 |
20060153290 | Watabe et al. | Jul 2006 | A1 |
20060157640 | Perlman et al. | Jul 2006 | A1 |
20060159369 | Young | Jul 2006 | A1 |
20060176566 | Boettiger et al. | Aug 2006 | A1 |
20060187322 | Janson, Jr. et al. | Aug 2006 | A1 |
20060187338 | May et al. | Aug 2006 | A1 |
20060197937 | Bamji et al. | Sep 2006 | A1 |
20060203100 | Ajito et al. | Sep 2006 | A1 |
20060203113 | Wada et al. | Sep 2006 | A1 |
20060210146 | Gu | Sep 2006 | A1 |
20060210186 | Berkner | Sep 2006 | A1 |
20060214085 | Olsen et al. | Sep 2006 | A1 |
20060215924 | Steinberg et al. | Sep 2006 | A1 |
20060221250 | Rossbach et al. | Oct 2006 | A1 |
20060239549 | Kelly et al. | Oct 2006 | A1 |
20060243889 | Farnworth et al. | Nov 2006 | A1 |
20060251410 | Trutna | Nov 2006 | A1 |
20060274174 | Tewinkle | Dec 2006 | A1 |
20060278948 | Yamaguchi et al. | Dec 2006 | A1 |
20060279648 | Senba et al. | Dec 2006 | A1 |
20060289772 | Johnson et al. | Dec 2006 | A1 |
20070002159 | Olsen et al. | Jan 2007 | A1 |
20070008575 | Yu et al. | Jan 2007 | A1 |
20070009150 | Suwa | Jan 2007 | A1 |
20070024614 | Tam et al. | Feb 2007 | A1 |
20070030356 | Yea et al. | Feb 2007 | A1 |
20070035707 | Margulis | Feb 2007 | A1 |
20070036427 | Nakamura et al. | Feb 2007 | A1 |
20070040828 | Zalevsky et al. | Feb 2007 | A1 |
20070040922 | McKee et al. | Feb 2007 | A1 |
20070041391 | Lin et al. | Feb 2007 | A1 |
20070052825 | Cho | Mar 2007 | A1 |
20070083114 | Yang et al. | Apr 2007 | A1 |
20070085917 | Kobayashi | Apr 2007 | A1 |
20070092245 | Bazakos et al. | Apr 2007 | A1 |
20070102622 | Olsen et al. | May 2007 | A1 |
20070116447 | Ye | May 2007 | A1 |
20070126898 | Feldman et al. | Jun 2007 | A1 |
20070127831 | Venkataraman | Jun 2007 | A1 |
20070139333 | Sato et al. | Jun 2007 | A1 |
20070140685 | Wu | Jun 2007 | A1 |
20070146503 | Shiraki | Jun 2007 | A1 |
20070146511 | Kinoshita et al. | Jun 2007 | A1 |
20070153335 | Hosaka | Jul 2007 | A1 |
20070158427 | Zhu et al. | Jul 2007 | A1 |
20070159541 | Sparks et al. | Jul 2007 | A1 |
20070160310 | Tanida et al. | Jul 2007 | A1 |
20070165931 | Higaki | Jul 2007 | A1 |
20070166447 | Ur-Rehman et al. | Jul 2007 | A1 |
20070171290 | Kroger | Jul 2007 | A1 |
20070177004 | Kolehmainen et al. | Aug 2007 | A1 |
20070182843 | Shimamura et al. | Aug 2007 | A1 |
20070201859 | Sarrat | Aug 2007 | A1 |
20070206241 | Smith et al. | Sep 2007 | A1 |
20070211164 | Olsen et al. | Sep 2007 | A1 |
20070216765 | Wong et al. | Sep 2007 | A1 |
20070225600 | Weibrecht et al. | Sep 2007 | A1 |
20070228256 | Mentzer et al. | Oct 2007 | A1 |
20070236595 | Pan et al. | Oct 2007 | A1 |
20070242141 | Ciurea | Oct 2007 | A1 |
20070247517 | Zhang et al. | Oct 2007 | A1 |
20070257184 | Olsen et al. | Nov 2007 | A1 |
20070258006 | Olsen et al. | Nov 2007 | A1 |
20070258706 | Raskar et al. | Nov 2007 | A1 |
20070263113 | Baek et al. | Nov 2007 | A1 |
20070263114 | Gurevich et al. | Nov 2007 | A1 |
20070268374 | Robinson | Nov 2007 | A1 |
20070291995 | Rivera | Dec 2007 | A1 |
20070296721 | Chang et al. | Dec 2007 | A1 |
20070296832 | Ota et al. | Dec 2007 | A1 |
20070296835 | Olsen et al. | Dec 2007 | A1 |
20070296846 | Barman et al. | Dec 2007 | A1 |
20070296847 | Chang et al. | Dec 2007 | A1 |
20070297696 | Hamza et al. | Dec 2007 | A1 |
20080006859 | Mionetto | Jan 2008 | A1 |
20080019611 | Larkin et al. | Jan 2008 | A1 |
20080024683 | Damera-Venkata et al. | Jan 2008 | A1 |
20080025649 | Liu et al. | Jan 2008 | A1 |
20080030592 | Border et al. | Feb 2008 | A1 |
20080030597 | Olsen et al. | Feb 2008 | A1 |
20080043095 | Vetro et al. | Feb 2008 | A1 |
20080043096 | Vetro et al. | Feb 2008 | A1 |
20080044170 | Yap et al. | Feb 2008 | A1 |
20080054518 | Ra et al. | Mar 2008 | A1 |
20080056302 | Erdal et al. | Mar 2008 | A1 |
20080062164 | Bassi et al. | Mar 2008 | A1 |
20080079805 | Takagi et al. | Apr 2008 | A1 |
20080080028 | Bakin et al. | Apr 2008 | A1 |
20080084486 | Enge et al. | Apr 2008 | A1 |
20080088793 | Sverdrup et al. | Apr 2008 | A1 |
20080095523 | Schilling-Benz et al. | Apr 2008 | A1 |
20080099804 | Venezia et al. | May 2008 | A1 |
20080106620 | Sawachi | May 2008 | A1 |
20080112059 | Choi et al. | May 2008 | A1 |
20080112635 | Kondo et al. | May 2008 | A1 |
20080117289 | Schowengerdt et al. | May 2008 | A1 |
20080118241 | TeKolste et al. | May 2008 | A1 |
20080131019 | Ng | Jun 2008 | A1 |
20080131107 | Ueno | Jun 2008 | A1 |
20080151097 | Chen et al. | Jun 2008 | A1 |
20080152213 | Medioni et al. | Jun 2008 | A1 |
20080152215 | Horie et al. | Jun 2008 | A1 |
20080152296 | Oh et al. | Jun 2008 | A1 |
20080156991 | Hu et al. | Jul 2008 | A1 |
20080158259 | Kempf et al. | Jul 2008 | A1 |
20080158375 | Kakkori et al. | Jul 2008 | A1 |
20080158698 | Chang et al. | Jul 2008 | A1 |
20080165257 | Boettiger | Jul 2008 | A1 |
20080174670 | Olsen et al. | Jul 2008 | A1 |
20080187305 | Raskar et al. | Aug 2008 | A1 |
20080193026 | Horie et al. | Aug 2008 | A1 |
20080208506 | Kuwata | Aug 2008 | A1 |
20080211737 | Kim et al. | Sep 2008 | A1 |
20080218610 | Chapman et al. | Sep 2008 | A1 |
20080218611 | Parulski et al. | Sep 2008 | A1 |
20080218612 | Border et al. | Sep 2008 | A1 |
20080218613 | Janson et al. | Sep 2008 | A1 |
20080219654 | Border et al. | Sep 2008 | A1 |
20080239116 | Smith | Oct 2008 | A1 |
20080240598 | Hasegawa | Oct 2008 | A1 |
20080246866 | Kinoshita et al. | Oct 2008 | A1 |
20080247638 | Tanida et al. | Oct 2008 | A1 |
20080247653 | Moussavi et al. | Oct 2008 | A1 |
20080272416 | Yun | Nov 2008 | A1 |
20080273751 | Yuan et al. | Nov 2008 | A1 |
20080278591 | Barna et al. | Nov 2008 | A1 |
20080278610 | Boettiger | Nov 2008 | A1 |
20080284880 | Numata | Nov 2008 | A1 |
20080291295 | Kato et al. | Nov 2008 | A1 |
20080298674 | Baker et al. | Dec 2008 | A1 |
20080310501 | Ward et al. | Dec 2008 | A1 |
20090027543 | Kanehiro | Jan 2009 | A1 |
20090050946 | Duparre et al. | Feb 2009 | A1 |
20090052743 | Techmer | Feb 2009 | A1 |
20090060281 | Tanida et al. | Mar 2009 | A1 |
20090066693 | Carson | Mar 2009 | A1 |
20090079862 | Subbotin | Mar 2009 | A1 |
20090086074 | Li et al. | Apr 2009 | A1 |
20090091645 | Trimeche et al. | Apr 2009 | A1 |
20090091806 | Inuiya | Apr 2009 | A1 |
20090092363 | Daum et al. | Apr 2009 | A1 |
20090096050 | Park | Apr 2009 | A1 |
20090102956 | Georgiev | Apr 2009 | A1 |
20090103792 | Rahn et al. | Apr 2009 | A1 |
20090109306 | Shan et al. | Apr 2009 | A1 |
20090127430 | Hirasawa et al. | May 2009 | A1 |
20090128644 | Camp, Jr. et al. | May 2009 | A1 |
20090128833 | Yahav | May 2009 | A1 |
20090129667 | Ho et al. | May 2009 | A1 |
20090140131 | Utagawa | Jun 2009 | A1 |
20090141933 | Wagg | Jun 2009 | A1 |
20090147919 | Goto et al. | Jun 2009 | A1 |
20090152664 | Klem et al. | Jun 2009 | A1 |
20090167922 | Perlman et al. | Jul 2009 | A1 |
20090167923 | Safaee-Rad et al. | Jul 2009 | A1 |
20090167934 | Gupta | Jul 2009 | A1 |
20090175349 | Ye et al. | Jul 2009 | A1 |
20090179142 | Duparre et al. | Jul 2009 | A1 |
20090180021 | Kikuchi et al. | Jul 2009 | A1 |
20090200622 | Tai et al. | Aug 2009 | A1 |
20090201371 | Matsuda et al. | Aug 2009 | A1 |
20090207235 | Francini et al. | Aug 2009 | A1 |
20090219435 | Yuan | Sep 2009 | A1 |
20090225203 | Tanida et al. | Sep 2009 | A1 |
20090237520 | Kaneko et al. | Sep 2009 | A1 |
20090245573 | Saptharishi et al. | Oct 2009 | A1 |
20090245637 | Barman et al. | Oct 2009 | A1 |
20090256947 | Ciurea et al. | Oct 2009 | A1 |
20090263017 | Tanbakuchi | Oct 2009 | A1 |
20090268192 | Koenck et al. | Oct 2009 | A1 |
20090268970 | Babacan et al. | Oct 2009 | A1 |
20090268983 | Stone et al. | Oct 2009 | A1 |
20090273663 | Yoshida | Nov 2009 | A1 |
20090274387 | Jin | Nov 2009 | A1 |
20090279800 | Uetani et al. | Nov 2009 | A1 |
20090284651 | Srinivasan | Nov 2009 | A1 |
20090290811 | Imai | Nov 2009 | A1 |
20090297056 | Lelescu et al. | Dec 2009 | A1 |
20090302205 | Olsen et al. | Dec 2009 | A9 |
20090317061 | Jung et al. | Dec 2009 | A1 |
20090322876 | Lee et al. | Dec 2009 | A1 |
20090323195 | Hembree et al. | Dec 2009 | A1 |
20090323206 | Oliver et al. | Dec 2009 | A1 |
20090324118 | Maslov et al. | Dec 2009 | A1 |
20100002126 | Wenstrand et al. | Jan 2010 | A1 |
20100002313 | Duparre et al. | Jan 2010 | A1 |
20100002314 | Duparre | Jan 2010 | A1 |
20100007714 | Kim et al. | Jan 2010 | A1 |
20100013927 | Nixon | Jan 2010 | A1 |
20100044815 | Chang | Feb 2010 | A1 |
20100045809 | Packard | Feb 2010 | A1 |
20100053342 | Hwang et al. | Mar 2010 | A1 |
20100053347 | Agarwala et al. | Mar 2010 | A1 |
20100053415 | Yun | Mar 2010 | A1 |
20100053600 | Tanida et al. | Mar 2010 | A1 |
20100060746 | Olsen et al. | Mar 2010 | A9 |
20100073463 | Momonoi et al. | Mar 2010 | A1 |
20100074532 | Gordon et al. | Mar 2010 | A1 |
20100085351 | Deb et al. | Apr 2010 | A1 |
20100085425 | Tan | Apr 2010 | A1 |
20100086227 | Sun et al. | Apr 2010 | A1 |
20100091389 | Henriksen et al. | Apr 2010 | A1 |
20100097444 | Lablans | Apr 2010 | A1 |
20100097491 | Farina et al. | Apr 2010 | A1 |
20100103175 | Okutomi et al. | Apr 2010 | A1 |
20100103259 | Tanida et al. | Apr 2010 | A1 |
20100103308 | Butterfield et al. | Apr 2010 | A1 |
20100111444 | Coffman | May 2010 | A1 |
20100118127 | Nam et al. | May 2010 | A1 |
20100128145 | Pitts et al. | May 2010 | A1 |
20100129048 | Pitts et al. | May 2010 | A1 |
20100133230 | Henriksen et al. | Jun 2010 | A1 |
20100133418 | Sargent et al. | Jun 2010 | A1 |
20100141802 | Knight et al. | Jun 2010 | A1 |
20100142828 | Chang et al. | Jun 2010 | A1 |
20100142839 | Lakus-Becker | Jun 2010 | A1 |
20100157073 | Kondo et al. | Jun 2010 | A1 |
20100165152 | Lim | Jul 2010 | A1 |
20100166410 | Chang | Jul 2010 | A1 |
20100171866 | Brady et al. | Jul 2010 | A1 |
20100177411 | Hegde et al. | Jul 2010 | A1 |
20100182406 | Benitez | Jul 2010 | A1 |
20100194860 | Mentz et al. | Aug 2010 | A1 |
20100194901 | van Hoorebeke et al. | Aug 2010 | A1 |
20100195716 | Klein Gunnewiek et al. | Aug 2010 | A1 |
20100201809 | Oyama et al. | Aug 2010 | A1 |
20100201834 | Maruyama et al. | Aug 2010 | A1 |
20100202054 | Niederer | Aug 2010 | A1 |
20100202683 | Robinson | Aug 2010 | A1 |
20100208100 | Olsen et al. | Aug 2010 | A9 |
20100214423 | Ogawa | Aug 2010 | A1 |
20100220212 | Perlman et al. | Sep 2010 | A1 |
20100223237 | Mishra et al. | Sep 2010 | A1 |
20100225740 | Jung et al. | Sep 2010 | A1 |
20100231285 | Boomer et al. | Sep 2010 | A1 |
20100238327 | Griffith et al. | Sep 2010 | A1 |
20100244165 | Lake et al. | Sep 2010 | A1 |
20100245684 | Xiao et al. | Sep 2010 | A1 |
20100254627 | Panahpour Tehrani et al. | Oct 2010 | A1 |
20100259610 | Petersen | Oct 2010 | A1 |
20100265346 | Iizuka | Oct 2010 | A1 |
20100265381 | Yamamoto et al. | Oct 2010 | A1 |
20100265385 | Knight et al. | Oct 2010 | A1 |
20100277629 | Tanaka | Nov 2010 | A1 |
20100281070 | Chan et al. | Nov 2010 | A1 |
20100289941 | Ito et al. | Nov 2010 | A1 |
20100290483 | Park et al. | Nov 2010 | A1 |
20100302423 | Adams, Jr. et al. | Dec 2010 | A1 |
20100309292 | Ho et al. | Dec 2010 | A1 |
20100309368 | Choi et al. | Dec 2010 | A1 |
20100321595 | Chiu | Dec 2010 | A1 |
20100321640 | Yeh et al. | Dec 2010 | A1 |
20100329556 | Mitarai et al. | Dec 2010 | A1 |
20100329582 | Albu et al. | Dec 2010 | A1 |
20110001037 | Tewinkle | Jan 2011 | A1 |
20110013006 | Uzenbajakava et al. | Jan 2011 | A1 |
20110018973 | Takayama | Jan 2011 | A1 |
20110019048 | Raynor et al. | Jan 2011 | A1 |
20110019243 | Constant, Jr. et al. | Jan 2011 | A1 |
20110031381 | Tay et al. | Feb 2011 | A1 |
20110032341 | Ignatov et al. | Feb 2011 | A1 |
20110032370 | Ludwig | Feb 2011 | A1 |
20110033129 | Robinson | Feb 2011 | A1 |
20110038536 | Gong | Feb 2011 | A1 |
20110043604 | Peleg et al. | Feb 2011 | A1 |
20110043613 | Rohaly et al. | Feb 2011 | A1 |
20110043661 | Podoleanu | Feb 2011 | A1 |
20110043665 | Ogasahara | Feb 2011 | A1 |
20110043668 | McKinnon et al. | Feb 2011 | A1 |
20110044502 | Liu et al. | Feb 2011 | A1 |
20110051255 | Lee et al. | Mar 2011 | A1 |
20110055729 | Mason et al. | Mar 2011 | A1 |
20110064327 | Dagher et al. | Mar 2011 | A1 |
20110069189 | Venkataraman et al. | Mar 2011 | A1 |
20110080487 | Venkataraman et al. | Apr 2011 | A1 |
20110084893 | Lee et al. | Apr 2011 | A1 |
20110085028 | Samadani et al. | Apr 2011 | A1 |
20110090217 | Mashitani et al. | Apr 2011 | A1 |
20110102553 | Corcoran et al. | May 2011 | A1 |
20110108708 | Olsen et al. | May 2011 | A1 |
20110115886 | Nguyen et al. | May 2011 | A1 |
20110121421 | Charbon et al. | May 2011 | A1 |
20110122308 | Duparre | May 2011 | A1 |
20110128393 | Tavi et al. | Jun 2011 | A1 |
20110128412 | Milnes et al. | Jun 2011 | A1 |
20110129165 | Lim et al. | Jun 2011 | A1 |
20110141309 | Nagashima et al. | Jun 2011 | A1 |
20110142138 | Tian et al. | Jun 2011 | A1 |
20110149408 | Haugholt et al. | Jun 2011 | A1 |
20110149409 | Haugholt et al. | Jun 2011 | A1 |
20110150321 | Cheong et al. | Jun 2011 | A1 |
20110153248 | Gu et al. | Jun 2011 | A1 |
20110157321 | Nakajima et al. | Jun 2011 | A1 |
20110157451 | Chang | Jun 2011 | A1 |
20110169994 | DiFrancesco et al. | Jul 2011 | A1 |
20110176020 | Chang | Jul 2011 | A1 |
20110181797 | Galstian et al. | Jul 2011 | A1 |
20110193944 | Lian et al. | Aug 2011 | A1 |
20110199458 | Hayasaka et al. | Aug 2011 | A1 |
20110200319 | Kravitz et al. | Aug 2011 | A1 |
20110206291 | Kashani et al. | Aug 2011 | A1 |
20110207074 | Hall-Holt et al. | Aug 2011 | A1 |
20110211068 | Yokota | Sep 2011 | A1 |
20110211077 | Nayar et al. | Sep 2011 | A1 |
20110211824 | Georgiev et al. | Sep 2011 | A1 |
20110221599 | Högasten | Sep 2011 | A1 |
20110221658 | Haddick et al. | Sep 2011 | A1 |
20110221939 | Jerdev | Sep 2011 | A1 |
20110221950 | Oostra et al. | Sep 2011 | A1 |
20110222757 | Yeatman, Jr. et al. | Sep 2011 | A1 |
20110228142 | Brueckner et al. | Sep 2011 | A1 |
20110228144 | Tian et al. | Sep 2011 | A1 |
20110234825 | Liu et al. | Sep 2011 | A1 |
20110234841 | Akeley et al. | Sep 2011 | A1 |
20110241234 | Duparre | Oct 2011 | A1 |
20110242342 | Goma et al. | Oct 2011 | A1 |
20110242355 | Goma et al. | Oct 2011 | A1 |
20110242356 | Aleksic et al. | Oct 2011 | A1 |
20110243428 | Das Gupta et al. | Oct 2011 | A1 |
20110255592 | Sung et al. | Oct 2011 | A1 |
20110255745 | Hodder et al. | Oct 2011 | A1 |
20110255786 | Hunter et al. | Oct 2011 | A1 |
20110261993 | Weiming et al. | Oct 2011 | A1 |
20110267264 | Mccarthy et al. | Nov 2011 | A1 |
20110267348 | Lin et al. | Nov 2011 | A1 |
20110273531 | Ito et al. | Nov 2011 | A1 |
20110274175 | Sumitomo | Nov 2011 | A1 |
20110274366 | Tardif | Nov 2011 | A1 |
20110279705 | Kuang et al. | Nov 2011 | A1 |
20110279721 | McMahon | Nov 2011 | A1 |
20110285701 | Chen et al. | Nov 2011 | A1 |
20110285866 | Bhrugumalla et al. | Nov 2011 | A1 |
20110285910 | Bamji et al. | Nov 2011 | A1 |
20110292216 | Fergus et al. | Dec 2011 | A1 |
20110298898 | Jung et al. | Dec 2011 | A1 |
20110298917 | Yanagita | Dec 2011 | A1 |
20110300929 | Tardif et al. | Dec 2011 | A1 |
20110310980 | Mathew | Dec 2011 | A1 |
20110316968 | Taguchi et al. | Dec 2011 | A1 |
20110317766 | Lim et al. | Dec 2011 | A1 |
20120012748 | Pain | Jan 2012 | A1 |
20120013748 | Stanwood et al. | Jan 2012 | A1 |
20120014456 | Martinez Bauza et al. | Jan 2012 | A1 |
20120019530 | Baker | Jan 2012 | A1 |
20120019700 | Gaber | Jan 2012 | A1 |
20120023456 | Sun et al. | Jan 2012 | A1 |
20120026297 | Sato | Feb 2012 | A1 |
20120026342 | Yu et al. | Feb 2012 | A1 |
20120026366 | Golan et al. | Feb 2012 | A1 |
20120026451 | Nystrom | Feb 2012 | A1 |
20120026478 | Chen et al. | Feb 2012 | A1 |
20120038745 | Yu et al. | Feb 2012 | A1 |
20120039525 | Tian et al. | Feb 2012 | A1 |
20120044249 | Mashitani et al. | Feb 2012 | A1 |
20120044372 | Côté et al. | Feb 2012 | A1 |
20120051624 | Ando | Mar 2012 | A1 |
20120056982 | Katz et al. | Mar 2012 | A1 |
20120057040 | Park et al. | Mar 2012 | A1 |
20120062697 | Treado et al. | Mar 2012 | A1 |
20120062702 | Jiang et al. | Mar 2012 | A1 |
20120062756 | Tian et al. | Mar 2012 | A1 |
20120069235 | Imai | Mar 2012 | A1 |
20120081519 | Goma et al. | Apr 2012 | A1 |
20120086803 | Malzbender et al. | Apr 2012 | A1 |
20120105590 | Fukumoto et al. | May 2012 | A1 |
20120105654 | Kwatra et al. | May 2012 | A1 |
20120105691 | Waqas et al. | May 2012 | A1 |
20120113232 | Joblove | May 2012 | A1 |
20120113318 | Galstian et al. | May 2012 | A1 |
20120113413 | Miahczylowicz-Wolski et al. | May 2012 | A1 |
20120114224 | Xu et al. | May 2012 | A1 |
20120114260 | Takahashi et al. | May 2012 | A1 |
20120120264 | Lee et al. | May 2012 | A1 |
20120127275 | Von Zitzewitz et al. | May 2012 | A1 |
20120127284 | Bar-Zeev et al. | May 2012 | A1 |
20120147139 | Li et al. | Jun 2012 | A1 |
20120147205 | Lelescu et al. | Jun 2012 | A1 |
20120153153 | Chang et al. | Jun 2012 | A1 |
20120154551 | Inoue | Jun 2012 | A1 |
20120155830 | Sasaki et al. | Jun 2012 | A1 |
20120162374 | Markas et al. | Jun 2012 | A1 |
20120163672 | McKinnon | Jun 2012 | A1 |
20120163725 | Fukuhara | Jun 2012 | A1 |
20120169433 | Mullins et al. | Jul 2012 | A1 |
20120170134 | Bolis et al. | Jul 2012 | A1 |
20120176479 | Mayhew et al. | Jul 2012 | A1 |
20120176481 | Lukk et al. | Jul 2012 | A1 |
20120188235 | Wu et al. | Jul 2012 | A1 |
20120188341 | Klein Gunnewiek et al. | Jul 2012 | A1 |
20120188389 | Lin et al. | Jul 2012 | A1 |
20120188420 | Black et al. | Jul 2012 | A1 |
20120188634 | Kubala et al. | Jul 2012 | A1 |
20120198677 | Duparre | Aug 2012 | A1 |
20120200669 | Lai et al. | Aug 2012 | A1 |
20120200726 | Bugnariu | Aug 2012 | A1 |
20120200734 | Tang | Aug 2012 | A1 |
20120206582 | DiCarlo et al. | Aug 2012 | A1 |
20120218455 | Imai et al. | Aug 2012 | A1 |
20120219236 | Ali et al. | Aug 2012 | A1 |
20120224083 | Jovanovski et al. | Sep 2012 | A1 |
20120229602 | Chen et al. | Sep 2012 | A1 |
20120229628 | Ishiyama et al. | Sep 2012 | A1 |
20120237114 | Park et al. | Sep 2012 | A1 |
20120249550 | Akeley et al. | Oct 2012 | A1 |
20120249750 | Izzat et al. | Oct 2012 | A1 |
20120249836 | Ali et al. | Oct 2012 | A1 |
20120249853 | Krolczyk et al. | Oct 2012 | A1 |
20120250990 | Bocirnea | Oct 2012 | A1 |
20120262601 | Choi et al. | Oct 2012 | A1 |
20120262607 | Shimura et al. | Oct 2012 | A1 |
20120268574 | Gidon et al. | Oct 2012 | A1 |
20120274626 | Hsieh | Nov 2012 | A1 |
20120287291 | McMahon | Nov 2012 | A1 |
20120290257 | Hodge et al. | Nov 2012 | A1 |
20120293489 | Chen et al. | Nov 2012 | A1 |
20120293624 | Chen et al. | Nov 2012 | A1 |
20120293695 | Tanaka | Nov 2012 | A1 |
20120307084 | Mantzel | Dec 2012 | A1 |
20120307093 | Miyoshi | Dec 2012 | A1 |
20120307099 | Yahata | Dec 2012 | A1 |
20120314033 | Lee et al. | Dec 2012 | A1 |
20120314937 | Kim et al. | Dec 2012 | A1 |
20120327222 | Ng et al. | Dec 2012 | A1 |
20130002828 | Ding et al. | Jan 2013 | A1 |
20130002953 | Noguchi et al. | Jan 2013 | A1 |
20130003184 | Duparre | Jan 2013 | A1 |
20130010073 | Do et al. | Jan 2013 | A1 |
20130016245 | Yuba | Jan 2013 | A1 |
20130016885 | Tsujimoto | Jan 2013 | A1 |
20130022111 | Chen et al. | Jan 2013 | A1 |
20130027580 | Olsen et al. | Jan 2013 | A1 |
20130033579 | Wajs | Feb 2013 | A1 |
20130033585 | Li et al. | Feb 2013 | A1 |
20130038696 | Ding et al. | Feb 2013 | A1 |
20130047396 | Au et al. | Feb 2013 | A1 |
20130050504 | Safaee-Rad et al. | Feb 2013 | A1 |
20130050526 | Keelan | Feb 2013 | A1 |
20130057710 | McMahon | Mar 2013 | A1 |
20130070060 | Chatterjee et al. | Mar 2013 | A1 |
20130076967 | Brunner et al. | Mar 2013 | A1 |
20130077859 | Stauder et al. | Mar 2013 | A1 |
20130077880 | Venkataraman et al. | Mar 2013 | A1 |
20130077882 | Venkataraman et al. | Mar 2013 | A1 |
20130083172 | Baba | Apr 2013 | A1 |
20130088489 | Schmeitz et al. | Apr 2013 | A1 |
20130088637 | Duparre | Apr 2013 | A1 |
20130093842 | Yahata | Apr 2013 | A1 |
20130100254 | Morioka et al. | Apr 2013 | A1 |
20130107061 | Kumar et al. | May 2013 | A1 |
20130113888 | Koguchi | May 2013 | A1 |
20130113899 | Morohoshi et al. | May 2013 | A1 |
20130113939 | Strandemar | May 2013 | A1 |
20130120536 | Song et al. | May 2013 | A1 |
20130120605 | Georgiev et al. | May 2013 | A1 |
20130121559 | Hu et al. | May 2013 | A1 |
20130127988 | Wang et al. | May 2013 | A1 |
20130128049 | Schofield et al. | May 2013 | A1 |
20130128068 | Georgiev et al. | May 2013 | A1 |
20130128069 | Georgiev et al. | May 2013 | A1 |
20130128087 | Georgiev et al. | May 2013 | A1 |
20130128121 | Agarwala et al. | May 2013 | A1 |
20130135315 | Bares et al. | May 2013 | A1 |
20130135448 | Nagumo et al. | May 2013 | A1 |
20130147979 | McMahon et al. | Jun 2013 | A1 |
20130155050 | Rastogi et al. | Jun 2013 | A1 |
20130162641 | Zhang et al. | Jun 2013 | A1 |
20130169754 | Aronsson et al. | Jul 2013 | A1 |
20130176394 | Tian et al. | Jul 2013 | A1 |
20130208138 | Li et al. | Aug 2013 | A1 |
20130215108 | McMahon et al. | Aug 2013 | A1 |
20130215231 | Hiramoto et al. | Aug 2013 | A1 |
20130216144 | Robinson et al. | Aug 2013 | A1 |
20130222556 | Shimada | Aug 2013 | A1 |
20130222656 | Kaneko | Aug 2013 | A1 |
20130223759 | Nishiyama | Aug 2013 | A1 |
20130229540 | Farina et al. | Sep 2013 | A1 |
20130230237 | Schlosser et al. | Sep 2013 | A1 |
20130250123 | Zhang et al. | Sep 2013 | A1 |
20130250150 | Malone et al. | Sep 2013 | A1 |
20130258067 | Zhang et al. | Oct 2013 | A1 |
20130259317 | Gaddy | Oct 2013 | A1 |
20130265459 | Duparre et al. | Oct 2013 | A1 |
20130274596 | Azizian et al. | Oct 2013 | A1 |
20130274923 | By | Oct 2013 | A1 |
20130278631 | Border et al. | Oct 2013 | A1 |
20130286236 | Mankowski | Oct 2013 | A1 |
20130293760 | Nisenzon et al. | Nov 2013 | A1 |
20130308197 | Duparre | Nov 2013 | A1 |
20130321581 | El-ghoroury et al. | Dec 2013 | A1 |
20130321589 | Kirk et al. | Dec 2013 | A1 |
20130335598 | Gustavsson et al. | Dec 2013 | A1 |
20130342641 | Morioka et al. | Dec 2013 | A1 |
20140002674 | Duparre et al. | Jan 2014 | A1 |
20140002675 | Duparre et al. | Jan 2014 | A1 |
20140009586 | McNamer et al. | Jan 2014 | A1 |
20140013273 | Ng | Jan 2014 | A1 |
20140037137 | Broaddus et al. | Feb 2014 | A1 |
20140037140 | Benhimane et al. | Feb 2014 | A1 |
20140043507 | Wang et al. | Feb 2014 | A1 |
20140059462 | Wernersson | Feb 2014 | A1 |
20140076336 | Clayton et al. | Mar 2014 | A1 |
20140078333 | Miao | Mar 2014 | A1 |
20140079336 | Venkataraman et al. | Mar 2014 | A1 |
20140081454 | Nuyujukian et al. | Mar 2014 | A1 |
20140085502 | Lin et al. | Mar 2014 | A1 |
20140092281 | Nisenzon et al. | Apr 2014 | A1 |
20140098266 | Nayar et al. | Apr 2014 | A1 |
20140098267 | Tian et al. | Apr 2014 | A1 |
20140104490 | Hsieh et al. | Apr 2014 | A1 |
20140118493 | Sali et al. | May 2014 | A1 |
20140118584 | Lee et al. | May 2014 | A1 |
20140125760 | Au et al. | May 2014 | A1 |
20140125771 | Grossmann et al. | May 2014 | A1 |
20140132810 | McMahon | May 2014 | A1 |
20140139642 | Ni et al. | May 2014 | A1 |
20140139643 | Hogasten et al. | May 2014 | A1 |
20140140626 | Cho et al. | May 2014 | A1 |
20140146132 | Bagnato et al. | May 2014 | A1 |
20140146201 | Knight et al. | May 2014 | A1 |
20140176592 | Wilburn et al. | Jun 2014 | A1 |
20140183258 | DiMuro | Jul 2014 | A1 |
20140183334 | Wang et al. | Jul 2014 | A1 |
20140186045 | Poddar et al. | Jul 2014 | A1 |
20140192154 | Jeong et al. | Jul 2014 | A1 |
20140192253 | Laroia | Jul 2014 | A1 |
20140198188 | Izawa | Jul 2014 | A1 |
20140204183 | Lee et al. | Jul 2014 | A1 |
20140218546 | McMahon | Aug 2014 | A1 |
20140232822 | Venkataraman et al. | Aug 2014 | A1 |
20140240528 | Venkataraman et al. | Aug 2014 | A1 |
20140240529 | Venkataraman et al. | Aug 2014 | A1 |
20140253738 | Mullis | Sep 2014 | A1 |
20140267243 | Venkataraman et al. | Sep 2014 | A1 |
20140267286 | Duparre | Sep 2014 | A1 |
20140267633 | Venkataraman et al. | Sep 2014 | A1 |
20140267762 | Mullis et al. | Sep 2014 | A1 |
20140267829 | McMahon et al. | Sep 2014 | A1 |
20140267890 | Lelescu et al. | Sep 2014 | A1 |
20140285675 | Mullis | Sep 2014 | A1 |
20140300706 | Song | Oct 2014 | A1 |
20140307058 | Kirk et al. | Oct 2014 | A1 |
20140307063 | Lee | Oct 2014 | A1 |
20140313315 | Shoham et al. | Oct 2014 | A1 |
20140321712 | Ciurea et al. | Oct 2014 | A1 |
20140333731 | Venkataraman et al. | Nov 2014 | A1 |
20140333764 | Venkataraman et al. | Nov 2014 | A1 |
20140333787 | Venkataraman et al. | Nov 2014 | A1 |
20140340539 | Venkataraman et al. | Nov 2014 | A1 |
20140347509 | Venkataraman et al. | Nov 2014 | A1 |
20140347748 | Duparre | Nov 2014 | A1 |
20140354773 | Venkataraman et al. | Dec 2014 | A1 |
20140354843 | Venkataraman et al. | Dec 2014 | A1 |
20140354844 | Venkataraman et al. | Dec 2014 | A1 |
20140354853 | Venkataraman et al. | Dec 2014 | A1 |
20140354854 | Venkataraman et al. | Dec 2014 | A1 |
20140354855 | Venkataraman et al. | Dec 2014 | A1 |
20140355870 | Venkataraman et al. | Dec 2014 | A1 |
20140368662 | Venkataraman et al. | Dec 2014 | A1 |
20140368683 | Venkataraman et al. | Dec 2014 | A1 |
20140368684 | Venkataraman et al. | Dec 2014 | A1 |
20140368685 | Venkataraman et al. | Dec 2014 | A1 |
20140368686 | Duparre | Dec 2014 | A1 |
20140369612 | Venkataraman et al. | Dec 2014 | A1 |
20140369615 | Venkataraman et al. | Dec 2014 | A1 |
20140376825 | Venkataraman et al. | Dec 2014 | A1 |
20140376826 | Venkataraman et al. | Dec 2014 | A1 |
20150002734 | Lee | Jan 2015 | A1 |
20150003752 | Venkataraman et al. | Jan 2015 | A1 |
20150003753 | Venkataraman et al. | Jan 2015 | A1 |
20150009353 | Venkataraman et al. | Jan 2015 | A1 |
20150009354 | Venkataraman et al. | Jan 2015 | A1 |
20150009362 | Venkataraman et al. | Jan 2015 | A1 |
20150015669 | Venkataraman et al. | Jan 2015 | A1 |
20150035992 | Mullis | Feb 2015 | A1 |
20150036014 | Lelescu et al. | Feb 2015 | A1 |
20150036015 | Lelescu et al. | Feb 2015 | A1 |
20150042766 | Ciurea et al. | Feb 2015 | A1 |
20150042767 | Ciurea et al. | Feb 2015 | A1 |
20150042814 | Vaziri | Feb 2015 | A1 |
20150042833 | Lelescu et al. | Feb 2015 | A1 |
20150049915 | Ciurea et al. | Feb 2015 | A1 |
20150049916 | Ciurea et al. | Feb 2015 | A1 |
20150049917 | Ciurea et al. | Feb 2015 | A1 |
20150055884 | Venkataraman et al. | Feb 2015 | A1 |
20150085073 | Bruls et al. | Mar 2015 | A1 |
20150085174 | Shabtay et al. | Mar 2015 | A1 |
20150091900 | Yang et al. | Apr 2015 | A1 |
20150095235 | Dua | Apr 2015 | A1 |
20150098079 | Montgomery et al. | Apr 2015 | A1 |
20150104076 | Hayasaka | Apr 2015 | A1 |
20150104101 | Bryant et al. | Apr 2015 | A1 |
20150122411 | Rodda et al. | May 2015 | A1 |
20150124059 | Georgiev et al. | May 2015 | A1 |
20150124113 | Rodda et al. | May 2015 | A1 |
20150124151 | Rodda et al. | May 2015 | A1 |
20150138346 | Venkataraman et al. | May 2015 | A1 |
20150146029 | Venkataraman et al. | May 2015 | A1 |
20150146030 | Venkataraman et al. | May 2015 | A1 |
20150161798 | Venkataraman et al. | Jun 2015 | A1 |
20150199793 | Venkataraman et al. | Jul 2015 | A1 |
20150199841 | Venkataraman et al. | Jul 2015 | A1 |
20150207990 | Ford et al. | Jul 2015 | A1 |
20150228081 | Kim et al. | Aug 2015 | A1 |
20150235476 | McMahon et al. | Aug 2015 | A1 |
20150237329 | Venkataraman et al. | Aug 2015 | A1 |
20150243480 | Yamada | Aug 2015 | A1 |
20150244927 | Laroia et al. | Aug 2015 | A1 |
20150245013 | Venkataraman et al. | Aug 2015 | A1 |
20150248744 | Hayasaka et al. | Sep 2015 | A1 |
20150254868 | Srikanth et al. | Sep 2015 | A1 |
20150264337 | Venkataraman et al. | Sep 2015 | A1 |
20150288861 | Duparre | Oct 2015 | A1 |
20150296137 | Duparre et al. | Oct 2015 | A1 |
20150312455 | Venkataraman et al. | Oct 2015 | A1 |
20150317638 | Donaldson | Nov 2015 | A1 |
20150326852 | Duparre et al. | Nov 2015 | A1 |
20150332468 | Hayasaka et al. | Nov 2015 | A1 |
20150373261 | Rodda et al. | Dec 2015 | A1 |
20160037097 | Duparre | Feb 2016 | A1 |
20160042548 | Du et al. | Feb 2016 | A1 |
20160044252 | Molina | Feb 2016 | A1 |
20160044257 | Venkataraman et al. | Feb 2016 | A1 |
20160057332 | Ciurea et al. | Feb 2016 | A1 |
20160065934 | Kaza et al. | Mar 2016 | A1 |
20160163051 | Mullis | Jun 2016 | A1 |
20160165106 | Duparre | Jun 2016 | A1 |
20160165134 | Lelescu et al. | Jun 2016 | A1 |
20160165147 | Nisenzon et al. | Jun 2016 | A1 |
20160165212 | Mullis | Jun 2016 | A1 |
20160182786 | Anderson et al. | Jun 2016 | A1 |
20160191768 | Shin et al. | Jun 2016 | A1 |
20160195733 | Lelescu et al. | Jul 2016 | A1 |
20160198096 | McMahon et al. | Jul 2016 | A1 |
20160209654 | Riccomini et al. | Jul 2016 | A1 |
20160210785 | Balachandreswaran et al. | Jul 2016 | A1 |
20160227195 | Venkataraman et al. | Aug 2016 | A1 |
20160249001 | McMahon | Aug 2016 | A1 |
20160255333 | Nisenzon et al. | Sep 2016 | A1 |
20160266284 | Duparre et al. | Sep 2016 | A1 |
20160267486 | Mitra et al. | Sep 2016 | A1 |
20160267665 | Venkataraman et al. | Sep 2016 | A1 |
20160267672 | Ciurea et al. | Sep 2016 | A1 |
20160269626 | McMahon | Sep 2016 | A1 |
20160269627 | McMahon | Sep 2016 | A1 |
20160269650 | Venkataraman et al. | Sep 2016 | A1 |
20160269651 | Venkataraman et al. | Sep 2016 | A1 |
20160269664 | Duparre | Sep 2016 | A1 |
20160309084 | Venkataraman et al. | Oct 2016 | A1 |
20160309134 | Venkataraman et al. | Oct 2016 | A1 |
20160316140 | Nayar et al. | Oct 2016 | A1 |
20160323578 | Kaneko et al. | Nov 2016 | A1 |
20170004791 | Aubineau et al. | Jan 2017 | A1 |
20170006233 | Venkataraman et al. | Jan 2017 | A1 |
20170011405 | Pandey | Jan 2017 | A1 |
20170048468 | Pain et al. | Feb 2017 | A1 |
20170053382 | Lelescu et al. | Feb 2017 | A1 |
20170054901 | Venkataraman et al. | Feb 2017 | A1 |
20170070672 | Rodda et al. | Mar 2017 | A1 |
20170070673 | Lelescu et al. | Mar 2017 | A1 |
20170070753 | Kaneko | Mar 2017 | A1 |
20170078568 | Venkataraman et al. | Mar 2017 | A1 |
20170085845 | Venkataraman et al. | Mar 2017 | A1 |
20170094243 | Venkataraman et al. | Mar 2017 | A1 |
20170099465 | Mullis et al. | Apr 2017 | A1 |
20170109742 | Varadarajan | Apr 2017 | A1 |
20170142405 | Shors et al. | May 2017 | A1 |
20170163862 | Molina | Jun 2017 | A1 |
20170178363 | Venkataraman et al. | Jun 2017 | A1 |
20170187933 | Duparre | Jun 2017 | A1 |
20170188011 | Panescu et al. | Jun 2017 | A1 |
20170244960 | Ciurea et al. | Aug 2017 | A1 |
20170257562 | Venkataraman et al. | Sep 2017 | A1 |
20170365104 | McMahon et al. | Dec 2017 | A1 |
20180005244 | Govindarajan et al. | Jan 2018 | A1 |
20180007284 | Venkataraman et al. | Jan 2018 | A1 |
20180013945 | Ciurea et al. | Jan 2018 | A1 |
20180024330 | Laroia | Jan 2018 | A1 |
20180035057 | McMahon et al. | Feb 2018 | A1 |
20180040135 | Mullis | Feb 2018 | A1 |
20180048830 | Venkataraman et al. | Feb 2018 | A1 |
20180048879 | Venkataraman et al. | Feb 2018 | A1 |
20180081090 | Duparre et al. | Mar 2018 | A1 |
20180089888 | Ondruska | Mar 2018 | A1 |
20180097993 | Nayar et al. | Apr 2018 | A1 |
20180109782 | Duparre et al. | Apr 2018 | A1 |
20180124311 | Lelescu et al. | May 2018 | A1 |
20180131852 | McMahon | May 2018 | A1 |
20180139382 | Venkataraman et al. | May 2018 | A1 |
20180144458 | Xu | May 2018 | A1 |
20180147645 | Boccadoro et al. | May 2018 | A1 |
20180189767 | Bigioi | Jul 2018 | A1 |
20180197035 | Venkataraman et al. | Jul 2018 | A1 |
20180211402 | Ciurea et al. | Jul 2018 | A1 |
20180227511 | McMahon | Aug 2018 | A1 |
20180240265 | Yang et al. | Aug 2018 | A1 |
20180270473 | Mullis | Sep 2018 | A1 |
20180286120 | Fleishman et al. | Oct 2018 | A1 |
20180302554 | Lelescu et al. | Oct 2018 | A1 |
20180330182 | Venkataraman et al. | Nov 2018 | A1 |
20180376122 | Park et al. | Dec 2018 | A1 |
20190012768 | Tafazoli Bilandi et al. | Jan 2019 | A1 |
20190037116 | Molina | Jan 2019 | A1 |
20190037150 | Srikanth et al. | Jan 2019 | A1 |
20190043253 | Lucas et al. | Feb 2019 | A1 |
20190057513 | Jain et al. | Feb 2019 | A1 |
20190063905 | Venkataraman et al. | Feb 2019 | A1 |
20190089947 | Venkataraman et al. | Mar 2019 | A1 |
20190098209 | Venkataraman et al. | Mar 2019 | A1 |
20190109998 | Venkataraman et al. | Apr 2019 | A1 |
20190122422 | Sheffield | Apr 2019 | A1 |
20190164341 | Venkataraman | May 2019 | A1 |
20190174040 | McMahon | Jun 2019 | A1 |
20190197735 | Xiong et al. | Jun 2019 | A1 |
20190215496 | Mullis et al. | Jul 2019 | A1 |
20190230348 | Ciurea et al. | Jul 2019 | A1 |
20190235138 | Duparre et al. | Aug 2019 | A1 |
20190243086 | Rodda et al. | Aug 2019 | A1 |
20190244379 | Venkataraman | Aug 2019 | A1 |
20190268586 | Mullis | Aug 2019 | A1 |
20190289176 | Duparre | Sep 2019 | A1 |
20190347768 | Lelescu et al. | Nov 2019 | A1 |
20190356863 | Venkataraman et al. | Nov 2019 | A1 |
20190362515 | Ciurea et al. | Nov 2019 | A1 |
20190364263 | Jannard et al. | Nov 2019 | A1 |
20190371053 | Engholm et al. | Dec 2019 | A1 |
20200026948 | Venkataraman et al. | Jan 2020 | A1 |
20200066036 | Choi | Feb 2020 | A1 |
20200151894 | Jain et al. | May 2020 | A1 |
20200252597 | Mullis | Aug 2020 | A1 |
20200334905 | Venkataraman | Oct 2020 | A1 |
20200389604 | Venkataraman et al. | Dec 2020 | A1 |
20210042952 | Jain et al. | Feb 2021 | A1 |
20210044790 | Venkataraman et al. | Feb 2021 | A1 |
20210063141 | Venkataraman et al. | Mar 2021 | A1 |
20210133927 | Lelescu et al. | May 2021 | A1 |
20210150748 | Ciurea et al. | May 2021 | A1 |
Number | Date | Country |
---|---|---
2488005 | Apr 2002 | CN |
1619358 | May 2005 | CN |
1669332 | Sep 2005 | CN |
1727991 | Feb 2006 | CN |
1839394 | Sep 2006 | CN |
1985524 | Jun 2007 | CN |
1992499 | Jul 2007 | CN |
101010619 | Aug 2007 | CN |
101046882 | Oct 2007 | CN |
101064780 | Oct 2007 | CN |
101102388 | Jan 2008 | CN |
101147392 | Mar 2008 | CN |
201043890 | Apr 2008 | CN |
101212566 | Jul 2008 | CN |
101312540 | Nov 2008 | CN |
101427372 | May 2009 | CN |
101551586 | Oct 2009 | CN |
101593350 | Dec 2009 | CN |
101606086 | Dec 2009 | CN |
101785025 | Jul 2010 | CN |
101883291 | Nov 2010 | CN |
102037717 | Apr 2011 | CN |
102164298 | Aug 2011 | CN |
102184720 | Sep 2011 | CN |
102375199 | Mar 2012 | CN |
103004180 | Mar 2013 | CN |
103765864 | Apr 2014 | CN |
104081414 | Oct 2014 | CN |
104508681 | Apr 2015 | CN |
104662589 | May 2015 | CN |
104685513 | Jun 2015 | CN |
104685860 | Jun 2015 | CN |
105409212 | Mar 2016 | CN |
103765864 | Jul 2017 | CN |
104081414 | Aug 2017 | CN |
104662589 | Aug 2017 | CN |
107077743 | Aug 2017 | CN |
107230236 | Oct 2017 | CN |
107346061 | Nov 2017 | CN |
107404609 | Nov 2017 | CN |
104685513 | Apr 2018 | CN |
107924572 | Apr 2018 | CN |
108307675 | Jul 2018 | CN |
104335246 | Sep 2018 | CN |
107404609 | Feb 2020 | CN |
107346061 | Apr 2020 | CN |
107230236 | Dec 2020 | CN |
108307675 | Dec 2020 | CN |
107077743 | Mar 2021 | CN |
114612545 | Jun 2022 | CN |
602011041799.1 | Sep 2017 | DE |
0677821 | Oct 1995 | EP |
0840502 | May 1998 | EP |
1201407 | May 2002 | EP |
1355274 | Oct 2003 | EP |
1734766 | Dec 2006 | EP |
1991145 | Nov 2008 | EP |
1243945 | Jan 2009 | EP |
2026563 | Feb 2009 | EP |
2031592 | Mar 2009 | EP |
2041454 | Apr 2009 | EP |
2072785 | Jun 2009 | EP |
2104334 | Sep 2009 | EP |
2136345 | Dec 2009 | EP |
2156244 | Feb 2010 | EP |
2244484 | Oct 2010 | EP |
0957642 | Apr 2011 | EP |
2336816 | Jun 2011 | EP |
2339532 | Jun 2011 | EP |
2381418 | Oct 2011 | EP |
2386554 | Nov 2011 | EP |
2462477 | Jun 2012 | EP |
2502115 | Sep 2012 | EP |
2569935 | Mar 2013 | EP |
2652678 | Oct 2013 | EP |
2677066 | Dec 2013 | EP |
2708019 | Mar 2014 | EP |
2761534 | Aug 2014 | EP |
2777245 | Sep 2014 | EP |
2867718 | May 2015 | EP |
2873028 | May 2015 | EP |
2888698 | Jul 2015 | EP |
2888720 | Jul 2015 | EP |
2901671 | Aug 2015 | EP |
2973476 | Jan 2016 | EP |
3066690 | Sep 2016 | EP |
2569935 | Dec 2016 | EP |
3201877 | Aug 2017 | EP |
2652678 | Sep 2017 | EP |
3284061 | Feb 2018 | EP |
3286914 | Feb 2018 | EP |
3201877 | Mar 2018 | EP |
2817955 | Apr 2018 | EP |
3328048 | May 2018 | EP |
3075140 | Jun 2018 | EP |
3201877 | Dec 2018 | EP |
3467776 | Apr 2019 | EP |
2708019 | Oct 2019 | EP |
3286914 | Dec 2019 | EP |
2761534 | Nov 2020 | EP |
2888720 | Mar 2021 | EP |
3328048 | Apr 2021 | EP |
2482022 | Jan 2012 | GB |
2708CHENP2014 | Aug 2015 | IN |
361194 | Mar 2021 | IN |
59-025483 | Feb 1984 | JP |
64-037177 | Feb 1989 | JP |
02-285772 | Nov 1990 | JP |
06129851 | May 1994 | JP |
07-015457 | Jan 1995 | JP |
H0756112 | Mar 1995 | JP |
09171075 | Jun 1997 | JP |
09181913 | Jul 1997 | JP |
10253351 | Sep 1998 | JP |
11142609 | May 1999 | JP |
11223708 | Aug 1999 | JP |
11325889 | Nov 1999 | JP |
2000209503 | Jul 2000 | JP |
2001008235 | Jan 2001 | JP |
2001194114 | Jul 2001 | JP |
2001264033 | Sep 2001 | JP |
2001277260 | Oct 2001 | JP |
2001337263 | Dec 2001 | JP |
2002195910 | Jul 2002 | JP |
2002205310 | Jul 2002 | JP |
2002209226 | Jul 2002 | JP |
2002250607 | Sep 2002 | JP |
2002252338 | Sep 2002 | JP |
2003094445 | Apr 2003 | JP |
2003139910 | May 2003 | JP |
2003163938 | Jun 2003 | JP |
2003298920 | Oct 2003 | JP |
2004221585 | Aug 2004 | JP |
2005116022 | Apr 2005 | JP |
2005181460 | Jul 2005 | JP |
2005295381 | Oct 2005 | JP |
2005303694 | Oct 2005 | JP |
2005341569 | Dec 2005 | JP |
2005354124 | Dec 2005 | JP |
2006033228 | Feb 2006 | JP |
2006033493 | Feb 2006 | JP |
2006047944 | Feb 2006 | JP |
2006258930 | Sep 2006 | JP |
2007520107 | Jul 2007 | JP |
2007259136 | Oct 2007 | JP |
2008039852 | Feb 2008 | JP |
2008055908 | Mar 2008 | JP |
2008507874 | Mar 2008 | JP |
2008172735 | Jul 2008 | JP |
2008258885 | Oct 2008 | JP |
2009064421 | Mar 2009 | JP |
2009132010 | Jun 2009 | JP |
2009300268 | Dec 2009 | JP |
2010139288 | Jun 2010 | JP |
2011017764 | Jan 2011 | JP |
2011030184 | Feb 2011 | JP |
2011109484 | Jun 2011 | JP |
2011523538 | Aug 2011 | JP |
2011203238 | Oct 2011 | JP |
2012504805 | Feb 2012 | JP |
2011052064 | Mar 2013 | JP |
2013509022 | Mar 2013 | JP |
2013526801 | Jun 2013 | JP |
2014519741 | Aug 2014 | JP |
2014521117 | Aug 2014 | JP |
2014535191 | Dec 2014 | JP |
2015022510 | Feb 2015 | JP |
2015522178 | Aug 2015 | JP |
2015534734 | Dec 2015 | JP |
5848754 | Jan 2016 | JP |
2016524125 | Aug 2016 | JP |
6140709 | May 2017 | JP |
2017163550 | Sep 2017 | JP |
2017163587 | Sep 2017 | JP |
2017531976 | Oct 2017 | JP |
6546613 | Jul 2019 | JP |
2019-220957 | Dec 2019 | JP |
6630891 | Dec 2019 | JP |
2020017999 | Jan 2020 | JP |
6767543 | Sep 2020 | JP |
6767558 | Sep 2020 | JP |
1020050004239 | Jan 2005 | KR |
100496875 | Jun 2005 | KR |
1020110097647 | Aug 2011 | KR |
20140045373 | Apr 2014 | KR |
20170063827 | Jun 2017 | KR |
101824672 | Feb 2018 | KR |
101843994 | Mar 2018 | KR |
101973822 | Apr 2019 | KR |
10-2002165 | Jul 2019 | KR |
10-2111181 | May 2020 | KR |
191151 | Jul 2013 | SG |
11201500910 | Oct 2015 | SG |
200828994 | Jul 2008 | TW |
200939739 | Sep 2009 | TW |
201228382 | Jul 2012 | TW |
I535292 | May 2016 | TW |
1994020875 | Sep 1994 | WO |
2005057922 | Jun 2005 | WO |
2006039906 | Apr 2006 | WO |
2007013250 | Feb 2007 | WO |
2007083579 | Jul 2007 | WO |
2007134137 | Nov 2007 | WO |
2008045198 | Apr 2008 | WO |
2008050904 | May 2008 | WO |
2008108271 | Sep 2008 | WO |
2008108926 | Sep 2008 | WO |
2008150817 | Dec 2008 | WO |
2009073950 | Jun 2009 | WO |
2009151903 | Dec 2009 | WO |
2009157273 | Dec 2009 | WO |
2010037512 | Apr 2010 | WO |
2011008443 | Jan 2011 | WO |
2011026527 | Mar 2011 | WO |
2011046607 | Apr 2011 | WO |
2011055655 | May 2011 | WO |
2011063347 | May 2011 | WO |
2011105814 | Sep 2011 | WO |
2011116203 | Sep 2011 | WO |
2011063347 | Oct 2011 | WO |
2011121117 | Oct 2011 | WO |
2011143501 | Nov 2011 | WO |
2012057619 | May 2012 | WO |
2012057620 | May 2012 | WO |
2012057621 | May 2012 | WO |
2012057622 | May 2012 | WO |
2012057623 | May 2012 | WO |
2012057620 | Jun 2012 | WO |
2012074361 | Jun 2012 | WO |
2012078126 | Jun 2012 | WO |
2012082904 | Jun 2012 | WO |
2012155119 | Nov 2012 | WO |
2013003276 | Jan 2013 | WO |
2013043751 | Mar 2013 | WO |
2013043761 | Mar 2013 | WO |
2013049699 | Apr 2013 | WO |
2013055960 | Apr 2013 | WO |
2013119706 | Aug 2013 | WO |
2013126578 | Aug 2013 | WO |
2013166215 | Nov 2013 | WO |
2014004134 | Jan 2014 | WO |
2014005123 | Jan 2014 | WO |
2014031795 | Feb 2014 | WO |
2014052974 | Apr 2014 | WO |
2014032020 | May 2014 | WO |
2014078443 | May 2014 | WO |
2014130849 | Aug 2014 | WO |
2014131038 | Aug 2014 | WO |
2014133974 | Sep 2014 | WO |
2014138695 | Sep 2014 | WO |
2014138697 | Sep 2014 | WO |
2014144157 | Sep 2014 | WO |
2014145856 | Sep 2014 | WO |
2014149403 | Sep 2014 | WO |
2014149902 | Sep 2014 | WO |
2014150856 | Sep 2014 | WO |
2014153098 | Sep 2014 | WO |
2014159721 | Oct 2014 | WO |
2014159779 | Oct 2014 | WO |
2014160142 | Oct 2014 | WO |
2014164550 | Oct 2014 | WO |
2014164909 | Oct 2014 | WO |
2014165244 | Oct 2014 | WO |
2014133974 | Apr 2015 | WO |
2015048694 | Apr 2015 | WO |
2015048906 | Apr 2015 | WO |
2015070105 | May 2015 | WO |
2015074078 | May 2015 | WO |
2015081279 | Jun 2015 | WO |
2015134996 | Sep 2015 | WO |
2015183824 | Dec 2015 | WO |
2016054089 | Apr 2016 | WO |
2016172125 | Oct 2016 | WO |
2016167814 | Oct 2016 | WO |
2016172125 | Apr 2017 | WO |
2018053181 | Mar 2018 | WO |
2019038193 | Feb 2019 | WO |
Entry |
---|
US 8,957,977 B2, 02/2015, Venkataraman et al. (withdrawn) |
Ansari et al., “3-D Face Modeling Using Two Views and a Generic Face Model with Application to 3-D Face Recognition”, Proceedings of the IEEE Conference on Advanced Video and Signal Based Surveillance, Jul. 22, 2003, 9 pgs. |
Aufderheide et al., “A MEMS-based Smart Sensor System for Estimation of Camera Pose for Computer Vision Applications”, Research and Innovation Conference 2011, Jul. 29, 2011, pp. 1-10. |
Baker et al., “Limits on Super-Resolution and How to Break Them”, IEEE Transactions on Pattern Analysis and Machine Intelligence, Sep. 2002, vol. 24, No. 9, pp. 1167-1183. |
Banz et al., “Real-Time Semi-Global Matching Disparity Estimation on the GPU”, 2011 IEEE International Conference on Computer Vision Workshops (ICCV Workshops), Nov. 2011. |
Barron et al., “Intrinsic Scene Properties from a Single RGB-D Image”, 2013 IEEE Conference on Computer Vision and Pattern Recognition, Jun. 23-28, 2013, Portland, OR, USA, pp. 17-24. |
Bennett et al., “Multispectral Bilateral Video Fusion”, Computer Graphics (ACM SIGGRAPH Proceedings), Jul. 25, 2006, published Jul. 30, 2006, 1 pg. |
Bennett et al., “Multispectral Video Fusion”, Computer Graphics (ACM SIGGRAPH Proceedings), Jul. 25, 2006, published Jul. 30, 2006, 1 pg. |
Berretti et al., “Face Recognition by Super-Resolved 3D Models from Consumer Depth Cameras”, IEEE Transactions on Information Forensics and Security, vol. 9, No. 9, Sep. 2014, pp. 1436-1448. |
Bertalmio et al., “Image Inpainting”, Proceedings of the 27th Annual Conference on Computer Graphics and Interactive Techniques, 2000, ACM Press/Addison-Wesley Publishing Co., pp. 417-424. |
Bertero et al., “Super-resolution in computational imaging”, Micron, Jan. 1, 2003, vol. 34, Issues 6-7, 17 pgs. |
Bishop et al., “Full-Resolution Depth Map Estimation from an Aliased Plenoptic Light Field”, ACCV Nov. 8, 2010, Part II, LNCS 6493, pp. 186-200. |
Bishop et al., “Light Field Superresolution”, Computational Photography (ICCP), 2009 IEEE International Conference, Conference Date Apr. 16-17, published Jan. 26, 2009, 9 pgs. |
Bishop et al., “The Light Field Camera: Extended Depth of Field, Aliasing, and Superresolution”, IEEE Transactions on Pattern Analysis and Machine Intelligence, May 2012, vol. 34, No. 5, published Aug. 18, 2011, pp. 972-986. |
Blanz et al., “A Morphable Model for The Synthesis of 3D Faces”, In Proceedings of ACM SIGGRAPH 1999, Jul. 1, 1999, pp. 187-194. |
Borman, “Topics in Multiframe Superresolution Restoration”, Thesis of Sean Borman, Apr. 2004, 282 pgs. |
Borman et al., “Image Sequence Processing”, Dekker Encyclopedia of Optical Engineering, Oct. 14, 2002, 81 pgs. |
Borman et al., “Linear models for multi-frame super-resolution restoration under non-affine registration and spatially varying PSF”, Proc. SPIE, May 21, 2004, vol. 5299, 12 pgs. |
Borman et al., “Simultaneous Multi-Frame MAP Super-Resolution Video Enhancement Using Spatio-Temporal Priors”, Image Processing, 1999, ICIP 99 Proceedings, vol. 3, pp. 469-473. |
Borman et al., “Super-Resolution from Image Sequences—A Review”, Circuits & Systems, 1998, pp. 374-378. |
Borman et al., “Nonlinear Prediction Methods for Estimation of Clique Weighting Parameters in NonGaussian Image Models”, Proc. SPIE, Sep. 22, 1998, vol. 3459, 9 pgs. |
Borman et al., “Block-Matching Sub-Pixel Motion Estimation from Noisy, Under-Sampled Frames—An Empirical Performance Evaluation”, Proc SPIE, Dec. 28, 1998, vol. 3653, 10 pgs. |
Borman et al., “Image Resampling and Constraint Formulation for Multi-Frame Super-Resolution Restoration”, Proc SPIE, Dec. 28, 1998, vol. 3653, 10 pgs. |
Bose et al., “Superresolution and Noise Filtering Using Moving Least Squares”, IEEE Transactions on Image Processing, Aug. 2006, vol. 15, Issue 8, published Jul. 17, 2006, pp. 2239-2248. |
Boye et al., “Comparison of Subpixel Image Registration Algorithms”, Proc. of SPIE—IS&T Electronic Imaging, Feb. 3, 2009, vol. 7246, pp. 72460X-1-72460X-9; doi: 10.1117/12.810369. |
Bruckner et al., “Thin wafer-level camera lenses inspired by insect compound eyes”, Optics Express, Nov. 22, 2010, vol. 18, No. 24, pp. 24379-24394. |
Bruckner et al., “Artificial compound eye applying hyperacuity”, Optics Express, Dec. 11, 2006, vol. 14, No. 25, pp. 12076-12084. |
Bruckner et al., “Driving microoptical imaging systems towards miniature camera applications”, Proc. SPIE, Micro-Optics, May 13, 2010, 11 pgs. |
Bryan et al., “Perspective Distortion from Interpersonal Distance Is an Implicit Visual Cue for Social Judgments of Faces”, PLOS One, vol. 7, Issue 9, Sep. 26, 2012, e45301, doi:10.1371/journal.pone.0045301, 9 pgs. |
Bulat et al., “How far are we from solving the 2D & 3D Face Alignment problem? (and a dataset of 230,000 3D facial landmarks)”, arXiv, Cornell University Library, Mar. 21, 2017. |
Cai et al., “3D Deformable Face Tracking with a Commodity Depth Camera”, Proceedings of the European Conference on Computer Vision: Part III, Sep. 5-11, 2010, 14 pgs. |
Capel, “Image Mosaicing and Super-resolution”, Retrieved on Nov. 10, 2012, from the Internet at URL: <http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.226.2643&rep=rep1&type=pdf>, 2001, 269 pgs. |
Caron et al., “Multiple camera types simultaneous stereo calibration”, Robotics and Automation (ICRA), 2011 IEEE International Conference on, May 1, 2011, pp. 2933-2938. |
Carroll et al., “Image Warps for Artistic Perspective Manipulation”, ACM Transactions on Graphics (TOG), vol. 29, No. 4, Jul. 26, 2010, Article No. 127, 9 pgs. |
Chan et al., “Investigation of Computational Compound-Eye Imaging System with Super-Resolution Reconstruction”, IEEE ICASSP, Jun. 19, 2006, pp. 1177-1180. |
Chan et al., “Extending the Depth of Field in a Compound-Eye Imaging System with Super-Resolution Reconstruction”, Proceedings—International Conference on Pattern Recognition, Jan. 1, 2006, vol. 3, pp. 623-626. |
Chan et al., “Super-resolution reconstruction in a computational compound-eye imaging system”, Multidim. Syst. Sign. Process, published online Feb. 23, 2007, vol. 18, pp. 83-101. |
Chen et al., “Interactive deformation of light fields”, Symposium on Interactive 3D Graphics, 2005, pp. 139-146. |
Chen et al., “KNN Matting”, IEEE Transactions on Pattern Analysis and Machine Intelligence, Sep. 2013, vol. 35, No. 9, pp. 2175-2188. |
Chen et al., “KNN matting”, 2012 IEEE Conference on Computer Vision and Pattern Recognition, Jun. 16-21, 2012, Providence, RI, USA, pp. 869-876. |
Chen et al., “Image Matting with Local and Nonlocal Smooth Priors”, CVPR '13 Proceedings of the 2013 IEEE Conference on Computer Vision and Pattern Recognition, Jun. 23, 2013, pp. 1902-1907. |
Chen et al., “Human Face Modeling and Recognition Through Multi-View High Resolution Stereopsis”, IEEE Conference on Computer Vision and Pattern Recognition Workshop, Jun. 17-22, 2006, 6 pgs. |
Collins et al., “An Active Camera System for Acquiring Multi-View Video”, IEEE 2002 International Conference on Image Processing, Date of Conference: Sep. 22-25, 2002, Rochester, NY, 4 pgs. |
Cooper et al., “The perceptual basis of common photographic practice”, Journal of Vision, vol. 12, No. 5, Article 8, May 25, 2012, pp. 1-14. |
Crabb et al., “Real-time foreground segmentation via range and color imaging”, 2008 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, Anchorage, AK, USA, Jun. 23-28, 2008, pp. 1-5. |
Dainese et al., “Accurate Depth-Map Estimation For 3D Face Modeling”, IEEE European Signal Processing Conference, Sep. 4-8, 2005, 4 pgs. |
Debevec et al., “Recovering High Dynamic Range Radiance Maps from Photographs”, Computer Graphics (ACM SIGGRAPH Proceedings), Aug. 16, 1997, 10 pgs. |
Do, Minh N., “Immersive Visual Communication with Depth”, Presented at Microsoft Research, Jun. 15, 2011, Retrieved from: http://minhdo.ece.illinois.edu/talks/ImmersiveComm.pdf, 42 pgs. |
Do et al., “Immersive Visual Communication”, IEEE Signal Processing Magazine, vol. 28, Issue 1, Jan. 2011, DOI: 10.1109/MSP.2010.939075, Retrieved from: http://minhdo.ece.illinois.edu/publications/ImmerComm_SPM.pdf, pp. 58-66. |
Dou et al., “End-to-end 3D face reconstruction with deep neural networks”, arXiv:1704.05020v1, Apr. 17, 2017, 10 pgs. |
Drouin et al., “Improving Border Localization of Multi-Baseline Stereo Using Border-Cut”, International Journal of Computer Vision, Jul. 5, 2006, vol. 83, Issue 3, 8 pgs. |
Drouin et al., “Fast Multiple-Baseline Stereo with Occlusion”, Fifth International Conference on 3-D Digital Imaging and Modeling (3DIM'05), Ottawa, Ontario, Canada, Jun. 13-16, 2005, pp. 540-547. |
Drouin et al., “Geo-Consistency for Wide Multi-Camera Stereo”, 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), vol. 1, Jun. 20-25, 2005, pp. 351-358. |
Drulea et al., “Motion Estimation Using the Correlation Transform”, IEEE Transactions on Image Processing, Aug. 2013, vol. 22, No. 8, pp. 3260-3270, first published May 14, 2013. |
Duparre et al., “Microoptical artificial compound eyes—from design to experimental verification of two different concepts”, Proc. of SPIE, Optical Design and Engineering II, vol. 5962, Oct. 17, 2005, pp. 59622A-1-59622A-12. |
Duparre et al., Novel Optics/Micro-Optics for Miniature Imaging Systems, Proc. of SPIE, Apr. 21, 2006, vol. 6196, pp. 619607-1-619607-15. |
Duparre et al., “Micro-optical artificial compound eyes”, Bioinspiration & Biomimetics, Apr. 6, 2006, vol. 1, pp. R1-R16. |
Duparre et al., “Artificial compound eye zoom camera”, Bioinspiration & Biomimetics, Nov. 21, 2008, vol. 3, pp. 1-6. |
Duparre et al., “Artificial apposition compound eye fabricated by micro-optics technology”, Applied Optics, Aug. 1, 2004, vol. 43, No. 22, pp. 4303-4310. |
Duparre et al., “Micro-optically fabricated artificial apposition compound eye”, Electronic Imaging—Science and Technology, Prod. SPIE 5301, Jan. 2004, pp. 25-33. |
Duparre et al., “Chirped arrays of refractive ellipsoidal microlenses for aberration correction under oblique incidence”, Optics Express, Dec. 26, 2005, vol. 13, No. 26, pp. 10539-10551. |
Duparre et al., “Artificial compound eyes—different concepts and their application to ultra flat image acquisition sensors”, MOEMS and Miniaturized Systems IV, Proc. SPIE 5346, Jan. 24, 2004, pp. 89-100. |
Duparre et al., “Ultra-Thin Camera Based on Artificial Apposition Compound Eyes”, 10th Microoptics Conference, Sep. 1-3, 2004, 2 pgs. |
Duparre et al., “Microoptical telescope compound eye”, Optics Express, Feb. 7, 2005, vol. 13, No. 3, pp. 889-903. |
Duparre et al., “Theoretical analysis of an artificial superposition compound eye for application in ultra flat digital image acquisition devices”, Optical Systems Design, Proc. SPIE 5249, Sep. 2003, pp. 408-418. |
Duparre et al., “Thin compound-eye camera”, Applied Optics, May 20, 2005, vol. 44, No. 15, pp. 2949-2956. |
Duparre et al., “Microoptical Artificial Compound Eyes—Two Different Concepts for Compact Imaging Systems”, 11th Microoptics Conference, Oct. 30-Nov. 2, 2005, 2 pgs. |
Eng et al., “Gaze correction for 3D tele-immersive communication system”, IVMSP Workshop, 2013 IEEE 11th. IEEE, Jun. 10, 2013. |
Fanaswala, “Regularized Super-Resolution of Multi-View Images”, Retrieved on Nov. 10, 2012 (Nov. 10, 2012). Retrieved from the Internet at URL :<http://www.site.uottawa.ca/-edubois/theses/Fanaswala_thesis.pdf>, 2009, 163 pgs. |
Fang et al., “Volume Morphing Methods for Landmark Based 3D Image Deformation”, SPIE vol. 2710, Proc. 1996 SPIE Intl Symposium on Medical Imaging, Newport Beach, CA, Feb. 10, 1996, pp. 404-415. |
Fangmin et al., “3D Face Reconstruction Based on Convolutional Neural Network”, 2017 10th International Conference on Intelligent Computation Technology and Automation, Oct. 9-10, 2017, Changsha, China. |
Farrell et al., “Resolution and Light Sensitivity Tradeoff with Pixel Size”, Proceedings of the SPIE Electronic Imaging 2006 Conference, Feb. 2, 2006, vol. 6069, 8 pgs. |
Farsiu et al., “Advances and Challenges in Super-Resolution”, International Journal of Imaging Systems and Technology, Aug. 12, 2004, vol. 14, pp. 47-57. |
Farsiu et al., “Fast and Robust Multiframe Super Resolution”, IEEE Transactions on Image Processing, Oct. 2004, published Sep. 3, 2004, vol. 13, No. 10, pp. 1327-1344. |
Farsiu et al., “Multiframe Demosaicing and Super-Resolution of Color Images”, IEEE Transactions on Image Processing, Jan. 2006, vol. 15, No. 1, date of publication Dec. 12, 2005, pp. 141-159. |
Fechteler et al., Fast and High Resolution 3D Face Scanning, IEEE International Conference on Image Processing, Sep. 16-Oct. 19, 2007, 4 pgs. |
Fecker et al., “Depth Map Compression for Unstructured Lumigraph Rendering”, Proc. SPIE 6077, Proceedings Visual Communications and Image Processing 2006, Jan. 18, 2006, pp. 60770B-1-60770B-8. |
Feris et al., “Multi-Flash Stereopsis: Depth Edge Preserving Stereo with Small Baseline Illumination”, IEEE Trans on PAMI, 2006, 31 pgs. |
Fife et al., “A 3D Multi-Aperture Image Sensor Architecture”, Custom Integrated Circuits Conference, 2006, CICC '06, IEEE, pp. 281-284. |
Fife et al., “A 3MPixel Multi-Aperture Image Sensor with 0.7Mu Pixels in 0.11Mu CMOS”, ISSCC 2008, Session 2, Image Sensors & Technology, 2008, pp. 48-50. |
Fischer et al., “Optical System Design”, 2nd Edition, SPIE Press, Feb. 14, 2008, pp. 49-58. |
Fischer et al., “Optical System Design”, 2nd Edition, SPIE Press, Feb. 14, 2008, pp. 191-198. |
Garg et al., “Unsupervised CNN for Single View Depth Estimation: Geometry to the Rescue”, In European Conference on Computer Vision, Springer, Cham, Jul. 2016, 16 pgs. |
Gastal et al., “Shared Sampling for Real-Time Alpha Matting”, Computer Graphics Forum, Eurographics 2010, vol. 29, Issue 2, May 2010, pp. 575-584. |
Georgeiv et al., “Light Field Camera Design for Integral View Photography”, Adobe Systems Incorporated, Adobe Technical Report, 2003, 13 pgs. |
Georgiev et al., “Light-Field Capture by Multiplexing in the Frequency Domain”, Adobe Systems Incorporated, Adobe Technical Report, 2003, 13 pgs. |
Godard et al., “Unsupervised Monocular Depth Estimation with Left-Right Consistency”, In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, 14 pgs. |
Goldman et al., “Video Object Annotation, Navigation, and Composition”, In Proceedings of UIST 2008, Oct. 19-22, 2008, Monterey CA, USA, pp. 3-12. |
Goodfellow et al., “Generative Adversarial Nets, 2014. Generative adversarial nets”, In Advances in Neural Information Processing Systems (pp. 2672-2680). |
Gortler et al., “The Lumigraph”, In Proceedings of SIGGRAPH 1996, published Aug. 1, 1996, pp. 43-54. |
Gupta et al., “Perceptual Organization and Recognition of Indoor Scenes from RGB-D Images”, 2013 IEEE Conference on Computer Vision and Pattern Recognition, Jun. 23-28, 2013, Portland, OR, USA, pp. 564-571. |
Hacohen et al., “Non-Rigid Dense Correspondence with Applications for Image Enhancement”, ACM Transactions on Graphics, vol. 30, No. 4, Aug. 7, 2011, 9 pgs. |
Hamilton, “JPEG File Interchange Format, Version 1.02”, Sep. 1, 1992, 9 pgs. |
Hardie, “A Fast Image Super-Algorithm Using an Adaptive Wiener Filter”, IEEE Transactions on Image Processing, Dec. 2007, published Nov. 19, 2007, vol. 16, No. 12, pp. 2953-2964. |
Hasinoff et al., “Search-and-Replace Editing for Personal Photo Collections”, 2010 International Conference: Computational Photography (ICCP) Mar. 2010, pp. 1-8. |
Hernandez et al., “Laser Scan Quality 3-D Face Modeling Using a Low-Cost Depth Camera”, 20th European Signal Processing Conference, Aug. 27-31, 2012, Bucharest, Romania, pp. 1995-1999. |
Hernandez-Lopez et al., “Detecting objects using color and depth segmentation with Kinect sensor”, Procedia Technology, vol. 3, Jan. 1, 2012, pp. 196-204, XP055307680, ISSN: 2212-0173, DOI: 10.1016/j.protcy.2012.03.021. |
Higo et al., “A Hand-held Photometric Stereo Camera for 3-D Modeling”, IEEE International Conference on Computer Vision, 2009, pp. 1234-1241. |
Hirschmuller, “Accurate and Efficient Stereo Processing by Semi-Global Matching and Mutual Information”, IEEE Conference on Computer Vision and Pattern Recognition (CVPR), San Diego, CA, USA, Jun. 20-26, 2005, 8 pgs. |
Hirschmuller et al., “Memory Efficient Semi-Global Matching, ISPRS Annals of the Photogrammetry”, Remote Sensing and Spatial Information Sciences, vol. 1-3, 2012, XXII ISPRS Congress, Aug. 25-Sep. 1, 2012, Melbourne, Australia, 6 pgs. |
Holoeye Photonics AG, “Spatial Light Modulators”, Oct. 2, 2013, Brochure retrieved from https://web.archive.org/web/20131002061028/http://holoeye.com/wp-content/uploads/Spatial_Light_Modulators.pdf on Oct. 13, 2017, 4 pgs. |
Holoeye Photonics AG, “Spatial Light Modulators”, Sep. 18, 2013, retrieved from https://web.archive.org/web/20130918113140/http://holoeye.com/spatial-light-modulators/ on Oct. 13, 2017, 4 pgs. |
Holoeye Photonics AG, “LC 2012 Spatial Light Modulator (transmissive)”, Sep. 18, 2013, retrieved from https://web.archive.org/web/20130918151716/http://holoeye.com/spatial-light-modulators/lc-2012-spatial-light-modulator/ on Oct. 20, 2017, 3 pgs. |
Horisaki et al., “Superposition Imaging for Three-Dimensionally Space-Invariant Point Spread Functions”, Applied Physics Express, Oct. 13, 2011, vol. 4, pp. 112501-1-112501-3. |
Horisaki et al., “Irregular Lens Arrangement Design to Improve Imaging Performance of Compound-Eye Imaging Systems”, Applied Physics Express, Jan. 29, 2010, vol. 3, pp. 022501-1-022501-3. |
Horn et al., “LightShop: Interactive Light Field Manipulation and Rendering”, In Proceedings of I3D, Jan. 1, 2007, pp. 121-128. |
Hossain et al., “Inexpensive Construction of a 3D Face Model from Stereo Images”, IEEE International Conference on Computer and Information Technology, Dec. 27-29, 2007, 6 pgs. |
Hu et al., “A Quantitative Evaluation of Confidence Measures for Stereo Vision”, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, Issue 11, Nov. 2012, pp. 2121-2133. |
Humenberger er al., “A Census-Based Stereo Vision Algorithm Using Modified Semi-Global Matching and Plane Fitting to Improve Matching Quality”, IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), IEEE, Jun. 13-18, 2010, San Francisco, CA, 8 pgs. |
Isaksen et al., “Dynamically Reparameterized Light Fields”, In Proceedings of SIGGRAPH 2000, 2000, pp. 297-306. |
Izadi et al., “KinectFusion: Real-time 3D Reconstruction and Interaction Using a Moving Depth Camera”, UIST'11, Oct. 16-19, 2011, Santa Barbara, CA, pp. 559-568. |
Jackson et al., “Large Post 3D Face Reconstruction from a Single Image via Direct Volumetric CNN Regression”, arXiv: 1703.07834v2, Sep. 8, 2017, 9 pgs. |
Janoch et al., “A category-level 3-D object dataset: Putting the Kinect to work”, 2011 IEEE International Conference on Computer Vision Workshops (ICCV Workshops), Nov. 6-13, 2011, Barcelona, Spain, pp. 1168-1174. |
Jarabo et al., “Efficient Propagation of Light Field Edits”, In Proceedings of SIACG 2011, 2011, pp. 75-80. |
Jiang et al., “Panoramic 3D Reconstruction Using Rotational Stereo Camera with Simple Epipolar Constraints”, 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06), vol. 1, Jun. 17-22, 2006, New York, NY, USA, pp. 371-378. |
Joshi, Color Calibration for Arrays of Inexpensive Image Sensors, Mitsubishi Electric Research Laboratories, Inc., TR2004-137, Dec. 2004, 6 pgs. |
Joshi et al., “Synthetic Aperture Tracking: Tracking Through Occlusions”, I CCV IEEE 11th International Conference on Computer Vision; Publication [online]. Oct. 2007 [retrieved Jul. 28, 2014]. Retrieved from the Internet: <URL: http:l/ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=4409032&isnumber=4408819>, pp. 1-8. |
Jourabloo, “Large-Pose Face Alignment via CNN-Based Dense 3D Model Fitting”, I CCV IEEE 11th International Conference on Computer Vision; Publication [online]. Oct. 2007 [retrieved Jul. 28, 2014]. Retrieved from the Internet: <URL: http:l/ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=4409032&isnumber=4408819>; pp. 1-8. |
Kang et al., “Handling Occlusions in Dense Multi-view Stereo”, Computer Vision and Pattern Recognition, 2001, vol. 1, pp. I-103-I-110. |
Keeton, “Memory-Driven Computing”, Hewlett Packard Enterprise Company, Oct. 20, 2016, 45 pgs. |
Kim, “Scene Reconstruction from a Light Field”, Master Thesis, Sep. 1, 2010 (Sep. 1, 2010), pp. 1-72. |
Kim et al., “Scene reconstruction from high spatio-angular resolution light fields”, ACM Transactions on Graphics (TOG)—SIGGRAPH 2013 Conference Proceedings, vol. 32 Issue 4, Article 73, Jul. 21, 2013, 11 pages. |
Kitamura et al., “Reconstruction of a high-resolution image on a compound-eye image-capturing system”, Applied Optics, Mar. 10, 2004, vol. 43, No. 8, pp. 1719-1727. |
Kittler et al., “3D Assisted Face Recognition: A Survey of 3D Imaging, Modelling, and Recognition Approaches”, Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Jul. 2005, 7 pgs. |
Konolige, Kurt “Projected Texture Stereo”, 2010 IEEE International Conference on Robotics and Automation, May 3-7, 2010, pp. 148-155. |
Kotsia et al., “Facial Expression Recognition in Image Sequences Using Geometric Deformation Features and Support Vector Machines”, IEEE Transactions on Image Processing, Jan. 2007, vol. 16, No. 1, pp. 172-187. |
Krishnamurthy et al., “Compression and Transmission of Depth Maps for Image-Based Rendering”, Image Processing, 2001, pp. 828-831. |
Kubota et al., “Reconstructing Dense Light Field From Array of Multifocus Images for Novel View Synthesis”, IEEE Transactions on Image Processing, vol. 16, No. 1, Jan. 2007, pp. 269-279. |
Kutulakos et al., “Occluding Contour Detection Using Affine Invariants and Purposive Viewpoint Control”, Computer Vision and Pattern Recognition, Proceedings CVPR 94, Seattle, Washington, Jun. 21-23, 1994, 8 pgs. |
Lai et al., “A Large-Scale Hierarchical Multi-View RGB-D Object Dataset”, Proceedings—IEEE International Conference on Robotics and Automation, Conference Date May 9-13, 2011, 8 pgs., DOI: 10.1109/ICRA.201135980382. |
Lane et al., “A Survey of Mobile Phone Sensing”, IEEE Communications Magazine, vol. 48, Issue 9, Sep. 2010, pp. 140-150. |
Lao et al., “3D template matching for pose invariant face recognition using 3D facial model built with isoluminance line based stereo vision”, Proceedings 15th International Conference on Pattern Recognition, Sep. 3-7, 2000, Barcelona, Spain, pp. 911-916. |
Lee, “NFC Hacking: The Easy Way”, Defcon Hacking Conference, 2012, 24 pgs. |
Lee et al., “Electroactive Polymer Actuator for Lens-Drive Unit in Auto-Focus Compact Camera Module”, ETRI Journal, vol. 31, No. 6, Dec. 2009, pp. 695-702. |
Lee et al., “Nonlocal matting”, CVPR 2011, Jun. 20-25, 2011, pp. 2193-2200. |
Lee et al., “Automatic Upright Adjustment of Photographs”, IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2012, pp. 877-884. |
Lensvector, “How LensVector Autofocus Works”, 2010, printed Nov. 2, 2012 from http://www.lensvector.com/overview.html, 1 pg. |
Levin et al., “A Closed Form Solution to Natural Image Matting”, IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2006, vol. 1, pp. 61-68. |
Levin et al., “Spectral Matting”, 2007 IEEE Conference on Computer Vision and Pattern Recognition, Jun. 17-22, 2007, Minneapolis, MN, USA, pp. 1-8. |
Levoy, “Light Fields and Computational Imaging”, IEEE Computer Society, Sep. 1, 2006, vol. 39, Issue No. 8, pp. 46-55. |
Levoy et al., “Light Field Rendering”, Proc. ADM SIGGRAPH '96, 1996, pp. 1-12. |
Li et al., “A Hybrid Camera for Motion Deblurring and Depth Map Super-Resolution”, Jun. 23-28, 2008, IEEE Conference on Computer Vision and Pattern Recognition, 8 pgs. Retrieved from www.eecis.udel.edu/˜jye/lab_research/08/deblur-feng.pdf on Feb. 5, 2014. |
Li et al., “Fusing Images with Different Focuses Using Support Vector Machines”, IEEE Transactions on Neural Networks, vol. 15, No. 6, Nov. 8, 2004, pp. 1555-1561. |
Lim, “Optimized Projection Pattern Supplementing Stereo Systems”, 2009 IEEE International Conference on Robotics and Automation, May 12-17, 2009, pp. 2823-2829. |
Liu et al., “Virtual View Reconstruction Using Temporal Information”, 2012 IEEE International Conference on Multimedia and Expo, 2012, pp. 115-120. |
Lo et al., “Stereoscopic 3D Copy & Paste”, ACM Transactions on Graphics, vol. 29, No. 6, Article 147, Dec. 2010, pp. 147:1-147:10. |
Ma et al., “Constant Time Weighted Median Filtering for Stereo Matching and Beyond”, ICCV '13 Proceedings of the 2013 IEEE International Conference on Computer Vision, IEEE Computer Society, Washington DC, USA, Dec. 1-8, 2013, 8 pgs. |
Martinez et al., “Simple Telemedicine for Developing Regions: Camera Phones and Paper-Based Microfluidic Devices for Real-Time, Off-Site Diagnosis”, Analytical Chemistry (American Chemical Society), vol. 80, No. 10, May 15, 2008, pp. 3699-3707. |
McGuire et al., “Defocus video matting”, ACM Transactions on Graphics (TOG)—Proceedings of ACM SIGGRAPH 2005, vol. 24, Issue 3, Jul. 2005, pp. 567-576. |
Medioni et al., “Face Modeling and Recognition in 3-D”, Proceedings of the IEEE International Workshop on Analysis and Modeling of Faces and Gestures, 2013, 2 pgs. |
Merkle et al., “Adaptation and optimization of coding algorithms for mobile 3DTV”, Mobile3DTV Project No. 216503, Nov. 2008, 55 pgs. |
Michael et al., “Real-time Stereo Vision: Optimizing Semi-Global Matching”, 2013 IEEE Intelligent Vehicles Symposium (IV), IEEE, Jun. 23-26, 2013, Australia, 6 pgs. |
Milella et al., “3D reconstruction and classification of natural environments by an autonomous vehicle using multi-baseline stereo”, Intelligent Service Robotics, vol. 7, No. 2, Mar. 2, 2014, pp. 79-92. |
Min et al., “Real-Time 3D Face Identification from a Depth Camera”, Proceedings of the IEEE International Conference on Pattern Recognition, Nov. 11-15, 2012, 4 pgs. |
Mitra et al., “Light Field Denoising, Light Field Superresolution and Stereo Camera Based Refocussing using a GMM Light Field Patch Prior”, Computer Vision and Pattern Recognition Workshops (CVPRW), 2012 IEEE Computer Society Conference on Jun. 16-21, 2012, pp. 22-28. |
Moreno-Noguer et al., “Active Refocusing of Images and Videos”, ACM Transactions on Graphics (TOG)—Proceedings of ACM SIGGRAPH 2007, vol. 26, Issue 3, Jul. 2007, 10 pgs. |
Muehlebach, “Camera Auto Exposure Control for VSLAM Applications”, Studies on Mechatronics, Swiss Federal Institute of Technology Zurich, Autumn Term 2010 course, 67 pgs. |
Nayar, “Computational Cameras: Redefining the Image”, IEEE Computer Society, Aug. 14, 2006, pp. 30-38. |
Ng, “Digital Light Field Photography”, Thesis, Jul. 2006, 203 pgs. |
Ng et al., “Super-Resolution Image Restoration from Blurred Low-Resolution Images”, Journal of Mathematical Imaging and Vision, 2005, vol. 23, pp. 367-378. |
Ng et al., “Light Field Photography with a Hand-held Plenoptic Camera”, Stanford Tech Report CTSR Feb. 2005, Apr. 20, 2005, pp. 1-11. |
Nguyen et al., “Image-Based Rendering with Depth Information Using the Propagation Algorithm”, Proceedings. (ICASSP '05). IEEE International Conference on Acoustics, Speech, and Signal Processing, 2005, vol. 5, Mar. 23-23, 2005, pp. II-589-II-592. |
Nguyen et al., “Error Analysis for Image-Based Rendering with Depth Information”, IEEE Transactions on Image Processing, vol. 18, Issue 4, Apr. 2009, pp. 703-716. |
Nishihara, H.K. “PRISM: A Practical Real-Time Imaging Stereo Matcher”, Massachusetts Institute of Technology, A.I. Memo 780, May 1984, 32 pgs. |
Nitta et al., “Image reconstruction for thin observation module by bound optics by using the iterative backprojection method”, Applied Optics, May 1, 2006, vol. 45, No. 13, pp. 2893-2900. |
Nomura et al., “Scene Collages and Flexible Camera Arrays”, Proceedings of Eurographics Symposium on Rendering, Jun. 2007, 12 pgs. |
Park et al., “Super-Resolution Image Reconstruction”, IEEE Signal Processing Magazine, May 2003, pp. 21-36. |
Park et al., “Multispectral Imaging Using Multiplexed Illumination”, 2007 IEEE 11th International Conference on Computer Vision, Oct. 14-21, 2007, Rio de Janeiro, Brazil, pp. 1-8. |
Park et al., “3D Face Reconstruction from Stereo Video”, First International Workshop on Video Processing for Security, Jun. 7-9, 2006, Quebec City, Canada, 2006, 8 pgs. |
Parkkinen et al., “Characteristic Spectra of Munsell Colors”, Journal of the Optical Society of America A, vol. 6, Issue 2, Feb. 1989, pp. 318-322. |
Perwass et al., “Single Lens 3D-Camera with Extended Depth-of-Field”, printed from www.raytrix.de, Jan. 22, 2012, 15 pgs. |
Pham et al., “Robust Super-Resolution without Regularization”, Journal of Physics: Conference Series 124, Jul. 2008, pp. 1-19. |
Philips 3D Solutions, “3D Interface Specifications, White Paper”, Feb. 15, 2008, 2005-2008 Philips Electronics Nederland B.V., Philips 3D Solutions retrieved from www.philips.com/3dsolutions, 29 pgs. |
Polight, “Designing Imaging Products Using Reflowable Autofocus Lenses”, printed Nov. 2, 2012 from http://www.polight.no/tunable-polymer-autofocus-lens-html--11.html, 1 pg. |
Pouydebasque et al., “Varifocal liquid lenses with integrated actuator, high focusing power and low operating voltage fabricated on 200 mm wafers”, Sensors and Actuators A: Physical, vol. 172, Issue 1, Dec. 2011, pp. 280-286. |
Protter et al., “Generalizing the Nonlocal-Means to Super-Resolution Reconstruction”, IEEE Transactions on Image Processing, Dec. 2, 2008, vol. 18, No. 1, pp. 36-51. |
Radtke et al., “Laser lithographic fabrication and characterization of a spherical artificial compound eye”, Optics Express, Mar. 19, 2007, vol. 15, No. 6, pp. 3067-3077. |
Rajan et al., “Simultaneous Estimation of Super Resolved Scene and Depth Map from Low Resolution Defocused Observations”, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, No. 9, Sep. 8, 2003, pp. 1-16. |
Rander et al., “Virtualized Reality: Constructing Time-Varying Virtual Worlds from Real World Events”, Proc. of IEEE Visualization '97, Phoenix, Arizona, Oct. 19-24, 1997, pp. 277-283, 552. |
Ranjan et al., “HyperFace: A Deep Multi-Task Learning Framework for Face Detection, Landmark Localization, Pose Estimation, and Gender Recognition”, May 11, 2016 (May 11, 2016), pp. 1-16. |
Rhemann et al, “Fast Cost-Volume Filtering for Visual Correspondence and Beyond”, IEEE Trans. Pattern Anal. Mach. Intell, 2013, vol. 35, No. 2, pp. 504-511. |
Rhemann et al., “A perceptually motivated online benchmark for image matting”, 2009 IEEE Conference on Computer Vision and Pattern Recognition, Jun. 20-25, 2009, Miami, FL, USA, pp. 1826-1833. |
Robert et al., “Dense Depth Map Reconstruction: A Minimization and Regularization Approach which Preserves Discontinuities”, European Conference on Computer Vision (ECCV), pp. 439-451, (1996). |
Robertson et al., “Dynamic Range Improvement Through Multiple Exposures”, In Proc. of the Int. Conf. on Image Processing, 1999, 5 pgs. |
Robertson et al., “Estimation-theoretic approach to dynamic range enhancement using multiple exposures”, Journal of Electronic Imaging, Apr. 2003, vol. 12, No. 2, pp. 219-228. |
Roy et al., “Non-Uniform Hierarchical Pyramid Stereo for Large Images” Computer and Robot Vision, 2002, pp. 208-215. |
Rusinkiewicz et al., “Real-Time 3D Model Acquisition”, ACM Transactions on Graphics (TOG), vol. 21, No. 3, Jul. 2002, pp. 438-446. |
Saatci et al., “Cascaded Classification of Gender and Facial Expression using Active Appearance Models”, IEEE, FGR'06, 2006, 6 pgs. |
Sauer et al., “Parallel Computation of Sequential Pixel Updates in Statistical Tomographic Reconstruction”, ICIP 1995 Proceedings of the 1995 International Conference on Image Processing, Date of Conference: Oct. 23-26, 1995, pp. 93-96. |
Scharstein et al., “High-Accuracy Stereo Depth Maps Using Structured Light”, IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2003), Jun. 2003, vol. 1, pp. 195-202. |
Seitz et al., “Plenoptic Image Editing”, International Journal of Computer Vision 48, Conference Date Jan. 7, 1998, 29 pgs., DOI: 10.1109/ICCV.1998.710696 · Source: DBLP Conference: Computer Vision, Sixth International Conference. |
Shechtman et al., “Increasing Space-Time Resolution in Video”, European Conference on Computer Vision, LNCS 2350, May 28-31, 2002, pp. 753-768. |
Shotton et al., “Real-time human pose recognition in parts from single depth images”, CVPR 2011, Jun. 20-25, 2011, Colorado Springs, CO, USA, pp. 1297-1304. |
Shum et al., “Pop-Up Light Field: An Interactive Image-Based Modeling and Rendering System”, Apr. 2004, ACM Transactions on Graphics, vol. 23, No. 2, pp. 143-162, Retrieved from http://131.107.65.14/en-us/um/people/jiansun/papers/PopupLightField_TOG.pdf on Feb. 5, 2014. |
Shum et al., “A Review of Image-based Rendering Techniques”, Visual Communications and Image Processing 2000, May 2000, 12 pgs. |
Sibbing et al., “Markerless reconstruction of dynamic facial expressions”, 2009 IEEE 12th International Conference on Computer Vision Workshops, ICCV Workshop: Kyoto, Japan, Sep. 27-Oct. 4, 2009, Institute of Electrical and Electronics Engineers, Piscataway, NJ, Sep. 27, 2009 (Sep. 27, 2009), pp. 1778-1785. |
Silberman et al., “Indoor segmentation and support inference from RGBD images”, ECCV'12 Proceedings of the 12th European conference on Computer Vision, vol. Part V, Oct. 7-13, 2012, Florence, Italy, pp. 746-760. |
Stober, “Stanford researchers developing 3-D camera with 12,616 lenses”. Stanford Report, Mar. 19, 2008, Retrieved from: http://news.stanford.edu/news/2008/march19/camera-031908.html, 5 pgs. |
Stollberg et al., “The Gabor superlens as an alternative wafer-level camera approach inspired by superposition compound eyes of nocturnal insects”, Optics Express, Aug. 31, 2009, vol. 17, No. 18, pp. 15747-15759. |
Sun et al., “Image Super-Resolution Using Gradient Profile Prior”, 2008 IEEE Conference on Computer Vision and Pattern Recognition, Jun. 23-28, 2008, 8 pgs.; DOI: 10.1109/CVPR.2008.4587659. |
Taguchi et al., “Rendering-Oriented Decoding for a Distributed Multiview Coding System Using a Coset Code”, Hindawi Publishing Corporation, EURASIP Journal on Image and Video Processing, vol. 2009, Article ID 251081, Online: Apr. 22, 2009, 12 pgs. |
Takeda et al., “Super-resolution Without Explicit Subpixel Motion Estimation”, IEEE Transaction on Image Processing, Sep. 2009, vol. 18, No. 9, pp. 1958-1975. |
Tallon et al., “Upsampling and Denoising of Depth Maps Via Joint-Segmentation”, 20th European Signal Processing Conference, Aug. 27-31, 2012, 5 pgs. |
Tanida et al., “Thin observation module by bound optics (TOMBO): concept and experimental verification”, Applied Optics, Apr. 10, 2001, vol. 40, No. 11, pp. 1806-1813. |
Tanida et al., “Color imaging with an integrated compound imaging system”, Optics Express, Sep. 8, 2003, vol. 11, No. 18, pp. 2109-2117. |
Tao et al., “Depth from Combining Defocus and Correspondence Using Light-Field Cameras”, ICCV '13 Proceedings of the 2013 IEEE International Conference on Computer Vision, Dec. 1, 2013, pp. 673-680. |
Taylor, “Virtual camera movement: The way of the future?”, American Cinematographer, vol. 77, No. 9, Sep. 1996, pp. 93-100. |
Tseng et al., “Automatic 3-D depth recovery from a single urban-scene image”, 2012 Visual Communications and Image Processing, Nov. 27-30, 2012, San Diego, CA, USA, pp. 1-6. |
Uchida et al., 3D Face Recognition Using Passive Stereo Vision, IEEE International Conference on Image Processing 2005, Sep. 14, 2005, 4 pgs. |
Vaish et al., “Reconstructing Occluded Surfaces Using Synthetic Apertures: Stereo, Focus and Robust Measures”, 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06), vol. 2, Jun. 17-22, 2006, pp. 2331-2338. |
Vaish et al., “Using Plane + Parallax for Calibrating Dense Camera Arrays”, IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2004, 8 pgs. |
Vaish et al., “Synthetic Aperture Focusing Using a Shear-Warp Factorization of the Viewing Transform”, IEEE Workshop on A3DISS, CVPR, 2005, 8 pgs. |
Van Der Wal et al., “The Acadia Vision Processor”, Proceedings Fifth IEEE International Workshop on Computer Architectures for Machine Perception, Sep. 13, 2000, Padova, Italy, pp. 31-40. |
Veilleux, “CCD Gain Lab: The Theory”, University of Maryland, College Park-Observational Astronomy (ASTR 310), Oct. 19, 2006, pp. 1-5 (online], [retrieved on May 13, 2014]. Retrieved from the Internet <URL: http://www.astro.umd.edu/˜veilleux/ASTR310/fall06/ccd_theory.pdf, 5 pgs. |
Venkataraman et al., “PiCam: An Ultra-Thin High Performance Monolithic Camera Array”, ACM Transactions on Graphics (TOG), ACM, US, vol. 32, No. 6, 1 Nov. 1, 2013, pp. 1-13. |
Vetro et al., “Coding Approaches for End-To-End 3D TV Systems”, Mitsubishi Electric Research Laboratories, Inc., TR2004-137, Dec. 2004, 6 pgs. |
Viola et al., “Robust Real-time Object Detection”, Cambridge Research Laboratory, Technical Report Series, Compaq, CRL Jan. 2001, Feb. 2001, Printed from: http://www.hpl.hp.com/techreports/Compaq-DEC/CRL-2001-1.pdf, 30 pgs. |
Vuong et al., “A New Auto Exposure and Auto White-Balance Algorithm to Detect High Dynamic Range Conditions Using CMOS Technology”, Proceedings of the World Congress on Engineering and Computer Science 2008, WCECS 2008, Oct. 22-24, 2008, 5 pgs. |
Wang, “Calculation of Image Position, Size and Orientation Using First Order Properties”, Dec. 29, 2010, OPTI521 Tutorial, 10 pgs. |
Wang et al., “Soft scissors: an interactive tool for realtime high quality matting”, ACM Transactions on Graphics (TOG)—Proceedings of ACM SIGGRAPH 2007, vol. 26, Issue 3, Article 9, Jul. 2007, 6 pg., published Aug. 5, 2007. |
Wang et al., “Automatic Natural Video Matting with Depth”, 15th Pacific Conference on Computer Graphics and Applications, PG '07, Oct. 29-Nov. 2, 2007, Maui, HI, USA, pp. 469-472. |
Wang et al., “Image and Video Matting: A Survey”, Foundations and Trends, Computer Graphics and Vision, vol. 3, No. 2, 2007, pp. 91-175. |
Wang et al., “Facial Feature Point Detection: A Comprehensive Survey”, arXiv: 1410.1037v1, Oct. 4, 2014, 32 pgs.. |
Wetzstein et al., “Computational Plenoptic Imaging”, Computer Graphics Forum, 2011, vol. 30, No. 8, pp. 2397-2426. |
Wheeler et al., “Super-Resolution Image Synthesis Using Projections Onto Convex Sets in the Frequency Domain”, Proc. SPIE, Mar. 11, 2005, vol. 5674, 12 pgs. |
Widanagamaachchi et al., “3D Face Recognition from 2D Images: A Survey”, Proceedings of the International Conference on Digital Image Computing: Techniques and Applications, Dec. 1-3, 2008, 7 pgs. |
Wieringa et al., “Remote Non-invasive Stereoscopic Imaging of Blood Vessels: First In-vivo Results of a New Multispectral Contrast Enhancement Technology”, Annals of Biomedical Engineering, vol. 34, No. 12, Dec. 2006, pp. 1870-1878, Published online Oct. 12, 2006. |
Wikipedia, “Polarizing Filter (Photography)”, retrieved from http://en.wikipedia.org/wiki/Polarizing_filter_(photography) on Dec. 12, 2012, last modified on Sep. 26, 2012, 5 pgs. |
Wilburn, “High Performance Imaging Using Arrays of Inexpensive Cameras”, Thesis of Bennett Wilburn, Dec. 2004, 128 pgs. |
Wilburn et al., “High Performance Imaging Using Large Camera Arrays”, ACM Transactions on Graphics, Jul. 2005, vol. 24, No. 3, pp. 1-12. |
Wilburn et al., “High-Speed Videography Using a Dense Camera Array”, Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2004. CVPR 2004., vol. 2, Jun. 27-Jul. 2, 2004, pp. 294-301. |
Wilburn et al., “The Light Field Video Camera”, Proceedings of Media Processors 2002, SPIE Electronic Imaging, 2002, 8 pgs. |
Wippermann et al., “Design and fabrication of a chirped array of refractive ellipsoidal micro-lenses for an apposition eye camera objective”, Proceedings of SPIE, Optical Design and Engineering II, Oct. 15, 2005, pp. 59622C-1-59622C-11. |
Wu et al., “A virtual view synthesis algorithm based on image inpainting”, 2012 Third International Conference on Networking and Distributed Computing, Hangzhou, China, Oct. 21-24, 2012, pp. 153-156. |
Xu, “Real-Time Realistic Rendering and High Dynamic Range Image Display and Compression”, Dissertation, School of Computer Science in the College of Engineering and Computer Science at the University of Central Florida, Orlando, Florida, Fall Term 2005, 192 pgs. |
Yang et al., “Superresolution Using Preconditioned Conjugate Gradient Method”, Proceedings of SPIE—The International Society for Optical Engineering, Jul. 2002, 8 pgs. |
Yang et al., “A Real-Time Distributed Light Field Camera”, Eurographics Workshop on Rendering (2002), published Jul. 26, 2002, pp. 1-10. |
Yang et al., Model-based Head Pose Tracking with Stereovision, Microsoft Research, Technical Report, MSR-TR-2001-102, Oct. 2001, 12 pgs. |
Yokochi et al., “Extrinsic Camera Parameter Estimation Based-on Feature Tracking and GPS Data”, 2006, Nara Institute of Science and Technology, Graduate School of Information Science, LNCS 3851, pp. 369-378. |
Zbontar et al., Computing the Stereo Matching Cost with a Convolutional Neural Network, CVPR, 2015, pp. 1592-1599. |
Zhang et al., “A Self-Reconfigurable Camera Array”, Eurographics Symposium on Rendering, published Aug. 8, 2004, 12 pgs. |
Zhang et al., “Depth estimation, spatially variant image registration, and super-resolution using a multi-lenslet camera”, proceedings of SPIE, vol. 7705, Apr. 23, 2010, pp. 770505-770505-8, XP055113797 ISSN: 0277-786X, DOI: 10.1117/12.852171. |
Zhang et al., “Spacetime Faces: High Resolution Capture for Modeling and Animation”, ACM Transactions on Graphics, 2004, 11pgs. |
Zheng et al., “Balloon Motion Estimation Using Two Frames”, Proceedings of the Asilomar Conference on Signals, Systems and Computers, IEEE, Comp. Soc. Press, US, vol. 2 of 2, Nov. 4, 1991, pp. 1057-1061. |
Zhu et al., “Fusion of Time-of-Flight Depth and Stereo for High Accuracy Depth Maps”, 2008 IEEE Conference on Computer Vision and Pattern Recognition, Jun. 23-28, 2008, Anchorage, AK, USA, pp. 1-8. |
Zomet et al., “Robust Super-Resolution”, IEEE, 2001, pp. 1-6. |
“File Formats Version 6”, Alias Systems, 2004, 40 pgs. |
“Light fields and computational photography”, Stanford Computer Graphics Laboratory, Retrieved from: http://graphics.stanford.edu/projects/lightfield/, Earliest publication online: Feb. 10, 1997, 3 pgs. |
“Exchangeable image file format for digital still cameras: Exif Version 2.2”_, Japan Electronics and Information Technology Industries Association, Prepared by Technical Standardization Committee on AV & IT Storage Systems and Equipment, JEITA CP-3451, Apr. 2002, Retrieved from: http://www.exif.org/Exif2-2.PDF, 154 pgs. |
Systems and Equipment, JEITA CP-3451, Apr. 2002, Retrieved from: http://www.exif.org/Eif2-2.PDF, 154 pgs. |
Akizuki, S., et al., “Semi-automatic Training Data Generation for Semantic Segmentation using 6DoF Pose Estimation,” Proceedings of the 14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, 2019, pp. 607-613. |
An, G.H., “Charuco Board-Based Omnidirectional Camera Calibration Method,” MDPI, Electronics 2018, 7, 421; doi:10.3390/electronics7120421. |
Chen, C., et al., “Pose Estimation for Texture-less Shiny Objects in a Single RGB Image Using Synthetic Training Data,” arXiv:1909.10270v1 [cs.RO] Sep. 23, 2019, 6 pages. |
Deng, X., et al., “Self-supervised 6D Object Pose Estimation for Robot Manipulation,” arXiv:1909.10159v2 [cs.RO] Mar. 7, 2020, 7 pages. |
Garrido-Jurado, S., “Automatic generation and detection of highly reliable fiducial markers under occlusion,” Elsevier, Science Direct, Patter Recognition 47 (2014) pp. 2280-2292. |
Kim, E., et al., “Extrinsic Calibration between Camera and LiDAR Sensors by Matching Multiple 3D Planes,” MDPI, Sensors, 2020, 20, 52; doi:10.33690/s20010052. |
Kleeberger, K., et al., “A Survey on Learning-Based Robotic Grasping,” Robotics in Manufacturing, 2020, 11 pages. |
Kleeberger, K., et al., “Automatic Grasp Pose Generation for Parallel Jaw Grippers,” arXiv:2104.11660v1 [cs.RO] Apr. 23, 2021, 14 pages. |
Kleeberger, K., et al., “Investigations on Output Parameterizations of Neural Networks for Single Shot 6D Object Pose Estimation,” arXiv:2104.07528v1 [cs.CV] Apr. 15, 2021, 7 pages. |
Kleeberger, K., et al., “Large-scale 6D Object Pose Estimation Dataset for Industrial Bin-Picking,” arXiv:1912.12125v1 [cs.CV] Dec. 6, 2019, 6 pages. |
Kleeberger, K., et al., “Transferring Experience from Simulation to the Real World for Precise Pick-And-Place Tasks in Highly Cluttered Scenes,” arXiv:2101.04781v1 [cs.RO] Jan. 12, 2021, 8 pages. |
Marion, P., et al., “LabelFusion: A Pipeline for Generating Ground Truth Labels for Real RGBD Data of Cluttered Scenes,” arXiv:1707.04796v3 [cs.CV] Sep. 26, 2017, 8 pager. |
Patten, T., et al., “Action Selection for Interactive Object Segmentation in Clutter,” Research Gate Conference Paper, Oct. 2018, 9 pages. |
Reza, Md Alimoor, et al., “Automatic Annotation for Semantic Segmentation in Indoor Scenes,” IEEE, 2019, 7 pages. |
Singh, R.P., et al., “Rapid Pose Label Generation through Sparse Representation of Unknown Objects,” arXiv2011.03790 [cs.CV], Nov. 7, 2020, 8 pages. |
Stumpf, D., et al., “SALT: A Semi-automatic Labeling Tool for RGB-D Video Sequences,” arXiv:2102.10820v4 [cs.CV] Feb. 22, 2021, 9 pages. |
Suchi, M., “EasyLabel: A Semi-Automatic Pixel-wise Object Annotation Tool for Creating Robotic RGB-D Datasets,” Conference of Robotics, IEEE, 2019, 7 pages. |
International Preliminary Report on Patentability in International Appln. No. PCT/US2022/039256, dated Feb. 15, 2024, 11 pages. |
International Search Report and Written Opinion in International Appln. No. PCT/US2022/039256, dated Jan. 16, 2023, 20 pages. |