Machine learning models operating at different frequencies for autonomous vehicles

Information

  • Patent Grant
  • Patent Number
    11,816,585
  • Date Filed
    Tuesday, December 3, 2019
  • Date Issued
    Tuesday, November 14, 2023
Abstract
Systems and methods include machine learning models operating at different frequencies. An example method includes obtaining images at a threshold frequency from one or more image sensors positioned about a vehicle. Location information associated with objects classified in the images is determined based on the images. The images are analyzed via a first machine learning model at the threshold frequency. For a subset of the images, the first machine learning model uses output information from a second machine learning model, the second machine learning model being performed at less than the threshold frequency.
Description
CROSS-REFERENCE TO RELATED APPLICATIONS

Any and all applications for which a foreign or domestic priority claim is identified in the Application Data Sheet as filed with the present application are hereby incorporated by reference under 37 CFR 1.57.


BACKGROUND
Field of the Disclosure

This application relates generally to the machine vision field, and more specifically to enhanced object detection from a vehicle.


Description of the Related Art

In the field of machine vision for autonomous vehicles, automotive image sensors (e.g., cameras) are typically capable of high frame rates of 30 frames per second (fps) or more. However, deep learning based image processing algorithms may be unable to keep up with these frame rates without significantly reducing accuracy, range, or both, and may instead run at 20 fps or less. As a result, the additional camera information may go unused in image processing and object detection tasks.


Typically, slower machine learning models (e.g., object detectors), which run at slower frame rates than the cameras' frame rates, may have high accuracy but long latencies, meaning that they take longer to produce an output. The output may therefore be stale by the time it is produced. For example, a slower machine learning model may take 200 milliseconds to detect objects in an image. In the 200 milliseconds it takes for the machine learning model to produce its detections, the objects have likely moved. To resolve this, a faster machine learning model may be employed. However, the faster machine learning model may be less accurate. As may be appreciated, less accuracy may result in a higher likelihood of false negatives and false positives. For automotive applications, for example, a false negative may represent a vehicle in an image that the machine learning model fails to detect, while a false positive may represent the machine learning model predicting a vehicle at a location in the image where no vehicle is present.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is a schematic representation of an example object detection system according to one embodiment.



FIG. 2 is a flowchart of an example process for object detection according to one embodiment.



FIG. 3 is a block diagram illustrating an example of object detection using a detector and a tracker according to one embodiment.



FIG. 4 is a block diagram illustrating an example of object detection using a first detector and a second detector according to one embodiment.





DETAILED DESCRIPTION

Although some embodiments described throughout generally relate to systems and methods for object detection, it will be appreciated by those skilled in the art that the systems and methods described can be implemented and/or adapted for a variety of purposes within the machine vision field, including but not limited to: semantic segmentation, depth estimation, three-dimensional bounding box detection, object re-identification, pose estimation, action classification, simulation environment generation, and sensor fusion.


Embodiments relate to techniques for autonomous driving or navigation by a vehicle. As described herein, one or more image sensors (e.g., cameras) may be positioned about a vehicle. For example, there may be 4, 6, 9, and so on, image sensors positioned at different locations on the vehicle. The image sensors may obtain images at one or more threshold frequencies, such as 30 frames per second, 60 frames per second, and so on. The obtained images may depict a real-world setting in which the vehicle is located. As an example, the real-world setting may include other vehicles, pedestrians, road hazards and the like located proximate to the vehicle. The vehicle may therefore leverage the captured images to ensure that the vehicle is safely driven. For example, the vehicle may generate alerts for viewing by a driver. In this example, an alert may indicate that a pedestrian is crossing a cross-walk. As another example, the vehicle may use the images to inform autonomous, or semi-autonomous, driving and/or navigation of the vehicle.


As described in more detail below, in some embodiments two or more machine learning models may be used to analyze images, or other sensor information, obtained from image sensors positioned about a vehicle. The machine learning models may be implemented via a system of one or more processors, application-specific integrated circuits (ASICs), and so on. In some embodiments, analyzing an image may include performing a forward pass of a deep learning network. The analysis may include classifying an object in an image and determining location information for the object. Location information may, as an example, indicate a bounding box within the image that depicts the object. Location information may also indicate pixels of the image which form the object.


A first machine learning model may analyze images at a first frequency. For example, the first machine learning model may be a “faster” model capable of analyzing all images obtained at the full image sensor frame rate (e.g., 30 frames per second, 60 frames per second, and so on). A second machine learning model may analyze images at a second, lower, frequency. For example, the second machine learning model may be a comparatively slower machine learning model capable of analyzing a subset of the obtained images (e.g., every 2nd image, every 5th image, and so on). Advantageously, the first machine learning model may periodically receive information from the second machine learning model to enhance an accuracy associated with analyzing images.
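To make the interplay concrete, the following is a minimal scheduling sketch (in Python) of one model running on every frame and a second model running on every Nth frame, with the second model's most recent output fed to the first as an additional input. The names fast_model and slow_model, and the synchronous call to the slower model, are illustrative assumptions; the disclosure contemplates the slower model running concurrently rather than inline.

    # Illustrative sketch only: one model per frame, a second model every Nth
    # frame, with the second model's latest (possibly stale) output reused.
    def process_stream(frames, fast_model, slow_model, n=5):
        slow_output = None  # most recent output of the slower model
        results = []
        for i, frame in enumerate(frames):
            if i % n == 0:
                # Shown inline for brevity; in practice this runs concurrently.
                slow_output = slow_model(frame)
            # The faster model consumes the current frame plus the slow prior.
            results.append(fast_model(frame, prior=slow_output))
        return results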


In some embodiments, the first machine learning model and the second machine learning model may be detectors. A detector, as an example, may be used to detect an object (e.g., classify an object, determine location information, and so on). With respect to the above, the first machine learning model may detect objects with an associated accuracy less than the second machine learning model. For example, the second machine learning model may be more computationally expensive (e.g., the model may have more convolutional networks, layers, and so on).


As will be described, in these embodiments the second machine learning model may therefore analyze the subset of images to accurately detect objects in the subset of images. The first machine learning model may analyze all, or a substantial portion, of the images from the image sensors. Periodically, the first machine learning model may receive output information from the second machine learning model. This output information may be provided as an input, along with an image being analyzed, to the first machine learning model. The second machine learning model may provide supplemental information to the first machine learning model. In this way, an accuracy associated with detection of objects by the first machine learning model may be increased.


In some embodiments, the first machine learning model may be a tracker while the second machine learning model may be a detector. A tracker, as an example, may be used to track (e.g., estimate) a location of an object between images. For example, a tracker may track location information associated with a pedestrian. As will be described, the second machine learning model may be used to detect objects. The first machine learning model may estimate movement of the detected objects in images while the second machine learning model is processing a subsequent image. In some embodiments, the first machine learning model may use additional sensor input, such as inertial measurement unit (IMU) information, global navigation satellite system (GNSS) information, and so on, to track locations of an object.


While machine learning models are described, such as deep learning models, it may be appreciated that classifiers, detectors, trackers, support vector machines, and so on, may be used and fall within the scope of the disclosure. Additionally, in some embodiments an output of the slower machine learning model may include detected objects. For example, classifications of the objects, location information, and so on, may be provided to the faster machine learning model. In some embodiments, an output of the slower machine learning model may represent feature maps, or other outputs associated with a convolutional network. This output may be provided to the faster machine learning model, which may be trained to periodically use such feature maps when detecting and/or tracking objects.


Overview


In one embodiment the method for object detection includes: receiving a first frame from a camera; processing the first frame with a first image processing engine; receiving a second frame from the camera while the first frame is being processed; sending the processed output of the first frame to a second image processing engine with a faster processing speed than the first image processing engine; and combining the processed output of the first frame with the second frame to generate an object detection result for the first frame.
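As a rough outline only, the enumerated steps could be arranged as below; the engine names, the process_async call, and the combine step are placeholders assumed for illustration rather than elements of the claimed method.

    # Hypothetical outline of the enumerated steps; all names are placeholders.
    def detect_objects(camera, slower_engine, faster_engine, combine):
        frame_1 = camera.read()                         # receive a first frame
        pending = slower_engine.process_async(frame_1)  # process the first frame
        frame_2 = camera.read()                         # receive a second frame meanwhile
        output_1 = pending.result()                     # processed output of frame 1
        # Combine the processed output of the first frame with the second frame.
        return combine(faster_engine, output_1, frame_2)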


The method functions to provide an image processing system that combines multiple image processing engines to provide object detection outputs at a frame rate and accuracy much higher than either single image processing engine would achieve.


In one variation, the processed output of the first frame is combined with the second frame so that the faster image processing engine can use information from the slower image processing engine, producing a result that is more accurate than the faster image processing engine's output alone.


In one example, the method processes a high proportion of the images from a video stream output by a camera (e.g., all images, 90% of the images, etc.) with a fast, low accuracy detector. The fast detector generates a low-accuracy output for the detected objects (e.g., position, orientation, optical flow, motion vectors, other object features, etc.), preferably in substantially real-time (e.g., faster than the camera framerate), for subsequent use by a navigation system or other low-latency endpoint. The method concurrently processes a subset of the images from the same video stream with a slow, high accuracy detector, wherein the outputs of the slow detector (which lag behind the fast detector and the camera) are used as priors or input features for the fast detector. In one example, the fast detector can store the object features extracted from each image frame (or differences between image frames) that were sampled after a first image (that is being processed by the slow detector). Once the slow detector output is received by the fast detector, the fast detector can re-calculate the object features based on the more accurate object features output by the slow detector and the stored object parameter deltas to generate a higher-accuracy output (e.g., the fast detector's output is periodically recalibrated based on the slow detector's output). In a specific example, combining the low-accuracy outputs and the high-accuracy output can involve taking the slower image processing engine's output together with the optical flow and motion vectors from the first image, moving the boxes by how much each detected object has moved in the second image, and then adjusting them to generate the image prediction result for the second frame.
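One way to picture the recalibration described in this example is sketched below. The per-frame delta storage, the apply_delta helper, and the object representation are assumptions made for illustration only.

    # Sketch of periodic recalibration: replay stored per-frame deltas on top of
    # the slow detector's accurate (but stale) output to reach the current frame.
    class FastDetectorState:
        def __init__(self):
            self.deltas = []  # object-parameter deltas for frames sampled after
                              # the frame currently held by the slow detector

        def record_delta(self, delta):
            self.deltas.append(delta)

        def recalibrate(self, accurate_objects, apply_delta):
            objects = accurate_objects
            for delta in self.deltas:
                objects = apply_delta(objects, delta)  # bring objects up to date
            self.deltas.clear()
            return objects  # higher-accuracy estimate for the current frame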


All or portions of the method can be performed at a predetermined frequency, performed upon occurrence of an execution event (e.g., upon an autonomous vehicle engaging in driving), or performed at any suitable time.


System


As shown in FIG. 1, one embodiment of an image processing system 100 can include: an image processing network 102, a first image processing engine 104, a second image processing engine 106, a frame database 108, an output database 110, a client device or devices 112, and a camera or cameras 114. In some embodiments, processing the images includes one or more of image classification, object recognition, object detection, and object tracking.


In variants, the image processing network 102 functions to facilitate communication between various components of the system (e.g., between the image processing engines, between the image processing engines and the endpoint, etc.), but can additionally or alternatively perform any other suitable functionality. The image processing network can additionally or alternatively host or execute the other components of the system (e.g., the image processing engines). The image processing network can be: a scheduler, a set of processing systems (e.g., processors, ASICs, etc.), or be otherwise configured.


The first image processing engine 104 and second image processing engine 106 communicate with the image processing network 102 to process received images. In some embodiments, the first image processing engine 104 and second image processing engine 106 are components of the same computer device as the image processing network 102, while in other embodiments the first image processing engine 104, second image processing engine 106, and image processing network 102 are all components of separate computer devices. Any combination of components and computer devices may be contemplated.


In some embodiments, the image processing engines 104 and 106 may be deep learning image processing engines, non-deep learning image processing engines, or a combination of deep learning and non-deep learning image processing engines. In some embodiments, the image processing engines 104 and 106 may be detectors of varying image processing speeds, while in other embodiments the image processing engines 104 and 106 are a combination of a detector and a tracker, respectively. In some embodiments, the image processing engines 104 and 106 receive images in the form of a series of video frames from a camera. In some embodiments, the camera is an automotive camera placed within an autonomous vehicle for machine vision purposes, such as detecting objects on the road during the car's operation and predicting locations of objects in future frames based on the locations of the objects in current and past frames.


In some embodiments, the first image processing engine operates at relatively “slow” image processing speeds, such as 20 fps or lower. Such a slow detector may provide high-accuracy output, but only infrequently. In some embodiments, the first image processing engine is a detector selected from a predefined list of available detectors. In some embodiments, the detector may be chosen with or without human input, according to criteria such as latency and accuracy requirements for a given image processing task.


In one variation, the first image processing engine is a high-accuracy or high-precision (e.g., higher than 50% mAP, 70% mAP, 80% mAP; etc.), high-latency (e.g., slower than 20 fps, slower than 30 fps, etc.) image processing engine. The first image processing engine is preferably an object detector (e.g., region-based convolutional network (R-CNN), fast R-CNN, region-based fully convolutional network (R-FCN), a detector using selective search, exhaustive search, deep learning, etc.), but can alternatively or additionally be: an object recognition algorithm, an object classifier, or any other suitable image processing engine. The second image processing engine can be a low-accuracy or low-precision (e.g., lower than 60% mAP, 50% mAP, 40% mAP; etc.), low-latency (e.g., faster than 30 fps, faster than 10 fps, etc.) image processing engine. The second image processing engine can be an object detector, classifier, or recognition algorithm (e.g., you only look once (YOLO), fast YOLO, YOLOv2, etc.), an object tracker (e.g., optical flow, point tracking, kernel tracking, silhouette tracking, etc.), or be any other suitable processor.
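For illustration only, a selection among predefined engines by the accuracy and latency criteria above might look like the following; the entries and thresholds are invented examples, not detectors identified by the disclosure.

    # Hypothetical selection of engines from a predefined list by mAP and latency.
    DETECTORS = [
        {"name": "region_based_detector", "map": 0.75, "latency_ms": 80},
        {"name": "single_shot_detector",  "map": 0.45, "latency_ms": 15},
    ]

    def select_detector(min_map, max_latency_ms):
        candidates = [d for d in DETECTORS
                      if d["map"] >= min_map and d["latency_ms"] <= max_latency_ms]
        # Prefer the most accurate detector that still fits the latency budget.
        return max(candidates, key=lambda d: d["map"]) if candidates else None

    slow_engine = select_detector(min_map=0.70, max_latency_ms=100)  # high accuracy
    fast_engine = select_detector(min_map=0.40, max_latency_ms=20)   # low latency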


In one variation, the system includes a combination of two deep learning based detectors, one capable of outputting image prediction results at a faster frame rate (fps) than the other. In another variation, the system includes a combination of a deep learning based detector (e.g., the slow detector) and a non-deep learning based tracker (e.g., the fast tracker). A tracker follows one or multiple objects of interest within a scene or set of images to continuously provide their position. A tracker may estimate parameters of the dynamic system, including feature point positions and object position, using video from one or more cameras as the source of information. Detectors find objects of interest and provide their positions within an image. There is no assumption of system dynamics, nor is the response based on temporal consistency. Detectors use a single image, such as a single frame from a camera, as the source of information.


In some embodiments, the first and second image processing engines can perform tasks related to semantic segmentation. For example, the methods herein can be performed using semantic segmentation to calculate drivable area. In some embodiments, the first and second image processing engines perform tasks related to three-dimensional objects. For example, detecting boxes in three dimensions, segmentation in three dimensions, and predicting the three-dimensional orientation of objects can be performed using the methods herein.


Frame database 108 stores the frames as they are output from the camera and sent to the image processing system 100. Output database 110 stores the output from the first image processing engine 104 and second image processing engine 106 based on the received images. In some embodiments, the output includes image prediction results, e.g., predictions of the future locations of objects in the images.


Client device(s) 112 are devices that send information to the image processing network 102, receive information from the image processing network 102, or both. A client device may include, e.g., one or more components of an autonomous vehicle, or a computer device associated with one or more users, organizations, or other entities.


Camera(s) 114 are devices that record visual information in video and/or image form and output it as “frames,” each representing an image of the scene at a given moment in time within a sequence. In some embodiments, the camera(s) are positioned on one or more autonomous vehicles, and output frames while the vehicle is in operation. In some embodiments, the camera(s) are configured to output frames at a speed of 30 fps or higher.


The system can optionally include one or more chipsets or processing hardware that functions to execute all or a portion of the method. The processing hardware is preferably collocated with the processing hardware executing the endpoint application (e.g., executing the navigation method), but can additionally or alternatively be located on-board the component using the system outputs (e.g., on-board a vehicle or a robot), located remote from the component, or be otherwise arranged. The processing hardware can include one or more: embedded systems, microcontrollers, microprocessors, ASICs, CPUs, GPUs, TPUs, or any other suitable processing system.


In some embodiments, all or part of the processing hardware is located in an autonomous vehicle or across multiple autonomous vehicles, or in a central system or cloud associated with an autonomous vehicle fleet or network. In some embodiments, all or part of the processing within the system is performed in parallel across multiple processing components. In some embodiments, all or part of the processing within the system is performed in series across multiple processing components. In some embodiments, all or part of the processing tasks, camera output frames, or image data are cached or available locally or offline for processing. In some embodiments, the system is partly or fully located within a cloud network.


In variants where the processing hardware's computation resources are limited (e.g., microcontrollers, ASICs, etc.), the system can automatically cluster layers of the first and/or second image processing engines into blocks (e.g., in variants wherein the first and/or second image processing engines include neural networks), such that the image processing engines can be interrupted when more urgent functions need to be executed. For example, the layers of the first image processing engine or slow detector (e.g., a DNN) can be clustered into blocks that are: intermittently run when the second image processing engine or fast detector is not consuming the computing resources; or constantly run but interrupted (e.g., at a break between sequential blocks) when a new video frame needs to be processed by the second image processing engine or fast detector. The blocks are preferably even, but can alternatively be uneven. However, limited computing resources can be otherwise managed. In variants where the processing hardware has multiple cores or can support multiple threads, the first image processing engine and second image processing engine can be executed in parallel (e.g., on different cores or threads). However, the computing resources can be otherwise allocated to different image processing processes of the method.
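A minimal sketch of the block-clustered execution described above follows; the layer_blocks structure and the run_fast_if_needed callback are assumptions used to illustrate interruption at block boundaries.

    # Sketch: run the slow network as clusters ("blocks") of layers so that the
    # scheduler can interrupt it between blocks to service the fast detector.
    def run_slow_network_in_blocks(layer_blocks, x, run_fast_if_needed):
        for block in layer_blocks:
            for layer in block:
                x = layer(x)          # evaluate one cluster of layers
            # Break between blocks: if a new camera frame is waiting, let the
            # fast detector use the compute budget before resuming here.
            run_fast_if_needed()
        return x                      # final output of the slow network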


Example Methods/Block Diagrams



FIG. 2 is a flowchart representation of one embodiment of a method for detecting objects.


At step 202, system 100 receives a first frame from a camera. In some embodiments, the camera records video and outputs a first frame from the video, then sends it to system 100. In some embodiments, the camera takes a series of still images, and outputs a first frame representing a still image, then sends it to system 100. In some embodiments, system 100 stores the first frame in frame database 108.


At step 204, system 100 processes the first frame with a first image processing engine. In some embodiments, the first image processing engine is a detector capable of detecting objects within an image. In some embodiments, the first image processing engine is a tracker capable of tracking the locations of detected objects. In some embodiments, processing the first frame involves one or more of image classification, object detection, and object tracking. In some embodiments, the first image processing engine is a ‘slow’ detector, e.g., capable of generating image processing outputs at a speed of 20 fps or less. Slow detectors are typically highly accurate but provide output infrequently. Slow detectors may be, for example, high compute, high resolution networks, or high compute, low resolution networks.


At step 206, system 100 receives a second frame from the camera while the first frame is being processed. In some embodiments, system 100 receives the second frame before the processing in step 204 is completed. In some embodiments, the second frame is the next frame in a series or sequence of frames the camera outputs in a video or other feed related to a sequence. The second frame is preferably processed by the second image processing engine (e.g., as discussed above, alternatively by any other suitable image processing engine) in real- or near-real time (e.g., substantially immediately, in 10% of the camera frame rate, etc.), but can be otherwise processed. In some embodiments, system 100 receives multiple frames in between the first frame and the second frame. The multiple frames can be used for image prediction, object detection, tracking, or other purposes within the image processing system 100.


At step 208, system 100 sends processed output of the first frame to a second image processing engine. In some embodiments, the second image processing engine has a faster processing speed than the first image processing engine. In some embodiments, the second image processing engine has a slower processing speed than the first image processing engine. In some embodiments, the first image processing engine may be a detector while the second is a tracker. In some embodiments, both the first and second image processing engines are detectors, with one capable of faster image processing speeds than the other. In some embodiments, the first image processing engine and the second image processing engine share the computational load within a single device or network of devices. In some embodiments, the first image processing engine is a deep learning based processor, while the second image processing engine is a non-deep learning based processor. In other embodiments, both processors are deep learning based processors. In some embodiments, the second image processing engine is a low compute, high resolution network. In some embodiments, the second image processing engine is a low compute, low resolution network. In some embodiments, the second image processing engine is a detector chosen from a predefined list of available detectors. In some embodiments, the detector may be chosen in such a way that the frame rate of the detector's output is equal to the camera's frame rate. In some embodiments, the frame rate of the detector's output is slightly faster than the camera's frame rate, with or without sharing the computational load of the first image processing engine. In some embodiments, the processed output includes results of the image processing of the first frame, such as detection of one or more objects in the first frame, tracking of the locations of one or more objects in the first frame, predicting the future locations of one or more objects in the first frame, or other image processing results.


In some embodiments, the slow image processing engine does not receive every frame that is generated as output from the camera and sent to the system 100. In one variation, the slow image processing engine receives a frame from the camera (for processing) only when its processing and computational resources are not being expended on processing another frame. For example, if frame 1 is sent from the camera to the system 100 and the slow image processing engine has available resources that are not being used for processing a frame, then system 100 directs frame 1 to be sent to the slow image processing engine to be processed. When frame 2 is sent from the camera to the system 100 and the slow image processing engine is still using those resources to process frame 1, leaving insufficient resources to process frame 2, then system 100 does not direct frame 2 to be sent from the camera to the slow image processing engine. In a second variation, the slow image processing engine receives every Nth frame, wherein N can be selected based on: the camera sampling rate and the slow image processing engine's image processing rate (e.g., wherein sampling N frames can take longer than the time it takes for the slow processing engine to process a single frame); be predetermined; be determined based on the fast image processing engine's accuracy (e.g., wherein N can be smaller when the fast image processing engine is less accurate, and larger when the fast image processing engine is more accurate); or be otherwise determined.
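The two dispatch variations could be expressed roughly as below; the busy and submit methods on the slow engine are assumed interfaces, not part of the disclosure.

    # Hypothetical frame-dispatch policies for the slow image processing engine.
    def route_when_idle(frame, slow_engine):
        # First variation: hand a frame to the slow engine only when it is idle.
        if not slow_engine.busy():
            slow_engine.submit(frame)

    def route_every_nth(frame_index, frame, slow_engine, n):
        # Second variation: hand every Nth frame to the slow engine, where N may
        # depend on camera rate, slow-engine speed, or fast-engine accuracy.
        if frame_index % n == 0:
            slow_engine.submit(frame)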


At step 210, system 100 combines the processed output of the first frame with the second frame to generate an object prediction result for the first frame. In some embodiments in which the first and second image processing engines are a detector and a tracker, system 100 combines the processed output of the first frame with the second frame by a method involving optical flow. Due to the tracker, the locations of objects from the first frame are known, and the image constituting the second frame has been received. In order to predict the locations of objects in the second frame, system 100 computes the optical flow between the two images of the first frame and second frame to obtain the motion vectors at each pixel. The motion vectors at each pixel provide information on how much each pixel has moved from the first frame to the second frame. System 100 averages the information on how much each pixel has moved to calculate how much the locations of objects have moved from the first frame to the second frame. System 100 then takes the boxes or boundaries of the objects in the first frame, and moves them by a certain amount to obtain a prediction of object locations in a future frame, such as a third or fourth frame. However, the processed output of the first frame and the second frame can be otherwise combined.
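As a rough illustration of this optical-flow combination, the sketch below uses OpenCV's dense Farneback flow, which is one possible implementation choice rather than the one required by the disclosure; boxes are assumed to be pixel-coordinate corners (x1, y1, x2, y2).

    import cv2

    # Sketch: shift the first frame's boxes by the average optical flow inside
    # each box to estimate object locations in the second frame.
    def shift_boxes_by_flow(frame1_bgr, frame2_bgr, boxes):
        g1 = cv2.cvtColor(frame1_bgr, cv2.COLOR_BGR2GRAY)
        g2 = cv2.cvtColor(frame2_bgr, cv2.COLOR_BGR2GRAY)
        # Dense per-pixel motion vectors between the two frames.
        flow = cv2.calcOpticalFlowFarneback(g1, g2, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        shifted = []
        for (x1, y1, x2, y2) in boxes:
            region = flow[int(y1):int(y2), int(x1):int(x2)]
            dx = float(region[..., 0].mean())  # average horizontal motion
            dy = float(region[..., 1].mean())  # average vertical motion
            shifted.append((x1 + dx, y1 + dy, x2 + dx, y2 + dy))
        return shifted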


In some embodiments in which the first and second image processing engines are a slower detector and a faster detector, the faster detector never predicts new objects; it only adjusts the locations that the slower detector obtained for existing objects. System 100 outputs the first frame's object locations after they have been adjusted based on the second frame. This increases the accuracy compared to using only the output of the slower detector or the faster detector. No new objects will be detected until the slower detector has processed the frame and detected them. The faster detector then refines the locations of those objects and can update them every frame. In this way, at every frame the faster detector updates the locations of objects, and at every other frame the slower detector looks for new objects to detect. This leads to more refined location detection than either using a comparatively less accurate fast detector on every frame, or using a slow detector with good object detection on every other frame.


In some embodiments, the outputs of the first image processing engine, the second image processing engine, or both are cached. If the image processing network 102 cannot run two neural networks at the same time, for example, then execution of the first image processing engine will have to be paused by system 100 while the second image processing engine runs. After the second image processing engine is finished running, system 100 will have to re-execute the first image processing engine, resuming from the cached outputs. In other embodiments, rather than caching the outputs of the image processing engines, system 100 executes the first and second image processing engines serially, or executes the first and second image processing engines in parallel.
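A sketch of the parallel alternative follows: the slow engine runs in a background thread and the fast engine picks up its most recent result each frame. The engine callables and the every_n policy are illustrative assumptions.

    import queue
    import threading

    # Sketch: run the slow engine in a background thread; the fast engine runs
    # per frame and uses the slow engine's most recent (stale) result as a prior.
    def run_parallel(frames, fast_engine, slow_engine, every_n=5):
        latest = {"slow": None}
        jobs = queue.Queue(maxsize=1)

        def slow_worker():
            while True:
                frame = jobs.get()
                if frame is None:
                    return
                latest["slow"] = slow_engine(frame)  # stale by the time it lands

        worker = threading.Thread(target=slow_worker, daemon=True)
        worker.start()
        results = []
        for i, frame in enumerate(frames):
            if i % every_n == 0 and not jobs.full():
                jobs.put(frame)                      # hand a frame to the slow engine
            results.append(fast_engine(frame, prior=latest["slow"]))
        jobs.put(None)                               # signal the worker to exit
        worker.join()
        return results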



FIG. 3 is an illustration of an example 300 of object detection using a detector and a tracker. Example 300 reads from left to right, with frame 302 being output from a camera and received by system 100. Frame 0, or the first frame, is received by system 100, and system 100 sends it to tracker 306. Tracker 306 in example 300 has a latency of 5 ms, which represents the time for tracker 306 to produce an output after receiving a frame from the camera. System 100 also sends frame 0 to detector 308, which has a latency of 50 ms. Thus, detector 308 produces an output 50 ms after receiving a frame from the camera. The camera in example 300 produces output frames every 33 ms.


Tracker 306 receives frame 0 from the camera as an input, and, in some embodiments, receives an additional input constituting an output from detector 308. Tracker 306 processes the image of frame 0 by determining the object locations of identified objects within frame 0, and produces an output 304 constituting output 0 for frame 0.


Detector 308 receives frame 0 from the camera, and begins processing the image of frame 0 to identify objects within the frame as well as their locations. Detector 308 is slower than tracker 306, and thus the processing takes longer. While detector 308 is processing frame 0, frame 1, or the second frame, is received by system 100 from the camera. System 100 sends frame 1 to tracker 306.


Once detector 308 finishes processing frame 0, detector 308 sends the output to system 100. System 100 sends the output of detector 308's processing of frame 0 to tracker 306. At this point, tracker 306 has received as inputs frame 1 from the camera and the output for frame 0 from detector 308. Tracker 306 then takes the identified objects and locations from frame 0, as determined by detector 308, and the image of frame 1 from the camera, and combines both of them into an output for frame 1. In some embodiments, tracker 306 calculates the optical flow between the two images of frame 0 and frame 1 to get the motion vectors at each pixel. The motion vectors provide information on how much each pixel has moved. Tracker 306 averages the motion vectors for a particular object to determine how much that object has moved from frame 0 to frame 1. Tracker 306 then takes the bounding boxes from frame 0 and moves them by the determined amount to predict object locations for frame 1. Tracker 306 then sends the output of that prediction as output 1.


In a similar fashion to the prior steps, frame 2 is received by system 100, which sends frame 2 to both tracker 306 and detector 308 as inputs. Tracker 306 receives frame 2 as an input, as well as the prediction result of frame 1 determined earlier. Tracker 306 then processes frame 2 and determines locations of identified objects within frame 2, then outputs that as output 2.


Detector 308 receives frame 2 as input, and begins processing frame 2 to identify objects and their locations within frame 2. While detector 308 is processing, frame 3 is received by system 100. System 100 sends frame 3 to tracker 306. When detector 308 is finished processing frame 2, the output is sent to tracker 306, which receives it as input. This process continues for additional frames until the image processing task completes, the camera stops outputting frames and sending them to system 100, or some other triggering event occurs to stop the process.


The result of example 300 is that system 100 with tracker 306 and detector 308 produces outputs at 30 fps and 5-20 ms of latency, compared to 15 fps and 50 ms of latency when a naive method using only the detector is employed. Thus, superior speed is achieved, which allows for potentially better accuracy compared to techniques such as shrinking a neural network down to the same speed.



FIG. 4 is an illustration of an example 400 of object detection using a first detector and a second detector. Detector 408 in example 400 produces outputs at 80 ms latency, and is paired with faster detector 406, which produces outputs at 15 ms latency. The camera produces output frames every 33 ms. In some embodiments, both detector 406 and detector 408 are deep learning neural networks. In some embodiments, detector 408 employs a high compute, low resolution neural network, while detector 406 employs a low compute, low resolution neural network. In some embodiments, the faster detector 406 has the ability to use information from the previous frame.


Frames 402 are produced as outputs by the camera. Frame 0 is received by system 100, which sends it to the faster detector 406. In some embodiments, detector 406 also receives an additional input from detector 408. Detector 406 performs image processing tasks to produce an output 404, in this case output 0. Detector 406 then reuses the information obtained from frame 0 and also receives frame 1 as input. Detector 408 also receives frame 0 and begins processing. Detector 406 receives additional frames 2, 3, and 4, processes them, and outputs the results as outputs 2, 3, and 4. Meanwhile, slower detector 408 continues processing frame 0.


Once detector 408 finishes processing frame 0, it sends output 0, which is a high accuracy prediction of identified objects and their locations, to faster detector 406. System 100 sends frame 5 from the camera to faster detector 406 as additional input. Detector 406 uses the high accuracy prediction from five frames ago, as output by detector 408, and updates it with the more recent frame 5 to generate more accurate predictions of object locations. In this sense, detector 406 is trained on the high accuracy prediction every fifth frame and on the previous low accuracy prediction every other frame. In some embodiments, detector 406 uses machine learning techniques to learn from this previous data, including high accuracy predictions when they are available and lower accuracy predictions otherwise. This leads to different weights being assigned to the neural network based on the different levels of accuracy; the weights are the parameters of the neural network that are adjusted to reflect these differences in accuracy. This process repeats until the image processing task is completed, the camera stops outputting frames to system 100, or some other triggering event occurs.
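The per-frame loop of example 400 could be sketched as below; the submit/done/result interface on the slower detector and the prior argument on the faster detector are assumptions used to show the fast detector consuming the high accuracy prediction when it becomes available and its own previous prediction otherwise.

    # Sketch of example 400's loop: the fast detector runs every frame, using the
    # slow detector's high-accuracy output when it arrives (a few frames late)
    # and its own previous low-accuracy output in between.
    def example_400_loop(frames, fast_detector, slow_detector):
        prior = None          # previous prediction fed back as an input
        pending = None        # in-flight slow-detector job, if any
        outputs = []
        for frame in frames:
            if pending is None:
                pending = slow_detector.submit(frame)  # start a slow pass
            elif pending.done():
                prior = pending.result()               # high-accuracy, but stale
                pending = None
            out = fast_detector(frame, prior=prior)
            prior = out                                # low-accuracy prior otherwise
            outputs.append(out)
        return outputs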


In some embodiments of examples 300 and 400, a slower image processing engine provides a prior input to a faster image processing engine, so that output is produced at the speed of the faster engine while incorporating information from the slower engine, with the aim of boosting accuracy beyond what the faster engine can achieve by itself.


OTHER EMBODIMENTS

Embodiments of the system and/or method can include every combination and permutation of the various system components and the various method processes, wherein one or more instances of the method and/or processes described herein can be performed asynchronously (e.g., sequentially), concurrently (e.g., in parallel), or in any other suitable order by and/or using one or more instances of the systems, elements, and/or entities described herein.


As a person skilled in the art will recognize from the previous detailed description and from the figures and claims, modifications and changes can be made to the preferred embodiments of the description without departing from the scope of this invention defined in the following claims.


Each of the processes, methods, and algorithms described in the preceding sections may be embodied in, and fully or partially automated by, code modules executed by one or more computer systems or computer processors comprising computer hardware. The code modules (or “engines”) may be stored on any type of non-transitory computer-readable medium or computer storage device, such as hard drives, solid state memory, optical disc, and/or the like. The systems and modules may also be transmitted as generated data signals (for example, as part of a carrier wave or other analog or digital propagated signal) on a variety of computer-readable transmission mediums, including wireless-based and wired/cable-based mediums, and may take a variety of forms (for example, as part of a single or multiplexed analog signal, or as multiple discrete digital packets or frames). The processes and algorithms may be implemented partially or wholly in application-specific circuitry. The results of the disclosed processes and process steps may be stored, persistently or otherwise, in any type of non-transitory computer storage such as, for example, volatile or non-volatile storage.


In general, the terms “engine” and “module”, as used herein, refer to logic embodied in hardware or firmware, or to a collection of software instructions, possibly having entry and exit points, written in a programming language, such as, for example, Java, Lua, C or C++. A software module may be compiled and linked into an executable program, installed in a dynamic link library, or may be written in an interpreted programming language such as, for example, BASIC, Perl, or Python. It will be appreciated that software modules may be callable from other modules or from themselves, and/or may be invoked in response to detected events or interrupts. Software modules configured for execution on computing devices may be provided on one or more computer readable media, such as compact discs, digital video discs, flash drives, or any other tangible media. Such software code may be stored, partially or fully, on a memory device of the executing computing device. Software instructions may be embedded in firmware, such as an EPROM. It will be further appreciated that hardware modules may be comprised of connected logic units, such as gates and flip-flops, and/or may be comprised of programmable units, such as programmable gate arrays or processors. The modules described herein are preferably implemented as software modules, but may be represented in hardware or firmware. Generally, the modules described herein refer to logical modules that may be combined with other modules or divided into sub-modules despite their physical organization or storage. Electronic Data Sources can include databases, volatile/non-volatile memory, and any memory system or subsystem that maintains information.


The various features and processes described above may be used independently of one another, or may be combined in various ways. All possible combinations and subcombinations are intended to fall within the scope of this disclosure. In addition, certain method or process blocks may be omitted in some implementations. The methods and processes described herein are also not limited to any particular sequence, and the blocks or states relating thereto can be performed in other sequences that are appropriate. For example, described blocks or states may be performed in an order other than that specifically disclosed, or multiple blocks or states may be combined in a single block or state. The example blocks or states may be performed in serial, in parallel, or in some other manner. Blocks or states may be added to or removed from the disclosed example embodiments. The example systems and components described herein may be configured differently than described. For example, elements may be added to, removed from, or rearranged compared to the disclosed example embodiments.


Conditional language used herein, such as, among others, “can,” “could,” “might,” “may,” “for example,” and the like, unless specifically stated otherwise, or otherwise understood within the context as used, is generally intended to convey that certain embodiments include, while other embodiments do not include, certain features, elements and/or steps. Thus, such conditional language is not generally intended to imply that features, elements and/or steps are in any way required for one or more embodiments or that one or more embodiments necessarily include logic for deciding, with or without author input or prompting, whether these features, elements and/or steps are included or are to be performed in any particular embodiment. The terms “comprising,” “including,” “having,” and the like are synonymous and are used inclusively, in an open-ended fashion, and do not exclude additional elements, features, acts, operations, and so forth. Also, the term “or” is used in its inclusive sense (and not in its exclusive sense) so that when used, for example, to connect a list of elements, the term “or” means one, some, or all of the elements in the list. Conjunctive language such as the phrase “at least one of X, Y and Z,” unless specifically stated otherwise, is otherwise understood with the context as used in general to convey that an item, term, etc. may be either X, Y or Z. Thus, such conjunctive language is not generally intended to imply that certain embodiments require at least one of X, at least one of Y and at least one of Z to each be present.


The term “a” as used herein should be given an inclusive rather than exclusive interpretation. For example, unless specifically noted, the term “a” should not be understood to mean “exactly one” or “one and only one”; instead, the term “a” means “one or more” or “at least one,” whether used in the claims or elsewhere in the specification and regardless of uses of quantifiers such as “at least one,” “one or more,” or “a plurality” elsewhere in the claims or specification.


The term “comprising” as used herein should be given an inclusive rather than exclusive interpretation. For example, a general purpose computer comprising one or more processors should not be interpreted as excluding other computer components, and may possibly include such components as memory, input/output devices, and/or network interfaces, among others.


While certain example embodiments have been described, these embodiments have been presented by way of example only, and are not intended to limit the scope of the disclosure. Thus, nothing in the foregoing description is intended to imply that any particular element, feature, characteristic, step, module, or block is necessary or indispensable. Indeed, the novel methods and systems described herein may be embodied in a variety of other forms; furthermore, various omissions, substitutions, and changes in the form of the methods and systems described herein may be made without departing from the spirit of the inventions disclosed herein. The accompanying claims and their equivalents are intended to cover such forms or modifications as would fall within the scope and spirit of certain of the inventions disclosed herein.


Any process descriptions, elements, or blocks in the flow diagrams described herein and/or depicted in the attached figures should be understood as potentially representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps in the process. Alternate implementations are included within the scope of the embodiments described herein in which elements or functions may be deleted, executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those skilled in the art.


It should be emphasized that many variations and modifications may be made to the above-described embodiments, the elements of which are to be understood as being among other acceptable examples. All such modifications and variations are intended to be included herein within the scope of this disclosure. The foregoing description details certain embodiments of the invention. It will be appreciated, however, that no matter how detailed the foregoing appears in text, the invention can be practiced in many ways. As is also stated above, it should be noted that the use of particular terminology when describing certain features or aspects of the invention should not be taken to imply that the terminology is being re-defined herein to be restricted to including any specific characteristics of the features or aspects of the invention with which that terminology is associated.

Claims
  • 1. A method implemented by a system of one or more processors, the method comprising: obtaining a plurality of images at a threshold frequency, the images being obtained from one or more image sensors positioned about a vehicle; determining, based on the images, location information associated with objects classified in the images, wherein determining location information is based on analyzing the images via a first machine learning model at the threshold frequency, wherein a subset of the images is analyzed via a second machine learning model at less than the threshold frequency, wherein the first machine learning model is configured to periodically receive output information from the second machine learning model, the received output information being input into the first machine learning model in combination with a first image of the plurality of images, and the received output information being usable to increase an accuracy of determining location information associated with objects classified in the first image, wherein the plurality of images represents a sequence of images comprising at least the first image, and wherein prior to completion of the analysis by the second machine learning model, the first image is analyzed via the first machine learning model; and outputting the determined location information, wherein the determined location information is configured for use in autonomous driving of the vehicle.
  • 2. The method of claim 1, wherein the first machine learning model is a tracker, and wherein the second machine learning model is a detector.
  • 3. The method of claim 2, wherein determining location information comprises: instructing the second machine learning model to analyze the first image; prior to completion of the analysis by the second machine learning model, analyzing the first image via the first machine learning model, wherein the first machine learning model is configured to update location information associated with objects classified by the second machine learning model in an image prior to the first image in the sequence.
  • 4. The method of claim 3, wherein the sequence of images further comprises a second image subsequent to the first image, and wherein the method further comprises: analyzing, via the second machine learning model, the first image of the plurality of images, wherein the second machine learning model determines output information comprising: one or more objects classified in the first image, and location information associated with the objects; and providing the determined output information to the first machine learning model as an input, wherein the first machine learning model analyzes the second image using the output information.
  • 5. The method of claim 4, wherein for the second image, the first machine learning model determines updated location information associated with the one or more objects depicted in the first image.
  • 6. The method of claim 2, wherein the second machine learning model classifies objects in the subset of the images, and wherein the first machine learning model updates, for each of the images, location information associated with the classified objects.
  • 7. The method of claim 6, wherein the plurality of images represents a sequence of images, and wherein the first machine learning model determines optical flow information between images in the sequence of images.
  • 8. The method of claim 1, wherein the first machine learning model is a first detector associated with a first accuracy, and wherein the second machine learning model is a second detector associated with a second, greater, accuracy.
  • 9. The method of claim 8, wherein the sequence of images comprises at least the first image and one or more second images, and wherein determining location information comprises: instructing the second machine learning model to analyze the first image; prior to completion of the analysis by the second machine learning model, analyzing the first image and the one or more second images via the first machine learning model, wherein the first machine learning model is configured to determine location information associated with objects, wherein the first machine learning model is configured to use output information from the second machine learning model, the output information being based on an image prior to the first image in the sequence.
  • 10. The method of claim 9, wherein the sequence of images further comprises a third image subsequent to the first image and the one or more second images, and wherein the method further comprises: analyzing, via the second machine learning model, the first image of the plurality of images, wherein the second machine learning model determines output information comprising: one or more objects classified in the first image, and location information associated with the objects; and providing the determined output information to the first machine learning model as an input, wherein the first machine learning model analyzes the third image using the output information, such that an accuracy associated with the first machine learning model analyzing the third image is increased.
  • 11. The method of claim 8, wherein the first machine learning model is trained based on periodically using output information from the second machine learning model as an input.
  • 12. A system comprising one or more processors and non-transitory computer storage media storing instructions that, when executed by the one or more processors, cause the one or more processors to perform operations comprising: obtaining a plurality of images at a threshold frequency, the images being obtained from one or more image sensors positioned about a vehicle; determining, based on the images, location information associated with objects classified in the images, wherein determining location information is based on analyzing the images via a first machine learning model at the threshold frequency, wherein, for a subset of the analyzed images, the first machine learning model is configured to use output information from a second machine learning model, the second machine learning model being configured to perform analyses on the subset of the analyzed images at less than the threshold frequency, wherein the second machine learning model classifies objects in the subset of the images, wherein the first machine learning model updates location information associated with the objects based on each of the analyzed images, wherein prior to completion of the classification by the second machine learning model, at least one image included in the subset is analyzed via the first machine learning model; and outputting the determined location information, wherein the determined location information is configured for use in autonomous driving of the vehicle.
  • 13. The system of claim 12, wherein the first machine learning model is configured to periodically receive output information from the second machine learning model, the received output information being usable to increase an accuracy associated with determining location information.
  • 14. The system of claim 12, wherein the first machine learning model is a tracker, and wherein the second machine learning model is a detector.
  • 15. The system of claim 12, wherein the first machine learning model is a first detector associated with a first accuracy, and wherein the second machine learning model is a second detector associated with a second, greater, accuracy.
  • 16. Non-transitory computer storage media storing instructions that when executed by a system of one or more processors, cause the one or more processors to perform operations comprising: obtaining a plurality of images at a threshold frequency, the images being obtained from one or more image sensors positioned about a vehicle; determining, based on the images, location information associated with objects classified in the images, wherein determining location information is based on analyzing the images via a first machine learning model at the threshold frequency, wherein, for a subset of the analyzed images, the first machine learning model is configured to use output information from a second machine learning model, the second machine learning model being configured to perform analyses on the subset of the analyzed images at less than the threshold frequency, wherein the second machine learning model classifies objects in the subset of the images, wherein the first machine learning model updates location information associated with the objects based on each of the analyzed images, wherein prior to completion of the classification by the second machine learning model, at least one image included in the subset is analyzed via the first machine learning model; and outputting the determined location information, wherein the determined location information is configured for use in autonomous driving of the vehicle.
  • 17. The computer-storage media of claim 16, wherein the first machine learning model is a tracker, and wherein the second machine learning model is a detector.
  • 18. The computer-storage media of claim 16, wherein the first machine learning model is a first detector associated with a first accuracy, and wherein the second machine learning model is a second detector associated with a second, greater, accuracy.
Related Publications (1)
Number Date Country
20200175401 A1 Jun 2020 US
Provisional Applications (1)
Number Date Country
62774793 Dec 2018 US