Video interviews can be taped and used by recruiters to assist in representing candidates to potential employers. These videos can sometimes be one-dimensional and uninteresting.
Videos that cut between multiple views of the candidate can be more visually interesting, but editing and producing high-quality video is tedious and time-consuming.
A system and method for automatically producing audiovisual files containing video from multiple cameras is provided. In some examples, a system is provided having a first video input and a second video input; an audio input; a time counter providing a timeline associated with the first video input, the second video input, and the audio input, the timeline enabling time synchronization of the first video input, the second video input, and the audio input; a non-transitory computer memory and a computer processor; and computer instructions stored on the memory for instructing the processor to perform the steps of: sampling the audio input to identify a low noise audio segment in which the decibel level is below a threshold level for a predetermined period of time; and automatically assembling a combined audiovisual file by performing the steps of: retaining a first audiovisual clip that includes a portion of the audio input and first video input occurring before the low noise audio segment, retaining a second audiovisual clip that includes a portion of the audio input and second video input occurring after the low noise audio segment, and concatenating the first audiovisual clip and the second audiovisual clip to create a combined audiovisual file. In some examples, the first video input, the second video input, and the audio input are recorded synchronously, and the combined audiovisual file is a video interview of a job candidate.
In some examples, the first audiovisual clip ends at the low noise audio segment and the second audiovisual clip begins at the low noise audio segment. In some examples, the first audiovisual clip is earlier in the timeline than the second audiovisual clip, and the first audiovisual clip corresponds to a time immediately preceding the second audiovisual clip. In some examples, the predetermined period of time is at least two seconds. Some examples can further include computer instructions stored on the memory for instructing the processor to perform the steps of: sampling the audio input to identify a beginning of the low noise audio segment and an end of the low noise audio segment; removing portions of the audio input, the first video input, and the second video input that fall between the beginning and end of the low noise audio segment; and concatenating the first audiovisual clip and the second audiovisual clip to create a combined audiovisual file that does not contain the low noise audio segment, wherein the first audiovisual clip includes a portion of the audio input and first video input occurring before the beginning of the low noise audio segment, and the second audiovisual clip includes a portion of the audio input and the second video input occurring after the end of the low noise audio segment.
In some examples, the low noise audio segment is at least four seconds long. Some examples further include computer instructions stored on the memory for instructing the processor to perform the steps of: sampling the audio input to identify multiple low noise audio segments in which the decibel level is below the threshold level for a predetermined period of time; and automatically concatenating alternating audiovisual clips that switch between the first video input and second video input after each low noise audio segment. Some examples further include computer instructions stored on the memory for instructing the processor to perform the steps of: sampling the audio input to identify multiple low noise audio segments in which the decibel level is below the threshold level for at least the predetermined period of time; extracting content data from the first video input, the second video input, or the audio input to identify one or more switch-initiating events; and automatically assembling a combined audiovisual file that switches between the first video input and the second video input following a switch-initiating event. In some examples, the switch-initiating events include one or more of: a gesture recognition event; a facial recognition event; a length of time of at least 30 seconds since a most recent camera angle switch; or a keyword extracted from the audio input via speech-to-text.
In some examples, a computer-implemented method includes receiving first video input of an individual from a first camera, receiving second video input of the individual from a second camera, and receiving audio input of the individual from a microphone, wherein the first video input, the second video input, and the audio input are recorded synchronously; sampling the audio input, the first video input, or the second video input to identify an event; and automatically assembling a combined audiovisual file by performing the steps of: retaining a first audiovisual clip that includes a portion of the first video input occurring before the event; retaining a second audiovisual clip that includes a portion of the second video input occurring after the event; and concatenating the first audiovisual clip and the second audiovisual clip to create a combined audiovisual file containing video of the individual from two camera angles.
In some examples, the combined audiovisual file is a video interview of a job candidate. In some examples, the event is a low noise audio segment. Some examples further include the steps of: sampling the audio input to identify a plurality of low noise audio segments; retaining video clips that alternately switch between the first video input and the second video input following the low noise audio segments; and concatenating the alternating video clips to create a combined audiovisual file containing video that alternates between two camera angles. Some examples further include the step of extracting content data from the first video input, the second video input, or the audio input to identify one or more switch-initiating events, wherein switching between the first video input and the second video input is only performed for low noise audio segments that follow switch-initiating events.
In some examples, the content data is at least one of: facial recognition; gesture recognition; posture recognition; or keywords extracted using speech-to-text. Some examples further include the steps of: sampling the audio input to identify multiple extended low noise audio segments that are at least four seconds long; removing the portions of the audio input, the first video input, and the second video input that fall between the beginning and end of the extended low noise audio segments; and concatenating video clips containing alternating portions of the first video input and portions of the second video input to create a combined audiovisual file that does not contain audio or video occurring between the beginning and end of extended low noise audio segments.
In some examples, a system is included having a first video input and a second video input; an audio input; a time counter providing a timeline associated with the first video input, the second video input, and the audio input, the timeline enabling time synchronization of the first video input, the second video input, and the audio input; a non-transitory computer memory and a computer processor; and computer instructions stored on the memory for instructing the processor to perform the steps of: sampling the audio input to identify a low noise audio segment in which the decibel level is below a threshold level for a predetermined period of time; and automatically assembling a combined audiovisual file by performing the steps of: retaining a first audiovisual clip that includes a portion of the first video input and synchronized audio input occurring before the low noise audio segment; retaining a second audiovisual clip that includes a portion of the second video input and synchronized audio input occurring after the low noise audio segment; and concatenating the first audiovisual clip and the second audiovisual clip to create a combined audiovisual file.
In some examples, the first video input, the second video input, and the audio input are recorded synchronously, and the combined audiovisual file is a video interview of a job candidate. Some examples further include computer instructions stored on the memory for instructing the processor to perform the steps of: sampling the audio input to identify a plurality of low noise audio segments in which the decibel level is below the threshold level for the predetermined period of time; and concatenating a plurality of audiovisual clips that switch between the first video input and the second video input after each low noise audio segment to create a combined audiovisual file containing video that alternates between two camera angles.
This summary is an overview of some of the teachings of the present application and is not intended to be an exclusive or exhaustive treatment of the present subject matter. Further details are found in the detailed description and appended claims. Other aspects will be apparent to persons skilled in the art upon reading and understanding the following detailed description and viewing the drawings that form a part thereof, each of which is not to be taken in a limiting sense.
The present disclosure relates to a system and method for producing audiovisual files containing video that automatically cuts between video footage from multiple cameras. The multiple cameras can be arranged during recording such that they each focus on a subject from a different camera angle, providing multiple viewpoints of the subject. The system can be used for recording a person who is speaking, such as in a video interview. Although the system will be described in the context of a video interview, other uses are contemplated and are within the scope of the technology. For example, the system could be used to record educational videos, entertaining or informative speaking, or other situations in which an individual is being recorded with video and audio.
Some implementations of the technology provide a kiosk or booth that houses multiple cameras and a microphone. The cameras each produce a video input to the system, and the microphone produces an audio input. A time counter provides a timeline associated with the multiple video inputs and the audio input. The timeline enables video input from each camera to be time-synchronized with the audio input from the microphone.
Multiple audiovisual clips are created by combining video inputs with a corresponding synchronized audio input. The system detects events in the audio input, video inputs, or both the audio and video inputs, such as a pause in speaking corresponding to low-audio input. The events correspond to a particular time in the synchronization timeline. To automatically assemble audiovisual files, the system concatenates a first audiovisual clip and a second audiovisual clip. The first audiovisual clip contains video input before the event, and the second audiovisual clip contains video input after the event. The system can further create audiovisual files that concatenate three or more audiovisual clips that switch between particular video inputs after predetermined events.
One example of an event that can be used as a marker for deciding when to cut between different video clips is a drop in the audio volume detected by the microphone. During recording, the speaker may stop speaking briefly, such as when switching between topics, or when pausing to collect their thoughts. These pauses can correspond to a significant drop in audio volume. In some examples, the system looks for these low-noise events in the audio track. Then, when assembling an audiovisual file of the video interview, the system can change between different cameras at the pauses. This allows the system to automatically produce high quality, entertaining, and visually interesting videos with no need for a human editor to edit the video interview. Because the quality of the viewing experience is improved, the viewer is likely to have a better impression of a candidate or other speaker in the video. A higher quality video better showcases the strengths of the speaker, providing benefits to the speaker as well as the viewer.
In another aspect, the system can remove unwanted portions of the video automatically based on the contents of the audio or video inputs, or both. For example, the system may discard portions of the video interview in which the individual is not speaking for an extended period of time. One way this can be done is by keeping track of the length of time that the audio volume is below a certain volume. If the audio volume is low for an extended period of time, such as a predetermined number of seconds, the system can note the time that the low noise segment begins and ends. A first audiovisual clip that ends at the beginning of the low noise segment can be concatenated with a second audiovisual clip that begins at the end of the low noise segment. The audio input and video inputs that occur between the beginning and end of the low noise segment can be discarded. In some examples, the system can cut multiple pauses from the video interview, and switch between camera angles multiple times. This eliminates dead air and improves the quality of the video interview for a viewer.
In another aspect, the system can choose which video input to use in the combined audiovisual file based on the content of the video input. For example, the video inputs from the multiple cameras can be analyzed to look for content data to determine whether a particular event of interest takes place. As just one example, the system can use facial recognition to determine which camera the individual is facing at a particular time. The system then can selectively prefer the video input from the camera that the individual is facing at that time in the video. As another example, the system can use gesture recognition to determine that the individual is using their hands when talking. The system can selectively prefer the video input that best captures the hand gestures. For example, if the candidate consistently pivots to the left while gesturing, a profile shot from the right camera might be subjectively better than a feed from the left camera that minimizes the candidate's energy. Content data such as facial recognition and gesture recognition can also be used to find events that the system can use to decide when to switch between different camera angles.
In another aspect, the system can choose which video input to use based on a change between segments of the interview, such as between different interview questions.
Turning now to the figures, an example implementation of the disclosed technology will be described in relation to a kiosk for recording video interviews. However, it should be understood that this implementation is only one possible example, and other setups could be used to implement the disclosed technology.
Video Interview Kiosk
The first, second, and third cameras 122, 124, 126 can be digital video cameras that record video in the visible spectrum using, for example, a CCD or CMOS image sensor. Optionally, the cameras can be provided with infrared sensors or other sensors to detect depth, movement, etc.
In some examples, the various pieces of hardware can be mounted to the walls of the enclosed booth 105 on a vertical support 151 and a horizontal support 152. The vertical support 151 can be used to adjust the vertical height of the cameras and user interface, and the horizontal support 152 can be used to adjust the angle of the cameras 122, 124, 126.
Schematic of Kiosk and Edge Server
The kiosk 101 can further include the candidate user interface 133 in data communication with the edge server 201. An additional user interface 233 can be provided for a kiosk attendant. The attendant user interface 233 can be used, for example, to check in users, or to enter data about the users. The candidate user interface 133 and the attendant user interface 233 can be provided with a user interface application program interface (API) 235 stored in the memory 205 and executed by the processor 203. The user interface API 235 can access particular data stored in the memory 205, such as interview questions 237 that can be displayed to the individual 112 on the user interface 133. The user interface API 235 can receive input from the individual 112 to prompt a display of a next question once the individual has finished answering a current question.
The system includes multiple types of data inputs. In one example, the camera 122 produces a video input 222, the camera 124 produces a video input 224, and the camera 126 produces a video input 226. The microphone 142 produces an audio input 242. The system also receives behavioral data input 228. The behavioral data input 228 can be from a variety of different sources. In some examples, the behavioral data input 228 is a portion of data received from one or more of the cameras 122, 124, 126. In other words, the system receives video data and uses it as the behavioral data input 228. In some examples, the behavioral data input 228 is a portion of data received from the microphone 142. In some examples, the behavioral data input 228 is sensor data from one or more infrared sensors provided on the cameras 122, 124, 126. The system can also receive text data input 221 that can include text related to the individual 112, and candidate materials 223 that can include materials related to the individual's job candidacy, such as a resume.
In some examples, the video inputs 222, 224, 226 are stored in the memory 205 of the edge server 201 as video files 261. In alternative examples, the video inputs 222, 224, 226 are processed by the processor 203, but are not stored separately. In some examples, the audio input 242 is stored as audio files 262. In alternative examples, the audio input 242 is not stored separately. The candidate materials input 223, text data input 221, and behavioral data input 228 can also be optionally stored or not stored as desired.
In some examples, the edge server 201 further includes a network communication device 271 that enables the edge server 201 to communicate with a remote network 281. This enables data that is received and/or processed at the edge server 201 to be transferred over the network 281 to a candidate database server 291.
The edge server 201 includes computer instructions stored on the memory 205 to perform particular methods. The computer instructions can be stored as software modules. As will be described below, the system can include an audiovisual file processing module 263 for processing received audio and video inputs and assembling the inputs into audiovisual files and storing the assembled audiovisual files 264. The system can include a data extraction module 266 that can receive one or more of the data inputs (video inputs, audio input, behavioral input, etc.) and extract behavior data 267 from the inputs and store the extracted behavior data 267 in the memory 205.
Automatically Creating Audiovisual Files from Two or More Video Inputs
Audio inputs 242 can also be provided using any of a number of different types of audio compression formats. These can include but are not limited to MP1, MP2, MP3, AAC, ALAC, and Windows Media Audio.
The system takes audiovisual clips recorded during the video interview and concatenates the audiovisual clips to create a single combined audiovisual file containing video of an individual from multiple camera angles. In some implementations, a system clock 209 creates a timestamp associated with the video inputs 222, 224, 226 and the audio input 242 that allows the system to synchronize the audio and video based on the timestamp. A custom driver can be used to combine the audio input with the video input to create an audiovisual file.
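The custom driver itself is not detailed here. As a non-limiting illustration only, one way to combine a synchronized video input and audio input into a single audiovisual file is with a command-line tool such as ffmpeg; the file names below are placeholders, not part of the system described above.

```python
import subprocess

def mux_audio_video(video_path: str, audio_path: str, out_path: str) -> None:
    """Combine one camera's video with the synchronized microphone audio
    into a single audiovisual file (video stream copied, audio encoded as AAC)."""
    subprocess.run(
        [
            "ffmpeg", "-y",
            "-i", video_path,   # video input from one camera
            "-i", audio_path,   # audio input from the microphone
            "-c:v", "copy",     # keep the video stream unchanged
            "-c:a", "aac",      # encode the audio for an MP4 container
            "-shortest",        # stop at the end of the shorter input
            out_path,
        ],
        check=True,
    )

mux_audio_video("vid1.mp4", "mic.wav", "clip1.mp4")
```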
As used herein, an “audiovisual file” is a computer-readable container file that includes both video and audio. An audiovisual file can be saved on a computer memory, transferred to a remote computer via a network, and played back at a later time. Some examples of video encoding formats for an audiovisual file compatible with this disclosure are MP4 (mp4, m4a, mov); 3GP (3gp, 3gp2, 3g2, 3gpp, 3gpp2); WMV (wmv, wma); AVI; and QuickTime.
As used herein, an “audiovisual clip” is a video input combined with an audio input that is synchronized with the video input. For example, the system can record an individual 112 speaking for a particular length of time, such as 30 seconds. In a system that has three cameras, three audiovisual clips could be created from that 30 second recording: a first audiovisual clip can contain the video input 224 from Vid1 synchronized with the audio input 242 from t=0 to t=30 seconds. A second audiovisual clip can contain the video input 222 from Vid2 synchronized with the audio input 242 from t=0 to t=30 seconds. A third audiovisual clip can contain the video input 226 from Vid3 synchronized with the audio input 242 from t=0 to t=30 seconds. Audiovisual clips can be created by processing a video input stream and an audio input stream which are then stored as an audiovisual file. An audiovisual clip as described herein can be, but is not necessarily, stored in an intermediate state as a separate audiovisual file before being concatenated with other audiovisual clips. As will be described below, in some examples, the system will select one video input from a number of available video inputs, and use that video input to create an audiovisual clip that will later be saved in an audiovisual file. In some examples, the unused video inputs may be discarded.
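As a non-limiting illustration, an audiovisual clip can be modeled as nothing more than a span of the synchronization timeline paired with the video source chosen for that span. The following Python sketch is illustrative only; the class and field names are assumptions made for this example.

```python
from dataclasses import dataclass

@dataclass
class AudiovisualClip:
    """A span of the synchronization timeline paired with one video source."""
    video_source: str   # e.g. "Vid1", "Vid2", or "Vid3"
    start: float        # start time in seconds on the shared timeline
    end: float          # end time in seconds on the shared timeline

    @property
    def duration(self) -> float:
        return self.end - self.start

# Three candidate clips covering the same 30-second recording, one per camera:
candidates = [AudiovisualClip(cam, 0.0, 30.0) for cam in ("Vid1", "Vid2", "Vid3")]
```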
Audiovisual clips can be concatenated. As used herein, “concatenated” means adding two audiovisual clips together sequentially in an audiovisual file. For example, two audiovisual clips that are each 30 seconds long can be combined to create a 60-second long audiovisual file. In this case, the audiovisual file would cut from the first audiovisual clip to the second audiovisual clip at the 30 second mark.
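As a non-limiting illustration of concatenation, and assuming the two audiovisual clips already exist as separate files with matching codecs, ffmpeg's concat demuxer can join them sequentially without re-encoding; the file names are placeholders.

```python
import subprocess
import tempfile

def concatenate_clips(clip_paths: list[str], out_path: str) -> None:
    """Join clip files end-to-end into one audiovisual file (stream copy)."""
    # The concat demuxer reads a text listing of the files to join, in order.
    with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as listing:
        for path in clip_paths:
            listing.write(f"file '{path}'\n")
        list_path = listing.name
    subprocess.run(
        ["ffmpeg", "-y", "-f", "concat", "-safe", "0",
         "-i", list_path, "-c", "copy", out_path],
        check=True,
    )

# Two 30-second clips become one 60-second file that cuts at the 30-second mark.
concatenate_clips(["clip1.mp4", "clip2.mp4"], "combined.mp4")
```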
During use, each camera in the system records an unbroken sequence of video, and the microphone records an unbroken sequence of audio. An underlying time counter provides a timeline associated with the video and audio so that the video and audio can be synchronized.
In one example of the technology, the system samples the audio track to automatically find events that trigger the system to cut between video inputs when producing an audiovisual file. In one example, the system looks for segments in the audio track in which the volume is below a threshold volume. These will be referred to as low noise audio segments.
In some examples, the system marks the beginning and end of the low noise audio segments to find low noise audio segments of a particular length. In this example, the system computes the average (mean) volume over each four second interval, and as soon as the average volume is below the threshold volume (in this case 30 decibels), the system marks that interval as corresponding to the beginning of the low noise audio segment. The system continues to sample the audio volume until the average audio volume is above the threshold volume. The system then marks that interval as corresponding to the end of the low noise audio segment.
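A minimal sketch of this windowed sampling follows. The 16-bit PCM sample format and the constant that maps digital amplitude to a decibel figure are assumptions made for illustration; a real system would calibrate the microphone.

```python
import math

def low_noise_segments(samples, sample_rate, window_s=4.0, threshold_db=30.0,
                       calibration_db=96.0):
    """Return (start_s, end_s) spans whose mean audio level stays below threshold_db.

    samples: 16-bit PCM values (assumed format).
    calibration_db: placeholder offset mapping digital full scale to decibels.
    """
    window = int(window_s * sample_rate)
    segments, seg_start = [], None
    for i in range(0, len(samples) - window + 1, window):
        chunk = samples[i:i + window]
        rms = math.sqrt(sum(s * s for s in chunk) / len(chunk)) or 1e-9
        level_db = 20.0 * math.log10(rms / 32768.0) + calibration_db
        t = i / sample_rate
        if level_db < threshold_db:
            if seg_start is None:
                seg_start = t                  # beginning of a low noise audio segment
        elif seg_start is not None:
            segments.append((seg_start, t))    # end of the low noise audio segment
            seg_start = None
    if seg_start is not None:
        segments.append((seg_start, len(samples) / sample_rate))
    return segments
```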
The system uses the low noise audio segments to determine when to switch between camera angles. After finding an interval corresponding to the beginning or end of a low noise audio segment, the system determines precisely at which time to switch. This can be done in a number of ways, depending upon the desired result.
In some examples, the system is configured to discard portions of the video and audio inputs that correspond to a portion of the low noise audio segments. This eliminates dead air and makes the audiovisual file more interesting for the viewer. In some examples, the system only discards audio segments that are at least a predetermined length of time, such as at least 2 seconds, at least 4 seconds, at least 6 seconds, at least 8 seconds, or at least 10 seconds. This implementation will be discussed further below.
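Building on the sketch above, selecting only the pauses that are long enough to discard can be expressed as a simple filter; the function name and threshold are illustrative.

```python
def removable_pauses(segments, min_pause_s=4.0):
    """Keep only low noise audio segments at least min_pause_s long; shorter
    pauses are left in the final video and are not discarded."""
    return [(start, end) for start, end in segments if end - start >= min_pause_s]

# Only the 6-second pause qualifies for removal:
removable_pauses([(10.0, 12.5), (30.0, 36.0)])   # -> [(30.0, 36.0)]
```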
Automatically Concatenating Audiovisual Clips
The system includes two video inputs: Video 1 and Video 2. The system also includes an Audio input.
Sampling the audio track, the system determines that at time t1, a low noise audio event occurred. The time segment between t=t0 and t=t1 is denoted as Seg1. To assemble a combined audiovisual file 540, the system selects an audiovisual clip 541 combining one video input from Seg1 synchronized with the audio from Seg1, and saves this audiovisual clip 541 as a first segment of the audiovisual file 540—in this case, Vid1.Seg1 (Video 1 Segment 1) and Aud.Seg1 (audio Segment 1). In some examples, the system can use a default video input as the initial input, such as using the front-facing camera as the first video input for the first audiovisual clip. In alternative examples, the system may sample content received while the video and audio are being recorded to prefer one video input over another input. For example, the system may use facial or gesture recognition to determine that one camera angle is preferable over another camera angle for that time segment. Various alternatives for choosing which video input to use first are possible, and are within the scope of the technology.
The system continues sampling the audio track, and determines that at time t2, a second low noise audio event occurred. The time segment between t=t1 and t=t2 is denoted as Seg2. For this second time segment, the system automatically switches to the video input from Video 2, and saves a second audiovisual clip 542 containing Vid2.Seg2 and Aud.Seg2. The system concatenates the second audiovisual clip 542 and the first audiovisual clip 541 in the audiovisual file 540.
The system continues sampling the audio track, and determines that at time t3, a third low noise audio event occurred. The time segment between t=t2 and t=t3 is denoted as Seg3. For this third time segment, the system automatically cuts back to the video input from Video 1, and saves a third audiovisual clip 543 containing Vid1.Seg3 and Aud.Seg3. The system concatenates the second audiovisual clip 542 and the third audiovisual clip 543 in the audiovisual file 540.
The system continues sampling the audio track, and determines that at time t4, a fourth low noise audio event occurred. The time segment between t=t3 and t=t4 is denoted as Seg4. For this fourth time segment, the system automatically cuts back to the video input from Video 2, and saves a fourth audiovisual clip 544 containing Vid2.Seg4 and Aud.Seg4. The system concatenates the third audiovisual clip 543 and the fourth audiovisual clip 544 in the audiovisual file 540.
The system continues sampling the audio track and determines that no additional low noise audio events occur; the video input and audio input stop recording at time tn. The time segment between t=t4 and t=tn is denoted as Seg5. For this fifth time segment, the system automatically cuts back to the video input from Video 1, and saves a fifth audiovisual clip 545 containing Vid1.Seg5 and Aud.Seg5. The system concatenates the fourth audiovisual clip 544 and the fifth audiovisual clip 545 in the audiovisual file 540.
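The switching pattern described for Seg1 through Seg5 amounts to alternating between the two video inputs at each detected event. A possible sketch follows, using simple (camera, start, end) tuples and placeholder event times; none of these values come from the example above.

```python
def assemble_alternating(event_times, total_duration, cameras=("Video 1", "Video 2")):
    """Plan the clip list for a combined file that switches cameras at each
    low noise audio event on the shared timeline."""
    boundaries = [0.0, *sorted(event_times), total_duration]
    plan = []
    for idx in range(len(boundaries) - 1):
        camera = cameras[idx % len(cameras)]        # alternate: Video 1, Video 2, ...
        plan.append((camera, boundaries[idx], boundaries[idx + 1]))
    return plan

# Four events t1..t4 yield five segments: Video 1, Video 2, Video 1, Video 2, Video 1.
plan = assemble_alternating([31.0, 74.5, 118.0, 162.0], total_duration=200.0)
```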
In some examples, audio sampling and assembling of the combined audiovisual file is performed in real-time as the video interview is being recorded. In alternative examples, the video input and audio input can be recorded, stored in a memory, and processed later to create a combined audiovisual file. In some examples, after the audiovisual file is created, the raw data from the video inputs and audio input is discarded.
Automatically Removing Pauses and Concatenating Audiovisual Clips
In another aspect of the technology, the system can be configured to create combined audiovisual files that remove portions of the interview in which the subject is not speaking.
The system continues sampling the audio track, and determines that at time t3, a second low noise audio segment begins, and at time t4, the second low noise audio segment ends. The time segment between t=t2 and t=t3 is denoted as Seg3. For this time segment, the system automatically switches to the video input from Video 2, and saves a second audiovisual clip 642 containing Vid2.Seg3 and Aud.Seg3. The system concatenates the second audiovisual clip 642 and the first audiovisual clip 641 in the audiovisual file 640.
The system continues sampling the audio input to determine the beginning and end of further low noise audio segments.
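One way to sketch this pause-removal assembly is to walk the timeline, emit a clip up to the beginning of each qualifying pause, skip to the pause's end, and alternate cameras at every cut. The tuple layout and example times are illustrative assumptions.

```python
def assemble_without_pauses(pauses, total_duration, cameras=("Video 1", "Video 2")):
    """Plan clips that alternate cameras at each pause and drop everything
    between each pause's beginning and end.

    pauses: ordered list of (begin_s, end_s) low noise audio segments.
    """
    plan, cursor, cam_idx = [], 0.0, 0
    for begin, end in pauses:
        if begin > cursor:
            plan.append((cameras[cam_idx % len(cameras)], cursor, begin))
            cam_idx += 1
        cursor = max(cursor, end)     # audio and video inside the pause are discarded
    if cursor < total_duration:
        plan.append((cameras[cam_idx % len(cameras)], cursor, total_duration))
    return plan

# Two long pauses produce three clips: Video 1, then Video 2, then Video 1.
plan = assemble_without_pauses([(40.0, 46.0), (90.0, 95.5)], total_duration=150.0)
```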
Automatically Concatenating Audiovisual Clips with Camera Switching in Response to Switch-Initiating Events
In another aspect of the technology, the system can be configured to switch between the different video inputs in response to events other than low noise audio segments. These events will be generally categorized as switch-initiating events. A switch-initiating event can be detected in the content of any of the data inputs that are associated with the timeline. “Content data” refers to any of the data collected during the video interview that can be correlated or associated with a specific time in the timeline. These events are triggers that the system uses to decide when to switch between the different video inputs. For example, behavioral data input, which can be received from an infrared sensor or be present in the video or audio, can be associated with the timeline in a similar manner to the way the audio and video inputs are associated with the timeline. Facial recognition data, gesture recognition data, and posture recognition data can be monitored to look for switch-initiating events. For example, if the candidate turns away from one of the video cameras to face a different video camera, the system can detect that motion and note it as a switch-initiating event. Hand gestures or changes in posture can also be used to trigger the system to cut from one camera angle to a different camera angle.
As another example, the audio input can be analyzed using speech-to-text software, and the resulting text can be used to find keywords that trigger a switch. In this example, the words used by the candidate during the interview would be associated with a particular time in the timeline.
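As a sketch of the keyword case, assume the speech-to-text engine reports word-level timestamps; the exact output format varies by engine, so the (word, time) pairs below are an assumed representation.

```python
def keyword_switch_events(timed_words, keywords):
    """Return timeline times (in seconds) at which any keyword was spoken.

    timed_words: iterable of (word, start_s) pairs from a speech-to-text engine.
    """
    wanted = {k.lower() for k in keywords}
    return [t for word, t in timed_words
            if word.lower().strip(".,!?") in wanted]

events = keyword_switch_events(
    [("my", 12.0), ("greatest", 12.3), ("strength", 12.7), ("is", 13.0)],
    keywords={"strength", "weakness", "experience"},
)   # -> [12.7]
```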
Another type of switch-initiating event can be the passage of a particular length of time. A timer can be set for a number of seconds that is the maximum desirable amount of time for a single segment of video. For example, an audiovisual file can feel stagnant and uninteresting if the same camera has been focusing on the subject for more than 90 seconds. The system clock can set a 90 second timer every time that a camera switch occurs. If it has been more than 90 seconds since the most recent switch-initiating event, expiration of the 90 second timer can be used as the switch-initiating event. Other amounts of time could be used, such as 30 seconds, 45 seconds, 60 seconds, etc., depending on the desired results.
Conversely, the system clock can set a timer corresponding to a minimum number of seconds that must elapse before a switch between two video inputs. For example, the system could detect multiple switch-initiating events in rapid succession, and it may be undesirable to switch back-and-forth between two video inputs too quickly. To prevent this, the system clock could set a timer for 30 seconds, and only register switch-initiating events that occur after expiration of the 30 second timer. The resulting combined audiovisual file would thus contain audiovisual clip segments of 30 seconds or longer.
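The two timer rules, a forced cut after too long on one camera and a minimum spacing between cuts, can be sketched together as follows; the 30 and 90 second values are simply the example figures given above, and the function name is illustrative.

```python
def apply_timer_rules(event_times, min_gap_s=30.0, max_gap_s=90.0, total_duration=None):
    """Filter switch-initiating events so cuts are never closer than min_gap_s,
    and insert forced cuts whenever max_gap_s passes without a switch."""
    cuts, last_cut = [], 0.0
    for t in sorted(event_times):
        while t - last_cut > max_gap_s:       # shot has gone stale: force a cut
            last_cut += max_gap_s
            cuts.append(last_cut)
        if t - last_cut >= min_gap_s:         # otherwise honor the minimum spacing
            cuts.append(t)
            last_cut = t
    if total_duration is not None:            # keep forcing cuts to the end of the video
        while total_duration - last_cut > max_gap_s:
            last_cut += max_gap_s
            cuts.append(last_cut)
    return cuts
```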
Another type of switch-initiating event is a change between interview questions that the candidate is answering, or between other segments of a video recording session. In the context of an interview, the user interface API 235 can register when the individual 112 finishes answering one interview question and moves on to the next question, and that question change can be used as a switch-initiating event.
At time t2, the system detects a switch-initiating event. However, the system does not switch between camera angles at time t2, because switch-initiating events can occur at any time, including during the middle of a sentence. Instead, the system continues sampling the audio input and waits for the next low noise audio segment before switching camera angles.
In some examples, instead of continuously sampling the audio track for low noise audio events, the system could wait to detect a switch-initiating event, then begin sampling the audio input immediately after the switch-initiating event. The system would then cut from one video input to the other video input at the next low noise audio segment.
At time t3, the system determines that another low noise audio segment has occurred. Because this low noise audio segment occurred after a switch-initiating event, the system begins assembling a combined audiovisual file 740 by using an audiovisual clip 741 combining one video input (in this case, Video 1) with synchronized audio input for the time segment t=t0 through t=t3.
The system then waits to detect another switch-initiating event.
The system then continues to wait for a switch-initiating event. In this case, no switch-initiating event occurs before the end of the video interview at time tn. The audiovisual file 740 is completed by concatenating an alternating audiovisual clip 743 containing video input from Video 1 to the end of the audiovisual file 740.
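Combining the two signals in the walkthrough above, a minimal sketch of cutting only at the first low noise audio segment that follows each switch-initiating event might look like this; the times in the usage example are placeholders rather than values taken from the disclosure.

```python
def gated_cut_times(low_noise_times, switch_event_times):
    """Cut only at the first low noise audio segment that follows a
    switch-initiating event; quiet moments with no preceding event are ignored."""
    events = sorted(switch_event_times)
    cuts, pending, e = [], False, 0
    for t in sorted(low_noise_times):
        while e < len(events) and events[e] <= t:
            pending = True                # a switch-initiating event has occurred
            e += 1
        if pending:
            cuts.append(t)                # switch cameras at this pause
            pending = False
    return cuts

# Events at 40 s and 95 s; quiet moments at 20 s, 65 s, 110 s -> cut at 65 s and 110 s.
cuts = gated_cut_times([20.0, 65.0, 110.0], [40.0, 95.0])   # -> [65.0, 110.0]
```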
The various methods described above can be combined in a number of different ways to create entertaining and visually interesting audiovisual interview files. Multiple video cameras can be used to capture a candidate from multiple camera angles. Camera switching between different camera angles can be performed automatically with or without removing audio and video corresponding to long pauses when the candidate is not speaking. Audio, video, and behavioral inputs can be analyzed to look for content data to use as switch-initiating events, and/or to decide which video input to use during a particular segment of the audiovisual file. Some element of biofeedback can be incorporated to favor one video camera input over the others.
As used in this specification and the appended claims, the singular forms include the plural unless the context clearly dictates otherwise. The term “or” is generally employed in the sense of “and/or” unless the content clearly dictates otherwise. The phrase “configured” describes a system, apparatus, or other structure that is constructed or configured to perform a particular task or adopt a particular configuration. The term “configured” can be used interchangeably with other similar terms such as arranged, constructed, manufactured, and the like.
All publications and patent applications referenced in this specification are herein incorporated by reference for all purposes.
While examples of the technology described herein are susceptible to various modifications and alternative forms, specifics thereof have been shown by way of example and drawings. It should be understood, however, that the scope herein is not limited to the particular examples described. On the contrary, the intention is to cover modifications, equivalents, and alternatives falling within the spirit and scope herein.
This application is a Continuation of U.S. patent application Ser. No. 16/910,986, filed Jun. 24, 2020, which is a Continuation of U.S. patent application Ser. No. 16/366,746, filed Mar. 27, 2019, the content of which is herein incorporated by reference in its entirety.
Number | Name | Date | Kind |
---|---|---|---|
1173785 | Deagan | Feb 1916 | A |
1686351 | Spitzglass | Oct 1928 | A |
3152622 | Rothermel | Oct 1964 | A |
3764135 | Madison | Oct 1973 | A |
5109281 | Kobori et al. | Apr 1992 | A |
5410344 | Graves et al. | Apr 1995 | A |
5835667 | Wactlar et al. | Nov 1998 | A |
5867209 | Irie et al. | Feb 1999 | A |
5884004 | Sato et al. | Mar 1999 | A |
5886967 | Aramaki | Mar 1999 | A |
5897220 | Huang et al. | Apr 1999 | A |
5906372 | Recard | May 1999 | A |
5937138 | Fukuda et al. | Aug 1999 | A |
5949792 | Yasuda et al. | Sep 1999 | A |
6128414 | Liu | Oct 2000 | A |
6229904 | Huang et al. | May 2001 | B1 |
6289165 | Abecassis | Sep 2001 | B1 |
6484266 | Kashiwagi et al. | Nov 2002 | B2 |
6502199 | Kashiwagi et al. | Dec 2002 | B2 |
6504990 | Abecassis | Jan 2003 | B1 |
RE37994 | Fukuda et al. | Feb 2003 | E |
6600874 | Fujita et al. | Jul 2003 | B1 |
6618723 | Smith | Sep 2003 | B1 |
6981000 | Park et al. | Dec 2005 | B2 |
7095329 | Saubolle | Aug 2006 | B2 |
7146627 | Ismail et al. | Dec 2006 | B1 |
7293275 | Krieger et al. | Nov 2007 | B1 |
7313539 | Pappas et al. | Dec 2007 | B1 |
7336890 | Lu et al. | Feb 2008 | B2 |
7499918 | Ogikubo | Mar 2009 | B2 |
7606444 | Erol et al. | Oct 2009 | B1 |
7650286 | Obeid | Jan 2010 | B1 |
7702542 | Aslanian | Apr 2010 | B2 |
7725812 | Balkus et al. | May 2010 | B1 |
7797402 | Roos | Sep 2010 | B2 |
7810117 | Karnalkar et al. | Oct 2010 | B2 |
7865424 | Pappas et al. | Jan 2011 | B2 |
7895620 | Haberman et al. | Feb 2011 | B2 |
7904490 | Ogikubo | Mar 2011 | B2 |
7962375 | Pappas et al. | Jun 2011 | B2 |
7974443 | Kipman et al. | Jul 2011 | B2 |
7991635 | Hartmann | Aug 2011 | B2 |
7996292 | Pappas et al. | Aug 2011 | B2 |
8032447 | Pappas et al. | Oct 2011 | B2 |
8046814 | Badenell | Oct 2011 | B1 |
8099415 | Luo et al. | Jan 2012 | B2 |
8111326 | Talwar | Feb 2012 | B1 |
8169548 | Ryckman | May 2012 | B2 |
8185543 | Choudhry et al. | May 2012 | B1 |
8205148 | Sharpe | Jun 2012 | B1 |
8229841 | Pappas et al. | Jul 2012 | B2 |
8238718 | Toyama et al. | Aug 2012 | B2 |
8241628 | Diefenbach-Streiber et al. | Aug 2012 | B2 |
8266068 | Foss et al. | Sep 2012 | B1 |
8300785 | White | Oct 2012 | B2 |
8301550 | Pappas et al. | Oct 2012 | B2 |
8301790 | Morrison et al. | Oct 2012 | B2 |
8326133 | Lemmers | Dec 2012 | B2 |
8326853 | Richard et al. | Dec 2012 | B2 |
8331457 | Mizuno et al. | Dec 2012 | B2 |
8331760 | Butcher | Dec 2012 | B2 |
8339500 | Hattori et al. | Dec 2012 | B2 |
8358346 | Hikita et al. | Jan 2013 | B2 |
8387094 | Ho et al. | Feb 2013 | B1 |
8505054 | Kirley | Aug 2013 | B1 |
8508572 | Ryckman et al. | Aug 2013 | B2 |
8543450 | Pappas et al. | Sep 2013 | B2 |
8560482 | Miranda et al. | Oct 2013 | B2 |
8566880 | Dunker et al. | Oct 2013 | B2 |
8600211 | Nagano et al. | Dec 2013 | B2 |
8611422 | Yagnik et al. | Dec 2013 | B1 |
8620771 | Pappas et al. | Dec 2013 | B2 |
8633964 | Zhu | Jan 2014 | B1 |
8650114 | Pappas et al. | Feb 2014 | B2 |
8751231 | Larsen et al. | Jun 2014 | B1 |
8774604 | Torii et al. | Jul 2014 | B2 |
8792780 | Hattori | Jul 2014 | B2 |
8818175 | Dubin | Aug 2014 | B2 |
8824863 | Kitamura et al. | Sep 2014 | B2 |
8854457 | De Vleeschouwer et al. | Oct 2014 | B2 |
8856000 | Larsen et al. | Oct 2014 | B1 |
8902282 | Zhu | Dec 2014 | B1 |
8909542 | Montero et al. | Dec 2014 | B2 |
8913103 | Sargin | Dec 2014 | B1 |
8918532 | Lueth et al. | Dec 2014 | B2 |
8930260 | Pappas et al. | Jan 2015 | B2 |
8988528 | Hikita | Mar 2015 | B2 |
9009045 | Larsen et al. | Apr 2015 | B1 |
9015746 | Holmdahl et al. | Apr 2015 | B2 |
9026471 | Pappas et al. | May 2015 | B2 |
9026472 | Pappas et al. | May 2015 | B2 |
9047634 | Pappas et al. | Jun 2015 | B2 |
9064258 | Pappas et al. | Jun 2015 | B2 |
9070150 | Pappas et al. | Jun 2015 | B2 |
9092813 | Pappas et al. | Jul 2015 | B2 |
9106804 | Roberts | Aug 2015 | B2 |
9111579 | Meaney et al. | Aug 2015 | B2 |
9117201 | Kennell et al. | Aug 2015 | B2 |
9129640 | Hamer | Sep 2015 | B2 |
9135674 | Yagnik et al. | Sep 2015 | B1 |
9223781 | Pearson et al. | Dec 2015 | B2 |
9224156 | Moorer | Dec 2015 | B2 |
9305286 | Larsen et al. | Apr 2016 | B2 |
9305287 | Krishnamoorthy et al. | Apr 2016 | B2 |
9355151 | Cranfill et al. | May 2016 | B1 |
9378486 | Taylor et al. | Jun 2016 | B2 |
9398315 | Oks et al. | Jul 2016 | B2 |
9402050 | Recchia et al. | Jul 2016 | B1 |
9437247 | Pendergast et al. | Sep 2016 | B2 |
9438934 | Zhu | Sep 2016 | B1 |
9443556 | Cordell et al. | Sep 2016 | B2 |
9456174 | Boyle et al. | Sep 2016 | B2 |
9462301 | Paśko | Oct 2016 | B2 |
9501663 | Hopkins et al. | Nov 2016 | B1 |
9501944 | Boneta et al. | Nov 2016 | B2 |
9542452 | Ross et al. | Jan 2017 | B1 |
9544380 | Deng et al. | Jan 2017 | B2 |
9554160 | Han et al. | Jan 2017 | B2 |
9570107 | Boiman et al. | Feb 2017 | B2 |
9583144 | Ricciardi | Feb 2017 | B2 |
9600723 | Pantofaru | Mar 2017 | B1 |
9607655 | Bloch et al. | Mar 2017 | B2 |
9652745 | Taylor et al. | May 2017 | B2 |
9653115 | Bloch et al. | May 2017 | B2 |
9666194 | Ondeck et al. | May 2017 | B2 |
9684435 | Carr et al. | Jun 2017 | B2 |
9693019 | Fluhr et al. | Jun 2017 | B1 |
9710790 | Taylor et al. | Jul 2017 | B2 |
9723223 | Banta et al. | Aug 2017 | B1 |
9747573 | Shaburov et al. | Aug 2017 | B2 |
9792955 | Fleischhauer et al. | Oct 2017 | B2 |
9805767 | Strickland | Oct 2017 | B1 |
9823809 | Roos | Nov 2017 | B2 |
9876963 | Nakamura et al. | Jan 2018 | B2 |
9881647 | McCauley et al. | Jan 2018 | B2 |
9936185 | Delvaux et al. | Apr 2018 | B2 |
9940508 | Kaps et al. | Apr 2018 | B2 |
9940973 | Roberts et al. | Apr 2018 | B2 |
9979921 | Holmes | May 2018 | B2 |
10008239 | Eris | Jun 2018 | B2 |
10019653 | Wilf et al. | Jul 2018 | B2 |
10021377 | Newton et al. | Jul 2018 | B2 |
10108932 | Sung et al. | Oct 2018 | B2 |
10115038 | Hazur et al. | Oct 2018 | B2 |
10147460 | Ullrich | Dec 2018 | B2 |
10152695 | Chiu et al. | Dec 2018 | B1 |
10152696 | Thankappan et al. | Dec 2018 | B2 |
10168866 | Wakeen et al. | Jan 2019 | B2 |
10178427 | Huang | Jan 2019 | B2 |
10235008 | Lee et al. | Mar 2019 | B2 |
10242345 | Taylor et al. | Mar 2019 | B2 |
10268736 | Balasia et al. | Apr 2019 | B1 |
10296873 | Balasia et al. | May 2019 | B1 |
10310361 | Featherstone | Jun 2019 | B1 |
10318927 | Champaneria | Jun 2019 | B2 |
10325243 | Ross et al. | Jun 2019 | B1 |
10325517 | Nielson et al. | Jun 2019 | B2 |
10331764 | Rao et al. | Jun 2019 | B2 |
10346805 | Taylor et al. | Jul 2019 | B2 |
10346928 | Li et al. | Jul 2019 | B2 |
10353720 | Wich-vila | Jul 2019 | B1 |
10433030 | Packard et al. | Oct 2019 | B2 |
10438135 | Larsen et al. | Oct 2019 | B2 |
10489439 | Calapodescu et al. | Nov 2019 | B2 |
10607188 | Kyllonen et al. | Mar 2020 | B2 |
10657498 | Dey et al. | May 2020 | B2 |
10694097 | Shirakyan | Jun 2020 | B1 |
10728443 | Olshansky | Jul 2020 | B1 |
10735396 | Krstic et al. | Aug 2020 | B2 |
10748118 | Fang | Aug 2020 | B2 |
10796217 | Wu | Oct 2020 | B2 |
10825480 | Marco | Nov 2020 | B2 |
10963841 | Olshansky | Mar 2021 | B2 |
11023735 | Olshansky | Jun 2021 | B1 |
11127232 | Olshansky | Sep 2021 | B2 |
11144882 | Olshansky | Oct 2021 | B1 |
11184578 | Olshansky | Nov 2021 | B2 |
11457140 | Olshansky | Sep 2022 | B2 |
11636678 | Olshansky | Apr 2023 | B2 |
11720859 | Olshansky | Aug 2023 | B2 |
11783645 | Olshanksy | Oct 2023 | B2 |
20010001160 | Shoff et al. | May 2001 | A1 |
20010038746 | Hughes et al. | Nov 2001 | A1 |
20020097984 | Abecassis | Jul 2002 | A1 |
20020113879 | Battle et al. | Aug 2002 | A1 |
20020122659 | McGrath et al. | Sep 2002 | A1 |
20020191071 | Rui et al. | Dec 2002 | A1 |
20030005429 | Colsey | Jan 2003 | A1 |
20030027611 | Recard | Feb 2003 | A1 |
20030189589 | Leblanc et al. | Oct 2003 | A1 |
20030194211 | Abecassis | Oct 2003 | A1 |
20040033061 | Hughes et al. | Feb 2004 | A1 |
20040186743 | Cordero | Sep 2004 | A1 |
20040264919 | Taylor et al. | Dec 2004 | A1 |
20050095569 | Franklin | May 2005 | A1 |
20050137896 | Pentecost et al. | Jun 2005 | A1 |
20050187765 | Kim et al. | Aug 2005 | A1 |
20050232462 | Vallone et al. | Oct 2005 | A1 |
20050235033 | Doherty | Oct 2005 | A1 |
20050271251 | Russell et al. | Dec 2005 | A1 |
20060042483 | Work et al. | Mar 2006 | A1 |
20060045179 | Mizuno et al. | Mar 2006 | A1 |
20060100919 | Levine | May 2006 | A1 |
20060116555 | Pavlidis et al. | Jun 2006 | A1 |
20060229896 | Rosen et al. | Oct 2006 | A1 |
20070088601 | Money et al. | Apr 2007 | A1 |
20070124161 | Mueller et al. | May 2007 | A1 |
20070237502 | Ryckman et al. | Oct 2007 | A1 |
20070288245 | Benjamin | Dec 2007 | A1 |
20080086504 | Sanders et al. | Apr 2008 | A1 |
20080169929 | Albertson et al. | Jul 2008 | A1 |
20090083103 | Basser | Mar 2009 | A1 |
20090083670 | Roos | Mar 2009 | A1 |
20090087161 | Roberts | Apr 2009 | A1 |
20090144785 | Walker et al. | Jun 2009 | A1 |
20090171899 | Chittoor et al. | Jul 2009 | A1 |
20090248685 | Pasqualoni et al. | Oct 2009 | A1 |
20090258334 | Pyne | Oct 2009 | A1 |
20100086283 | Ramachandran et al. | Apr 2010 | A1 |
20100143329 | Larsen | Jun 2010 | A1 |
20100183280 | Beauregard | Jul 2010 | A1 |
20100191561 | Jeng et al. | Jul 2010 | A1 |
20100199228 | Latta et al. | Aug 2010 | A1 |
20100223109 | Hawn et al. | Sep 2010 | A1 |
20100325307 | Roos | Dec 2010 | A1 |
20110055098 | Stewart | Mar 2011 | A1 |
20110055930 | Flake et al. | Mar 2011 | A1 |
20110060671 | Erbey et al. | Mar 2011 | A1 |
20110076656 | Scott et al. | Mar 2011 | A1 |
20110088081 | Folkesson et al. | Apr 2011 | A1 |
20110135279 | Leonard | Jun 2011 | A1 |
20120036127 | Work et al. | Feb 2012 | A1 |
20120053996 | Galbavy | Mar 2012 | A1 |
20120084649 | Dowdell et al. | Apr 2012 | A1 |
20120114246 | Weitzman | May 2012 | A1 |
20120130771 | Kannan et al. | May 2012 | A1 |
20120257875 | Sharpe | Oct 2012 | A1 |
20120271774 | Clegg | Oct 2012 | A1 |
20130007670 | Roos | Jan 2013 | A1 |
20130016815 | Odinak et al. | Jan 2013 | A1 |
20130016816 | Odinak et al. | Jan 2013 | A1 |
20130016823 | Odinak et al. | Jan 2013 | A1 |
20130024105 | Thomas | Jan 2013 | A1 |
20130111401 | Newman et al. | May 2013 | A1 |
20130121668 | Meaney et al. | May 2013 | A1 |
20130124998 | Pendergast et al. | May 2013 | A1 |
20130124999 | Agnoli et al. | May 2013 | A1 |
20130125000 | Fleischhauer et al. | May 2013 | A1 |
20130176430 | Zhu et al. | Jul 2013 | A1 |
20130177296 | Geisner et al. | Jul 2013 | A1 |
20130212033 | Work et al. | Aug 2013 | A1 |
20130212180 | Work et al. | Aug 2013 | A1 |
20130216206 | Dubin | Aug 2013 | A1 |
20130218688 | Roos | Aug 2013 | A1 |
20130222601 | Engstroem et al. | Aug 2013 | A1 |
20130226578 | Bolton et al. | Aug 2013 | A1 |
20130226674 | Field et al. | Aug 2013 | A1 |
20130226910 | Work et al. | Aug 2013 | A1 |
20130254192 | Work et al. | Sep 2013 | A1 |
20130259447 | Sathish et al. | Oct 2013 | A1 |
20130266925 | Nunamaker et al. | Oct 2013 | A1 |
20130268452 | MacEwen et al. | Oct 2013 | A1 |
20130283378 | Costigan et al. | Oct 2013 | A1 |
20130290210 | Cline et al. | Oct 2013 | A1 |
20130290325 | Work et al. | Oct 2013 | A1 |
20130290420 | Work et al. | Oct 2013 | A1 |
20130290448 | Work et al. | Oct 2013 | A1 |
20130297589 | Work et al. | Nov 2013 | A1 |
20130332381 | Clark et al. | Dec 2013 | A1 |
20130332382 | Lapasta et al. | Dec 2013 | A1 |
20140036023 | Croen et al. | Feb 2014 | A1 |
20140089217 | McGovern et al. | Mar 2014 | A1 |
20140092254 | Mughal et al. | Apr 2014 | A1 |
20140123177 | Kim et al. | May 2014 | A1 |
20140125703 | Roveta et al. | May 2014 | A1 |
20140143165 | Posse et al. | May 2014 | A1 |
20140153902 | Pearson et al. | Jun 2014 | A1 |
20140186004 | Hamer | Jul 2014 | A1 |
20140191939 | Penn et al. | Jul 2014 | A1 |
20140192200 | Zagron | Jul 2014 | A1 |
20140198196 | Howard et al. | Jul 2014 | A1 |
20140214703 | Moody | Jul 2014 | A1 |
20140214709 | Greaney | Jul 2014 | A1 |
20140245146 | Roos | Aug 2014 | A1 |
20140258288 | Work et al. | Sep 2014 | A1 |
20140270706 | Pasko | Sep 2014 | A1 |
20140278506 | Rogers et al. | Sep 2014 | A1 |
20140278683 | Kennell et al. | Sep 2014 | A1 |
20140279634 | Seeker | Sep 2014 | A1 |
20140282709 | Hardy et al. | Sep 2014 | A1 |
20140317009 | Bilodeau et al. | Oct 2014 | A1 |
20140317126 | Work et al. | Oct 2014 | A1 |
20140325359 | Vehovsky et al. | Oct 2014 | A1 |
20140325373 | Kramer et al. | Oct 2014 | A1 |
20140327779 | Eronen et al. | Nov 2014 | A1 |
20140330734 | Sung et al. | Nov 2014 | A1 |
20140334670 | Guigues et al. | Nov 2014 | A1 |
20140336942 | Pe'Er et al. | Nov 2014 | A1 |
20140337900 | Hurley | Nov 2014 | A1 |
20140356822 | Hoque et al. | Dec 2014 | A1 |
20140358810 | Hardtke et al. | Dec 2014 | A1 |
20140359439 | Lyren | Dec 2014 | A1 |
20150003603 | Odinak et al. | Jan 2015 | A1 |
20150003605 | Odinak et al. | Jan 2015 | A1 |
20150006422 | Carter et al. | Jan 2015 | A1 |
20150012453 | Odinak et al. | Jan 2015 | A1 |
20150046357 | Danson et al. | Feb 2015 | A1 |
20150063775 | Nakamura et al. | Mar 2015 | A1 |
20150067723 | Bloch et al. | Mar 2015 | A1 |
20150099255 | Sinem Aslan et al. | Apr 2015 | A1 |
20150100702 | Krishna et al. | Apr 2015 | A1 |
20150127565 | Chevalier et al. | May 2015 | A1 |
20150139601 | Mate et al. | May 2015 | A1 |
20150154564 | Moon et al. | Jun 2015 | A1 |
20150155001 | Kikugawa et al. | Jun 2015 | A1 |
20150170303 | Geritz et al. | Jun 2015 | A1 |
20150199646 | Taylor et al. | Jul 2015 | A1 |
20150201134 | Carr et al. | Jul 2015 | A1 |
20150205800 | Work et al. | Jul 2015 | A1 |
20150205872 | Work et al. | Jul 2015 | A1 |
20150206102 | Cama et al. | Jul 2015 | A1 |
20150206103 | Larsen et al. | Jul 2015 | A1 |
20150222815 | Wang et al. | Aug 2015 | A1 |
20150228306 | Roberts et al. | Aug 2015 | A1 |
20150242707 | Wilf et al. | Aug 2015 | A1 |
20150269165 | Work et al. | Sep 2015 | A1 |
20150269529 | Kyllonen et al. | Sep 2015 | A1 |
20150269530 | Work et al. | Sep 2015 | A1 |
20150271289 | Work et al. | Sep 2015 | A1 |
20150278223 | Work et al. | Oct 2015 | A1 |
20150278290 | Work et al. | Oct 2015 | A1 |
20150278964 | Work et al. | Oct 2015 | A1 |
20150302158 | Morris et al. | Oct 2015 | A1 |
20150324698 | Karaoguz et al. | Nov 2015 | A1 |
20150339939 | Gustafson et al. | Nov 2015 | A1 |
20150356512 | Bradley | Dec 2015 | A1 |
20150380052 | Hamer | Dec 2015 | A1 |
20160005029 | Ivey et al. | Jan 2016 | A1 |
20160036976 | Odinak et al. | Feb 2016 | A1 |
20160104096 | Ovick et al. | Apr 2016 | A1 |
20160116827 | Tarres Bolos | Apr 2016 | A1 |
20160117942 | Marino et al. | Apr 2016 | A1 |
20160139562 | Crowder et al. | May 2016 | A1 |
20160154883 | Boerner | Jun 2016 | A1 |
20160155475 | Hamer | Jun 2016 | A1 |
20160180234 | Siebach et al. | Jun 2016 | A1 |
20160180883 | Hamer | Jun 2016 | A1 |
20160219264 | Delvaux et al. | Jul 2016 | A1 |
20160225409 | Eris | Aug 2016 | A1 |
20160225410 | Lee et al. | Aug 2016 | A1 |
20160247537 | Ricciardi | Aug 2016 | A1 |
20160267436 | Silber et al. | Sep 2016 | A1 |
20160313892 | Roos | Oct 2016 | A1 |
20160323608 | Bloch et al. | Nov 2016 | A1 |
20160330398 | Recchia et al. | Nov 2016 | A1 |
20160364692 | Bhaskaran et al. | Dec 2016 | A1 |
20170024614 | Sanil et al. | Jan 2017 | A1 |
20170026667 | Pasko | Jan 2017 | A1 |
20170039525 | Seidle et al. | Feb 2017 | A1 |
20170076751 | Hamer | Mar 2017 | A9 |
20170134776 | Ranjeet et al. | May 2017 | A1 |
20170148488 | Li et al. | May 2017 | A1 |
20170164013 | Abramov et al. | Jun 2017 | A1 |
20170164014 | Abramov et al. | Jun 2017 | A1 |
20170164015 | Abramov et al. | Jun 2017 | A1 |
20170171602 | Qu | Jun 2017 | A1 |
20170178688 | Ricciardi | Jun 2017 | A1 |
20170195491 | Odinak et al. | Jul 2017 | A1 |
20170206504 | Taylor et al. | Jul 2017 | A1 |
20170213190 | Hazan | Jul 2017 | A1 |
20170213573 | Takeshita et al. | Jul 2017 | A1 |
20170227353 | Brunner | Aug 2017 | A1 |
20170236073 | Borisyuk et al. | Aug 2017 | A1 |
20170244894 | Aggarwal et al. | Aug 2017 | A1 |
20170244984 | Aggarwal et al. | Aug 2017 | A1 |
20170244991 | Aggarwal et al. | Aug 2017 | A1 |
20170262706 | Sun et al. | Sep 2017 | A1 |
20170264958 | Hutten | Sep 2017 | A1 |
20170293413 | Matsushita et al. | Oct 2017 | A1 |
20170316806 | Warren et al. | Nov 2017 | A1 |
20170332044 | Marlow et al. | Nov 2017 | A1 |
20170353769 | Husain et al. | Dec 2017 | A1 |
20170372748 | McCauley et al. | Dec 2017 | A1 |
20180011621 | Roos | Jan 2018 | A1 |
20180025303 | Janz | Jan 2018 | A1 |
20180054641 | Hall et al. | Feb 2018 | A1 |
20180070045 | Holmes | Mar 2018 | A1 |
20180074681 | Roos | Mar 2018 | A1 |
20180082238 | Shani | Mar 2018 | A1 |
20180096307 | Fortier et al. | Apr 2018 | A1 |
20180109737 | Nakamura et al. | Apr 2018 | A1 |
20180109826 | McCoy et al. | Apr 2018 | A1 |
20180110460 | Danson et al. | Apr 2018 | A1 |
20180114154 | Bae | Apr 2018 | A1 |
20180130497 | McCauley et al. | May 2018 | A1 |
20180132014 | Khazanov et al. | May 2018 | A1 |
20180150604 | Arena et al. | May 2018 | A1 |
20180158027 | Venigalla | Jun 2018 | A1 |
20180182436 | Ullrich | Jun 2018 | A1 |
20180191955 | Aoki et al. | Jul 2018 | A1 |
20180218238 | Viirre et al. | Aug 2018 | A1 |
20180226102 | Roberts et al. | Aug 2018 | A1 |
20180227501 | King | Aug 2018 | A1 |
20180232751 | Terhark et al. | Aug 2018 | A1 |
20180247271 | Van Hoang et al. | Aug 2018 | A1 |
20180253697 | Sung et al. | Sep 2018 | A1 |
20180268868 | Salokannel et al. | Sep 2018 | A1 |
20180270613 | Park | Sep 2018 | A1 |
20180277093 | Carr et al. | Sep 2018 | A1 |
20180295428 | Bi et al. | Oct 2018 | A1 |
20180302680 | Cormican | Oct 2018 | A1 |
20180308521 | Iwamoto | Oct 2018 | A1 |
20180316947 | Todd | Nov 2018 | A1 |
20180336528 | Carpenter et al. | Nov 2018 | A1 |
20180336930 | Takahashi | Nov 2018 | A1 |
20180350405 | Marco | Dec 2018 | A1 |
20180353769 | Smith et al. | Dec 2018 | A1 |
20180374251 | Mitchell et al. | Dec 2018 | A1 |
20180376225 | Jones et al. | Dec 2018 | A1 |
20190005373 | Nims et al. | Jan 2019 | A1 |
20190019157 | Saha et al. | Jan 2019 | A1 |
20190057356 | Larsen et al. | Feb 2019 | A1 |
20190087558 | Mercury et al. | Mar 2019 | A1 |
20190096307 | Liang et al. | Mar 2019 | A1 |
20190141033 | Kaafar et al. | May 2019 | A1 |
20190220824 | Liu | Jul 2019 | A1 |
20190244176 | Chuang et al. | Aug 2019 | A1 |
20190259002 | Balasia et al. | Aug 2019 | A1 |
20190295040 | Clines | Sep 2019 | A1 |
20190311488 | Sareen | Oct 2019 | A1 |
20190325064 | Mathiesen et al. | Oct 2019 | A1 |
20200012350 | Tay | Jan 2020 | A1 |
20200110786 | Kim | Apr 2020 | A1 |
20200126545 | Kakkar et al. | Apr 2020 | A1 |
20200143329 | Gamaliel | May 2020 | A1 |
20200197793 | Yeh et al. | Jun 2020 | A1 |
20200311163 | Ma et al. | Oct 2020 | A1 |
20200311682 | Olshansky | Oct 2020 | A1 |
20200311953 | Olshansky | Oct 2020 | A1 |
20200396376 | Olshansky | Dec 2020 | A1 |
20210035047 | Mossoba et al. | Feb 2021 | A1 |
20210158663 | Buchholz et al. | May 2021 | A1 |
20210174308 | Olshansky | Jun 2021 | A1 |
20210233262 | Olshansky | Jul 2021 | A1 |
20210312184 | Olshansky | Oct 2021 | A1 |
20210314521 | Olshansky | Oct 2021 | A1 |
20220005295 | Olshansky | Jan 2022 | A1 |
20220019806 | Olshansky | Jan 2022 | A1 |
20220092548 | Olshansky | Mar 2022 | A1 |
Number | Date | Country |
---|---|---|
2002310201 | Mar 2003 | AU |
2007249205 | Mar 2013 | AU |
2206105 | Dec 2000 | CA |
2763634 | Dec 2012 | CA |
109146430 | Jan 2019 | CN |
1376584 | Jan 2004 | EP |
1566748 | Aug 2005 | EP |
1775949 | Dec 2007 | EP |
1954041 | Aug 2008 | EP |
2009258175 | Nov 2009 | JP |
2019016192 | Jan 2019 | JP |
9703366 | Jan 1997 | WO |
9713366 | Apr 1997 | WO |
9713367 | Apr 1997 | WO |
9828908 | Jul 1998 | WO |
9841978 | Sep 1998 | WO |
9905865 | Feb 1999 | WO |
0133421 | May 2001 | WO |
0117250 | Sep 2002 | WO |
03003725 | Jan 2003 | WO |
2004062563 | Jul 2004 | WO |
2005114377 | Dec 2005 | WO |
2006103578 | Oct 2006 | WO |
2006129496 | Dec 2006 | WO |
2007039994 | Apr 2007 | WO |
2007097218 | Aug 2007 | WO |
2008029803 | Mar 2008 | WO |
2008039407 | Apr 2008 | WO |
2009042858 | Apr 2009 | WO |
2009042900 | Apr 2009 | WO |
2009075190 | Jun 2009 | WO |
2009116955 | Sep 2009 | WO |
2009157446 | Dec 2009 | WO |
2010055624 | May 2010 | WO |
2010116998 | Oct 2010 | WO |
2011001180 | Jan 2011 | WO |
2011007011 | Jan 2011 | WO |
2011035419 | Mar 2011 | WO |
2011129578 | Oct 2011 | WO |
2011136571 | Nov 2011 | WO |
2012002896 | Jan 2012 | WO |
2012068433 | May 2012 | WO |
2012039959 | Jun 2012 | WO |
2012089855 | Jul 2012 | WO |
2013026095 | Feb 2013 | WO |
2013039351 | Mar 2013 | WO |
2013074207 | May 2013 | WO |
2013088208 | Jun 2013 | WO |
2013093176 | Jun 2013 | WO |
2013131134 | Sep 2013 | WO |
2013165923 | Nov 2013 | WO |
2014089362 | Jun 2014 | WO |
2014093668 | Jun 2014 | WO |
2014152021 | Sep 2014 | WO |
2014153665 | Oct 2014 | WO |
2014163283 | Oct 2014 | WO |
2014164549 | Oct 2014 | WO |
2015031946 | Apr 2015 | WO |
2015071490 | May 2015 | WO |
2015109290 | Jul 2015 | WO |
2016031431 | Mar 2016 | WO |
2016053522 | Apr 2016 | WO |
2016073206 | May 2016 | WO |
2016123057 | Aug 2016 | WO |
2016138121 | Sep 2016 | WO |
2016138161 | Sep 2016 | WO |
2016186798 | Nov 2016 | WO |
2016189348 | Dec 2016 | WO |
2017022641 | Feb 2017 | WO |
2017042831 | Mar 2017 | WO |
2017049612 | Mar 2017 | WO |
2017051063 | Mar 2017 | WO |
2017096271 | Jun 2017 | WO |
2017130810 | Aug 2017 | WO |
2017150772 | Sep 2017 | WO |
2017192125 | Nov 2017 | WO |
2018042175 | Mar 2018 | WO |
2018094443 | May 2018 | WO |
2019226051 | Nov 2019 | WO |
2020198230 | Oct 2020 | WO |
2020198240 | Oct 2020 | WO |
2020198363 | Oct 2020 | WO |
2021108564 | Jun 2021 | WO |
2021202293 | Oct 2021 | WO |
2021202300 | Oct 2021 | WO |
Entry |
---|
“International Preliminary Report on Patentability,” for PCT Application No. PCT/US2021/024450 dated Oct. 13, 2022 (11 pages). |
“Non-Final Office Action,” for U.S. Appl. No. 17/486,489 dated Oct. 20, 2022 (56 pages). |
“Response to Non Final Office Action,” for U.S. Appl. No. 17/490,713, filed Nov. 15, 2022 (8 pages). |
Dragsnes, Steinar J. “Development of a Synchronous, Distributed and Agent-Supported Framework: Exemplified by a Mind Map Application,” MS Thesis; The University of Bergen, 2003 (156 pages). |
Rizzo, Albert, et al. “Detection and Computational Analysis of Psychological Signals Using a Virtual Human Interviewing Agent,” Journal of Pain Management 9.3 (2016): 311-321 (10 pages). |
Sen, Taylan, et al. “Automated Dyadic Data Recorder (ADDR) Framework and Analysis of Facial Cues in Deceptive Communication,” Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 1.4 (2018): 1-22 (11 pages). |
“Air Canada Keeping Your Points Active Aeroplan,” https://www.aircanada.com/us/en/aco/home/aeroplan/your-aeroplan/inactivity-policy.html (6 pages). |
“American Express Frequently Asked Question: Why were Membership Rewards points forfeited and how can I reinstate them?”, https://www.americanexpress.com/us/customer-service/faq.membership-rewards-points-forfeiture.html (2 pages). |
“DaXtra Parser (CVX) Technical Specifications,” DaXtra Parser Spec. available at URL: <https://cvxdemo.daxtra.com/cvx/download/Parser%20Technical%20Specifications.pdf> at least as early as Feb. 25, 2021 (3 pages). |
“Final Office Action,” for U.S. Appl. No. 16/366,703 dated Nov. 19, 2019 (25 pages). |
“Final Office Action,” for U.S. Appl. No. 16/696,781 dated Oct. 8, 2020 (26 pages). |
“Final Office Action,” for U.S. Appl. No. 16/828,578 dated Jan. 14, 2021 (27 pages). |
“Final Office Action,” for U.S. Appl. No. 16/910,986 dated Jan. 25, 2022 (40 pages). |
“Final Office Action,” for U.S. Appl. No. 17/230,692 dated Aug. 24, 2022 (31 pages). |
“International Preliminary Report on Patentability,” for PCT Application No. PCT/US2020/024470 dated Oct. 7, 2021 (9 pages). |
“International Preliminary Report on Patentability,” for PCT Application No. PCT/US2020/024488 dated Oct. 7, 2021 (9 pages). |
“International Preliminary Report on Patentability,” for PCT Application No. PCT/US2020/024722 dated Oct. 7, 2021 (8 pages). |
“International Preliminary Report on Patentability,” for PCT Application No. PCT/US2020/062246 dated Jun. 9, 2022 (12 pages). |
“International Search Report and Written Opinion,” for PCT Application No. PCT/US2020/024470 dated Jul. 9, 2020 (13 pages). |
“International Search Report and Written Opinion,” for PCT Application No. PCT/US2020/024488 dated May 19, 2020 (14 pages). |
“International Search Report and Written Opinion,” for PCT Application No. PCT/US2020/024722 dated Jul. 10, 2020 (13 pages). |
“International Search Report and Written Opinion,” for PCT Application No. PCT/US2020/062246 dated Apr. 1, 2021 (18 pages). |
“International Search Report and Written Opinion,” for PCT Application No. PCT/US2021/024423 dated Jun. 16, 2021 (13 pages). |
“International Search Report and Written Opinion,” for PCT Application No. PCT/US2021/024450 dated Jun. 4, 2021 (14 pages). |
“Invitation to Pay Additional Fees,” for PCT Application No. PCT/US2020/062246 dated Feb. 11, 2021 (14 pages). |
“Non-Final Office Action,” for U.S. Appl. No. 16/366,703 dated Jun. 10, 2019 (28 pages). |
“Non-Final Office Action,” for U.S. Appl. No. 16/366,703 dated May 6, 2020 (65 pages). |
“Non-Final Office Action,” for U.S. Appl. No. 16/366,746 dated Aug. 22, 2019 (53 pages). |
“Non-Final Office Action,” for U.S. Appl. No. 16/696,781 dated Apr. 7, 2020 (43 pages). |
“Non-Final Office Action,” for U.S. Appl. No. 16/696,781 dated Jan. 26, 2021 (28 pages). |
“Non-Final Office Action,” for U.S. Appl. No. 16/828,578 dated Sep. 24, 2020 (39 pages). |
“Non-Final Office Action,” for U.S. Appl. No. 16/910,986 dated Jun. 23, 2021 (70 pages). |
“Non-Final Office Action,” for U.S. Appl. No. 17/025,902 dated Jan. 29, 2021 (59 pages). |
“Non-Final Office Action,” for U.S. Appl. No. 17/180,381 dated Sep. 19, 2022 (64 pages). |
“Non-Final Office Action,” for U.S. Appl. No. 17/230,692 dated Feb. 15, 2022 (58 pages). |
“Non-Final Office Action,” for U.S. Appl. No. 17/490,713 dated Aug. 16, 2022 (41 pages). |
“Notice of Allowance,” for U.S. Appl. No. 16/366,703 dated Nov. 18, 2020 (19 pages). |
“Notice of Allowance,” for U.S. Appl. No. 16/366,746 dated Mar. 12, 2020 (40 pages). |
“Notice of Allowance,” for U.S. Appl. No. 16/696,781 dated May 17, 2021 (20 pages). |
“Notice of Allowance,” for U.S. Appl. No. 16/910,986 dated May 20, 2022 (17 pages). |
“Notice of Allowance,” for U.S. Appl. No. 16/931,964 dated Feb. 2, 2021 (42 pages). |
“Notice of Allowance,” for U.S. Appl. No. 17/025,902 dated May 11, 2021 (20 pages). |
“Notice of Allowance,” for U.S. Appl. No. 17/212,688 dated Jun. 9, 2021 (39 pages). |
“Nurse Resumes,” Post Job Free Resume Search Results for “nurse” available at URL <https://www.postjobfree.com/resumes?q=nurse&l=&radius=25> at least as early as Jan. 26, 2021 (2 pages). |
“Nurse,” LiveCareer Resume Search results available online at URL <https://www.livecareer.com/resume-search/search?jt=nurse> website published as early as Dec. 21, 2017 (4 pages). |
“Response to Advisory Action,” for U.S. Appl. No. 16/696,781, filed Jan. 8, 2021 (22 pages). |
“Response to Final Office Action,” for U.S. Appl. No. 16/366,703, filed Feb. 18, 2020 (19 pages). |
“Response to Final Office Action,” for U.S. Appl. No. 16/696,781, filed Dec. 8, 2020 (18 pages). |
“Response to Final Office Action,” for U.S. Appl. No. 16/910,986, filed Apr. 20, 2022 (13 pages). |
“Response to Non Final Office Action,” for U.S. Appl. No. 16/366,746, filed Nov. 21, 2019 (12 pages). |
“Response to Non Final Office Action,” for U.S. Appl. No. 16/696,781, filed Apr. 23, 2021 (16 pages). |
“Response to Non Final Office Action,” for U.S. Appl. No. 16/696,781, filed Jul. 6, 2020 (14 pages). |
“Response to Non Final Office Action,” for U.S. Appl. No. 16/828,578, filed Dec. 22, 2020 (17 pages). |
“Response to Non Final Office Action,” for U.S. Appl. No. 16/910,986, filed Sep. 30, 2021 (18 pages). |
“Response to Non Final Office Action,” for U.S. Appl. No. 17/025,902, filed Apr. 28, 2021 (16 pages). |
“Response to Non Final Office Action,” for U.S. Appl. No. 17/230,692, filed Jun. 14, 2022 (15 pages). |
“Response to Non-Final Rejection,” dated May 6, 2020 for U.S. Appl. No. 16/366,703, submitted via EFS-Web on Sep. 8, 2020, 25 pages. |
“Resume Database,” Mighty Recruiter Resume Database available online at URL <https://www.mightyrecruiter.com/features/resume-database> at least as early as Sep. 4, 2017 (6 pages). |
“Resume Library,” Online job board available at Resume-library.com at least as early as Aug. 6, 2019 (6 pages). |
“Television Studio,” Wikipedia, published Mar. 8, 2019 and retrieved May 27, 2021 from URL <https://en.wikipedia.org/w/index.php?title=Television_studio&oldid=886710983> (3 pages). |
“Understanding Multi-Dimensionality in Vector Space Modeling,” Pythonic Excursions article published Apr. 16, 2019, accessible at URL <https://aegis4048.github.io/understanding_multi-dimensionality_in_vector_space_modeling> (29 pages). |
Advantage Video Systems “Jeffrey Stansfield of AVS interviews rep about Air-Hush products at the 2019 NAMM Expo,” YouTube video, available at https://www.youtube.com/watch?v=nWzrM99qk_o, accessed Jan. 17, 2021. |
Alley, E. “Professional Autonomy in Video Relay Service Interpreting: Perceptions of American Sign Language-English Interpreters,” (Order No. 10304259). Available from ProQuest Dissertations and Theses Professional. (Year: 2016), 209 pages. |
Bishop, Todd “Microsoft patents tech to score meetings using body language, facial expressions, other data,” Article published Nov. 28, 2020 at URL <https://www.geekwire.com/author/todd/> (7 pages). |
Brocardo, Marcelo Luiz, et al. “Verifying Online User Identity using Stylometric Analysis for Short Messages,” Journal of Networks, vol. 9, No. 12, Dec. 2014, pp. 3347-3355. |
Hughes, K. “Corporate Channels: How American Business and Industry Made Television Useful,” (Order No. 10186420). Available from ProQuest Dissertations and Theses Professional. (Year: 2015), 499 pages. |
Jakubowski, Kelly, et al. “Extracting Coarse Body Movements from Video in Music Performance: A Comparison of Automated Computer Vision Techniques with Motion Capture Data,” Front. Digit. Humanit. 2017, 4:9 (8 pages). |
Johnston, A. M., et al. “A Mediated Discourse Analysis of Immigration Gatekeeping Interviews,” (Order No. 3093235). Available from ProQuest Dissertations and Theses Professional (Year: 2003), 262 pages. |
Lai, Kenneth, et al. “Decision Support for Video-based Detection of Flu Symptoms,” Biometric Technologies Laboratory, Department of Electrical and Computer Engineering, University of Calgary, Canada, Aug. 24, 2020, available at URL <https://ucalgary.ca/labs/biometric-technologies/publications> (8 pages). |
Liu, Weihua, et al. “RGBD Video Based Human Hand Trajectory Tracking and Gesture Recognition System,” Mathematical Problems in Engineering, vol. 2015, Article ID 863732 (16 pages). |
Luenendonk, Martin “The Secrets to Interviewing for a Role That's Slightly Out of Reach,” Cleverism Article available at URL <https://www.cleverism.com/interviewing-for-a-role-thats-slightly-out-of-reach/> last updated Sep. 25, 2019 (13 pages). |
Pentland, S. J. “Human-Analytics in Information Systems Research and Applications in Personnel Selection,” (Order No. 10829600). Available from ProQuest Dissertations and Theses Professional. (Year: 2018), 158 pages. |
Ramanarayanan, Vikram, et al. “Evaluating Speech, Face, Emotion and Body Movement Time-series Features for Automated Multimodal Presentation Scoring,” In Proceedings of the 2015 ACM International Conference on Multimodal Interaction (ICMI 2015). Association for Computing Machinery, New York, NY, USA, 23-30 (8 pages). |
Randhavane, Tanmay, et al. “Identifying Emotions from Walking Using Affective and Deep Features,” Jan. 9, 2020, article available at Cornell University website URL <https://arxiv.org/abs/1906.11884v4> (15 pages). |
Swanepoel, De Wet, et al. “A Systematic Review of Telehealth Applications in Audiology,” Telemedicine and e-Health 16.2 (2010): 181-200 (20 pages). |
Wang, Jenny “How to Build a Resume Recommender like the Applicant Tracking System (ATS),” Towards Data Science article published Jun. 25, 2020, accessible at URL <https://towardsdatascience.com/resume-screening-tool-resume-recommendation-engine-in-a-nutshell-53fcf6e6559b> (14 pages). |
Yun, Jaeseok, et al. “Human Movement Detection and Identification Using Pyroelectric Infrared Sensors,” Sensors 2014, 14, 8057-8081 (25 pages). |
“Non-Final Office Action,” for U.S. Appl. No. 17/318,774 dated Apr. 5, 2023 (60 pages). |
“Notice of Allowance,” for U.S. Appl. No. 17/476,014 dated Apr. 28, 2023 (10 pages). |
“Notice of Allowance,” for U.S. Appl. No. 17/486,489, dated Mar. 17, 2023 (18 pages). |
“Response to Final Office Action,” for U.S. Appl. No. 17/180,381, filed Apr. 24, 2023 (15 pages). |
“Response to Non Final Office Action,” for U.S. Appl. No. 17/476,014, filed Apr. 18, 2023 (16 pages). |
“Final Office Action,” for U.S. Appl. No. 17/180,381 dated Jan. 23, 2023 (25 pages). |
“Non-Final Office Action,” for U.S. Appl. No. 17/476,014 dated Jan. 18, 2023 (62 pages). |
“Notice of Allowance,” for U.S. Appl. No. 17/490,713 dated Dec. 6, 2022 (9 pages). |
“Response to Non-Final Office Action,” for U.S. Appl. No. 17/180,381, filed Dec. 15, 2022 (13 pages). |
“Response to Non-Final Office Action,” for U.S. Appl. No. 17/486,489, filed Jan. 18, 2023 (11 pages). |
“Non-Final Office Action,” for U.S. Appl. No. 17/180,381 dated Jul. 14, 2023 (15 pages). |
“Notice of Allowance,” for U.S. Appl. No. 17/318,774 dated Aug. 16, 2023 (11 pages). |
Number | Date | Country
---|---|---|
20230091194 A1 | Mar 2023 | US |
Relation | Number | Date | Country
---|---|---|---|
Parent | 16910986 | Jun 2020 | US
Child | 17951633 | | US
Parent | 16366746 | Mar 2019 | US
Child | 16910986 | | US