Various embodiments relate to systems and methods for linking and connecting two login entities over a computer network in real time. More specifically, the systems and methods automatically analyze and assign traits based on the actions of one login entity. If appropriate traits are assigned during the analysis, a live connection is established between that one login entity and a second login entity.
Various embodiments provide a method of connecting an employer with a candidate. The method can include receiving, at a system server, criteria data from the employer regarding a job opening, wherein the criteria data from the employer includes minimum attributes and real-time connection attributes. The method can include receiving, at the system server, background data from the candidate, and recording audio data and video data of the candidate in a video interview of the candidate in a booth with a first camera, a second camera, and a microphone. The method can include analyzing, prior to an end of the video interview, at the system server, the audio data of the candidate with speech-to-text analysis to identify textual video interview data, wherein candidate data includes the textual video interview data and the background data. The method can include comparing the minimum attributes to the candidate data to determine if the minimum attributes are satisfied by the candidate data. The method can include, if the minimum attributes are present in the candidate data, comparing the real-time connection attributes to the candidate data to determine if a threshold amount of the real-time connection attributes is satisfied by the candidate data. The method can include, if the threshold amount of real-time connection attributes is satisfied by the candidate data, sending, over a communication network, a first offer to the employer for a real-time connection with the candidate. The method can include receiving, prior to the end of the video interview, an employer acceptance of the first offer for a real-time connection with the candidate. The method can include, after receiving the employer acceptance, sending, over the communication network, prior to the end of the video interview, a second offer to the candidate for a real-time connection with the employer. The method can include receiving a candidate acceptance from the candidate of the second offer for a real-time connection with the employer, and, after receiving the candidate acceptance, connecting the candidate and the employer in real time by establishing a live audio connection or a live audio and video connection.
In an embodiment, the textual video interview data fulfills real-time connection criteria data that is not fulfilled by the background data.
In an embodiment, the method can include during a first time window, saving a first portion of candidate data and a second portion of candidate data in a raw database on the system server, wherein the first portion of candidate data is related to a first real-time connection attribute and the second portion of the candidate data is related to a second real-time connection attribute, storing the first portion of candidate data within a first cell associated with the first real-time connection attribute in a candidate database, storing the second portion of candidate data within a second cell associated with the second real-time connection attribute in the candidate database, during a second time window later than the first time window, saving a third portion of candidate data and a fourth portion of candidate data in the raw database, wherein the third portion of candidate data is related to the first real-time connection attribute and the fourth portion of candidate data is related to the second real-time connection attribute, comparing the first portion of candidate data with the third portion of candidate data to determine which is more favorable for satisfying the first real-time connection attribute, as a result of determining that the first portion of candidate data is more favorable, maintaining the first portion of candidate data in the first cell, comparing the second portion of candidate data with the fourth portion of candidate data to determine which is more favorable for satisfying the second real-time connection attribute, and as a result of determining that the fourth portion of candidate data is more favorable, replacing the second portion of candidate data with the fourth portion of candidate data in the second cell.
In an embodiment, the booth further includes a user interface configured to display prompts asking the candidate to speak and provide audio data and video data. The method can further include storing, at a system server, a first frame of prompts including at least a first prompt and a second prompt, and displaying the first prompt and the second prompt to the candidate, wherein the step of recording audio data and video data of the candidate includes recording the candidate's responses to the first prompt and the second prompt in the video interview, wherein a third prompt is displayed after the second prompt, and wherein a decision to display the third prompt is based on textual video interview data received in response to one of the first or second prompts.
In an embodiment, the first frame of prompts is associated with an industry of the job opening, and the method can further include: receiving, at a system server, a second frame of prompts including at least a fourth prompt and a fifth prompt, wherein the second frame of prompts is associated with the employer; receiving, at a system server, after receiving the criteria data, a third frame of prompts including at least a sixth prompt and a seventh prompt, wherein the third frame of prompts is associated with the job opening; and displaying the fourth prompt, fifth prompt, and sixth prompt to the candidate, wherein the step of recording audio data and video data of the candidate includes recording the candidate's responses to the fourth prompt, fifth prompt, and sixth prompt in the video interview.
In an embodiment, the method can include prompting, via a first candidate interface, the candidate to talk more about an aspect of the textual video interview data in response to analysis of the textual video interview data.
In an embodiment, the threshold amount is a percentage of the real-time connection attributes being met.
In an embodiment, the method can further include eliminating a real-time connection attribute upon determining the candidate's experience level is above a threshold experience level.
In an embodiment, the method can further include reducing the threshold amount of real-time connection attributes upon determining the candidate's experience level is above a threshold experience level.
In an embodiment, the method can further include eliminating a real-time connection attribute from the criteria data upon determining the presence of a skill that fulfills a different real-time connection attribute.
In an embodiment, the method can further include reducing the threshold amount of real-time connection attributes upon determining the presence of a skill that fulfills a real-time connection attribute.
In an embodiment, the method can include analyzing the textual video interview data with a salary analysis module, based on the analysis of the salary analysis module, generating a predicted salary range for the candidate, and providing the predicted salary range to the candidate at the end of the video interview.
In an embodiment, a method of connecting an employer with a candidate is provided. The method can include receiving, at a system server, criteria data from the employer regarding a job opening, wherein the criteria data from the employer includes minimum attributes and real-time connection attributes. The method can include receiving, at the system server, background data from the candidate. The method can include recording audio data and video data of the candidate in a video interview of the candidate in a booth with a first camera, a second camera, and a microphone. The method can include recording behavioral data of the candidate with at least one depth sensor disposed in the booth. The method can include analyzing, prior to an end of the video interview, at the system server, the audio data of the candidate with speech-to-text analysis to identify textual interview data, wherein candidate data includes the textual interview data and the background data. The method can include analyzing, prior to the end of the video interview, at the system server, the behavioral data of the candidate to identify behavioral interview data, wherein the candidate data further includes the behavioral data. The method can include comparing the minimum attributes to the candidate data to determine if the minimum attributes are satisfied by the candidate data; if the minimum attributes are present in the candidate data, comparing the real-time connection attributes to the candidate data to determine if a threshold amount of the real-time connection attributes is satisfied by the candidate data; and, if the threshold amount of real-time connection attributes is satisfied by the candidate data, sending, over a communication network, an offer to the employer for a real-time connection with the candidate. The method can include, after the employer accepts the offer for a real-time connection with the candidate, sending, over the communication network, an offer to the candidate for the real-time connection with the employer, and, if the candidate accepts the offer for a real-time connection, connecting the candidate and the employer in real time by establishing a live audio connection or a live audio and video connection.
In an embodiment, a conclusion relating the textual interview data with the behavioral data fulfills a real-time connection attribute.
In an embodiment, the conclusion includes a level of excitement, engagement, or enthusiasm about a discussed subject matter.
This summary is an overview of some of the teachings of the present application and is not intended to be an exclusive or exhaustive treatment of the present subject matter. Further details are found in the detailed description and appended claims. Other aspects will be apparent to persons skilled in the art upon reading and understanding the following detailed description and viewing the drawings that form a part thereof, each of which is not to be taken in a limiting sense. The scope herein is defined by the appended claims and their legal equivalents.
Aspects may be more completely understood in connection with the following figures (FIGS.), in which:
While embodiments are susceptible to various modifications and alternative forms, specifics thereof have been shown by way of example and drawings and will be described in detail. It should be understood, however, that the scope herein is not limited to the particular aspects described. On the contrary, the intention is to cover modifications, equivalents, and alternatives falling within the spirit and scope herein.
In various embodiments, login entities are identified as authenticated users of a computerized system operating over a computer network. One login entity is subject to analysis based on the monitored activities of that login entity. The analysis can include the analysis of sensors that receive real-world data relating to an individual that initiated the connection to the system as the login entity. The sensors can receive three-dimensional spatial data related to the individual, and/or visual data received from cameras monitoring the individual. The analysis can also include electronic communications received from the login entity over the computer network, including textual inputs and vocal inputs. The system analyzes the sensor and electronic communications input to identify traits. The traits are then assigned to that login entity in a database managed by the system. The traits can include data elements pre-assigned to the login entity before the current session of the login entity on the computer system. The live session of the login entity allows the computer system to analyze the sensor data and electronic communications and to assign additional traits in the database system. If the traits in the database system for the login entity exceed a threshold only as a result of the new traits added by the live session analysis, a live communication is attempted between the analyzed login entity and a second login entity that is then in communication with the system over the computer network.
Various embodiments herein provide systems and methods for matching a highly qualified candidate (operating as a first login entity on the computer system) with an employer (a second login entity) in real time, while both parties are available. In these embodiments, upon establishing that a highly qualified or desirable candidate is available and interested in a job opening, the system connects the two parties in real time over the computer network. In at least one embodiment, the connection provides for an interview process over the computer network. The establishment of the candidate as highly qualified can be accomplished by monitoring the sensor data and/or electronic communications during a live session of the candidate and combining newly assigned traits with existing database data accessed by the computer system. Further, the system is able to learn data about the candidate that cannot be learned from a resume or other document. The system can learn the skills and traits of the candidate through responses to prompts via analysis of sensor data and electronic communications data.
In various embodiments, the employer can establish data defining one or more job openings that they wish to hire for. Prior to individuals accessing the system, the employer establishes data details in the system, such as a title and description. In many examples, the system defines additional criteria data regarding the job opening. Criteria data can define the skills or attributes that the employer wants a prospective candidate to have. Various embodiments can split the criteria data into two or more categories, such as minimum attributes and real-time connection attributes. The minimum attributes can be general traits or skills that the candidate needs to have for the open position. Real-time connection attributes can define what attributes are associated with a top-tier candidate.
The provided systems and methods can include a candidate providing information about themselves that is stored in the databases of the system. This information can be divided into background data and content data. The system can analyze data from a video interview in real time, as the data is being provided to the system. The system can compare the information derived from the analyzed data with the criteria data from the employer. Upon reaching a threshold amount of similarities between the derived candidate data and the criteria data, the system can immediately notify the employer that a top-tier candidate is currently available and interested in the job opening or a related job opening. In some embodiments, the employer must be directly accessible by the system, such as by being logged into the system or by providing a direct communication link between the system and the employer. If the employer is available and accepts the invitation to connect with the candidate, the system can ask the candidate if they want to connect with the employer to discuss the job opening at that moment. If the candidate also accepts, the systems provided herein can connect the two parties in real time. Connecting the two parties in real time, while the candidate is available, can substantially speed up the interview and hiring process. Further, connecting the parties in real time can also reduce the chances of a participating employer missing out on a top-tier candidate.
The system server 106 can request and/or receive 112 data from the employer 102 regarding a job opening. The data can be entered and/or sent using an employer's device 108, such as a computer or smart phone. The data includes criteria data, which specifies requirements or attributes that the employer 102 is searching for, such as experience, education, and skills. In some embodiments, the criteria data can include minimum attributes and real-time connection attributes, as discussed below in
The system server 106 can also request and/or receive 114 data from the candidate 104 regarding his/her employment qualifications, such as education, experience, certifications, and skills. This data can be compared to the criteria data to determine if the candidate 104 is likely qualified for the job opening.
The system 100 can include a kiosk or booth 110. The booth 110 can include one or more cameras, microphones, and depth sensors, as will be discussed below in reference to
In some embodiments, the candidate 104 can provide data, such as background data, to the system 100 prior to entering the booth. In some embodiments, the background data can be derived or recorded from the candidate's resume, a previous video interview, or another source. The booth 110 can provide prompts or questions to the candidate 104, through a user interface, for the candidate 104 to respond to. The candidate's response to each of the prompts can be recorded with the video cameras and microphones. The candidate's behavioral data can be recorded with a depth sensor.
The system server 106 can analyze, evaluate, and update the candidate's known information while the candidate 104 is participating in the video interview (
If there is a sufficient amount of overlap between the candidate data and the criteria data, and both parties are willing to connect with each other (
Booth (
In reference now to
Optionally, a seat 207 can be provided for the candidate 104. The booth 110 houses multiple cameras, such as a first camera 222, a second camera 224, and a third camera 226. Each of the cameras is capable of recording video of the candidate 104 from different angles. In the embodiment of
Instead of a tablet computer 218, a computer 218 can be used having the shape and size of a typical tablet computer. For example, computer 218 can be sized for easy movement and positioning by the user. In various embodiments, the computer 218 has a display screen size of at least about 5 inches, at least about 6 inches, at least about 7 inches, at most about 10 inches, at most about 12 inches, or a combination of these boundary conditions. In various embodiments, the computer 218 has a case depth of at least about 0.3 inch, at least about 0.4 inch, at most about 0.7 inch, at most about 1 inch, or a combination of these boundary conditions. A microphone 220 is provided for recording audio. In some examples, each camera 222, 224, 226 can include a microphone 220. In some embodiments, the microphones 220 are embedded into and form part of the same physical component as a camera 222, 224, 226. In other embodiments, one or more of the microphones 220 are separate components that can be mounted apart from the cameras within the kiosk 110.
The first, second, and third cameras 222, 224, 226 can be digital video cameras that record video in the visible spectrum using, for example, a CCD or CMOS image sensor. Optionally, the cameras can be provided with infrared sensors or other sensors to detect depth, movement, etc. In some examples, one or more depth sensors 228 can be included in the booth 110.
In some examples, the various pieces of hardware can be mounted to the walls of the enclosed booth 205 on a vertical support 230 and a horizontal support 232. The vertical support 230 can be used to adjust the vertical height of the cameras and user interface, and the horizontal support 232 can be used to adjust the angle of the cameras 222, 224, 226. In some examples, the cameras can automatically adjust their vertical position along the vertical support 230, such as to position the cameras at a height that is not higher than 2 inches (5 centimeters) above the candidate's eye height. In some examples, the cameras can be adjusted to a height of no more than 52 inches (132 centimeters) or no more than 55 inches (140 centimeters).
The candidate 104 can participate in a recorded video interview while in the booth 110. The cameras 222, 224, 226, the depth sensor 228, and the microphone 220 can record video data, behavioral data, and audio data of the candidate 104 during the interview.
The user interface 216 can provide the candidate 104 with prompts during the video interview. The candidate 104 can respond to the prompts. The candidate's responses can be recorded. In some embodiments, the server 106 can be at least partially located at or within the booth 110. In other embodiments, the server 106 can be entirely or partially located at a remote location away from the booth 110 and the employer 102. Further examples of booth structures and hardware are described in U.S. patent application Ser. No. 16/828,578, titled “Multi-Camera Kiosk,” filed on Mar. 24, 2020, which claims the benefit of U.S. Provisional Application No. 62/824,755, filed Mar. 27, 2019, which are incorporated by reference herein.
Server and Data Storage (
In various embodiments, the criteria data can include skills, education, experience, certifications, and other attributes that the employer 102 is looking for in a candidate for the job opening. In some embodiments, the criteria data can include minimum attributes and real-time connection attributes. Minimum attributes can include attributes that the candidate needs to have to be qualified for the job. Real-time connection attributes can include attributes that an ideal candidate would have. For example, a real-time connection attribute can require a rare skill, a rare personal quality, or a greater number of years of experience compared to a minimum attribute.
Data from a job opening profile can be analyzed and compared to data about the candidate 104 in a dynamic match evaluation module 342 on the server 106. The dynamic match evaluation module 342 can determine if the candidate 104 has the attributes to fulfill the minimum attributes, the real-time connection attributes, and/or a threshold amount of real-time connection attributes. If the candidate 104 is determined to have the minimum attributes and a sufficient number of real-time connection attributes to meet the threshold amount, the system 100 can propose connecting the employer 102 with the candidate 104 in real time.
In some embodiments, one or more prompts can be part of a frame or set of prompts that are specifically intended to draw out desired information from the candidate, such as discussed below in reference to
As the candidate 104 is replying to prompts during the video interview, the system 100 can be recording the candidate's responses and actions. The microphone 220 can record what the candidate 104 is saying. The cameras 222, 224, 226 can record video of the interview, and the cameras, microphone, and depth sensor(s) 228 can record behavioral data of the candidate 104 during the interview. The data from these sources can be saved in a candidate profile 444 on the server 106.
Similar to the employer profiles 334, each candidate 104 can have a candidate profile 444 on the server. Each candidate profile 444 can include candidate data 446, such as a candidate identification number, name, contact information, background data, interview data, and the like. Background data can include data from the candidate's resume, previous video interviews, or other sources of already known data about the candidate 104. The candidate profile 444 can be updated throughout the video interview as the system learns and obtains more data about the candidate 104.
In some embodiments, the system server 106 can analyze interview data, such as audio data, of the candidate prior to the end of the video interview. The system 100 can analyze or process the audio data with a speech-to-text module to identify textual video interview data. The candidate data 446 can include the textual video interview data. In some embodiments, the textual video interview data can include a transcript of the video interview, or at least a partial transcript of the video interview.
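The following is an illustrative sketch, not the claimed implementation, of how a speech-to-text step could turn one recorded audio segment into textual video interview data. The SpeechRecognition package, the file name, and the candidate-data structure are assumptions made only for the example.

```python
# Illustrative only: convert one recorded audio segment into transcript text
# and append it to the candidate data. Package choice and data shapes are
# assumptions, not the patented implementation.
import speech_recognition as sr

def transcribe_segment(wav_path: str) -> str:
    """Return a text transcript for one recorded audio segment."""
    recognizer = sr.Recognizer()
    with sr.AudioFile(wav_path) as source:
        audio = recognizer.record(source)      # read the whole segment
    # Any speech-to-text backend could be substituted here.
    return recognizer.recognize_google(audio)

candidate_data = {"background_data": {}, "textual_video_interview_data": []}
candidate_data["textual_video_interview_data"].append(
    transcribe_segment("prompt_1_response.wav")  # hypothetical file name
)
```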
Data from the candidate profile 444 can be compared and evaluated with data from the employer profile 334 in the dynamic match evaluation module 342. The dynamic match evaluation module 342 can compare the minimum attributes of the job opening to the candidate data to determine if the minimum attributes are satisfied by the candidate data. In some embodiments, the minimum attributes are determined to be satisfied before the real-time connection attributes are analyzed. In some embodiments, if the minimum attributes are present in the candidate data, the dynamic match evaluation module 342 can compare the real-time connection attributes to the candidate data to determine if a threshold amount of the real-time connection attributes is satisfied by the candidate data. In some embodiments, the textual video interview data fulfills one or more real-time connection attributes that are not fulfilled by the background data.
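As a rough illustration of this two-stage comparison, the sketch below first checks that every minimum attribute is satisfied and then checks whether a threshold fraction of the real-time connection attributes is satisfied. The attribute names, numeric scores, and 75% threshold are hypothetical values chosen only for the example.

```python
def offer_real_time_connection(candidate_data, minimum_attrs, rtc_attrs, threshold=0.75):
    """Return True when a first offer for a real-time connection should be sent."""
    # Stage 1: every minimum attribute must be satisfied by the candidate data.
    if not all(candidate_data.get(name, 0) >= needed for name, needed in minimum_attrs.items()):
        return False
    # Stage 2: a threshold fraction of real-time connection attributes must be satisfied.
    met = sum(1 for name, needed in rtc_attrs.items() if candidate_data.get(name, 0) >= needed)
    return met / len(rtc_attrs) >= threshold

# Example: 6 of 8 real-time connection attributes satisfied meets a 75% threshold.
candidate = {"years_experience": 5, "certification": 1, "oasis_coding": 3, "time_management": 3,
             "team_leadership": 2, "excel": 3, "spanish": 1, "empathy": 2, "degree": 1}
minimum = {"years_experience": 2, "degree": 1}
rtc = {"years_experience": 4, "certification": 1, "oasis_coding": 3, "time_management": 3,
       "team_leadership": 3, "excel": 2, "spanish": 1, "empathy": 3}
print(offer_real_time_connection(candidate, minimum, rtc))  # True
```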
In various embodiments, if it is unknown whether or not the candidate has a specific attribute after the candidate's response to a prompt intended to draw out that attribute, the system 100 can further prompt 448 the candidate to discuss or talk more about an aspect of the textual video interview data, such as in response to the analysis of the textual video interview data being determined to be incomplete.
In one example, the system stores a prompt table related to a specific attribute, and the prompt table stores multiple prompts that are designed to elicit information to satisfy the specific attribute. The prompt table defines if-then relationships between a particular answer that is received and the next prompt that is provided based on that answer. Alternatively, the questions can take the form of a flow chart of questions or a question tree. When the first prompt is answered, the answer is analyzed and a branch of the tree (or the next step in the flow chart) is taken to select the next prompt to be presented. Ideally, the result of following such a question tree is to assign a value to an attribute associated with the candidate. Such an attribute value can be an important or determinative factor in the dynamic match evaluation engine 342 making a determination to establish a real-time connection. The attribute values for a particular candidate are part of the candidate data for that candidate.
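A prompt table of this kind could be represented as a small set of if-then rules, as in the sketch below. The prompt text, keywords, and attribute values are invented for illustration and are not taken from the specification.

```python
# Sketch of a prompt table for one attribute: the answer to the current prompt
# selects the next prompt, or assigns an attribute value and stops.
PROMPT_TABLE = {
    "start": {
        "prompt": "Do you have experience with data storage architecture?",
        "branches": [
            ("yes", "sub_specialty"),   # keyword found in answer -> next prompt id
            ("no", None),               # no follow-up; attribute stays unset
        ],
    },
    "sub_specialty": {
        "prompt": "Which sub-specialty have you worked in most recently?",
        "branches": [
            ("distributed", ("storage_architecture", 2)),  # assign attribute value
            ("backup", ("storage_architecture", 1)),
        ],
    },
}

def next_step(prompt_id: str, answer_text: str):
    """Return the next prompt id, an (attribute, value) assignment, or None."""
    for keyword, outcome in PROMPT_TABLE[prompt_id]["branches"]:
        if keyword in answer_text.lower():
            return outcome
    return None

print(next_step("start", "Yes, quite a bit of storage work."))  # -> "sub_specialty"
```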
If a threshold amount of real-time connection attributes is met, the system can offer the employer 102 and candidate 104 the opportunity for a real-time connection 450.
Real-Time Connection System (
In reference now to
If the threshold amount of real-time connection attributes is fulfilled, the system 100 can send a first offer 552 to the employer 102 to connect to the candidate 104. In various embodiments, the identity of the candidate can be unknown to the employer 102, such that identification information can be withheld or not included in the offer 552. In some embodiments, an explanation of why the candidate 104 qualified for the real-time connection can be provided with the offer 552, such as informing the employer 102 that the candidate possesses a desired certification or skill.
The system can receive the employer's 102 acceptance of the offer 552 prior to the end of the video interview. In many embodiments, the video interview can continue while the offer 552 is sent and the system is waiting for a response from the employer 102. In some embodiments, the candidate 104 is not informed that an offer for real-time connection with an employer 102 has been sent.
After receiving the employer's acceptance, a second offer 554 for a real-time connection can be sent, such as over the communication network or on the user interface 216, this time to the candidate 104. If the candidate 104 also accepts the offer, the server 106 can establish a network connection 656 between the candidate 104 and the employer 102 as shown in
Now referring to
If the employer 102 does not intend to send a job offer to the candidate 104, the system 100 can still send information that is helpful to the candidate 104, such as sending an expected salary range 760 to the candidate 104. The expected salary range can be an output from a salary analysis module. Inputs to the salary analysis module include the candidate data and market data. The market data can include government-published and industry-group published data for the particular job opening or other job openings that might fit the candidate's skills. The market data can also include internal market data that is not publicly available but is available within or to the system because of the many job openings posted in the system, many of which will include salary ranges, and past employee job offers extended using the system.
The internal market data includes geography information. It is also possible that the public market data can include geography information. Information regarding geographic salary differences relevant to the candidate and the job opening can be included in the output from the salary analysis module. The system can decide what geographic salary ranges to present to the candidate based on candidate data, such as interview data collected by speech-to-text modules during the interview and by sensors during the video interview. For example, the interview data can indicate that a candidate expressed a strong interest in working in a first geographic area and a tentative interest in working in a higher-demand second geographic area. The output from the salary analysis module can include data related to both the first and second geographic areas. The output from the salary analysis module can include a differential income boost predicted from job searching in the second geographic area compared to the first geographic area.
The candidate data from the just-completed real-time interview and preceding video interview, as well as any candidate background data, can be used as input to the salary analysis module. As a result, the candidate leaves the video interview process with up-to-date detailed, valuable information. The candidate is more likely to invest time performing a video interview with a system that provides the candidate with helpful information. The candidate is also more likely to invest time with a system that guides the candidate to higher-value job opportunities.
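One possible sketch of such a salary analysis step is shown below: it combines placeholder market ranges for the geographic areas the candidate expressed interest in and reports the differential between them. The field names and figures are assumptions, not actual market data.

```python
# Hypothetical salary analysis sketch: report a range per geography of interest,
# plus the predicted boost from searching in the higher-paying area.
def predicted_salary_ranges(candidate_geos, market_data):
    """Return {geo: (low, high)} for each geography of interest plus a differential boost."""
    ranges = {geo: market_data[geo] for geo in candidate_geos if geo in market_data}
    if len(ranges) >= 2:
        highs = sorted(high for _, high in ranges.values())
        ranges["differential_boost"] = highs[-1] - highs[0]
    return ranges

market_data = {"Area A": (68000, 82000), "Area B": (74000, 95000)}  # placeholder figures
print(predicted_salary_ranges(["Area A", "Area B"], market_data))
```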
As discussed above, the booth 110 can include an enclosed room that records high-quality audio and visual data of candidate 104. The booth 110 houses multiple visual cameras, including a first camera 222, a second camera 224, and a third camera 226. The booth 110 also houses at least one microphone 220 for recording audio. In
The sound recorded by the microphones 220 can also be used for behavioral analysis of the candidate 104. Speech recorded by the microphones 220 can be analyzed to extract behavioral data, such as vocal pitch and vocal tone, speech cadence, word patterns, word frequencies, total time spent speaking, and other information conveyed in the speaker's voice and speech.
The booth 110 can also incorporate one or more depth sensors 228 that can detect changes in the position of the candidate 104. Only one depth sensor 228 is shown in
A computer 864 at the booth 110 is able to capture visual data of the candidate 104 from the cameras, capture audio data of the candidate 104 from the microphones, and capture behavioral data input from the depth sensors. This data is all synchronized or aligned. This means, for example, that audio information recorded by all of the microphones 220 can be synchronized with the visual information recorded by all of the cameras 222, 224, 226 and the behavioral data taken from the sensors 228, so that all the data taken at the same time can be identified and compared for the same time segment.
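A minimal sketch of this alignment, assuming each stream is stored as timestamped samples on a shared clock, might look like the following; the data shapes and timestamps are illustrative only.

```python
# Illustrative alignment of audio, video, and depth-sensor samples on a shared
# timeline so the same time segment can be pulled from every source.
def slice_segment(stream, start_s, end_s):
    """Return the samples of one stream whose timestamps fall in [start_s, end_s)."""
    return [sample for t, sample in stream if start_s <= t < end_s]

audio  = [(0.0, "a0"), (0.5, "a1"), (1.0, "a2")]   # (seconds, sample) pairs
video  = [(0.0, "v0"), (0.5, "v1"), (1.0, "v2")]
sensor = [(0.0, "s0"), (0.5, "s1"), (1.0, "s2")]

# Pull the same 0.5-1.0 s window from every source for comparison.
window = {name: slice_segment(s, 0.5, 1.0)
          for name, s in {"audio": audio, "video": video, "sensor": sensor}.items()}
```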
The computer 864 is a computing device that includes a processor for processing computer programming instructions. In most cases, the processor is a CPU, such as the CPU devices created by Intel Corporation (Santa Clara, Calif.), Advanced Micro Devices, Inc. (Santa Clara, Calif.), or a RISC processor produced according to the designs of Arm Holdings PLC (Cambridge, England). Furthermore, computer 864 has memory, which generally takes the form of both temporary random access memory (RAM) and more permanent storage such as magnetic disk storage, FLASH memory, or another non-transitory (also referred to as permanent) storage medium. The memory and storage (referred to collectively as "memory") contain both programming instructions and data. In practice, both programming and data will be stored permanently on non-transitory storage devices and transferred into RAM when needed for processing or analysis. In some embodiments, the computer 864 may include a graphics processing unit (or GPU) for enhanced processing of visual inputs and outputs, or an audio processing board, a single chip audio processor, or a digital signal processor (or DSP) that accelerates the processing of audio inputs and outputs.
It should be understood that the receiving, processing, analyzing, and storage of data can take place at the computer 864 in the booth 110 or at a remote server, such as system server 106. Discussion of the steps taken with data can be understood to apply to both the computer 864 and the server 106.
In some embodiments, the computer 864 is tasked with receiving the raw visual data from the cameras, the raw audio data from the microphones, and the raw sensor data from the behavioral depth sensors. The computer 864 is also tasked with making sure that this data is safely stored. The data can be stored locally, or it can be stored remotely. In
Although this database 862 is shown as being connected to the booth 110 over network 866, this data 862 can be stored locally to the booth 110 and computer 864. To save storage space, audio and video compression formats can be utilized when storing data 862. These can include, but are not limited to, H.264, AVC, MPEG-4 Video, MP3, AAC, ALAC, and Windows Media Audio. Note that many of the video formats encode both visual and audio data. To the extent the microphones 220 are integrated into the cameras, the received audio and video data from a single integrated device can be stored as a single file. However, in some embodiments, audio data is stored separately from the video data. Nonetheless,
Recorded data 868 can be processed and saved as candidate data 886. Candidate data 886 can include recorded data specific to the candidate 104. Candidate data 886 can further include background data of the candidate 104, such as resume information or personally identifying information.
The computer 864 is generally responsible for coordinating the various elements of the booth 110. For instance, the computer 864 is able to provide visual instructions or prompts to a candidate 104 through one or more interfaces 216 that are visible to the candidate 104 when using the booth 110. Furthermore, audio instructions can be provided to the candidate 104 either through speakers (not shown) integrated into the booth 110 or through earpieces or headphones (also not shown) worn by the candidate 104. In addition, the computer 864 can be responsible for receiving input data from the user, such as through a touchpad integrated into interface 216.
The system 100 shown in
In
In some embodiments, the employer computer system 108 takes the form of a mobile device such as a smartphone or tablet computer. If the employer computer 108 is a standard computer system, it will operate custom application software or browser software 882 that allows it to communicate over the network 866 as part of the system 100. In particular, the programming 882 can at least allow communication with the system server 106 over the network 866. The system 100 can also be designed to allow direct communication between the employer's computer system 108 and the booth's computer 864, such as for the real-time connection, or even between the employer computer system 108 and data 862. If the employer computer 108 is a mobile device, it will operate either a custom app or a browser app 882 that achieves the same communication over network 866. This network 866 can allow a user using employer computer system 108 to connect directly with the booth's computer 864, such as for a real-time connection between the employer 102 and the candidate 104.
Note that even though
Database 862 also contains criteria data 890. Criteria data 890 constitutes information that is of interest to the employer 102 and is relevant to the data 868 acquired by the booth 110. In the context of an employment search, the criteria data 890 may contain various attributes and experience requirements for the job opening.
Database 862 also includes information or data 892 about the employer 102. This information can be used to help a candidate decide if he/she wants to accept a real-time communication with an employer. Finally, the database 862 maintains historical information 894 about previous criteria data 890 (such as data about previous job openings) and previous actions by candidates or employers.
An employer using one of the employer computer systems 108 will authenticate themselves to the system server 106. One method of authentication is the use of a username and password. This authentication information, or login information, can be stored in the memory 874 of the employer computer system 108 so that it does not need to be entered every time that the employer interacts with the system 100. The system server 106 can identify a particular employer as an employer login entity accessing the system 100. Similarly, a candidate using the booth 110 will also authenticate themselves to the system 100, such as through the use of their own username and password. In this way, the connection between an employer using an employer computer system 108 and a candidate 104 using a booth can be considered a connection made by the server 106 between two different login entities to the system 100.
Candidate Data (
Information about the candidate 104 can be stored in the data store 862 as candidate data 886. As shown in
The candidate data can further include background data 997 and interview data 998. As previously mentioned, the background data 997 can be derived or recorded from the candidate's resume, a previous video interview, or another source prior to a particular video interview starting. Interview data 998 can include data that was recorded during the particular video interview, that is, a video interview that is currently happening. Candidate attribute data that is identified from specific content and helps determine whether a particular attribute is met by the candidate, such as years of experience, skills, empathy score, other types of candidate scores, and certifications, can be referred to as content data, as shown in
Recorded Data (
The prompt data 1002 contains information about the content 1004 of each prompt given during the recorded session of the candidate 104. In addition, the prompt data 1002 contains prompt segment information 1006 about the timing of these prompts. The timing of these prompts 1006 can be tracked in a variety of ways, such as in the form of minutes and seconds from the beginning of a recorded session. For instance, a first prompt may have been given to the candidate 104 at a time of 1 minute, 15 seconds (1:15) from the beginning of the recorded session. Note that this time may represent the time at which the first prompt was initiated (when the screen showing the prompt was first shown to the candidate 104 or when the audio containing the prompt began). Alternatively, this time may represent the time at which the prompt was finished (when the audio finished) or when the user first began to respond to the prompt. A second prompt may have been given to the candidate 104 at a time of 4 minutes, 2 seconds (4:02), the third prompt at 6 minutes, 48 seconds (6:48), etc. The time between prompts can be considered the prompt segment 1006. The prompt segment 1006 may constitute the time from the beginning of one prompt to the beginning of the next prompt, or from a time that a first prompt was finished to a time just before the beginning of a second prompt. This allows some embodiments to define the prompt segments 1006 to include the time during which the prompt was given to the candidate 104, while other embodiments define the prompt segments 1006 to include only the time during which the individual responded to the prompt. Regardless of these details, the prompt data 1002 contains the timing information necessary to define prompt segments 1006 for each prompt 1002.
Prompt data 1002 in data store 862 includes the text or audio of the instructions provided to the individual (or an identifier that uniquely identifies that content) and the timing information needed to define the prompt segments 1006.
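For illustration, prompt data with segment timing could be represented as follows, using the example times given above (1:15, 4:02, and 6:48 from the start of the session); the session length and field names are assumptions.

```python
# Illustrative prompt data keyed to when each prompt was given. Here a segment
# runs from one prompt's start to the next prompt's start (or to session end);
# other embodiments could cover only the response time.
prompts = [
    {"id": 1, "content": "Prompt 1", "start_s": 75},    # 1:15
    {"id": 2, "content": "Prompt 2", "start_s": 242},   # 4:02
    {"id": 3, "content": "Prompt 3", "start_s": 408},   # 6:48
]

def prompt_segments(prompts, session_end_s):
    """Return (prompt_id, segment_start, segment_end) for each prompt."""
    starts = [p["start_s"] for p in prompts] + [session_end_s]
    return [(p["id"], starts[i], starts[i + 1]) for i, p in enumerate(prompts)]

print(prompt_segments(prompts, session_end_s=600))
```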
In some contexts, prompts 1002 can be broad, general questions or instructions that are always provided to all candidates 104 (or at least to all candidates 104 that are using the booth 110 for the same purpose, such as applying for a particular job or type of job). In other contexts, the computer 864 can analyze the individual's response to the prompts 1002 to determine whether additional or follow-up prompts 1002 should be given to the candidate 104 on the same topic. For instance, an individual looking for employment may indicate in response to a prompt that they have experience with data storage architecture. This particular technology may be of interest to current employers that are looking to hire employees in this area, such that it might fulfill a minimum attribute. However, this potential employer may want to hire an employee only with expertise in a particular sub-specialty relating to data storage architecture (a real-time connection attribute). The computer 864 may analyze the response of the candidate 104 in real time using speech-to-text technology and then determine that additional prompts 1002 on this same topic are required in order to learn more about the technical expertise of the candidate 104 compared to the criteria data 890. These related prompts 1008 can be considered "sub-prompts" of the original inquiry. In
In various embodiments, one or more prompts and sub-prompts can be designed by the employer, the system administrator, or both to elicit candidate data about a job attribute that is not addressed by other prompts, such as prompts that are typical for the particular job opening. In one example, a healthcare employer looking for a Marketing Director needs someone who can assist the Director of Clinical Services to prepare training materials for clinicians. An example of a prompt designed to elicit whether the candidate has skills in this area is “Do you have any experience preparing training materials? If so, can you give an example of a successful training program and the steps you took to produce these materials?”
The candidate 104 typically provides oral answers or responses to the prompts 1002, but in some circumstances the candidate 104 will be prompted to do a physical activity, or to perform a skill. In any event, the candidate's response to the prompt 1002 will be recorded by the booth using cameras, microphones, and depth sensors. The booth computer 864 can be responsible for providing prompts 1002 and therefore can easily ascertain the timing at which each prompt 1002 is presented. In other embodiments, the server 106 can provide the prompts to the booth 110.
As shown in
The audio segments 1012, 1014 can be processed with a speech-to-text processor 1030. The output from the speech-to-text processor 1030 is the content data 5 1031. Similarly, output from processing the visual segments 1016, 1018, 1020 is content data 6 1033, and output from processing the sensor segment 1022 is content data 7 1035. All of these types of content data 1031, 1033, 1035 can be used to assign values to particular attributes for the candidate.
In
Examples of criteria 1024, 1026, 1028 may relate to various candidate attributes that can be analyzed and rated using the data collected by the booth 110 or in the content provided by the candidate 104. For example, the criteria 1024, 1026, 1028 may be minimum scores for time management skills, team leadership skills, empathy, engagement, or technical competence. Other possible criteria include confidence, sincerity, assertiveness, comfort, or any other type of personality score that could be identified using known techniques based on an analysis of visual data, audio data, and depth sensor/movement data. Other possible criteria are a willingness to relocate in general, a willingness to relocate to the specific location of the job opening, the presence of a skill, or the presence of a certification.
In some embodiments, criteria attributes can be scored on a sliding-scale scoring system. For example, the candidates can be asked "Would you be willing to move to Alaska?" A candidate's response of "Yes, I love Alaska. I have family there," can be scored more favorably than a response of "yes" or "sure." The prompts can be configured to obtain data from the candidate that is not present in a resume.
For example, the response of “Yes, I love Alaska. I have family there,” can receive a score of two, the response of “yes” or “sure” can receive a score of one, and a response of “no” can receive a score of zero. A minimum attribute score for a particular job in Alaska might be a score of 1 or higher, while a minimum score for real-time connection attribute would be a score of two or higher. A longer response with three or more positive words could get an even higher score of three.
Here are example scoring criteria for evaluating a response to a question about relocation to a particular location:
Score 0: A negative word is detected, such as "no," "not really," or "not."
Score 1: At least one positive word is detected, such as “yes” or “sure”.
Score 2: At least two positive words are detected, such as "Yes, I love Alaska. I have family there."
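A minimal sketch of this sliding-scale scoring, assuming simple keyword lists rather than a full language model, is shown below; the word lists and thresholds are illustrative assumptions.

```python
# Illustrative sliding-scale scoring of a relocation answer. Word lists are
# assumptions; a deployed system would use a richer lexicon or model.
NEGATIVE_WORDS = {"no", "not"}
POSITIVE_WORDS = {"yes", "sure", "love", "definitely", "absolutely"}

def relocation_score(answer: str) -> int:
    words = answer.lower().replace(",", " ").replace(".", " ").split()
    if any(w in NEGATIVE_WORDS for w in words):
        return 0                  # Score 0: a negative word is present
    positives = sum(1 for w in words if w in POSITIVE_WORDS)
    if positives >= 3:
        return 3                  # Score 3: longer response with three or more positive words
    if positives == 2:
        return 2                  # Score 2: e.g. "Yes, I love Alaska. I have family there."
    if positives == 1:
        return 1                  # Score 1: e.g. "yes" or "sure"
    return 0

print(relocation_score("Yes, I love Alaska. I have family there."))  # -> 2
print(relocation_score("sure"))                                      # -> 1
print(relocation_score("no"))                                        # -> 0
```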
In still further embodiments, the audio segment data 1012, 1014 is converted to textual content data 1031 using speech-to-text technology 1030, and the textual data becomes part of the behavior analysis. The recorded data 868 can include audio data, video data, and sensor data. The dynamic match evaluation 342 can incorporate the content data in analysis of the candidate's attributes compared to the criteria data from the employer.
Content Data (
A third time segment 1156 can follow the second prompt 1134. The third time segment recorded data 868 can include third segment audio data 1158, third segment video data 1160, and third segment behavioral data 1162. The third time segment data 1158, 1160, 1162 can be saved and analyzed to produce content data 1164, which could be relevant to a third attribute, or the first or second attribute.
As mentioned above, in some embodiments, the second prompt 1134 can be added or modified based on content data 1152 or content data 1154. In some embodiments, the prompts can be part of a frame or series of prompts. In some embodiments, the prompts can include one or more of an industry-specific frame of prompts, a job-specific frame of prompts, and an employer-specific frame of prompts.
Database Structure (
In some embodiments, the speech-to-text processor 1265 can take audio data from the video interview and process it for a text output, such as by converting the audio file to a text file where the text file includes a textual transcript of the audio input. In some embodiments, a voice analog processor 1266 can process input audio to determine a voice score, such as a confidence score or a comfort score based on the candidate voice recording. In some embodiments, the text processor 1270 can evaluate text input either from the candidate directly or through the speech-to-text processor 1265. The text processor can determine subject matter that is present in the text input, such as skills the candidate discusses in an interview or skills the candidate discusses in a resume. The voice analog processor 1266 and the text processor 1270 can together evaluate the candidate's voice score (such as confidence or comfort) in view of the content or subject matter, identified by the text processor 1270, that the candidate is talking about.
In some embodiments, the video processor 1274 can process the video input, such as to determine the content of the video. Content of the video input can include analysis of the candidate's posture and other visual data.
In some embodiments, the spatial processor can process the sensor input, such as to determine the candidate's position in space during the video interview. The candidate's position in space can include hand gestures, posture, torso lean, shoulder position, and other body movements and positions.
In some embodiments, the feedback processor 1282 can process input from an employer after a video interview. The employer can provide feedback to the system, such as overall impression of the candidate and areas where the candidate was strong or weak.
The processors illustrated in
These cells contain candidate attribute data, and in some embodiments are stored in the candidate database 886. The candidate attribute data can be input to the dynamic match evaluation module 342 and compared to criteria data provided by the employer. The cells illustrated in
Selectively Updating Cells in Candidate Database
In various embodiments, the cells can be updated or replaced when new data is made available, such as while the video interview progresses. In various embodiments, the cells are selectively updated or replaced only if the new data better satisfies the criteria of the job opening, and therefore showcases the candidate's strengths. In various embodiments, during a first time window, candidate data can be saved in the raw database 888 on the system server 106. The candidate data can be related to real-time connection attributes or minimum attributes. In an example, a first portion of candidate data is stored in a first cell that is associated with a first attribute in the candidate database 886 and a second portion of the candidate data is stored in a second cell that is associated with a second attribute in the candidate database 886. During a second time window, a third portion and a fourth portion of candidate data are collected and saved in the raw database. The third portion of candidate data can be associated with the first attribute and the fourth portion of candidate data can be associated with the second attribute.
Next, the system can compare the first portion of candidate data with the third portion of candidate data to determine which is more favorable for satisfying the first attribute. Similarly, the system can compare the second portion of candidate data with the fourth portion of candidate data to determine which is more favorable for satisfying the second attribute. As a result of determining that the first portion of the candidate data is more favorable than the third portion of candidate data, the first portion of candidate data can be maintained in the first cell. In contrast, as a result of determining that the fourth portion of candidate data is more favorable than the second portion of candidate data, the fourth portion of candidate data can replace the second portion of candidate data in the second cell.
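A minimal sketch of this selective update, assuming each cell holds a numeric score and a timestamp, is shown below; the attribute names and scores loosely mirror the Table 1 example, but the data shapes are assumptions.

```python
# Illustrative selective cell update: the raw database keeps every portion of
# candidate data, while a candidate-database cell is replaced only when the
# newer portion satisfies the attribute more favorably.
def update_cell(candidate_db, raw_db, attribute, new_score, timestamp):
    raw_db.setdefault(attribute, []).append((timestamp, new_score))  # raw DB keeps everything
    current = candidate_db.get(attribute)
    if current is None or new_score > current["score"]:
        candidate_db[attribute] = {"score": new_score, "timestamp": timestamp}

candidate_db, raw_db = {}, {}
update_cell(candidate_db, raw_db, "time_management", 3, "2018-06")  # first window
update_cell(candidate_db, raw_db, "time_management", 2, "2020-01")  # less favorable: cell kept
update_cell(candidate_db, raw_db, "oasis_coding", 2, "2018-06")
update_cell(candidate_db, raw_db, "oasis_coding", 3, "2020-01")     # more favorable: cell replaced
print(candidate_db)
```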
The raw database 888 keeps all data regardless of when it was received from the candidate and stores it with a time stamp. The cell values in the candidate database 886 also include a time stamp in various embodiments. The candidate database 886 stores the content data, score data, and other data used to determine whether attributes of the job opening are satisfied. Because the candidate database 886 is selectively updated and does not store every piece of data for a particular candidate, the candidate database 886 can function more efficiently. Also, the selective updating process leads to a candidate database 886 that showcases the candidate by memorializing and maintaining the evaluation results of the candidate's best answers over time.
The second time window occurs after the first time window. The second time window can be a later portion of a response to a prompt. For example, in the context of
Table 1 shows an example of score cells in the candidate database after a second video interview. A first video interview takes place in June 2018, and the system gives the candidate a high score for his/her time management skills and a medium score for his/her OASIS coding experience. In a later interview occurring in January 2020, the system only gives the candidate a medium score for time management and a high score for OASIS coding experience. The candidate database after the January 2020 interview can include high scores for both time management and OASIS coding experience. In some cases, a candidate might discuss an attribute, such as time management, more in an earlier interview, but might not discuss it as much in a later interview. In some cases, the candidate can remember what he/she said and does not want to repeat themselves. In such a case, the candidate will not be penalized as the cell will not be updated with a lower score. In other scenarios, there could be a specific reason for the lower score. In such a scenario the system could update the cell with the lower score. The raw database can retain all of the data from the first video interview and the second video interview, while the candidate database can retain the data that best fits with the criteria data.
Table 2 shows an example of cells being updated during a video interview, where the attribute cells related to time management and team leadership skills are being updated. The team leadership attribute cell might be updated, for example, according to how often the candidate was using team-oriented language (for example, using "we" more frequently than "I"), using positive language, and providing a detailed and complex response to the prompts. In some embodiments, a higher usage of team-oriented language can be indicative of a candidate that works well with others. In a first portion of the video interview, such as in response to a first prompt, the candidate can score highly for time management skills, but does not use much team-oriented language. However, in the second portion of the interview, such as responding to a second prompt, the candidate only shows medium time management skills, but frequently uses team-oriented language. The first cell for time management can remain high after the second portion. The second cell can be updated to reflect the increased usage of team-oriented language after the second portion. The raw database can retain all of the data from the first portion of the video interview and the second portion of the video interview, while the candidate database can retain the data that best fits with the criteria data.
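As a small illustration of the team-oriented-language signal mentioned above, a sketch that scores a transcript by the ratio of "we"/"our" to "I"/"my" could look like the following; the word choices and thresholds are hypothetical.

```python
# Illustrative team-oriented-language score based on pronoun usage.
def team_language_score(transcript: str) -> int:
    words = transcript.lower().split()
    we_count = words.count("we") + words.count("our")
    i_count = words.count("i") + words.count("my")
    ratio = we_count / max(i_count, 1)
    if ratio >= 1.5:
        return 3   # strongly team-oriented language
    if ratio >= 0.75:
        return 2
    return 1

print(team_language_score("We built the roadmap together and we shared the results"))  # -> 3
```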
Table 3 shows an example of cells being updated during a video interview in comparison with background data. Background data is typically entered by the candidate when setting up their profile in the system or by inputting a resume, transcript, or other documentation. The cells shown in Table 3 relate to Spanish language fluency and Excel skills. From the background data, the raw data indicates that Spanish language fluency is present, but there is no information on Excel skills. However, during or after the video interview, it is learned that the candidate has high Excel skills. Spanish language fluency was not mentioned during the video interview, such as because the data was already known, so a prompt directed at Spanish language fluency was removed from the frame or never included. The first cell for Spanish language fluency can remain present after the video interview. The second cell can be updated to reflect the candidate's high Excel skills, which were learned during the video interview. The raw database can retain all of the background data and the video interview data, while the candidate database can retain the data that best satisfies the criteria for the job opening.
Frames of Prompts (
Employer data 892 can result in specific prompts being presented to the candidate 104 to determine if the candidate 104 fulfills the desired criteria attributes. In various embodiments, the entity 1386 of the employer can result in specific entity prompts 1392 being presented to the candidate 104. Similarly, the type of job opening 1388 can also result in specific job opening related prompts 1394. The type of industry the employer is in can also result in industry prompts 1390. The industry prompts 1390 can be a frame of prompts related to the industry. The entity prompts 1392 can be a frame of prompts related to the entity. The job opening prompts 1394 can be related to the job opening that the candidate is applying for. In some embodiments, additional prompts can be added to the frame of prompts based on textual video interview data received in response to a previous prompt.
The prompts are designed to gather the information needed to analyze whether the candidate would be a good match for the job (objective) 1396. The criteria data 890 is the criteria defined by the employer to assess a candidate's match with the job opening. The criteria data includes minimum attributes 1397 and real-time connection attributes 1398 for the job 1396.
Criteria Data and Threshold Analysis
In some embodiments, the threshold 1434 for qualifying for a real-time connection opportunity can be a percentage of real-time connection attributes being met or fulfilled in the candidate data. In the example shown in
In some embodiments, it is required that all of the real-time connection attributes are fulfilled before a first offer is sent to the employer.
It should be understood that a different number of real-time connection attributes and/or minimum attributes can be used. Six minimum attributes and eight real-time connection attributes are shown as one example.
In some embodiments, the threshold amount 1434 can be reduced or modified based on one of the real-time connection attributes being fulfilled, such as a very desirable or rare skill.
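By way of a minimal sketch of the threshold analysis described above, and assuming a hypothetical 75% threshold, a hypothetical 25% reduction for a rare skill, and hypothetical attribute names, the comparison could proceed as follows:

```python
# Illustrative sketch of the threshold analysis. The attribute names, the 75%
# threshold, and the "rare skill" reduction are hypothetical assumptions.

def meets_minimum(candidate_data: set, minimum_attributes: set) -> bool:
    """All minimum attributes must be satisfied before the real-time analysis runs."""
    return minimum_attributes <= candidate_data

def meets_real_time_threshold(candidate_data: set,
                              rtc_attributes: set,
                              threshold: float = 0.75,
                              rare_skills: frozenset = frozenset()) -> bool:
    """Return True when the fraction of fulfilled real-time connection attributes
    reaches the (possibly reduced) threshold."""
    if not rtc_attributes:
        return False
    if candidate_data & rare_skills:
        threshold -= 0.25            # a very desirable or rare skill lowers the bar
    fulfilled = len(candidate_data & rtc_attributes)
    return fulfilled / len(rtc_attributes) >= threshold

candidate = {"RN license", "OASIS coding", "time management", "Spanish fluency"}
minimum = {"RN license"}
rtc = {"OASIS coding", "time management", "Spanish fluency", "team leadership"}

if meets_minimum(candidate, minimum) and meets_real_time_threshold(candidate, rtc):
    print("Send first offer for a real-time connection to the employer.")
```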
As an example of various minimum attributes,
As an example of various real-time connection attributes,
In
In some embodiments, the list of real-time connection attributes can be reduced or modified based on one of the real-time connection attributes being fulfilled, such as a very desirable or rare skill.
In
Prequalification for Real Time Connection
After the initial steps 1702 to 1708, the candidate can review job openings 1710. If a candidate is applying for a job opening, such as through a video interview, the system can determine whether the candidate and the position are a sufficient match to result in a real-time connection 1712. The system can determine if both parties are willing and able to connect 1714, as discussed in more detail elsewhere herein. If the employer accepts the offer for a real-time connection, then the system can offer a real-time connection to the candidate. If both parties are willing and able to connect, they can be connected in real time 1716. A real-time interview can be conducted via the real-time connection at step 1718. After the real-time connection ends, the candidate can rate the employer 1720. In some embodiments, the candidate can receive from the employer an offer 1722, feedback, or an expected salary range for a position for which the employer considers the candidate qualified.
If both parties are qualified and the system has identified a candidate that meets the real-time connection attributes of the employer for a particular job opening, the two parties can be offered a real-time connection 1816, as discussed in more detail elsewhere herein. If both parties accept the offer, such as after the threshold of real-time connection attributes has been fulfilled and both parties have been prequalified, the two parties will be connected in real time 1818. The real-time interview can take place 1820. During the interview, in some embodiments, the system can provide prompts to the employer 1822, such as questions to ask. After the real-time connection ends, the employer can rate the candidate 1824. The rating can be saved or stored in the candidate database 1826. Finally, in some embodiments, the system can provide an offer to the candidate 1828.
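As an illustrative, hedged sketch of the two-sided offer-and-acceptance sequence described above (the class and function names below are hypothetical, and the live connection itself is abstracted behind a callback):

```python
# Illustrative sketch (hypothetical names): offer the connection to the employer
# first, then to the candidate, and connect both parties only if both accept.

def offer_real_time_connection(employer, candidate, establish_connection):
    """Return a live session if both the employer and the candidate accept."""
    if not employer.accepts_offer(candidate):
        return None                      # employer declined the first offer
    if not candidate.accepts_offer(employer):
        return None                      # candidate declined the second offer
    return establish_connection(employer, candidate)   # live audio or audio/video

class Party:
    def __init__(self, name, willing=True):
        self.name, self.willing = name, willing
    def accepts_offer(self, other):
        return self.willing

session = offer_real_time_connection(
    Party("employer"), Party("candidate"),
    establish_connection=lambda e, c: f"live session: {e.name} <-> {c.name}")
print(session)
```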
This application is related to U.S. patent application Ser. No. 16/828,578, titled “Multi-Camera Kiosk,” filed on Mar. 24, 2020, which claims the benefit of U.S. Provisional Application No. 62/824,755, filed Mar. 27, 2019. This application is also related to U.S. patent application Ser. No. 16/366,746, titled “Automatic Camera Angle Switching to Create Combined Audiovisual File,” filed on Mar. 27, 2019, U.S. patent application Ser. No. 16/366,703, titled “Employment Candidate Empathy Scoring System,” filed on Mar. 27, 2019, and U.S. patent application Ser. No. 16/696,781, titled “Multi-Camera, Multi-Sensor Panel Data Extraction System and Method,” filed on Nov. 27, 2019. This application is also related to provisional patent application 63/004,329, titled “Audio and Video Recording and Streaming in a Three-Computer Booth,” filed on May 1, 2020. This application is also related to U.S. patent application Ser. No. 16/931,964, titled “Automatic Versioning of Video Presentations,” filed on Jul. 17, 2020. Each of these related applications is also hereby incorporated by reference in its entirety.
It should be noted that, as used in this specification and the appended claims, the singular forms “a,” “an,” and “the” include plural referents unless the content clearly dictates otherwise. It should also be noted that the term “or” is generally employed in its sense including “and/or” unless the content clearly dictates otherwise.
It should also be noted that, as used in this specification and the appended claims, the phrase “configured” describes a system, apparatus, or other structure that is constructed or configured to perform a particular task or adopt a particular configuration. The phrase “configured” can be used interchangeably with other similar phrases such as arranged and configured, constructed and arranged, constructed, manufactured and arranged, and the like.
All publications and patent applications mentioned in this specification are indicative of the level of ordinary skill in the art to which this invention pertains. All publications and patent applications are herein incorporated by reference to the same extent as if each individual publication or patent application was specifically and individually indicated to be incorporated by reference.
As used herein, the recitation of numerical ranges by endpoints shall include all numbers subsumed within that range (e.g., 2 to 8 includes 2.1, 2.8, 3, 5, 5.3, 7, etc.).
The headings used herein are provided for consistency with suggestions under 37 CFR 1.77 or otherwise to provide organizational cues. These headings shall not be viewed to limit or characterize the invention(s) set out in any claims that may issue from this disclosure. As an example, although the headings refer to a “Field,” such claims should not be limited by the language chosen under this heading to describe the so-called technical field. Further, a description of a technology in the “Background” is not an admission that the technology is prior art to any invention(s) in this disclosure. Neither is the “Summary” to be considered as a characterization of the invention(s) set forth in issued claims.
The embodiments described herein are not intended to be exhaustive or to limit the invention to the precise forms disclosed in the following detailed description. Rather, the embodiments are chosen and described so that others skilled in the art can appreciate and understand the principles and practices. As such, aspects have been described with reference to various specific and preferred embodiments and techniques. However, it should be understood that many variations and modifications may be made while remaining within the spirit and scope herein.
This application is a Continuation of U.S. patent application Ser. No. 17/025,902, filed Sep. 18, 2020, the content of which is herein incorporated by reference in its entirety.