The subject technology generally relates to recommending matches between persons, and, in particular, relates to systems and methods to recommend likely candidates for a successful relationship.
It is useful to provide assistance to a person who is seeking a successful relationship with a second person. Present assistance approaches largely consist of heuristics and rule-based matching. However, these approaches do not leverage machine learning from data.
According to various aspects of the subject technology, a method for recommending matches between persons is provided. The method comprises training a supervised machine learning engine from empirical data about existing relationships that have been evaluated as to the quality of the relationships. The method further comprises using the trained supervised machine learning engine to evaluate candidate relationships and calculate the quality of each candidate relationship. The method further comprises predicting the likelihood of a successful relationship by comparing the calculated quality of a candidate relationship against a threshold. The method further comprises notifying a user of a candidate match that is likely to become a successful relationship.
Additional features and advantages of the subject technology will be set forth in the description below, and in part will be apparent from the description, or may be learned by practice of the subject technology. The advantages of the subject technology will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory and are intended to provide further explanation of the invention as claimed.
Overall Process of Using Supervised Machine Learning to Recommend Matches
It is important that the training data include relationships of varying quality. For example, in the machine learning domain of classifying images as cat or non-cat, the training data includes images of cats labeled as cat and images of other animals, such as dogs or horses, labeled as non-cat. This range of training data allows the machine learning to recognize patterns that distinguish cats from other animals. In applying the technology of this description to relationships between persons, the training data includes successful relationships of higher quality and unsuccessful relationships of lower quality.
The training data 101 is inputted to train an untrained supervised machine learning engine 102. The details of this training process will be described below. After training, the untrained supervised machine learning engine 102 becomes a trained supervised machine learning engine 103. The trained supervised machine learning engine 103 can then accept inputs of attributes of a first person in a relationship 104 and attributes of a second person in a relationship 105 to output a value predicting the quality of the relationship 106. The input of the attributes of a first person in a relationship 104 may be initiated in response to a request from that first person. The input of the attributes of a first person in a relationship 104 may be performed on a periodic basis.
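The following sketch illustrates this flow in code, under the assumption that the attributes of each person have already been encoded as fixed-length numeric vectors; synthetic random data stands in for real attributes, and the particular model (a gradient-boosted regressor from scikit-learn) is illustrative only, since later sections describe neural network engines.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Training data 101: one row per existing relationship, formed by concatenating
# the attribute vectors of the two persons; synthetic random data stands in for
# real attributes here. y_train holds the evaluated quality of each relationship.
n_relationships, n_attributes = 200, 16
X_train = rng.random((n_relationships, 2 * n_attributes))
y_train = rng.random(n_relationships)          # quality labels in 0.0-1.0

engine = GradientBoostingRegressor()           # untrained supervised engine 102
engine.fit(X_train, y_train)                   # training yields the trained engine 103

# Inference: attributes of a first person 104 and a second person 105 are
# concatenated and scored to predict the quality of the relationship 106.
person_a = rng.random(n_attributes)
candidate = rng.random(n_attributes)
pair = np.concatenate([person_a, candidate]).reshape(1, -1)
predicted_quality = engine.predict(pair)[0]
print(predicted_quality)
```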
This example is simplified to show four persons as candidates for a relationship, shown as B01, B02, B65 and B76. In an illustrative embodiment the database of candidate persons for a relationship would contain many persons. The data item for each candidate person such as B01 includes attributes of that person. Similarly the data item for the person A seeking a relationship 104 includes attributes of that person. The machine learning engine 103 is utilized repeatedly, once per candidate pairing between person A 104 and one candidate person 105. Each utilization of the machine learning engine 103 outputs a numerical evaluation of a relationship between the two persons. This produces a list of evaluated candidates 106 including the numerical evaluation of each relationship. Each entry in the list of evaluated candidates 106 records the identification of the matched persons and the numerical evaluation predicting the quality of that relationship. A prediction of the likelihood of a successful relationship may be made by comparing the numerical evaluation of each relationship against a threshold to assess whether it might be successful or unsuccessful.
As a final step, the list of evaluated candidates 106 is sorted to recommend the match with the highest predicted relationship quality, resulting in a selected candidate 107. In this example the candidate relationship between persons A and B65 resulted in the highest predicted value for relationship quality among the four candidate relationships in the list of evaluated candidates 106. Other embodiments might recommend matches with multiple selected candidates 107 for a relationship, such as three candidates, five candidates, or a number selected by the system or by the person seeking a relationship 104.
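Continuing the sketch above, the per-candidate scoring, threshold comparison and sorting might look as follows; the threshold value, the number of selected candidates, and the candidate identifiers are assumed placeholders.

```python
# Score each candidate pairing by applying the trained engine 103 once per pair,
# compare against a threshold, then sort to select the highest-valued matches.
SUCCESS_THRESHOLD = 0.5                            # assumed threshold
TOP_K = 3                                          # assumed number of matches to return

candidates = {cid: rng.random(n_attributes) for cid in ("B01", "B02", "B65", "B76")}

evaluated = []                                     # list of evaluated candidates 106
for candidate_id, attrs in candidates.items():
    pair = np.concatenate([person_a, attrs]).reshape(1, -1)
    quality = float(engine.predict(pair)[0])
    evaluated.append({"pair": ("A", candidate_id),
                      "quality": quality,
                      "likely_successful": quality >= SUCCESS_THRESHOLD})

# Sort by predicted quality; the best entries become the selected candidates 107.
evaluated.sort(key=lambda e: e["quality"], reverse=True)
selected = [e for e in evaluated if e["likely_successful"]][:TOP_K]
print(selected)
```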
The trained supervised machine learning engine 103 directly outputs a value evaluating the quality of a relationship between the two persons whose attributes data were input to the supervised machine learning engine 103. The outputted values for evaluated candidates 106 are directly used to recommend successful candidates 107. Use of the outputted values for evaluated candidates 106 does not use any heuristics such as “birds of a feather flock together”. Use of the outputted values for evaluated candidates 106 does not use any rule-based methods crafted around specific attributes of persons. Since the supervised machine learning engine 103 is directly driven by the nature of the data, the nature of the data determines the specific application.
Training a Supervised Machine Learning Engine
The attributes of each person in an existing relationship 401 or 402 may comprise data mined from at least one social networking account of that person. The attributes of each person in an existing relationship 401 or 402 may comprise at least one video of that person. A video may comprise a short video of the person speaking about themselves, for example ten to fifteen seconds long. Research shows that the opinions we form in the first moments after meeting someone play a major role in determining the course of a relationship.
In an illustrative embodiment, survey questions about a person in a relationship between two people may be posed with a Likert scale with answers such as Strongly Agree, Agree, Neutral, Disagree or Strongly Disagree. Survey questions regarding personality may be taken from established questionnaires such as the Big Five test, which includes 50 questions using a 5-point Likert scale (e.g., “I have a vivid imagination” or “I have frequent mood swings”) to provide percentile scores for extraversion, conscientiousness, agreeableness, neuroticism (emotional stability) and openness to experience. Example questions regarding a dating interpersonal relationship between two people may be “I like to try new things” or “I enjoy going to concerts.” Example questions regarding an intimate interpersonal relationship between two people may be “I value saving money for retirement” or “I want to have children.” Example questions regarding a relationship between an employer and an employee may be “I prefer to work independently” or “I have time to train new employees.” Example questions regarding a relationship between an advisor and a client may be “When the market goes down, I tend to sell some of my riskier investments and put money in safer investments” or “I prefer clients with greater than $500,000 in assets.” Example questions regarding a relationship between a teacher and a student may be “I need tutoring on a daily basis” or “I prefer to teach students online rather than in person.” Example questions regarding a relationship between two persons playing a multiplayer game may be “When playing Minecraft I have a passion to build complicated things” or “I am cooperative with other players.”
The attributes of the relationship between person 1 and person 2 403 may comprise answers provided to a list of survey questions. These answers might be provided separately by each of the two persons or might be provided as one set of answers by both persons.
In an illustrative embodiment survey questions about a relationship between two people may be posed with a Likert scale with answers such as Strongly Agree, Agree, Neutral, Disagree or Strongly Disagree. An example question regarding a dating interpersonal relationship may be “My date and I both laughed at the same things.” Survey questions regarding an intimate interpersonal relationship may be taken from established questionnaires for quality of relationships. An example question regarding an intimate interpersonal relationship may be “I feel that I can confide in my partner about virtually anything.” An example question regarding a relationship between an employer and an employee may be “My supervisor generally listens to employee opinions.” An example question regarding a relationship between an advisor and a client may be “My client usually follows my financial advice.” An example question regarding a relationship between a teacher and a student may be “My tutor cares about me.” An example question regarding a relationship between two persons playing a multiplayer game may be “My gaming colleague has a sense of humor.”
The attributes of the relationship between person 1 and person 2 403 are used to compute a numerical evaluation of that relationship 404. The computed relationship value 404 is used to label the relationship represented by the data of attributes of person 1 401 and attributes of person 2 402 to use that data as training data. This training data is used to train the supervised machine learning engine 405.
In an illustrative embodiment the attributes of the relationship between person 1 and person 2 may comprise answers to the Relationship Assessment Scale (RAS) containing seven questions rated on a 5-point Likert scale ranging from 1 to 5. Total summed scores range from 7 to 35, with higher scores reflecting better relationship satisfaction. In an alternate embodiment the attributes of the relationship between person 1 and person 2 may comprise answers to the Couples Satisfaction Index (CSI) which has versions such as the CSI-4 with four questions or the CSI-32 with 32 questions. The CSI-32 includes one question with the answer ranging from 0-6 (“Please indicate the degree of happiness, all things considered, of your relationship”) with the answers to the other 31 questions ranging from 0-5, thus the total summed score can range from 0 to 161. Higher scores indicate higher levels of relationship satisfaction. CSI-32 scores falling below 104.5 suggest notable relationship dissatisfaction.
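As a sketch of how such questionnaire answers could become a numeric relationship label for training, the functions below total an RAS response (7 to 35) and a CSI-32 response (0 to 161) as described above; rescaling the totals to the 0.0 to 1.0 range is an assumption made here for use as a training label and is not prescribed by the description.

```python
def ras_label(answers):
    """answers: seven integers, each 1-5 (Relationship Assessment Scale)."""
    assert len(answers) == 7 and all(1 <= a <= 5 for a in answers)
    total = sum(answers)                    # summed score in the range 7-35
    return (total - 7) / 28.0               # rescaled to 0.0-1.0 (assumed mapping)

def csi32_label(happiness_item, other_items):
    """happiness_item: 0-6; other_items: 31 integers, each 0-5 (CSI-32)."""
    assert 0 <= happiness_item <= 6
    assert len(other_items) == 31 and all(0 <= a <= 5 for a in other_items)
    total = happiness_item + sum(other_items)     # summed score in the range 0-161
    distressed = total < 104.5                    # cutoff for notable dissatisfaction
    return total / 161.0, distressed              # rescaled label (assumed mapping)

print(ras_label([4, 5, 4, 3, 5, 4, 4]))           # a fairly satisfied couple
print(csi32_label(5, [4] * 31))                   # (label, distressed flag) for CSI-32
```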
The training process is controlled by a number of parameters known as hyperparameters. One hyperparameter is the learning rate, a factor applied to the adjustments made to a supervised machine learning engine during training. Too low a learning rate can result in too long a training time. Too high a learning rate can result in the training oscillating rather than converging on improved performance. Different techniques can be used to initialize weights in a supervised machine learning engine (e.g., Normal, Xavier, Kaiming), and the choice of initialization technique can be another hyperparameter. An approach to reducing overtraining to the training dataset is called regularization. One regularization technique is dropout, which randomly removes connections in a supervised machine learning engine; the dropout rate (e.g., 0.3, 0.8) can be another hyperparameter. Some hyperparameter values may be explored on a linear scale, while others, such as the learning rate, may be explored on a logarithmic scale. A grid may be utilized to plot the candidate sets of hyperparameter choices. These hyperparameter choices in a grid may be searched in a coarse-to-fine tuning process to narrow in on high-performance choices. The hyperparameter choices may instead be selected at random rather than from a grid. The hyperparameter tuning may be guided by bias, which is the error between performance on the training dataset and ideal performance or performance by a human expert. The hyperparameter tuning may be guided by variance, which is the difference in error rate between the training dataset 501 and the validation dataset 503. High bias may indicate a need to train for a longer time, while high variance may indicate a need for more training data or increased regularization.
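A minimal sketch of randomized hyperparameter selection along these lines is shown below, sampling the learning rate on a logarithmic scale, the dropout rate on a linear scale and the initialization scheme from a named set; `train_and_validate` is a hypothetical stand-in for the training and validation steps, and the trial count and value ranges are assumptions.

```python
import random

random.seed(0)

def train_and_validate(hp):
    """Hypothetical stand-in for training 502 and validation evaluation 504;
    returns (training error, validation error)."""
    train_err = random.uniform(0.05, 0.30)
    return train_err, train_err + random.uniform(0.0, 0.15)

def sample_hyperparameters():
    return {
        "learning_rate": 10 ** random.uniform(-5, -1),            # logarithmic scale
        "dropout_rate": random.uniform(0.3, 0.8),                  # linear scale
        "initialization": random.choice(["normal", "xavier", "kaiming"]),
    }

best = None
for _ in range(20):                              # number of random trials (assumed)
    hp = sample_hyperparameters()
    train_err, val_err = train_and_validate(hp)
    bias = train_err                  # error versus ideal performance (taken as zero)
    variance = val_err - train_err    # gap between training and validation error
    if best is None or val_err < best[0]:
        best = (val_err, hp)

print(best)
```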
Should the evaluation 504 be deemed unsatisfactory 505, the training hyperparameters are adjusted 506 and the training is repeated 502 using the training data 501. This process repeats until the hyperparameters are tuned so that the training is deemed satisfactory 505. Then the performance of the trained supervised machine learning engine is evaluated using the test data 507. This performance evaluation ensures that the machine learning engine has not been overtrained on the combination of the training data 501 and the validation data 503. Should the performance on the test data 507 exhibit any problems, it would be necessary to segment new divisions of the data about existing relationships into training data 501, validation data 503 and test data 507 and to repeat the tuning of the training hyperparameters.
Repeating the training process may be enhanced should additional data be obtained 509. The training process may be repeated should the performance evaluation 508 be unsatisfactory, or may be repeated to update the training and further improve performance when additional training data 509 becomes available. The additional data 509 may be additional existing relationships of persons who have never interacted with the system using the supervised machine learning engine. The additional data 509 may be relationships of persons who have previously interacted with the system using the supervised machine learning engine, been matched, and then gone on to form relationships that can be evaluated to become additional training data. In the field of machine learning, training data can sometimes be augmented to form a larger training dataset. Augmentation can be performed by generating synthetic data from empirical data. In the machine learning domain of classifying images as cat or non-cat, for example, an image of a cat may be flipped right to left to provide a new training image or may be brightened or darkened to provide a new training image. The label of cat or non-cat remains valid after these image transformations. In the application of technology in this description, the training data may include answers provided by a person to a list of survey questions. Such answers would generally not be suitable for forming synthetic data. For example, consider a data element representing a relationship labeled as higher quality containing a survey question such as “I want to have children” answered as Strongly Agree, Agree, Neutral, Disagree or Strongly Disagree. This relationship might have both persons in the relationship answering Strongly Agree. Should synthetic data be generated changing the answer from one person to Strongly Disagree, the data element label of higher quality would no longer be valid. These considerations need to be taken into account to restrict the usage of augmented training data in the application of technology in this description to relationships between persons.
Using a Supervised Machine Learning Engine to Recommend Matches
The event diagram in FIG. 6 illustrates the use of the trained supervised machine learning engine to recommend matches between a new user and existing users of the system.
The control process 607 now sequences through candidate matches between the new user 606 and all the existing users included in the query result. The control process 607 provides the attributes of the new user 606 obtained from the registration process to the trained supervised machine learning engine 608. The control process 607 then repeatedly provides the attributes of each candidate person in the query results to the machine learning engine 608. This is performed for the first candidate user 611 in the query results, the second candidate user 612 and continuing through the last candidate user 613 in the query results. Each step such as 611 returns a value from the machine learning engine 608 predicting the quality of a relationship between the two persons being considered.
The values predicting the quality of each candidate relationship are now examined by the control process 607. The highest valued candidate users may be selected. The highest valued candidate users might be judged to be similar by being within a threshold of each other. In this case, priority may be provided to candidates who have waited longer for their most recent match notification than other candidates. The control process 607 performs a lookup 614 from a match table 605. The match table 605 is indexed by the identification of candidate users and records statistics about matches provided to each candidate, such as time since last match provided. The results of the table lookup 614 are returned to the control process 607. These lookup results can be used to sort the highest valued candidates judged to be similar to provide priority to candidates who have waited the longest. Alternatively or additionally a high valued candidate judged to be similar to other high valued candidates may be selected at random by the control process 607. This random selection may serve to distribute match notifications across the population of candidate users so that all candidates can be provided appropriate matches. The means for random selection may be forming a list of the high valued candidates judged to be similar to other high valued candidates, counting the number of candidates in this list, generating a random integer within the range of 1 to the total count, then selecting the high valued candidate in the list corresponding to this generated integer.
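The tie-breaking just described might be sketched as follows, where the quality values, the match table contents and the similarity threshold are hypothetical placeholders.

```python
import random

random.seed(0)

# Hypothetical data: predicted quality per candidate and a match table 605 with
# the time (in days) since each candidate last received a match notification.
evaluated = [{"candidate": "B01", "quality": 0.91},
             {"candidate": "B02", "quality": 0.64},
             {"candidate": "B65", "quality": 0.93},
             {"candidate": "B76", "quality": 0.90}]
match_table = {"B01": 12, "B02": 3, "B65": 1, "B76": 30}

SIMILARITY_THRESHOLD = 0.05                 # assumed threshold for "similar" values

best_quality = max(e["quality"] for e in evaluated)
similar = [e for e in evaluated if best_quality - e["quality"] <= SIMILARITY_THRESHOLD]

# Priority to the similar candidate who has waited longest (lookup 614).
by_wait = sorted(similar, key=lambda e: match_table[e["candidate"]], reverse=True)
longest_waiting = by_wait[0]

# Alternatively, select one similar candidate at random: count the list, draw an
# integer from 1 to the count, and take the corresponding entry.
count = len(similar)
index = random.randint(1, count)
random_choice = similar[index - 1]

print(longest_waiting, random_choice)
```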
The control process 607 then sends a notification 615 to matched candidates. The new user 606 is notified that they are considered a match by the designated persons. Each matching candidate already in the system 601 or 603 is notified that a new user has joined the system, that the new user satisfies that candidate person's matching criteria, and that the attributes of the new user 606 and the matched candidate 601 or 603 have been evaluated by the machine learning engine 608 as likely to become a successful relationship. In this example existing user A 601 and existing user C 603 receive notifications while existing user B 602 does not receive a notification.
Region C 705 is denoted by the smaller darker shaded square in the center of the figure. This region represents the logical AND of regions 701 and 703. Use of region C 705 for matching selects candidate persons who satisfy the criteria provided by the person seeking a relationship and whose own criteria are in turn satisfied by the person seeking a relationship. Region D 706 represents a candidate person for a relationship who is close to matching the criteria provided by persons in the system.
As an illustrative example, consider a person age 35 seeking a relationship providing a match criterion 701 of a candidate being between ages 32-38. A candidate person age 39 does not match this criterion, but is close to matching as determined within a threshold, so could be considered a part of region D 706. Similarly, say a candidate person age 31 provides a match criterion 702 of a match being between ages 28-34. This candidate person does not match the criterion of ages 32-38 provided by the person age 35 seeking a relationship, and the person seeking a relationship age 35 does not match the criterion of ages 28-34 provided by the candidate person age 31. However both are close to matching as determined within a threshold, so the match between the person age 35 seeking a relationship and the candidate person age 31 could be considered a part of region D 706.
It should be noted that other combinations of matching criteria could be considered. One additional example is the logical OR of regions 701 and 702. This combination would constitute all candidate persons matching the criteria provided by the person seeking a relationship 701 plus all candidates whose matching criteria 702 are satisfied by the person seeking a relationship.
S=1−(1−R)^(1/M)  (1)
Equation (1) is used to plot the graph in FIG. 8, where R is the desired probability of finding a successful relationship, M is the number of matches a person is willing to try, and S is the required probability that any single recommended match results in a successful relationship.
As one example, say one desires a 90% probability of finding a successful match and is willing to try up to and including ten matches. One reads from 90% on 801 up to the line with the long dashes for ten matches to be tried 803, then across to the left to a 20% required performance of the machine learning engine on 802. As another example, a person willing to try up to four matches might expect a 60% probability of finding a successful relationship with a supervised machine learning engine accuracy of 20%. This analysis illustrates that a supervised machine learning engine performing at less than 50% accuracy can still result in a successful system, depending upon how many matches a person seeking a successful relationship is willing to try. This is a simplified model which can be confounded by error factors in practice. For example, in the field of online dating a person who falsifies their profile, such as by understating their weight, is termed a “catfish”, and catfish behavior could degrade the performance of a machine learning engine in predicting relationship values based on incorrect data.
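The two readings from the graph can be checked directly against equation (1); the short sketch below assumes each recommended match succeeds independently with probability S, so that R=1−(1−S)^M and hence S=1−(1−R)^(1/M).

```python
def required_accuracy(R, M):
    """Equation (1): per-match success probability S needed so that M independent
    tries give an overall success probability of R."""
    return 1 - (1 - R) ** (1.0 / M)

def overall_success(S, M):
    """Inverse reading: probability of at least one success in M tries."""
    return 1 - (1 - S) ** M

print(required_accuracy(0.90, 10))   # ~0.206: about 20% per-match accuracy suffices
print(overall_success(0.20, 4))      # ~0.59: close to the 60% figure quoted above
```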
Architecture of a System Using a Supervised Machine Learning Engine to Recommend Matches
The server 901 is coupled to a network 908. Users with mobile devices 909 may be coupled to the network 908 to connect to the server 901. Users with laptop devices 910 may be coupled to the network 908 to connect to the server 901. Users with computer devices 911 may be coupled to the network 908 to connect to the server 901.
An illustrative embodiment may comprise a hosting system, including web servers, application servers, database servers, virtual machines, Storage Area Networks (SANs), cloud storage, Local Area Networks (LANs), LAN switches, storage switches, network gateways, and firewalls. An alternate illustrative embodiment may comprise high performance processing, including Graphics Processing Units (GPUs), Custom Systems on a Chip (SoC), Artificial Intelligence Chips (AI Chips), Artificial Intelligence Accelerators (AI Accelerators), Neural Network Processors (NNP), and Tensor Processing Units (TPUs).
The system architecture illustrated in
An illustrative embodiment of use of a ResNet-50 may employ attributes of person 1 and person 2 in a relationship implemented as answers to survey questions and facial images. Each color facial image may be a jpg file of 64×64 pixels in three pixel channels of RGB colors. Two persons in a relationship, with one facial image for each person, may be considered 64×64 pixels in six channels. Each pixel, an unsigned integer in the range of 0 to 255, is scaled to become a number between 0.0 and 1.0. The survey questions for each person in a relationship may consist of 192 questions posed in the form of a statement with an answer response choosing Strongly Agree, Agree, Neutral, Disagree or Strongly Disagree. These answers may be represented by the values 0.1, 0.3, 0.5, 0.7 and 0.9 to place them into the same range as the image data. It is desirable to combine the survey answers and the image data into a size that is a power of two for efficiency reasons. This can be done by robbing the lowest pixel line in each image to use a 63×64 pixel image and using those positions for survey answer data. This frees up 192 data points per image (64 positions in each of three channels) to store the 192 answers per person.
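A sketch of this packing scheme using NumPy is shown below; the particular assignment of Likert answers to the values 0.1 through 0.9 (here Strongly Disagree through Strongly Agree) is an assumption, as the description does not specify which answer maps to which value.

```python
import numpy as np

# Assumed mapping of Likert answers to the values 0.1-0.9.
LIKERT = {"Strongly Disagree": 0.1, "Disagree": 0.3, "Neutral": 0.5,
          "Agree": 0.7, "Strongly Agree": 0.9}

def pack_person(image_rgb_uint8, answers):
    """image_rgb_uint8: (64, 64, 3) uint8 facial image; answers: 192 Likert strings."""
    img = image_rgb_uint8.astype(np.float32) / 255.0       # scale pixels to 0.0-1.0
    vals = np.array([LIKERT[a] for a in answers], dtype=np.float32)
    img[63, :, :] = vals.reshape(3, 64).T                   # bottom row: 3 x 64 answers
    return img

def pack_pair(image1, answers1, image2, answers2):
    """Stack two packed persons into the 64 x 64 x 6 input described above."""
    return np.concatenate([pack_person(image1, answers1),
                           pack_person(image2, answers2)], axis=2)

# Example with synthetic data.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
ans = rng.choice(list(LIKERT), size=192).tolist()
x = pack_pair(img, ans, img, ans)
print(x.shape)                                              # (64, 64, 6)
```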
Stage 1 1101 implements a convolution block, a batch normalization block, a rectified linear unit (ReLU) activation function and a maximum pooling block. Stage 2 1102 implements a convolution block and two identity blocks. Stage 3 1103 implements a convolution block and three identity blocks. Stage 4 1104 comprises a convolution block and five identity blocks. Stage 5 1105 comprises a convolution block and two identity blocks. Average pooling is implemented as illustrated in 1106. The output stage implements a flatten block and a fully connected block with a sigmoid activation function.
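As a minimal sketch assuming PyTorch and torchvision, the stock ResNet-50 already follows the stage structure listed above (one convolution block plus 2, 3, 5 and 2 identity blocks in stages 2 through 5), so only the input convolution (six channels for the two stacked facial images) and the output head (a single sigmoid unit for the relationship quality value) need to be replaced; the batch size shown is arbitrary.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50

model = resnet50(weights=None)          # stages 2-5 contain 3, 4, 6 and 3 blocks
model.conv1 = nn.Conv2d(6, 64, kernel_size=7, stride=2, padding=3, bias=False)
model.fc = nn.Sequential(nn.Linear(2048, 1), nn.Sigmoid())

x = torch.rand(8, 6, 64, 64)            # a batch of eight packed person pairs
quality = model(x)                      # shape (8, 1), values between 0.0 and 1.0
print(quality.shape)
```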
Multimodal data of survey answers by each of persons 1 and 2 1201 are input to neural network 1 1205, which has K layers. Multimodal data of facial images of persons 1 and 2 1202 are input to neural network 2 1206, which has L layers. Multimodal data of mined social networking data of persons 1 and 2 1203 are input to neural network 3 1207, which has M layers. Multimodal data of videos of persons 1 and 2 1204 are input to neural network 4 1208, which has N layers. The four neural networks 1205, 1206, 1207 and 1208 may have different numbers of layers and may be different types of neural networks. For example, the neural network 2 1206 processing image data may be a residual neural network. For example, the neural network 4 1208 processing video data may be a Long Short Term Memory (LSTM) neural network. The outputs of the four neural networks 1205, 1206, 1207 and 1208 are input to the final neural network 5 1209, where they are concatenated together. The neural network 5 1209 has at least one fully connected layer (here two fully connected layers are shown), with the activation function of the final fully connected layer being a sigmoid function to output a value between 0.0 and 1.0. This output value 1210 is the evaluation of the relationship between person 1 and person 2.
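A sketch of this multimodal arrangement, assuming PyTorch, is shown below. The four lower networks are stand-ins built from simple fully connected layers (whereas the description allows, for example, a residual network for images and an LSTM for video), and the hidden sizes, layer counts and input dimensions are illustrative assumptions.

```python
import torch
import torch.nn as nn

def mlp(in_dim, out_dim, layers):
    """Simple stack of fully connected layers standing in for networks 1205-1208."""
    blocks, dim = [], in_dim
    for _ in range(layers):
        blocks += [nn.Linear(dim, out_dim), nn.ReLU()]
        dim = out_dim
    return nn.Sequential(*blocks)

class MultimodalRelationshipNet(nn.Module):
    def __init__(self, survey_dim, image_dim, social_dim, video_dim, hidden=64):
        super().__init__()
        self.net1 = mlp(survey_dim, hidden, layers=3)   # survey answers 1201 -> 1205
        self.net2 = mlp(image_dim, hidden, layers=4)    # facial images 1202 -> 1206
        self.net3 = mlp(social_dim, hidden, layers=2)   # mined social data 1203 -> 1207
        self.net4 = mlp(video_dim, hidden, layers=3)    # videos 1204 -> 1208
        self.top = nn.Sequential(                       # neural network 5 1209
            nn.Linear(4 * hidden, hidden), nn.ReLU(),   # first fully connected layer
            nn.Linear(hidden, 1), nn.Sigmoid())         # final layer with sigmoid

    def forward(self, survey, image, social, video):
        feats = torch.cat([self.net1(survey), self.net2(image),
                           self.net3(social), self.net4(video)], dim=1)
        return self.top(feats)                          # output value 1210 in 0.0-1.0

model = MultimodalRelationshipNet(survey_dim=192, image_dim=4096,
                                  social_dim=32, video_dim=256)
out = model(torch.rand(4, 192), torch.rand(4, 4096),
            torch.rand(4, 32), torch.rand(4, 256))
print(out.shape)                                        # (4, 1)
```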
It should be noted that embodiments of
Once the neural networks 1301, 1302, 1303 and 1304 are individually trained, the entire neural network, comprising these four neural networks combined with the top level neural network 1305, is trained. Each epoch in this final training step sequences through all the relationships in the training data, forward propagating from the multimodal training data representing a relationship through to the sigmoid output value from the top level neural network 1305. Then the weights and biases in the neural networks are updated by back propagation. This means that each relationship in the training data must be represented by all multimodal data element types, so that all the lower level neural networks 1301, 1302, 1303 and 1304 can contribute to the forward propagation.
The HHNN training process may include two phases of training when including the top level neural network 1305. The first phase may only back propagate through the layers of the top level neural network 1305 for efficiency reasons, serving to initialize the weights in the top level neural network 1305. Once this first phase is completed then the training may be repeated back propagating through all the neural networks.
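A sketch of these two phases, assuming PyTorch and the MultimodalRelationshipNet sketch above, is shown below; the data loader, learning rates and epoch counts are hypothetical placeholders with random tensors standing in for real multimodal training data.

```python
import torch
import torch.nn as nn

# Hypothetical loader yielding (survey, image, social, video, label) batches.
def train_loader():
    for _ in range(10):
        yield (torch.rand(4, 192), torch.rand(4, 4096),
               torch.rand(4, 32), torch.rand(4, 256), torch.rand(4))

def run_epoch(model, optimizer, loss_fn=nn.BCELoss()):
    for survey, image, social, video, label in train_loader():
        optimizer.zero_grad()
        pred = model(survey, image, social, video).squeeze(1)
        loss_fn(pred, label).backward()
        optimizer.step()

# Phase 1: freeze the individually pre-trained lower networks and back propagate
# only through the top level network to initialize its weights.
for net in (model.net1, model.net2, model.net3, model.net4):
    for p in net.parameters():
        p.requires_grad = False
opt = torch.optim.Adam(model.top.parameters(), lr=1e-3)
for _ in range(3):
    run_epoch(model, opt)

# Phase 2: unfreeze everything and repeat, back propagating through all networks.
for p in model.parameters():
    p.requires_grad = True
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
for _ in range(3):
    run_epoch(model, opt)
```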
In the previous detailed description, numerous specific details are set forth to provide a full understanding of the subject technology. It will be apparent, however, to one ordinarily skilled in the art that the subject technology may be practiced without some of these specific details. In other instances, well-known structures and techniques have not been shown in detail so as not to obscure the subject technology.
These functions described above can be implemented in digital electronic circuitry, in computer software, firmware or hardware. The techniques can be implemented using one or more computer program products. Programmable processors and computers can be included in or packaged as mobile devices. The processes and logic flows can be performed by one or more programmable processors and by one or more pieces of programmable logic circuitry. General and special purpose computing devices and storage devices can be interconnected through communication networks.
Some implementations include electronic components, such as microprocessors, storage and memory that store computer program instructions in a machine-readable or computer-readable medium (alternatively referred to as computer-readable storage media, machine-readable media, or machine-readable storage media). Some examples of such computer-readable media include RAM, ROM, compact discs (CD), digital versatile discs (DVD), flash memory (e.g., SD cards), magnetic and/or solid state hard drives, ultra density optical discs, and any other optical or magnetic media. The computer-readable media can store a computer program that is executable by at least one processing unit and includes sets of instructions for performing various operations. Examples of computer programs or computer code include machine code, such as is produced by a compiler, and files including higher-level code that are executed by a computer, an electronic component, or a microprocessor using an interpreter.
While the above discussion primarily refers to microprocessor or multi-core processors that execute software, some implementations are performed by one or more integrated circuits, such as application specific integrated circuits (ASICs) or field programmable gate arrays (FPGAs). In some implementations, such integrated circuits execute instructions that are stored on the circuit itself.
The features can be implemented in a computer system that includes a back-end component, such as a data server, or that includes a middleware component, such as an application server or an Internet server, or that includes a front-end component, such as a client computer having a graphical user interface or an Internet browser, or a client mobile device, or any combination of them. The components of the system can be connected by any form or medium of digital data communication such as a communication network. Examples of communication networks include, e.g., a LAN, a WAN, and the computers and networks forming the Internet. The computer system can include clients and servers. A client and server are generally remote from each other and typically interact through a network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
Some implementations include hosting systems, including web servers, application servers, database servers, virtual machines, Storage Area Networks (SANs), cloud storage, Local Area Networks (LANs), LAN switches, storage switches, network gateways, and firewalls. Some implementations include high performance processing, including Graphics Processing Units (GPUs), Custom Systems on a Chip (SoC), Artificial Intelligence Chips (AI Chips), Artificial Intelligence Accelerators (AI Accelerators), Neural Network Processors (NNP), and Tensor Processing Units (TPUs).
As used in this specification and any claims of this application, the terms “computer”, “server”, “processor”, and “memory” all refer to electronic or other technological devices. These terms exclude people or groups of people. For the purposes of the specification, the terms “display” or “displaying” mean displaying on an electronic device. As used in this specification and any claims of this application, the terms “computer readable medium” and “computer readable media” are entirely restricted to tangible, physical objects that store information in a form that is readable by a computer. These terms exclude any wireless signals, wired download signals, and any other ephemeral signals.
It is understood that any specific order or hierarchy of steps in the processes disclosed is an illustration of exemplary approaches. Based upon design preferences, it is understood that the specific order or hierarchy of steps in the processes may be rearranged, or that all illustrated steps be performed. Some of the steps may be performed simultaneously. For example, in certain circumstances, multitasking and parallel processing may be advantageous. Moreover, the separation of various system components in the embodiments described above should not be understood as requiring such separation in all embodiments, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products.
The previous description is provided to enable any person skilled in the art to practice the various aspects described herein. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects. Thus, the claims are not intended to be limited to the aspects shown herein, but are to be accorded the full scope consistent with the language of the claims, wherein reference to an element in the singular is not intended to mean “one and only one” unless specifically so stated, but rather “one or more.” Unless specifically stated otherwise, the term “some” refers to one or more. Pronouns in the masculine (e.g., his) include the feminine and neuter gender (e.g., her and its) and vice versa. Headings and subheadings, if any, are used for convenience only and do not limit the subject disclosure.
A phrase such as an “aspect” does not imply that such aspect is essential to the subject technology or that such aspect applies to all configurations of the subject technology. A disclosure relating to an aspect may apply to all configurations, or one or more configurations. A phrase such as an aspect may refer to one or more aspects and vice versa. A phrase such as a “configuration” does not imply that such configuration is essential to the subject technology or that such configuration applies to all configurations of the subject technology. A disclosure relating to a configuration may apply to all configurations, or one or more configurations. A phrase such as a configuration may refer to one or more configurations and vice versa.
The word “exemplary” is used herein to mean “serving as an example or illustration.” Any aspect or design described herein as “exemplary” is not necessarily to be construed as preferred or advantageous over other aspects or designs.
All structural and functional equivalents to the elements of the various aspects described throughout this disclosure that are known or later come to be known to those of ordinary skill in the art are expressly incorporated herein by reference and are intended to be encompassed by the claims. Moreover, nothing disclosed herein is intended to be dedicated to the public regardless of whether such disclosure is explicitly recited in the claims.
This application is a continuation of application Ser. No. 17/192,845, filed on Mar. 4, 2021, which is incorporated by reference herein in its entirety.