This invention relates generally to human resource applications, and more specifically to enabling a candidate to apply anonymously for a job.
People are sometimes hesitant to apply for new jobs out of concern that their current employer may find out. For many people, the risk is worth taking only if there is a decent chance that they will get the job. To encourage people to apply, it would be helpful if applicants could apply anonymously and only have their identities revealed after the hiring organization has reviewed their resume and decided it is interested in speaking with them further. Therefore, there is a need for a system that facilitates anonymous job applications.
The present disclosure describes a system, method, and computer program for enabling a candidate to anonymously apply for a job position at an organization. The method is performed by a computer system (“the system”).
The system enables a candidate to submit candidate data to the system for the purpose of applying anonymously for a particular job at a particular organization. In response to receiving the candidate data, the system generates a non-anonymous profile for the candidate. For example, the system may generate a talent profile or an enhanced talent profile, as described in U.S. patent application Ser. No. 16/121,401, filed on Sep. 4, 2018, and titled “System, Method, and Computer Program for Automatically Predicting the Job Candidates Most Likely to be Hired and Successful in a Job,” the contents of which are incorporated by reference herein.
The system then generates an anonymous profile for the candidate from the non-anonymous profile. In certain embodiments, the anonymous profile is identical to the non-anonymous profile except that it excludes the candidate's name. In other embodiments, the anonymous profile may also exclude other data, such as data that may influence reviewer bias or data that is not relevant to the job, as described below.
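By way of a non-limiting illustration, the simplest embodiment could be sketched as follows, assuming the profile is represented as a flat dictionary and using a hypothetical "name" field:

```python
def make_anonymous_profile(profile: dict) -> dict:
    """Return a copy of the non-anonymous profile with the candidate's
    name removed (the simplest embodiment described above)."""
    anonymous = dict(profile)
    anonymous.pop("name", None)  # "name" is a hypothetical field for illustration
    return anonymous
```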
The system provides the anonymous profile to the organization to which the candidate is applying and enables the organization to reject the candidate or explore the candidate further. In response to the organization electing to reject the candidate, the system notifies the candidate of the decision without revealing the candidate's identity or non-anonymous profile to the organization.
In response to the organization electing to explore the candidate further, the system enables the organization to contact the candidate and/or view the non-anonymous profile at the current time or a later time.
In one embodiment, a method for enabling a candidate to anonymously apply for a job at an organization comprises the following steps:
On a website or in a mobile application that displays open job postings, the system provides a candidate with an option to apply anonymously for a posted job position (step 110). For example, there may be an “apply anonymously” button on an organization's website or in a job-search mobile application. In certain embodiments, selecting the button navigates the candidate to a user interface provided by the system, where the candidate is able to upload his/her candidate data to the system. Candidate data is the candidate's education and work experiences and may also include other information, such as a candidate's hobbies and interests. It is the type of data typically found in a resume. The candidate data may be in the form of a resume submitted by the candidate or professional profile data entered into a user interface of the system.
The system receives candidate data submitted by a candidate for an anonymous application for a particular job (step 120). The candidate data may be provided to the system directly by the candidate or through a third-party intermediary, such as a server for a career or job search website. The data received by the system also includes the identity of the organization to which the candidate is applying.
The system creates a non-anonymous candidate profile for the candidate based on the candidate data (step 130). The non-anonymous candidate profile may be a talent profile or an enhanced talent profile, as described in U.S. patent application Ser. No. 16/121,401 (incorporated herein above).
The system then creates an anonymous profile for the candidate by automatically "anonymizing" the candidate data in the non-anonymous profile (step 140). Methods for "anonymizing" candidate data are described in the sections below.
The system enables the organization to which the candidate is applying to view the anonymous profile (step 150). For example, the system may generate a user interface with a ranked list of prospective candidates for the job and enable a user at the organization to view the anonymous profile through this user interface. For example, when a user “clicks” on a candidate in the list, the user will see an anonymous profile if the candidate applied anonymously.
The system provides the organization with the option to reject the candidate or explore the candidate further (step 160). For example, there may be buttons akin to “reject” and “explore further” in the user interface in which the anonymous profile is displayed.
In response to the organization rejecting the candidate, the system notifies the candidate of the rejection (step 170). Neither the identity nor the non-anonymous profile of the candidate is revealed to the organization. In other words, the rejected candidate remains anonymous to the organization.
In response to the organization electing to explore the candidate further, the system enables the organization to contact the candidate and/or to see the non-anonymous profile at the present time or a future time (step 180). For example, in response to the organization electing to explore the candidate further, the system may provide the organization with a name, email address, and/or phone number for the candidate to enable the organization to schedule an interview or phone screen with the candidate. Alternatively, the system may notify the candidate that the organization wishes to speak with the candidate (or otherwise explore the candidate further) and obtain the candidate's permission to provide contact information or the candidate's non-anonymous profile to the organization. Whether the non-anonymous profile is displayed to the organization immediately upon the decision to explore the candidate further or at a later point depends on the configuration set by the organization or a system administrator of the system.
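The reject/explore branch of steps 170-180 could be sketched as follows; the Decision type, the profile fields, and the notify helper are hypothetical illustrations, not part of the described system:

```python
from enum import Enum, auto

class Decision(Enum):
    REJECT = auto()
    EXPLORE = auto()

def notify(candidate, message):
    print(f"notify {candidate['email']}: {message}")  # stand-in for a real notification channel

def handle_decision(decision, candidate, organization, reveal_immediately):
    """Step 170: reject without revealing anything; step 180: explore,
    with the reveal timing driven by configuration."""
    if decision is Decision.REJECT:
        # The candidate's identity and non-anonymous profile stay hidden.
        notify(candidate, f"{organization} has decided not to move forward.")
        return None
    if reveal_immediately:
        # Immediate variant: hand over contact details for scheduling.
        return {"name": candidate["name"], "email": candidate["email"]}
    # Deferred variant: obtain the candidate's permission first.
    notify(candidate, f"{organization} would like to speak with you. Share your details?")
    return None
```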
In some cases, the system outputs the anonymous profile after step 220. However, in other cases, the system performs steps 230 and/or 240 in creating the anonymous profile. Steps 230 and 240 involve identifying candidate data in the non-anonymous profile that may influence bias by a reviewer at an organization, and then removing or substituting such data in the anonymous profile.
In step 230, the system removes candidate data that may influence bias with respect to a defined class of bias. For example, the system may remove data that is indicative of the candidate's gender, race, and/or age from the anonymous profile. Methods for performing step 230 are described in more detail below.
In step 240, the system removes from the anonymous profile any data that is not relevant to the job for which the candidate is applying. Methods for performing step 240 are described in more detail below.
As indicated above, the system may identify candidate data that may influence bias with respect to a defined class of bias (e.g., gender, race, or age). An example method for identifying such data using a set of training candidates is described below.
The system obtains candidate data for a set of training candidates, preferably from across a plurality of organizations and a variety of professions (step 320). The training candidate data includes their non-anonymous profiles (e.g., a resume, talent profile, or enhanced talent profile) and data that enables each of the candidates to be classified with a class value (e.g., name or school graduation date). The system may obtain the training candidate data from a talent repository managed by the system (or an organization/company) or from public data sources that store job-candidate/employment profiles. The system classifies each of the training candidates with a class value (e.g., male or female) (step 330).
The system obtains key-value pairs from the non-anonymous profiles of the training candidates (step 340), and for each of a plurality of key-value pairs and combinations of key-value pairs, the system determines if the key-value pair or combination of key-value pairs is indicative of a particular class value (step 350). In response to a key-value pair or a combination of key-value pairs being indicative of a particular class value, the system concludes that the key-value pair or combination of key-value pairs may influence bias with respect to the defined class (step 360). In creating an anonymous profile for a candidate, the system removes or substitutes (with class-neutral data) the key-value pairs and combinations of key-value pairs identified as influencing bias with respect to the defined class (step 370). For example, the system may remove key-value pairs from the anonymous profile that are indicative of race, gender, and/or age. "Neutral" data that serves as a substitute for key-value pairs may be an abstracted form of the key-value pair. For example, a particular US college may be replaced with an abstracted value of the college, such as "4-year US college."
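A minimal sketch of the removal/substitution of step 370, assuming a set of already-flagged pairs and a hypothetical map of class-neutral substitutes:

```python
# Hypothetical map from flagged key-value pairs to class-neutral substitutes.
NEUTRAL_SUBSTITUTES = {
    ("university", "Wellesley College"): ("university", "4-year US college"),
}

def neutralize(pairs, flagged, substitutes=NEUTRAL_SUBSTITUTES):
    """Remove flagged key-value pairs, or replace them with class-neutral
    abstractions where one is configured (step 370)."""
    out = []
    for pair in pairs:
        if pair not in flagged:
            out.append(pair)               # pair carries no class signal; keep it
        elif pair in substitutes:
            out.append(substitutes[pair])  # substitute an abstracted, neutral form
        # otherwise the flagged pair is dropped entirely
    return out
```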
3.1 Determining whether a Key-Value Pair is Indicative of a Particular Class Value
The methods described in the sections below are example ways of determining whether a key-value pair or a combination of key-value pairs is indicative of a particular class value.
3.2 Example Method for Identifying Key-Value Pairs that May Influence Gender Bias
For each of a plurality of key-value pairs and combinations of key-value pairs in the non-anonymous profiles of the training candidates, the system maintains a count of the number of times the key-value pair (or the combination) appears for male candidates and the number of times the key-value pair (or the combination) appears for female candidates (step 630), and determines whether the key-value pair or the combination (whichever is applicable) is associated with a particular gender for more than a threshold percentage (e.g., 80%) of candidates in the training set (step 640). If a key-value pair or a combination of key-value pairs is associated with a particular gender for more than the threshold percentage of candidates, the system concludes the key-value pair or the combination of key-value pairs (whichever is applicable) is indicative of the class value and, thus, may influence gender bias (step 650).
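A minimal counting sketch of this method, assuming each training profile is a flat {key: value} dictionary and class_labels holds each candidate's class value (here, a gender):

```python
from collections import Counter, defaultdict
from itertools import combinations

def find_class_indicative_pairs(profiles, class_labels, threshold=0.80):
    """Flag key-value pairs, and pairwise combinations of them, that co-occur
    with a single class value for more than `threshold` of the training
    candidates in which they appear (steps 630-650)."""
    per_class = defaultdict(Counter)   # feature -> counts per class value
    totals = Counter()                 # feature -> total appearances
    for profile, label in zip(profiles, class_labels):
        pairs = sorted(profile.items())
        features = [(p,) for p in pairs] + list(combinations(pairs, 2))
        for feature in features:
            per_class[feature][label] += 1
            totals[feature] += 1
    flagged = set()
    for feature, total in totals.items():
        top_count = per_class[feature].most_common(1)[0][1]
        if top_count / total > threshold:   # step 640: tied to one class value
            flagged.add(feature)            # step 650: may influence bias
    return flagged
```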
3.3 Example Method for Identifying Key-Value Pairs that May Influence Race Bias
For each of the training candidates, the system creates an input vector for the training candidate with a plurality of key-value pairs and combinations of key-value pairs obtained from the training candidate's non-anonymous profile (step 730). To train a neural network, the system inputs the vectors for each of the training candidates into the neural network, along with the candidate's race value (step 740). The result is a neural network that is trained to predict the probability that a key-value pair or a combination of key-value pairs is associated with a particular race value (step 750). For a key-value pair or a combination of key-value pairs having more than a threshold probability (e.g., 90%) of being associated with a particular race value, the system concludes that the key-value pair or the combination of key-value pairs (whichever is applicable) is indicative of a particular race value and, thus, may influence racial bias (step 760).
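A sketch of this method using scikit-learn, assuming all profile values are strings and omitting combinations of key-value pairs for brevity:

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.neural_network import MLPClassifier

def find_race_indicative_pairs(profiles, race_labels, threshold=0.90):
    """Train a neural network on one-hot input vectors built from each
    training candidate's key-value pairs (steps 730-740), then flag any
    single pair whose presence alone is predicted to be associated with
    some race value with probability above `threshold` (steps 750-760)."""
    vec = DictVectorizer()
    X = vec.fit_transform(profiles)            # step 730: one input vector per candidate
    net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
    net.fit(X, race_labels)                    # step 740: train on the race values
    flagged = set()
    for name in vec.get_feature_names_out():   # feature names look like "key=value"
        key, value = name.split("=", 1)
        probe = vec.transform([{key: value}])  # a vector containing only this pair
        if net.predict_proba(probe).max() > threshold:
            flagged.add((key, value))          # step 760: may influence racial bias
    return flagged
```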
3.4 Example Method for Identifying Key-Value Pairs that May Influence Age Bias
For each of a plurality of key-value pairs and combinations of key-value pairs in the non-anonymous profiles of the training candidates, the system maintains a count of the number of times the key-value pair (or the combination) appears for each of the age ranges (step 820), and determines whether the key-value pair or the combination (whichever is applicable) is associated with a particular age range for more than a threshold percentage (e.g., 80%) of candidates in the training set (step 830). If a key-value pair or a combination of key-value pairs is associated with a particular age range for more than the threshold percentage of candidates, the system concludes the key-value pair or the combination of key-value pairs (whichever is applicable) is indicative of age and, thus, may influence age bias (step 840).
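The counting sketch shown above for the gender example applies here unchanged; only the class labels differ. A toy usage with hypothetical age-range labels:

```python
profiles = [{"skill": "COBOL"}, {"skill": "COBOL"}, {"skill": "Rust"}, {"skill": "Rust"}]
age_ranges = ["50-60", "50-60", "20-30", "30-40"]  # one (hypothetical) label per profile
age_flagged = find_class_indicative_pairs(profiles, age_ranges, threshold=0.80)
# -> {(("skill", "COBOL"),)} : "COBOL" co-occurs only with the 50-60 range
```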
In certain embodiments, creating an anonymous profile for a candidate also includes removing any data that is not relevant to the job role for which the candidate is applying. Whereas the methods described above relate to identifying data that may influence bias, the methods described below relate to identifying which candidate data is relevant to a job role and at what level of abstraction.
For each of the relevant keys, the system identifies at what level the key matters most for the job role (step 920). In other words, for each of the relevant keys, the system identifies whether the actual value for the key matters most or whether an abstracted value for the key matters most. For example, for the "university" key, does the particular university attended by a candidate matter, or does it matter only whether the candidate went to a top 20% school (an abstracted value)?
In creating the anonymous profile for the candidate, the system excludes any key-value pairs that are in the non-anonymous profile but are irrelevant for the job role (step 930). For each of the relevant keys in which an abstracted value matters most for the job role, the system determines whether the candidate's actual value for the key is encompassed by the relevant abstracted value (step 940). For example, if what matters most for the "university" key is whether a candidate went to a top 20% school, then the system determines whether the university attended by the candidate is a top 20% school. The system may use published or inputted university rankings to make this determination.
If the candidate's actual value in his/her non-anonymous profile is encompassed by the relevant abstracted value, the system replaces the key-value pair in the candidate's non-anonymous profile with the relevant abstracted value in the anonymous profile (step 950). For example, the system may replace “Massachusetts Institute of Technology” in a non-anonymous profile with “top 20% of engineering schools” in the anonymous profile. Otherwise, the system either excludes the key-value pair from the anonymous profile or replaces the key-value pair with an abstracted value relevant to the candidate in the anonymous profile, depending on how the system is configured (also step 950). For example, if the candidate attended a 4-year college that is not ranked in the top 20% of schools (according to the ranking(s) used by the system), then the system may not specify college information for the candidate or the system may replace the candidate's specific college with something like “US university.” Key-value pairs that are not abstracted or removed in accordance with step 950 (or other methods described herein) remain in the anonymous profile.
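Steps 930-950 could be sketched as follows; the ranking set, key names, and configuration values are hypothetical illustrations:

```python
TOP_20_PERCENT_ENGINEERING_SCHOOLS = {"Massachusetts Institute of Technology"}  # illustrative ranking data

def build_anonymous_pairs(profile, relevant_keys, abstraction_for, encompasses,
                          fallback_for=None):
    """Sketch of steps 930-950. `relevant_keys` maps each job-relevant key to
    "actual" or "abstracted" (the level that matters most); `abstraction_for`
    gives the relevant abstracted value for a key; `encompasses` tests whether
    the candidate's actual value falls under that abstraction."""
    anonymous = {}
    for key, value in profile.items():
        if key not in relevant_keys:
            continue                               # step 930: drop irrelevant pairs
        if relevant_keys[key] == "actual":
            anonymous[key] = value                 # base-level value matters most
        elif encompasses(key, value):              # step 940
            anonymous[key] = abstraction_for[key]  # step 950: substitute the abstraction
        elif fallback_for and key in fallback_for:
            anonymous[key] = fallback_for[key]     # e.g., "US university"
        # otherwise the pair is simply excluded, per configuration
    return anonymous

anon = build_anonymous_pairs(
    {"university": "Massachusetts Institute of Technology", "hobbies": "chess"},
    relevant_keys={"university": "abstracted"},
    abstraction_for={"university": "top 20% of engineering schools"},
    encompasses=lambda k, v: v in TOP_20_PERCENT_ENGINEERING_SCHOOLS,
)
# anon == {"university": "top 20% of engineering schools"}
```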
If a candidate applies for multiple job positions at an organization, then the system may create an anonymous profile for the candidate for each of the job roles, as what is relevant for one job role may not be relevant for another job role.
4.1 Identifying Relevant Keys and Abstraction Levels for Key-Value Pairs
Turning to a first example method, the system uses a neural network to determine which keys are relevant for a job role and at what level the keys matter most.
For each of a plurality of keys in the candidate data, the system computes, using the neural network, how well actual values for the keys and abstractions of values for the keys predict the job role (step 1040). The “abstracted values” may be preconfigured by a system administrator or may be determined automatically by clustering values for keys into groups, where a group encompasses multiple values. The system may test multiple abstraction levels for a key.
If an actual value for a key or an abstraction of a key's value is a good predictor for the job role, the system concludes that the key is relevant (step 1050). The level (i.e., the base level or an abstracted level) that is the best predictor of the job role is the level that matters most for the key. For example, for the "undergraduate college" key, if "top 20% school" is a better predictor than a particular university value (e.g., "Stanford"), then "top 20% school," which is an abstracted value, is the level that matters most for that key. Conversely, if for the "skills" key, the value "java" is a better predictor for a job role than an abstracted value that encompasses a wider range of skills, then the base level (i.e., the actual value) for the key is the level that matters most for that key. If neither the actual values for a key nor abstractions of its values are good predictors, the system concludes that the key is not relevant for the job role (step 1060).
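A simplified sketch of this method, in which per-key logistic-regression probes stand in for the single neural network of steps 1040-1060 and a fixed accuracy baseline stands in for "good predictor" (both assumptions made for brevity):

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def best_level_for_key(profiles, job_roles, key, abstract, baseline=0.5):
    """Compare how well the actual values of `key` and their abstractions
    predict the job role (steps 1040-1060). `abstract` maps an actual value
    (e.g., "Stanford") to an abstracted value (e.g., "top 20% school")."""
    def mean_accuracy(values):
        X = DictVectorizer().fit_transform([{key: v} for v in values])
        return cross_val_score(LogisticRegression(max_iter=1000), X, job_roles).mean()

    # "missing" is a placeholder for profiles lacking the key.
    actual_score = mean_accuracy([p.get(key, "missing") for p in profiles])
    abstracted_score = mean_accuracy([abstract(p.get(key, "missing")) for p in profiles])
    if max(actual_score, abstracted_score) <= baseline:
        return None                 # step 1060: key is not relevant for the job role
    # Step 1050: the better-predicting level is the one that matters most.
    return "actual" if actual_score >= abstracted_score else "abstracted"
```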
In a second example method, the system obtains candidate data for a set of ideal candidates for the job role. For each of a plurality of keys in the ideal candidate data, the system determines if any actual values for the key and any abstractions of values for the key apply to at least a threshold percentage (e.g., 80%) of ideal candidates (step 1130). If no actual value or abstraction of values is applicable to at least the threshold percentage of ideal candidates, the system concludes that the key is not relevant for the job role (step 1140). If one or more actual values or abstracted values is applicable to at least the threshold percentage of ideal candidates, the system concludes that the key is relevant for the job role (step 1150). The lowest level applicable to the threshold percentage of ideal candidates is the level that matters most for the job role. For example, if both an abstracted value for a key and a particular actual value for a key apply to at least the threshold percentage of ideal candidates, the base level (actual value) for the key is the level that matters most for that key.
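A counting sketch of this second method, assuming the ideal candidates' profiles are flat dictionaries and `abstract` maps an actual value to its abstraction:

```python
from collections import Counter

def level_for_key_from_ideals(ideal_profiles, key, abstract, threshold=0.80):
    """A key is relevant if some actual value, or some abstraction of its
    values, applies to at least `threshold` of the ideal candidates
    (steps 1130-1150); the lowest qualifying level matters most."""
    n = len(ideal_profiles)
    actual = Counter(p[key] for p in ideal_profiles if key in p)
    if actual and actual.most_common(1)[0][1] / n >= threshold:
        return "actual"             # the base level applies broadly enough
    abstracted = Counter(abstract(p[key]) for p in ideal_profiles if key in p)
    if abstracted and abstracted.most_common(1)[0][1] / n >= threshold:
        return "abstracted"
    return None                     # step 1140: key is not relevant for the job role
```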
As stated above, the method may be performed by a computer system that identifies candidates for job positions. In certain embodiments, for each open job position at an organization, the system displays a ranked list of candidates in a user interface for the organization, where the rankings are based on a match score. An example of the matching process is described in U.S. patent application Ser. No. 16/121,401 (incorporated herein above).
In one embodiment, the system calculates the match score and ranks candidates based on the data in the non-anonymous profile (e.g., an enriched talent profile), but, when a user at an organization clicks on a candidate in the ranked list, the system initially displays the anonymous profile for the candidate. The non-anonymous profile used for the ranking may be displayed at a later point if the organization decides to explore the candidate further.
The methods described herein are embodied in software and performed by a computer system (comprising one or more computing devices) executing the software. A person skilled in the art would understand that a computer system has one or more memory units, disks, or other physical, computer-readable storage media for storing software instructions, as well as one or more processors for executing the software instructions.
As stated above, an example of a computer system for performing the methods described herein is set forth in U.S. patent application Ser. No. 16/121,401 (incorporated herein above). In addition to the software modules described therein, the system may have a software module for performing the methods described herein.
As will be understood by those familiar with the art, the invention may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. Accordingly, the above disclosure is intended to be illustrative, but not limiting, of the scope of the invention, which is set forth in the following claims.