Generally, the disclosure relates to Artificial Intelligence (AI). More specifically, the disclosure relates to a method and system for evaluating performance of employees (such as, developers) using AI based technologies.
Typically, software developers may be evaluated for performance based on various factors, such as modules, applications, or features of products developed by them, work experience in a current organization, and overall experience as a Subject Matter Expert (SME) in a particular technology. Traditionally, in a software development process, multiple software developers may work on development of features of the product(s) assigned to them as per their skillsets. This makes it difficult for reviewers to manually provide performance-evaluation feedback to the software developers for issues faced in the developed modules and applications while simultaneously keeping track of the individual performance of each software developer. However, such performance evaluation may be crucial in any organization for reasons such as, but not limited to, providing an appraisal or a rating that clearly distinguishes an individual software developer amongst others, and providing training to the software developer to enhance the skillset, based on the performance evaluation.
Accordingly, there is a need for a method and system for evaluating the performance of software developers.
In accordance with one embodiment, a method of evaluating performance of developers using AI is disclosed. The method may include receiving each of a plurality of performance parameters associated with a set of developers. The method may include creating one or more feature vectors corresponding to each of the plurality of performance parameters, based on one or more features determined for each of the plurality of performance parameters. It should be noted that the one or more feature vectors are created based on a first pre-trained machine learning model. The method may include assessing the one or more feature vectors, based on the first pre-trained machine learning model. The method may include classifying the set of developers into one of a set of performance categories based on the assessing of the one or more feature vectors. The method may include evaluating the performance of at least one of the set of developers, based on an associated category in the set of performance categories, in response to the classifying.
In another embodiment, a system for evaluating performance of developers using Artificial Intelligence (AI) is disclosed. The system includes a processor and a memory communicatively coupled to the processor. The memory may store processor-executable instructions, which, on execution, may cause the processor to receive each of a plurality of performance parameters associated with a set of developers. The processor-executable instructions, on execution, may further cause the processor to create one or more feature vectors corresponding to each of the plurality of performance parameters, based on one or more features determined for each of the plurality of performance parameters. It should be noted that the one or more feature vectors are created based on a first pre-trained machine learning model. The processor-executable instructions, on execution, may further cause the processor to assess the one or more feature vectors, based on the first pre-trained machine learning model. The processor-executable instructions, on execution, may further cause the processor to classify the set of developers into one of a set of performance categories based on the assessing of the one or more feature vectors. The processor-executable instructions, on execution, may further cause the processor to evaluate the performance of at least one of the set of developers, based on an associated category in the set of performance categories, in response to the classifying.
In yet another embodiment, a non-transitory computer-readable medium storing computer-executable instructions for evaluating performance of developers using Artificial Intelligence (AI) is disclosed. The stored instructions, when executed by a processor, may cause the processor to perform operations including receiving each of a plurality of performance parameters associated with a set of developers. The operations may further include creating one or more feature vectors corresponding to each of the plurality of performance parameters, based on one or more features determined for each of the plurality of performance parameters. It should be noted that the one or more feature vectors are created based on a first pre-trained machine learning model. The operations may further include assessing the one or more feature vectors, based on the first pre-trained machine learning model. The operations may further include classifying the set of developers into one of a set of performance categories based on the assessing of the one or more feature vectors. The operations may further include evaluating the performance of at least one of the set of developers, based on an associated category in the set of performance categories, in response to the classifying.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
The present disclosure can be best understood by reference to the following description taken in conjunction with the accompanying drawing figures, in which like parts may be referred to by like numerals.
The following description is presented to enable a person of ordinary skill in the art to make and use the disclosure and is provided in the context of particular applications and their requirements. Various modifications to the embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments and applications without departing from the spirit and scope of the disclosure. Moreover, in the following description, numerous details are set forth for the purpose of explanation. However, one of ordinary skill in the art will realize that the disclosure might be practiced without the use of these specific details. In other instances, well-known structures and devices are shown in block diagram form in order not to obscure the description of the disclosure with unnecessary detail. Thus, the disclosure is not intended to be limited to the embodiments shown, but is to be accorded the widest scope consistent with the principles and features disclosed herein.
While the disclosure is described in terms of particular examples and illustrative figures, those of ordinary skill in the art will recognize that the disclosure is not limited to the examples or figures described. Those skilled in the art will recognize that the operations of the various embodiments may be implemented using hardware, software, firmware, or combinations thereof, as appropriate. For example, some processes can be carried out using processors or other digital circuitry under the control of software, firmware, or hard-wired logic. (The term “logic” herein refers to fixed hardware, programmable logic and/or an appropriate combination thereof, as would be recognized by one skilled in the art to carry out the recited functions). Software and firmware can be stored on computer-readable storage media. Some other processes can be implemented using analog circuitry, as is well known to one of ordinary skill in the art. Additionally, memory or other storage, as well as communication components, may be employed in embodiments of the invention.
The present disclosure tackles limitations of existing systems to facilitate performance evaluation of software developers (hereinafter referred to as developers) using an AI based evaluation system. The present disclosure evaluates performance of a set of developers based on a plurality of performance parameters associated with each of the set of developers. The plurality of performance parameters may include, but is not limited to, at least one of efficiency of a developed product associated with a module developed for a product, complexity of the developed product, types of support received from peers, feedback or rating received from managers, quality of the module developed for the product, and technical skills of each of the set of developers. Further, the present disclosure may facilitate computation of ranks of each of the set of developers.
In accordance with an embodiment, the present disclosure may train the AI based evaluation system by exposing it to a new environment during an initial training process. In addition, the AI based system may identify a plurality of bugs associated with a module of a product developed by each of the set of developers, based on which a feedback is generated. Further, based on the generated feedback, performance of each of the set of developers may be re-evaluated to re-rank each of the set of developers. This has been explained in detail in conjunction with
Referring now to
The AI based evaluation system 102 may be communicatively coupled to the server 114, and the external devices 118, via the network 120. Further, the AI based evaluation system 102 may be communicatively coupled to the database 116 of the server 114, via the network 120. A user or an administrator (not shown in the
The AI based evaluation system 102 may include suitable logic, circuitry, interfaces, and/or code that may be configured to evaluate performance of developers of an organization, based on a plurality of performance parameters associated with the developers. Such developers may be from a same team or a different team in the organization and working at different levels of a hierarchy in the organization. The plurality of performance parameters may include, but is not limited to, at least one of efficiency of a developed product associated with a module developed for a product, complexity of the developed product, types of support received from peers, feedback or rating received from managers, quality of the module developed for the product, and technical skills of each of the set of developers.
Examples of the AI based evaluation system 102 may include, but are not limited to, a server, a desktop, a laptop, a notebook, a tablet, a smartphone, a mobile phone, an application server, or the like. By way of an example, the AI based evaluation system 102 may be implemented as a plurality of distributed cloud-based resources by use of several technologies that are well known to those skilled in the art. Other examples of implementation of the AI based evaluation system 102 may include, but are not limited to, a web/cloud server and a media server.
The I/O device 108 may be configured to provide inputs to the AI based evaluation system 102 and render output on user equipment. By way of an example, the user may provide inputs, i.e., the plurality of performance parameters, via the I/O device 108. In addition, the I/O device 108 may be configured to provide ranks to the developers based on performance evaluation of the developers by the AI based evaluation system 102. Further, the I/O device 108 may be configured to display results (i.e., the set of performance categories associated with each of the set of developers) based on the evaluation performed by the AI based evaluation system 102, to the user. By way of another example, the user interface 110 may be configured to provide inputs from users to the AI based evaluation system 102. Thus, for example, in some embodiments, the AI based evaluation system 102 may ingest the one or more performance parameters via the user interface 110. Further, for example, in some embodiments, the AI based evaluation system 102 may render intermediate results (e.g., one or more feature vectors created for each of the set of developers, the set of performance categories, and one or more features determined for each of the plurality of performance parameters) or final results (e.g., classification of each of the set of developers in one of the set of performance categories, and results of evaluation of each of the set of developers) to the user(s) via the user interface 110.
The memory 104 may store instructions that, when executed by the processor 106, may cause the processor 106 to evaluate performance of each of the set of developers. The processor 106 may evaluate the performance of each of the set of developers based on a plurality of performance parameters associated with the set of developers, in accordance with some embodiments. As will be described in greater detail in conjunction with
The memory 104 may also store various data (e.g., the plurality of performance categories, the one or more feature vectors, ranks for each developer from the set of developers, a predefined evaluation criterion, etc.) that may be captured, processed, and/or required by the AI based evaluation system 102. The memory 104 may be a non-volatile memory (e.g., flash memory, Read Only Memory (ROM), Programmable ROM (PROM), Erasable PROM (EPROM), Electrically EPROM (EEPROM) memory, etc.) or a volatile memory (e.g., Dynamic Random-Access Memory (DRAM), Static Random-Access Memory (SRAM), etc.).
In accordance with an embodiment, the AI based evaluation system 102 may be configured to deploy the ML model 112 and use output of the ML model 112 to generate real-time or near real-time inferences, take decisions, or output prediction results. The ML model 112 may be deployed on the AI based evaluation system 102 once the ML model 112 is trained on the AI based evaluation system 102 for a classification task associated with evaluation of performance of developers.
In accordance with one embodiment, the machine learning model 112 may correspond to a first pre-trained machine learning model. In accordance with an embodiment, the first pre-trained machine learning model may correspond to an attention based deep neural network model that classifies a developer into a particular category of evaluation. Examples of the attention based deep neural network model include, but are not limited to, Long Short-Term Memory (LSTM) and Long Short-Term Memory-Gated Recurrent Unit (LSTM-GRU) neural networks. The machine learning model 112 may be configured to determine one or more features for each of the plurality of performance parameters. The machine learning model 112 may be configured to determine the one or more features in order to assist the AI based evaluation system 102 in creating the one or more feature vectors. In accordance with another embodiment, the machine learning model 112 may correspond to a second machine learning model. The machine learning model 112 may be trained by assigning weights to the one or more features associated with each of the plurality of performance parameters based on a predefined evaluation criterion. The predefined evaluation criterion may include one or more of a technical skill in demand and an efficiency of a developed product with respect to bugs identified in the developed product.
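By way of a non-limiting illustration only, a minimal sketch of such an attention based recurrent classifier is provided below, assuming a TensorFlow/Keras environment; the layer sizes, the number of performance parameters, and the feature dimension are illustrative assumptions rather than values specified by the disclosure.

```python
# Illustrative sketch of an attention based LSTM/GRU classifier.
# Assumes TensorFlow/Keras; layer sizes and input dimensions are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_PARAMETERS = 6   # one time step per performance parameter (assumed)
NUM_CATEGORIES = 4   # excellent, good, average, bad performer

def build_first_model(timesteps: int = NUM_PARAMETERS,
                      feature_dim: int = 8) -> tf.keras.Model:
    """Classifies a developer's per-parameter feature vectors into a category."""
    inputs = layers.Input(shape=(timesteps, feature_dim))
    x = layers.Bidirectional(layers.LSTM(32, return_sequences=True))(inputs)
    x = layers.GRU(16, return_sequences=True)(x)
    attn = layers.Attention()([x, x])          # self-attention over time steps
    x = layers.GlobalAveragePooling1D()(attn)
    outputs = layers.Dense(NUM_CATEGORIES, activation="softmax")(x)
    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

In such a sketch, the attention layer lets the classifier weigh some performance parameters more heavily than others before the final softmax produces a probability over the four performance categories.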
Further, the AI based evaluation system 102 may interact with the server 114 or the external device 118 over the network 120 for sending and receiving various types of data. The external device 118 may include, but is not limited to, a desktop, a laptop, a notebook, a netbook, a tablet, a smartphone, a remote server, a mobile phone, or another computing system/device.
The network 120, for example, may be any wired or wireless communication network, and examples may include, but are not limited to, the Internet, Wireless Local Area Network (WLAN), Wi-Fi, Long Term Evolution (LTE), Worldwide Interoperability for Microwave Access (WiMAX), and General Packet Radio Service (GPRS). In some embodiments, the AI based evaluation system 102 may fetch information associated with the developers from the server 114, via the network 120. The database 116 may store information associated with existing technologies or a new technology in demand.
In operation, the AI based evaluation system 102 may be configured to receive the plurality of performance parameters associated with a set of developers. The AI based evaluation system 102 may be further configured to create one or more feature vectors corresponding to each of the plurality of performance parameters. The AI based evaluation system 102 may create one or more feature vectors based on one or more features determined for each of the plurality of performance parameters. Further, the AI based evaluation system 102 may assess the one or more feature vectors. The AI based evaluation system 102 may then classify each of the set of developers into one of a set of performance categories. In accordance with an embodiment, the set of performance categories may include an excellent performer category, a good performer category, an average performer category, and a bad performer category. Thereafter, the AI based evaluation system 102 may evaluate the performance of at least one of the set of developers, based on an associated category in the set of performance categories. In order to evaluate the performance, the AI based evaluation system 102 may compute ranks for each developer from the set of developers categorized within an associated performance category. Based on the computed ranks, the AI based evaluation system 102 may rank each developer from the set of developers associated with each of the set of performance categories. In addition, the AI based evaluation system 102 may generate a feedback for each of the set of developers. The AI based evaluation system 102 may generate the feedback based on a plurality of bugs identified for a module of a product developed by each of the set of developers. The AI based evaluation system 102 may then evaluate performance of each of the set of developers based on the generated feedback. This is further explained in detail in conjunction with
Referring now to
With reference to
In accordance with an embodiment, the memory 104 may be configured to receive the input data 202. The input data 202 may correspond to data associated with a plurality of performance parameters of a set of developers. In an embodiment, the plurality of performance parameters may include, but is not limited to, at least one of efficiency of a developed product associated with a module developed for a product, complexity of the developed product, types of support received from peers, feedback or rating received from managers, quality of the module developed for the product, and technical skills of each of the set of developers. The memory 104 may be configured to receive the input data 202 in the database 204 from the external device 118. Additionally, the input data may include information associated with the set of developers.
The database 204 may serve as a repository for storing data processed, received, and generated by the modules 206-224. The data generated as a result of the execution of the modules 206-224 may be stored in the database 204.
During operation, the reception module 206 may be configured to receive the plurality of performance parameters associated with each of the set of developers as the input data 202. The plurality of performance parameters may include, but is not limited to, at least one of efficiency of a developed product associated with a module developed for a product, complexity of the developed product, types of support received from peers, feedback or rating received from managers, quality of the module developed for the product, and technical skills of each of the set of developers. In accordance with an embodiment, complexity of the developed product may correspond to complexity in code snippets of the developed product. The efficiency of the developed product associated with the module developed for the product may be based on faulty codes or bug-free codes associated with the developer. For example, the AI based evaluation system 102 captures performance parameters for a developer who works on multiple modules of a single product, multiple products, or an application in various programming languages or technologies and may evaluate the performance of the developer, which is an arduous task when done manually. In accordance with an embodiment, the AI based evaluation system 102 may be configured to use the performance parameter corresponding to the technical skills of developers to identify developers of a similar skill set.
The plurality of performance parameters may correspond to tabular data shown in
Further, the feature vector creation module 208 may be configured to determine one or more features for each of the plurality of performance parameters corresponding to the pre-processed input data. In accordance with an embodiment, the feature vector creation module 208 may be configured to determine one or more features based on the first pre-trained machine learning model. As will be appreciated, the first pre-trained machine learning model may correspond to any deep neural network model (for example, an attention based deep neural network model and a Convolution Neural Network (CNN) model).
In accordance with an embodiment, the feature vector creation module 208 may be further configured to create one or more feature vectors corresponding to the determined one or more features. In accordance with another embodiment, the feature vector creation module 208 may be configured to create one or more feature vectors based on the first pre-trained machine learning model. The feature vectors created for each of the plurality of performance parameters may be stored in the database 204 for further computation. It may be noted that the process of storing the feature vectors in the database 204 may continue, until the feature vector corresponding to each of the plurality of performance parameters is created and stored. The feature vector stored in the database 204 may further be utilized by the assessment module 210.
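By way of a non-limiting illustration, the sketch below shows one simple way the per-parameter feature vectors might be assembled from numeric performance parameters; the parameter names, scale, and normalization are illustrative assumptions, and in practice an embedding layer of the first pre-trained machine learning model may produce these vectors instead.

```python
# Illustrative sketch: assemble one feature vector per performance parameter.
# Parameter names, the 0-10 scale, and the normalization are assumptions.
import numpy as np

PARAMETER_ORDER = [
    "efficiency", "complexity", "peer_support",
    "manager_rating", "module_quality", "technical_skills",
]

def create_feature_vectors(raw_parameters: dict) -> np.ndarray:
    """Returns a (num_parameters, 1) array of normalized feature values."""
    vectors = []
    for name in PARAMETER_ORDER:
        value = float(raw_parameters.get(name, 0.0))
        vectors.append(np.array([value / 10.0]))  # scale into [0, 1]
    return np.stack(vectors)

# Example usage with illustrative values on an assumed 0-10 scale.
developer_d1 = {"efficiency": 8, "complexity": 7, "peer_support": 6,
                "manager_rating": 9, "module_quality": 8, "technical_skills": 9}
print(create_feature_vectors(developer_d1).shape)   # (6, 1)
```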
In addition, the feature vector creation module 208 may be configured to send the one or more feature vectors to the training module 220. The assessment module 210 may be configured to receive each of the one or more feature vectors from the feature vector creation module 208. Upon receiving the one or more feature vectors, the assessment module 210 may be configured to perform assessment of each of the one or more feature vectors. In an embodiment, the assessment module 210 may perform assessment based on the first pre-trained machine learning model. Further, the assessment module 210 may be configured to send a result of assessment of each of the one or more feature vectors to the classification module 212.
The classification module 212 may be configured to receive the result of assessment of each of the one or more feature vectors from the assessment module 210. The classification module 212 may be configured to classify each of the set of developers into one of a set of performance categories, based on the assessment of each of the one or more feature vectors. The performance categories may include an excellent performer, a good performer, an average performer and a bad performer. In accordance with an embodiment, the classification module 212 may classify the set of developers into one of the excellent performer, the good performer, the average performer and the bad performer, based on the assessing of the one or more feature vectors. Further, the classification module 212 may be configured to send a result of classification of the set of developers to the evaluation module 214. In addition, the classification module 212 may be configured to send the result of classification to the identification module 222.
The evaluation module 214 may be configured to receive the result of classification from the classification module 212. In addition, the evaluation module 214 may be configured to receive input from the training module 220. In an embodiment, the training module 220 may correspond to a machine learning model (such as, the second machine learning model). As will be appreciated, the second pre-trained machine learning model may correspond to any deep neural network model (for example, an attention based deep neural network model and a Convolution Neural Network (CNN) model). The second machine learning model may be trained by assigning weights to the one or more features associated with each of the plurality of performance parameters based on a predefined evaluation criterion. The predefined evaluation criterion may include one or more of a technical skill in demand and an efficiency of a developed product with respect to bugs identified in the developed product.
Further, the evaluation module 214 may be configured to evaluate the performance of at least one of the set of developers. The evaluation module 214 may evaluate the performance of at least one of the set of developers based on an associated category in the set of performance categories, in response to the classification. In accordance with an embodiment, output data corresponding to the evaluation of the performance of at least one of the set of developers may be rendered on a user device. Such output data may facilitate identification of employees in need of training on a particular technology. Further, the output data may provide insights for collaboration amongst employees of an organization.
In order to evaluate performance of at least one of the set of developers, the computing module 216 within the evaluation module 214 may be configured to compute ranks for each developer from the set of developers categorized within an associated performance category, for each of the set of performance categories. In an embodiment, the computing module 216 may compute ranks based on the second machine learning model. In accordance with an embodiment, the second machine learning model may correspond to any rank based neural network model (for example, RankNet). By way of an example, the computing module 216 may compute ranks for each developer from the set of developers based on the predefined evaluation criterion.
In accordance with an embodiment, using the second machine learning model, higher weights are assigned to the one or more features associated with a high-demand technical skill as compared to a low-demand technical skill, and to a bug-free developed product as compared to a developed product with a plurality of bugs. In accordance with an exemplary embodiment, the computing module 216 may be configured to re-rank developers under a same category (say, an "Excellent Performer" category). For example, it may be possible that there exist two developers under the "Excellent Performer" category; however, the two developers may be compared for evaluation of performance on the basis of bugs or no bugs reported in their respective developed modules. In accordance with another exemplary embodiment, some of the developers may be skilled in the latest technologies, such as, Machine Learning, Artificial Intelligence, and Natural Language Processing. The developers delivering solutions in the latest technologies may be given more attention as compared to their counterpart developers. This is explained in detail in conjunction with
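By way of a non-limiting illustration, the weighting described above may be sketched as follows, with larger weights for in-demand skills and bug-free modules; the weight values and record fields are assumptions, and a trained pairwise ranking model such as RankNet would learn such weights rather than relying on hand-set values.

```python
# Illustrative weighting sketch: higher weights for in-demand skills and for
# bug-free modules. Weight values and record fields are assumptions; a trained
# pairwise ranking model (e.g., RankNet) would learn these weights instead.
IN_DEMAND_SKILLS = {"Machine Learning", "Artificial Intelligence",
                    "Natural Language Processing"}

def developer_score(record: dict) -> float:
    skill_weight = 2.0 if record["skill"] in IN_DEMAND_SKILLS else 1.0
    bug_weight = 1.5 if record["bugs"] == 0 else 1.0 / (1 + record["bugs"])
    return skill_weight * bug_weight

def rank_within_category(records: list) -> list:
    return sorted(records, key=developer_score, reverse=True)

# Example usage with illustrative developer records.
excellent = [
    {"id": "D1", "skill": "Machine Learning", "bugs": 0},
    {"id": "D2", "skill": "Java", "bugs": 3},
    {"id": "D3", "skill": "Python", "bugs": 0},
]
print([r["id"] for r in rank_within_category(excellent)])  # ['D1', 'D3', 'D2']
```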
The ranking module 218 within the evaluation module 214 may be configured to receive the computed rank for each developer from the set of developers from the computing module 216. Further, the ranking module 218 may rank each developer from the set of developers for each of the set of performance categories, based on the computed ranks. In an embodiment, the ranking module 218 may provide ranks for each developer in order to evaluate performance of each developer from the set of developers.
The training module 220 may be configured to receive the one or more feature vectors from the feature vector creation module 208. Based on the one or more feature vectors received, the training module 220 may train the second machine learning model. In an embodiment, the training module 220 may train the second machine learning model by assigning weights to the one or more features associated with each of the plurality of performance parameters based on the predefined evaluation criterion. In accordance with an embodiment, the training module 220 may be configured to train the first pre-trained machine learning model.
The identification module 222 may be configured to receive the result of classification from the classification module 212. Further, the identification module 222 may be configured to identify a plurality of bugs associated with a module of a product developed by each of the set of developers. In addition, the identification module 222 may be configured to send the plurality of bugs identified to the feedback generation module 224.
The feedback generation module 224 may be configured to generate a feedback for each of the set of developers, based on the identified plurality of bugs. In an embodiment, the feedback generation module 224 may generate the feedback in response to identifying the plurality of bugs associated with the product developed by each of the set of developers. Moreover, the feedback generation module 224 may receive a feedback from the user 226. The user 226 may correspond to a manager, a reviewer, a supervisor, or a developer. In addition, the user 226 may be working in the same team as the set of developers or may be working in any other team of the organization. Thereafter, the feedback generation module 224 may be configured to send the generated feedback to the evaluation module 214. In an embodiment, the feedback generation module 224 may send the generated feedback to the evaluation module 214 in order to evaluate the performance of at least one of the set of developers.
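By way of a non-limiting illustration, feedback generation based on the identified bugs might resemble the following sketch; the bug threshold, message wording, and optional reviewer note are illustrative assumptions.

```python
# Illustrative feedback-generation sketch. The bug threshold and message
# wording are assumptions, not values specified by the disclosure.
def generate_feedback(developer_id: str, bugs: list, reviewer_note: str = "") -> dict:
    severity = "negative" if len(bugs) > 2 else "neutral"
    message = (f"{len(bugs)} bug(s) identified in modules developed by "
               f"{developer_id}.")
    if reviewer_note:
        message += f" Reviewer note: {reviewer_note}"
    return {"developer": developer_id, "severity": severity, "feedback": message}

# Example usage with illustrative bug identifiers and reviewer input.
print(generate_feedback("D2", ["BUG-101", "BUG-102", "BUG-103"],
                        reviewer_note="Recurring null-pointer issues."))
```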
In particular, as will be appreciated by those of ordinary skill in the art, various modules 206-224 for performing the techniques and steps described herein may be implemented in the AI based evaluation system 102, either by hardware, software, or combinations of hardware and software. For example, suitable code may be accessed and executed by the one or more processors on the AI based evaluation system 102 to perform some or all of the techniques described herein. Similarly, application specific integrated circuits (ASICs) configured to perform some or all of the processes described herein may be included in the one or more processors on the host computing system. Even though
Referring to
With reference to
At step 302, a plurality of performance parameters may be received. Each of the plurality of performance parameters received may be associated with the set of developers. In an embodiment, each of the performance parameters may include, but is not limited to, at least one of efficiency of a developed product associated with a module developed for a product, complexity of the developed product, types of support received from peers, feedback or rating received from managers, quality of the module developed for the product, and technical skills of each of the set of developers.
At step 304, one or more feature vectors may be created, corresponding to each of the plurality of performance parameters. Moreover, the one or more feature vectors may be created based on one or more features determined for each of the plurality of performance parameters. In an embodiment, the one or more feature vectors may be created based on a first pre-trained machine learning model.
With reference to
At step 308, each of the set of developers may be classified into one of a set of performance categories. In an embodiment, the set of performance categories may include an excellent performer category, a good performer category, an average performer category, and a bad performer category. By way of an example, the excellent performer category refers to a group of developers that may have received the top rating during evaluation of performance. Further, the bad performer category may refer to a group of developers that may have received the lowest rating during evaluation of performance. In an embodiment, in order to classify each of the set of developers into at least one of the set of performance categories, the classification may be based on a deep learning recurrent neural network. Examples of the deep learning model may include a Long Short-Term Memory (LSTM) model and a Long Short-Term Memory-Gated Recurrent Unit (LSTM-GRU) model.
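By way of a non-limiting illustration, the classification step may map the model's output probabilities to one of the four named categories as sketched below; the category ordering is an assumption.

```python
# Illustrative mapping from model output probabilities to a performance
# category. The ordering of the categories is an assumption.
import numpy as np

CATEGORIES = ["excellent performer", "good performer",
              "average performer", "bad performer"]

def classify(probabilities: np.ndarray) -> str:
    return CATEGORIES[int(np.argmax(probabilities))]

print(classify(np.array([0.70, 0.20, 0.07, 0.03])))   # excellent performer
```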
At step 310, the performance of at least one of the set of developers may be evaluated. In an embodiment, the performance of the at least one developer may be evaluated based on an associated category in the set of performance categories, in response to the classification. The process of evaluating the performance of at least one developer from the set of developers has been explained in greater detail in conjunction with
Referring now to
With reference to
At step 404, ranks for each developer may be computed, based on the trained second machine learning model. Moreover, ranks may be computed for each developer from the set of developers categorized within an associated performance category. In addition, ranks for each developer may be computed for each of the set of performance categories.
At step 406, each developer from the set of developers for each of the set of performance categories may be ranked based on the computed ranks for each developer. Moreover, each developer may be ranked based on the computed ranks in order to evaluate the performance of each developer from the set of developers.
Referring now to
At step 502, a plurality of bugs may be identified. In accordance with an embodiment, the plurality of bugs identified may be associated with a module of a product developed by each of the set of developers. Moreover, the plurality of bugs identified may be reported to each of the set of developers and their respective managers to take appropriate actions in order to resolve the identified bugs.
At step 504, once the plurality of bugs is identified, a feedback may be generated for each of the set of developers. In an embodiment, the feedback may be generated in response to identification of the plurality of bugs associated with the product developed by each of the set of developers. By way of an example, based on the feedback, when the plurality of bugs has been reported for the same developer very frequently, a negative response may be imposed for the developer, thereby affecting the rating of the developer while evaluating the performance.
At step 506, the performance of at least one of the set of developers may be evaluated. By way of an example, in order to evaluate the at least one developer, the at least one developer may be re-ranked based on the feedback generated corresponding to the plurality of bugs identified. In an embodiment, a neural network-based ranking method may re-rank each of the set of developers under at least one of the set of performance categories. By way of an example, the neural network used for ranking may correspond to RankNet. For example, multiple developers from the set of developers may be ranked under the "excellent performer" category. However, based on a number of the plurality of bugs identified in the module of the product by each developer from the set of developers ranked under the excellent performer category, each developer from the set of developers may be re-ranked. This has been explained in greater detail in conjunction with
Referring now to
With reference to
The tabular representation 600B may represent numerical values of the plurality of performance parameters captured for each of the set of developers. The AI based evaluation system 102 may be configured to convert input data with ordinal values as shown in
Further, in the n-hot representation, the embedding layer of the first trained machine learning model may have a vector representation whose number of dimensions equals the number of unique values (T1 to T5 of 600B) in a certain column (such as the column 608a of 600A; columns 'T1', 'T2', 'T3', 'T4', and 'T5' may represent unique numerical values based on the type of technology or language used to develop the module of the product). By way of an example, the performance parameter "types of technology/language used" in the column 608a may be represented numerically in T1 to T5 of 600B, such as Python: [1 0 0 0 0 0], Java: [0 1 0 0 0 0], Machine Learning: [0 0 1 0 0 0], Natural Language Processing: [0 0 0 1 0 0], MS SQL database: [0 0 0 0 1 0], and Dynamic Programming: [0 0 0 0 0 1].
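By way of a non-limiting illustration, the vector representation described in this example may be sketched as follows; the helper function is illustrative and simply mirrors the technology list above.

```python
# Illustrative encoding of the "types of technology/language used" parameter
# into the vector representation described in the example above.
TECHNOLOGIES = ["Python", "Java", "Machine Learning",
                "Natural Language Processing", "MS SQL database",
                "Dynamic Programming"]

def encode_technology(name: str) -> list:
    """Returns a binary vector with a 1 at the position of the given technology."""
    return [1 if tech == name else 0 for tech in TECHNOLOGIES]

print(encode_technology("Machine Learning"))   # [0, 0, 1, 0, 0, 0]
```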
A tabular representation 600C represents the plurality of bugs identified corresponding to each of the set of developers. A column 602c may represent a serial number. A column 604c may represent a developer rating. The developer rating may be based on classification of each of the set of developers into one of the set of performance categories. A column 606c may represent a developer ID. A column 608c may represent the type of technology or language used to develop the module of the product. Lastly, a column 608c may represent a number of bugs identified in the module of the product developed by each of the set of developers. Moreover, each developer represented in the tabular representation 600C may be initially classified in the excellent performer category. However, the number of bugs identified in the module developed by each developer is different. Therefore, each developer classified under the excellent performer category may be re-ranked.
By way of an example, in 600C, the maximum number of bugs, i.e., '3', has been identified for a developer with developer ID 'D2'. Therefore, the developer with developer ID 'D2' may be ranked lowest. In addition, the number of bugs identified for a developer with developer ID 'D1' and a developer with developer ID 'D3' is '2'. However, the developer with developer ID 'D1' may be ranked higher than the developer with developer ID 'D3' because the developer with developer ID 'D1' has worked on more technologies as compared to the developer with developer ID 'D3'.
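By way of a non-limiting illustration, the re-ranking logic in this example may be sketched as follows; the bug counts mirror the example above, while the technology counts for 'D1' and 'D3' are illustrative assumptions.

```python
# Illustrative re-ranking within the excellent performer category: fewer bugs
# rank higher, and ties are broken by the number of technologies worked on.
# Bug counts mirror the example above; technology counts are assumptions.
developers = [
    {"id": "D1", "bugs": 2, "technologies": 3},
    {"id": "D2", "bugs": 3, "technologies": 2},
    {"id": "D3", "bugs": 2, "technologies": 1},
]

reranked = sorted(developers, key=lambda d: (d["bugs"], -d["technologies"]))
print([d["id"] for d in reranked])   # ['D1', 'D3', 'D2']
```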
The AI based evaluation system 102 may be configured to rank each developer from the set of developers for each of the set of performance categories to evaluate the performance of each developer from the set of developers using the second pre-trained machine learning model. The AI based evaluation system 102 may be configured to render output data on a user device (not shown) based on evaluation of the performance of at least one of the set of developers. Such output data may be used by another developer or manager looking for assistance from a developer, in order to help developers or managers resolve bugs in the future. In a certain scenario, a developer may be from a different team in the same organization and may use the output data to find a developer of a specific domain/technical skillset.
In certain other scenarios, a developer or a manager who needs help or assistance from another developer may leverage the AI based evaluation system 102. By way of an example, the developer or the manager may ask a query such as "Can you help me to find out a developer who is an expert in machine learning?" via the user interface 110 of the AI based evaluation system 102. As a response, the AI based evaluation system 102 may connect the developer or the manager via a REST API (Representational State Transfer Application Programming Interface) to get details of developers and render/display the response using the I/O device 108.
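By way of a non-limiting illustration, a client-side query to such a service might resemble the sketch below; the endpoint URL, query parameters, and response fields are hypothetical, since the disclosure does not define the REST API schema.

```python
# Hypothetical client-side sketch of querying the evaluation system over REST.
# The endpoint URL, query parameters, and response fields are assumptions.
import requests

def find_experts(skill: str):
    response = requests.get(
        "https://ai-evaluation.example.com/api/developers",  # hypothetical URL
        params={"skill": skill, "category": "excellent performer"},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()   # e.g., a list of developer IDs and ratings

# Example usage:
# find_experts("machine learning")
```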
Referring now to
There is shown a model 702, training data 704 with a set of developer data 706 and a Q-learning algorithm 708, an apply model 710, a test set of developer data 712, and developer ratings 714. In accordance with an embodiment, the model 702 may correspond to a trained evaluation system, such as the AI based evaluation system 102. In accordance with an embodiment, the model 702 may be exposed to new training data 704 when the model 702 has never been through an earlier training process. The model 702 may leverage an AI based code reusability system to generate code snippets in various languages and technologies. The code snippets may be generated for modules of dummy products similar to the ones developed by developers. Thereafter, a set of bugs may be introduced in some of the modules of the dummy products to generate information associated with each of the set of developers, and to capture feedback given by a manager.
As a result, the model 702 may learn to identify an optimal reward function that will maximize the reward for the end goal of performance evaluation. In accordance with an embodiment, the set of developer data 706 may correspond to information associated with each of the set of developers. The information may include the performance category associated with each of the set of developers, computed ranks, the plurality of performance parameters, etc. Further, the Q-learning algorithm 708 may be used to calculate a Q-value corresponding to each of the set of developers. The Q-value may be calculated based on the reinforcement learning approach. In addition, the feedback associated with each of the set of developers may be predicted based on the reinforcement learning approach.
In an embodiment, the Q-value represents the preference of a particular developer over other developers from the set of developers across all values of the feedback or rating. In other words, the Q-value may represent the probability of one developer being preferred over another developer across different values of the feedback or rating. Based on the calculated Q-value, the model 702 may penalize the manager for giving an incorrect feedback or rating to one developer over the other developers from the set of developers. Moreover, the Q-values of each of the set of developers, along with the associated feedback or rating, may be used to maximize the reward.
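By way of a non-limiting illustration, the tabular Q-value update commonly used in such a reinforcement learning step is sketched below; the learning rate, discount factor, and reward shaping are illustrative assumptions.

```python
# Illustrative tabular Q-learning update. The learning rate, discount factor,
# and reward shaping (penalizing inconsistent manager ratings) are assumptions.
from collections import defaultdict

q_table = defaultdict(float)      # keyed by (developer_state, rating_action)
ALPHA, GAMMA = 0.1, 0.9           # learning rate and discount factor (assumed)
RATINGS = [1, 2, 3, 4, 5]         # possible feedback/rating actions (assumed)

def update_q(state, action, reward, next_state):
    """Standard tabular Q-learning update."""
    best_next = max(q_table[(next_state, a)] for a in RATINGS)
    q_table[(state, action)] += ALPHA * (
        reward + GAMMA * best_next - q_table[(state, action)])

# Example: penalize a top rating given despite frequently reported bugs.
update_q(state="D2_many_bugs", action=5, reward=-1.0, next_state="D2_many_bugs")
print(round(q_table[("D2_many_bugs", 5)], 3))   # -0.1
```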
Based on the training data received, the model 710 may be generated for a test set of developers' data 712. The test set of developers' data may correspond to information associated with a new set of developers. The rating provided to each of the test set of developers' data may be depicted as developers rating 714. With reference to the
Referring now to
The reinforcement learning based trained evaluation system may correspond to the environment model 802. The environment model 802 may correspond to the apply model 710. The environment model 802 may employ the inverse reinforcement learning model 804. The inverse reinforcement learning model 804 may be configured to utilize the historical records 806 to penalize and boost rating and ranking of each of a set of developers in an organization. The historical records 806 may use various policies, such as policy 808 to penalize and boost rating and ranking of each of the set of developers.
In an embodiment, the historical records 806 may contain detailed information about each of the set of developers from various teams in an organization, along with the plurality of bugs reported and the actions taken by each of the set of developers and the manager with any other member in the same team or a different team. Thereafter, the inverse reinforcement learning model 804 may identify a combination or set of algorithms and functions that will define the architecture of deep learning based recurrent neural network variations and define hyperparameters for different layers of a neural network. The combination or set of algorithms and functions may be represented as a relevant algorithm combination 810. In an embodiment, the inverse reinforcement learning model 804 may recommend more than one combination or set of algorithms and functions.
Further, the recommended combinations or sets of algorithms and functions may be evaluated based on the reinforcement learning approach in order to accept one combination or set of algorithms and functions. Moreover, one combination or set of algorithms and functions may be accepted when it satisfies evaluation of the historical records, represented as an algorithm set satisfying historical records 812. Once the one combination or set of algorithms and functions is accepted, a new environment may be created for the environment model 802. In addition, the inverse reinforcement learning model 804 may recommend optimal values of hyperparameters corresponding to each combination or set of algorithms and functions. Further, the optimal values of hyperparameters may be validated against historical data received from an existing environment of the environment model 802. This process is known as model hyperparameter tuning.
Referring now to
In an embodiment, the transfer learning approach may be used to leverage training of an AI based evaluation system (such as, the AI based evaluation system 102) from a previous implementation to a new implementation. The new model 906 may correspond to the new environment generated for the environment model 802 based on acceptance of one combination or set of algorithms and functions. The new model 906 may receive the optimal values of hyperparameters, represented as extracted pre-trained hyperparameters, from the pre-trained model 902.
Thereafter, the new model 906 may classify a new set of developers into one of the set of performance categories based on the optimal values of hyperparameters received from the pre-trained model 902. In an embodiment, the transfer learning approach may enable gathering of knowledge from an existing environment or implementation of the AI based evaluation system 102. The knowledge corresponds to optimal values (i.e., the one or more feature vectors) of the plurality of performance parameters and the hyperparameters required for the implementation of the AI based evaluation system 102. Further, the optimal values of the plurality of performance parameters and hyperparameters may be utilized to develop the new environment for the AI based evaluation system 102. This may require less training time as compared to starting from scratch or from a vanilla model. The vanilla model may correspond to a standard, usual, and unfeatured version of the AI based evaluation system 102.
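By way of a non-limiting illustration, the transfer of knowledge from a pre-trained classifier to a new model might be sketched as follows in a TensorFlow/Keras environment; the layer-freezing strategy and fine-tuning settings are illustrative assumptions rather than the procedure mandated by the disclosure.

```python
# Illustrative transfer-learning sketch (TensorFlow/Keras): reuse the layers of
# a previously trained classifier and attach a fresh output head for the new
# environment. The freezing strategy and optimizer settings are assumptions;
# "pretrained_model" stands in for the earlier first pre-trained model.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_target_model(pretrained_model: tf.keras.Model,
                       num_categories: int = 4) -> tf.keras.Model:
    # Keep every layer except the old classification head, and freeze them so
    # the transferred knowledge (learned feature extractors) is preserved.
    backbone = models.Model(pretrained_model.input,
                            pretrained_model.layers[-2].output)
    backbone.trainable = False
    outputs = layers.Dense(num_categories, activation="softmax")(backbone.output)
    target = models.Model(backbone.input, outputs)
    target.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
                   loss="sparse_categorical_crossentropy",
                   metrics=["accuracy"])
    return target
```

Only the new softmax head is trained in this sketch, which is consistent with the reduced training time described above as compared to starting from scratch.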
In accordance with an embodiment, the AI based evaluation system 102 may be configured to modify the first pre-trained machine learning model (such as, the pre-trained model 902) with transferable knowledge for a target system to be evaluated. The transferable knowledge may correspond to optimal values associated with the one or more feature vectors corresponding to each of the plurality of performance parameters.
In accordance with an embodiment, the AI based evaluation system 102 may be configured to tune the first pre-trained machine learning model (such as, the pre-trained model 902) using specific characteristics of the target system to create a target model (such as, the new model 906). In accordance with an embodiment, the AI based evaluation system 102 may be configured to evaluate the target system performance using the target model (such as, the new model 906) to predict system performance of the target system for evaluating performance of a set of developers from an organization.
Various embodiments provide a method and system for evaluating performance of developers using Artificial Intelligence (AI). The disclosed method and system may receive each of a plurality of performance parameters associated with a set of developers. The system and method may then create one or more feature vectors corresponding to each of the plurality of performance parameters, based on one or more features determined for each of the plurality of performance parameters. The one or more feature vectors may be created based on a first pre-trained machine learning model. Further, the system and the method may assess the one or more feature vectors, based on the first pre-trained machine learning model. The system and the method may classify the set of developers into one of a set of performance categories based on the assessing of the one or more feature vectors. Thereafter, the system and the method may evaluate the performance of at least one of the set of developers, based on an associated category in the set of performance categories, in response to the classification.
The disclosed system and method provide several advantages. For example, the disclosed system and method may enable collaboration amongst developers of an organization based on performance evaluation. Further, the disclosed system and method may help managers or reviewers to pro-actively identify developers that may require training on a particular technology. In addition, the system and method may evaluate performance of a developer comprehensively, based on several performance parameters, such as faulty code, complexity of code snippets, as well as assistance provided by a developer to other developers in the organization. Such comprehensive evaluation of performance by the AI based evaluation system 102 may facilitate identification of distinguished developers in the organization and similarly aid in providing a necessary appraisal or rating to developers of the organization. Moreover, the system and method may help managers to find developers with a similar type of technical skills. Further, the system and the method may allow managers to fetch details of a developer based on performance parameters.
It will be appreciated that, for clarity purposes, the above description has described embodiments of the disclosure with reference to different functional units and processors. However, it will be apparent that any suitable distribution of functionality between different functional units, processors or domains may be used without detracting from the disclosure. For example, functionality illustrated to be performed by separate processors or controllers may be performed by the same processor or controller. Hence, references to specific functional units are only to be seen as references to suitable means for providing the described functionality, rather than indicative of a strict logical or physical structure or organization.
Although the present disclosure has been described in connection with some embodiments, it is not intended to be limited to the specific form set forth herein. Rather, the scope of the present disclosure is limited only by the claims. Additionally, although a feature may appear to be described in connection with particular embodiments, one skilled in the art would recognize that various features of the described embodiments may be combined in accordance with the disclosure.
Furthermore, although individually listed, a plurality of means, elements or process steps may be implemented by, for example, a single unit or processor. Additionally, although individual features may be included in different claims, these may possibly be advantageously combined, and the inclusion in different claims does not imply that a combination of features is not feasible and/or advantageous. Also, the inclusion of a feature in one category of claims does not imply a limitation to this category, but rather the feature may be equally applicable to other claim categories, as appropriate.