The present application claims the benefit of priority to Japanese Patent Application No. 2022-101344, filed with the Japanese Patent Office on Jun. 23, 2022, the entire contents of which are incorporated herein by reference.
The present invention relates to a method for performing online evaluation. Further, the present invention also relates to an online server for performing evaluation.
There are many situations in the world where things are evaluated and decisions are made according to the evaluation results. Familiar examples include cases where a business idea or corporate value is evaluated. Innovative businesses and companies with high growth potential are promising investment destinations. In addition, there are innumerable other evaluation targets in various fields such as politics, economics, society, industry, science, the environment, and education. Further, within a company, there are various evaluation targets in various departments such as personnel, labor, education, accounting, legal affairs, corporate planning, technological development, security, information management, marketing, and sales.
When evaluating things, it is effective to comprehensively evaluate them by a plurality of persons rather than by one person in order to enhance the objectivity of the evaluation. In addition, with the progress of Internet technology, it is possible to collect evaluations from a large number of evaluators online.
For example, Japanese Patent Application Publication No. 2014-500532 (Patent Literature 1) proposes a method in which examinees evaluate each other's answers to a question without a model answer. The literature discloses a system comprising a memory device resident in a computer and a processor provided in communication with the memory device, wherein the processor is configured to: request a candidate to create a question based on a theme; receive the question from the candidate; request an evaluation of the question and the theme from at least one evaluator; receive a question score from each evaluator, the question score being an objective measure of the evaluator's evaluation of the question; receive a grade for each evaluator; and calculate a grade for the candidate based on the question score from each evaluator and the grade for each evaluator.
In WO 2017/145765 (Patent Literature 2), there is disclosed an online test method that enables simple and objective measurement of each examinee's idea creativity by determining the connoisseurship of each examinee and reflecting the result in each examinee's evaluation. Specifically, there is disclosed an online test method to evaluate an innovation ability such as the ability to create many highly evaluated ideas, the ability to create a wide range of highly evaluated ideas, or the ability to create rare and highly evaluated ideas. In the method, an online test is conducted in which a number of examinees are asked to select a situation setting related to 5W1H from given options and to describe as many ideas as possible within the time limit, and the answers from the examinees are weighted according to a predetermined standard to calculate a total score.
In WO 2020/153383 (Patent Literature 3), there is disclosed a method for collecting and evaluating problems online, comprising collecting various problems or solutions to problems from multiple examinees via a computer network and allowing the examinees to evaluate each other, and scoring the problems or the solutions to problems.
In Patent Literature 1, a grade is given to an evaluator, which is determined based on evaluations by other evaluators. Therefore, in Patent Literature 1, in order to grade a candidate, it is necessary not only to evaluate the question created by the candidate but also to mutually evaluate other evaluators among the evaluators, which imposes a heavy burden on the evaluators.
In Patent Literature 2, when the examinee's ability to create ideas is mutually evaluated, weighting is given according to the examinee's connoisseurship as an evaluator, but the examinee's connoisseurship is ranked based on their ability to create ideas. Therefore, it is not necessary for the examinees to mutually evaluate their connoisseurship, and the burden on the examinees is small. However, in the online test method of Patent Literature 2, the examinees need to participate in the idea creation test, and the connoisseurship as an evaluator cannot be measured independently. In addition, the connoisseurship as an evaluator does not always match the ability to create ideas. There may be examinees who have high ability to create ideas but have low connoisseurship as evaluators, and examinees who have low ability to create ideas but have high connoisseurship as evaluators. Therefore, it is desirable that the connoisseurship as an evaluator is ranked independently of the ability to create ideas.
In Patent Literature 3, when scoring a problem or a solution to the problem, weighting is given to evaluation from examinees with high connoisseurship. However, it is based on the assumption that examinees who are highly evaluated for problem creation or solution creation also have high connoisseurship. Therefore, there is a problem similar to that of Patent Literature 2.
The present invention has been created in view of the above circumstances, and in one embodiment, it is an object to provide a method for online evaluation that can independently calculate the evaluation of an evaluation target such as an idea, and the connoisseurship (evaluation ability) of evaluators who evaluate the evaluation target without imposing a heavy burden on the evaluators. Further, in another embodiment, an object of the present invention is to provide a server for carrying out such an evaluation method online.
As a result of diligent studies to solve the above problems, the present inventors have found that the following online evaluation method and server contribute to solving the problems. The method comprises: analyzing, by a server, a degree of strictness of each evaluator; calculating, by the server, an evaluation ability score of each evaluator based on the closeness between a provisional score of an evaluation target given by all the evaluators and the evaluation of the evaluation target by each evaluator; and calculating, by the server, a final score of the evaluation target in consideration of the evaluation ability score of each evaluator.
[1] A method for online evaluation, comprising:
[2] The method for online evaluation according to [1], wherein the corrected score of each evaluation target is regarded as the provisional score, and the server repeats the step 1G and the step 1H one or more times.
[3] The method for online evaluation according to [2], wherein the server stops repeating the step 1G when either or both of the following conditions (a) and (b) are satisfied:
[4] A method for online evaluation, comprising:
[5] The method for online evaluation according to [4], wherein the corrected score of each evaluation target is regarded as the provisional score, and the server repeats the step 1G1 and the step 1H1 one or more times.
[6] The method for online evaluation according to [5], wherein the server stops repeating the step 1G1 when either or both of the following conditions (a) and (b) are satisfied:
[7] The method for online evaluation according to any one of [1] to [6], wherein the evaluation target data storage part may also store data related to a plurality of evaluation targets different from the plurality of evaluation targets used in the current evaluation session, and the method further comprises:
[8] The method for online evaluation according to any one of [1] to [7], further comprising:
[9] The method for online evaluation according to [8], further comprising:
[10] The method for online evaluation according to any one of [1] to [9], wherein the evaluation targets are ideas related to the predetermined theme.
[11] The method for online evaluation according to any one of [1] to [10], wherein the data related to the evaluation targets include text information.
[12] A server for online evaluation, comprising a transceiver, a control unit, and a storage unit, wherein
[13] The server for online evaluation according to [12], wherein the corrected score of each evaluation target is regarded as the provisional score, and the evaluation analysis part is capable of repeating the step 1G and the step 1H one or more times.
[14] The server for online evaluation according to [13], wherein the evaluation analysis part stops repeating the step 1G when either or both of the following conditions (a) and (b) are satisfied:
[15] A server for online evaluation, comprising a transceiver, a control unit, and a storage unit, wherein
[16] The server for online evaluation according to [15], wherein the evaluation analysis part regards the corrected score of each evaluation target as the provisional score, and repeats the step 1G1 and the step 1H1 one or more times.
[17] The server for online evaluation according to [16], wherein the evaluation analysis part stops repeating the step 1G1 when either or both of the following conditions (a) and (b) are satisfied:
[18] The server for online evaluation according to any one of [12] to [17], wherein the evaluation target data storage part may also store data related to a plurality of evaluation targets different from the plurality of evaluation targets used in the current evaluation session,
[19] The server for online evaluation according to any one of [12] to [18], wherein
[20] The server for online evaluation according to [19], wherein
[21] The server for online evaluation according to any one of [12] to [20], wherein the evaluation targets are ideas related to the predetermined theme.
[22] The server for online evaluation according to any one of [12] to [21], wherein the data related to the evaluation targets include text information.
[23] A program for causing a computer to execute the evaluation method according to any one of [1] to [11].
[24] A computer-readable recording medium on which the program according to [23] is recorded.
According to one embodiment of the present invention, it is possible to provide a method for online evaluation that can independently calculate the evaluation of an evaluation target such as an idea, and the connoisseurship (evaluation ability) of evaluators who evaluate the evaluation target without imposing a heavy burden on the evaluators. Further, according to one embodiment of the present invention, it is possible to provide a server for carrying out such an evaluation method online.
According to one embodiment of the present invention, the evaluation ability for each evaluator can be calculated based only on the evaluation given by each evaluator to the evaluation target. As a result, it is expected that the calculation result of the evaluation ability of each evaluator can be obtained with high reliability. Further, since the reliability of the calculation result of the evaluation ability of each evaluator is high, it is expected that the evaluation of the evaluation target calculated based on this is also highly reliable.
According to one embodiment of the present invention, the degree of strictness of the evaluation of the evaluation target by each evaluator is analyzed, and the evaluation is corrected such that the evaluation by the evaluator who gives a strict evaluation rises relatively and the evaluation by the evaluator who gives a lax evaluation decreases relatively. As a result, the degree of strictness of evaluation, which may differ for each evaluator, is adjusted, so that it is possible to reduce the influence of evaluators who give excessively lax or strict evaluations.
According to one embodiment of the present invention, the evaluation of an evaluation target is performed after being weighted according to the evaluation ability of each evaluator. Therefore, even if an ineligible evaluator is mixed in among the evaluators, it is possible to perform a highly accurate evaluation of the evaluation target while minimizing the influence of such an evaluator.
Hereinafter, embodiments of the method for online evaluation and the online server for evaluation according to the present invention will be described in detail with reference to the drawings, but the present invention is not limited to these embodiments. In the following description, a person who participates in an evaluation session and evaluates an evaluation target is referred to as an “evaluator”, and a person who participates in a collection session and provides information to be evaluated is referred to as an “answerer”. A participant may participate in only one of the evaluation session and the collection session, or both. It can be decided in advance which session the participant will participate in. Further, although embodiments in which both the collection session and the evaluation session are carried out will be described here, the evaluation session may be carried out alone.
<1. System Configuration>
[Network]
The computer network 14 is not limited, but can be, for example, a wired network such as a LAN (Local Area Network) or a WAN (Wide Area Network), or a wireless network such as a WLAN (Wireless Local Area Network) using MIMO (Multiple-Input Multiple-Output). Alternatively, it may be the Internet using a communication protocol such as TCP/IP (Transmission Control Protocol/Internet Protocol), or the connection may be via a base station (not shown) that plays a role as a so-called wireless LAN access point.
The server means a server computer, and can be configured by the cooperation of one or a plurality of computers. The participant terminal 12, the project administrator terminal 13, and the server administrator terminal 15 can be realized by a personal computer equipped with a browser, but the present invention is not limited thereto. They can also be composed of portable terminals such as smartphones, tablets, mobile phones, and PDAs, or of other devices and apparatuses capable of communication via a computer network, such as digital televisions.
The basic hardware configurations of the server 11, the participant terminal 12, the project administrator terminal 13, and the server administrator terminal 15 are common. As shown in
The processing device 201 refers to a device, a circuit, or the like that controls the entire computer and executes arithmetic processing according to commands, instructions and data input by the input device 204, or data stored in the storage device 202, and the like. As the processing device 201, for example, a CPU (Central Processing Unit), an MPU (Micro Processing Unit), or the like can be adopted.
The storage device 202 refers to a device, a circuit, or the like that stores various data, operating systems (OS), network applications (for example, web server software on the server 11 side, and browsers on the participant terminal 12, the project administrator terminal 13, and the server administrator terminal 15), programs for executing various arithmetic processes, and the like. For example, known storage devices may be used, such as a primary storage device that mainly uses semiconductor memory, a secondary storage device (auxiliary storage device) that mainly uses hard disk drives or semiconductor disks, and offline storage and tape libraries that mainly use removable media drives such as CD-ROM drives. More specifically, in addition to magnetic storage devices such as hard disk drives, Floppy™ disk drives, zip drives, and tape storages, the following may be used: storage devices or storage circuits employing semiconductor memory such as registers, cache memory, ROM, RAM, and flash memory (such as USB storage devices and solid state drives); semiconductor disks (such as RAM disks and virtual disk drives); optical storage media such as CDs and DVDs; optical storage devices employing magneto-optical disks such as MO; other storage devices such as paper tapes and punch cards; storage devices employing the phase change memory technique called PRAM (phase change RAM); holographic memory; storage devices employing 3-dimensional optical memory; and storage devices employing molecular memory, which stores information by accumulating electrical charge at a molecular level.
The output device 203 refers to an interface such as a device or circuit that enables output of data or commands; a display such as an LCD or an organic EL display, as well as a printer, a speaker, and the like, can be employed.
The input device 204 refers to an interface to pass data or commands to the processing device 201, and a keyboard, a numeric keypad, a pointing device such as a mouse, a touch panel, a reader (OCR), an input screen and an audio input interface such as a microphone may be employed.
The communication device 205 refers to a device or a circuit for transmitting and receiving data to/from the outside of the computer, and may be an interface such as a LAN port, a modem, a wireless LAN adapter, or a router. The communication device 205 can transmit/receive the results processed by the processing device 201 and the information stored in the storage device 202 through the computer network 14.
The random number generator 206 is a device which is able to provide random numbers.
The timer 207 is a device which is able to measure and inform time.
[Server]
<Storage Unit>
In the present embodiment, the storage unit 340 of the server 11 may store a participant account file 341, a session participant registration data file 342, a project data file 343, an evaluation axis data file 344, a question data file 345, an answer column data file (summarized) 346a, an answer column data file (detailed) 346b, an answer data file (summarized) 348a, an answer data file (detailed) 348b, an evaluation result data file 349, an evaluator score data file 350, an answer score data file 351, an answerer score data file 352, a project administrator account file 353, a server administrator account file 354, an evaluation progress management file 355, an answer progress management file 356, and the like. These files may be prepared individually according to the type of data, or a plurality of types of files may be collectively stored in one file. Further, the data included under the same file name may be stored separately in a plurality of files. The data stored in these various files may be stored temporarily or non-transitorily depending on the type of data.
Further, the storage unit 340 of the server 11 may store a first format data file 361 for storing first format data for evaluation input including a selective evaluation input section based on at least one evaluation axis, and a second format data file 362 for storing second format data including at least one information input section. These files may be prepared individually according to the type of data, or a plurality of types of data may be collectively stored in one file. Further, the same type of data may be stored in a plurality of files separately.
(Participant Account File)
The participant account file 341 may store the account information of candidates who may participate in a collection session and/or an evaluation session in a searchable state.
(Session Participant Registration Data File)
The session participant registration data file 342 may store, in a searchable state, information about whether each participant stored in the participant account file 341 will or will not participate in the collection session or the evaluation session. In addition, the session participant registration data file 342 can also store information about the session status of each participant in a searchable state.
(Project Data File)
The project data file 343 may store information about implementation conditions of a project that conducts a collection session and/or an evaluation session on a given theme in a searchable state.
(Evaluation Axis Data File)
The evaluation axis data file 344 may store information about the evaluation axis used when evaluating the evaluation target in a searchable state. This information can be regarded as a kind of information about the implementation conditions of the project. When there is a plurality of evaluation axes, the evaluation axis data file 344 stores information about the plurality of evaluation axes.
The evaluation axis name describes the viewpoint of evaluation when an evaluator evaluates the evaluation target. The viewpoint of evaluation may be appropriately set according to the evaluation target. Examples of the viewpoint of evaluation when evaluating a business idea include novelty, innovativeness, growth potential, social contribution, unexpectedness, sympathy, and the like. When evaluating corporate value, examples include the growth potential of the evaluated company, the stability of the evaluated company, the social contribution of the evaluated company, and the like. The viewpoint of evaluation may also simply be whether or not to agree or sympathize with the answer from the answerer. Further, in addition to the evaluation axes for evaluating individual items, an evaluation axis named comprehensive evaluation may be provided. This makes it possible to visualize which evaluation axis has the greatest influence on the comprehensive evaluation by performing a multiple regression analysis of the relationship between the evaluations of individual items and the comprehensive evaluations from all evaluators (see the sketch below). However, as will be described later, a comprehensive evaluation can also be calculated from the evaluation results on each evaluation axis.
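To illustrate the multiple regression analysis mentioned above, the following minimal sketch (in Python, using NumPy) regresses hypothetical comprehensive evaluations on the evaluations of individual items; all data values and names here are illustrative assumptions, not part of the present embodiment.

import numpy as np

# Hypothetical data: each row is one evaluation, with three columns for
# three individual evaluation axes; y holds the comprehensive evaluations.
X = np.array([[1, 0, 1],
              [0, 1, 1],
              [1, 1, 0],
              [0, 0, 1],
              [1, 1, 1],
              [0, 1, 0]], dtype=float)
y = np.array([0.8, 0.6, 0.5, 0.3, 1.0, 0.2])

# Least-squares fit with an intercept column appended.
X1 = np.column_stack([X, np.ones(len(X))])
coef, _, _, _ = np.linalg.lstsq(X1, y, rcond=None)

# The first three entries of coef are the regression coefficients of the
# individual evaluation axes; the axis with the largest coefficient has
# the greatest influence on the comprehensive evaluation.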
At least one evaluation axis is required for one evaluation target. In order to make evaluation from various aspects, it is preferable that the number of evaluation axes be two or more, and more preferably three or more.
(Question Data File)
The question data file 345 may store, in a searchable state, information about the questions presented to the answerers when a collection session is conducted and information about the questions presented to the evaluators when an evaluation session is conducted. This information can be regarded as a kind of information regarding the implementation conditions of the project. In the present embodiment, the information about the questions presented to the answerers when a collection session is conducted and the information about the questions presented to the evaluators when an evaluation session is conducted are stored in the question data file 345 as a set. However, when the collection session and the evaluation session are performed independently, for example, these two types of information may be stored separately in a plurality of files.
The theme of the question is not particularly limited, but examples include brainstorming of ideas, penetration of a vision, and quantification of the five senses.
(Answer Column Data File (Summarized))
The answer column data file (summarized) 346a may store information about the answer column to be presented to the answerers when a collection session is conducted in a searchable state. This information can be regarded as a kind of information about the implementation conditions of the project.
(Answer Column Data File (Detailed))
The answer column data file (detailed) 346b may store information about detailed conditions according to the type of answer column presented to the answerers when a collection session is conducted in a searchable state. This information can be regarded as a kind of information about the implementation conditions of the project.
(Answer Data File (Summarized))
The answer data file (summarized) 348a may store, in a searchable state, the identifier and the like of the answer data including the information about a predetermined theme (in other words, the answer to the question) transmitted by the answerer in the collection session. The information contained in the answer data can be used for evaluation.
(Answer Data File (Detailed))
The answer data file (detailed) 348b may store information about the specific contents of the answer data according to the type of the answer column in a searchable state.
The answer data files 348a and 348b may store only the answer data collected by the current collection session, but may also store the answer data collected by collection sessions in the past. Further, the storage unit of the server 11 may have an evaluation target data storage part that stores information that can be an evaluation target in addition to the answer data collected by the current or past collection session. All the information stored in the evaluation target data storage part may be the evaluation target of the current evaluation session, or only a part of the information may be the evaluation target of the current evaluation session.
(Evaluation Result Data File)
The evaluation result data file 349 may store, in a searchable state and for each evaluation axis, evaluation result data including the evaluations given by the evaluators to the information from the answerers (the evaluation targets in the present embodiment) and the corrected evaluations obtained after analyzing the degree of strictness of each evaluator.
For example, the evaluation value may be of a three-choice type such as “I do not agree very much”, “I can agree”, or “I can agree very much”, or of a two-choice type such as “I agree” or “I do not agree”. Alternatively, it may be expressed by a score within a predetermined range. The evaluation value can be stored for each evaluation axis described above. By performing the evaluation of the evaluation target with a selective evaluation, the evaluation data of the evaluation target can easily be analyzed statistically. The selective evaluation includes, but is not limited to, a method of selecting one of options displayed in advance, a method of inputting a numerical value related to the grade of the evaluation, and the like. Further, the evaluation result data file 349 may store comment data that can be arbitrarily described by the evaluator at the time of evaluation.
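For illustration only, the following minimal sketch (in Python) shows one way a three-choice selective evaluation could be encoded as numerical values for statistical analysis; the mapping to the values −1, 0, and +1 is an assumption adopted here for consistency with the worked examples later in this section, not a definition fixed by the present embodiment.

# Hypothetical encoding of the three-choice selective evaluation above.
# The -1/0/+1 values are assumed for illustration; any numerical encoding
# ordered by the grade of the evaluation could be stored per evaluation axis.
CHOICE_TO_VALUE = {
    "I do not agree very much": -1,
    "I can agree": 0,
    "I can agree very much": +1,
}

evaluation_value = CHOICE_TO_VALUE["I can agree very much"]  # stored as +1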
(Evaluator Score Data File)
The evaluator score data file 350 is a kind of evaluator score data storage part. It may store an evaluation ability score for each evaluation axis corresponding to the connoisseurship of the evaluator in a searchable state. The evaluation value of the evaluation target by one evaluator and the score acquired by the evaluation target based on the evaluation of the evaluation target from all the evaluators (in this embodiment, the “answer score”) are compared for each evaluation axis. The higher the closeness between the two is, the higher the evaluation ability score of the evaluator is.
(Answer Score Data File)
The answer score data file 351 is a kind of evaluation target score data storage part. It may store an answer score acquired by the information from the answerer as the evaluation target based on the evaluations from all the evaluators for each evaluation axis in a searchable state. The answer score includes a provisional score, a corrected score, and a final score, and one or more of these can be stored depending on the embodiments. The provisional score and the corrected score may be stored in a temporary file, and the temporary file for temporarily storing the provisional score and the corrected score is also a kind of evaluation target score data storage part.
(Answerer Score Data File)
The answerer score data file 352 is a kind of answerer score data storage part. It may store a score of the answerer calculated based on the answer score acquired by the information as the evaluation target (the “answerer score”) for each evaluation axis in a searchable state. In general, answerers who provide information that has acquired a high answer score (information that is highly evaluated by the evaluators) are given a high answerer score.
(Project Administrator Account File)
The project administrator account file 353 may store, in a searchable state, account information for the administrator of a project that conducts a collection session and/or an evaluation session on a given theme, for example, the account information of an organization such as a company to which the answerers and/or the evaluators belong.
(Server Administrator Account File)
The server administrator account file 354 may store the server administrator account information in a searchable state.
(Evaluation Progress Management File)
The evaluation progress management file 355 may store information about the progress of the evaluation session.
(Answer Progress Management File)
The answer progress management file 356 may store information about the progress of the collection session.
In the above tables in the data files, data types such as “int” (integer type), “text” (character string type), “float” (floating point type), “crypt” (encrypted character string type), “date” (date and time type), and “bool” (true/false binary type) are used for each field. However, the data types are not limited to the illustrated forms, and may be appropriately changed as needed.
(First Format Data File)
First format data file 361, which is a type of the first format data storage part, may store first format data for evaluation input, including a selective evaluation input section based on at least one evaluation axis, which is used to carry out the evaluation session. As mentioned above, the selective evaluation makes it easier to statistically analyze the evaluation data for the evaluation target. The first format data may further include at least one descriptive comment input section. A descriptive evaluation increases the degree of freedom of description, so that the reader can deeply understand the evaluator's way of thinking and the basis of the evaluation.
(Second Format Data File)
The second format data file 362, which is a type of the second format data storage part, may store second format data including at least one information input section (in this embodiment, the “answer column”), which is used to carry out the collection session. As long as information that reflects the answerer's thoughts can be input in the answer column, there are no particular restrictions on the input method, but it is preferable that the answer column include an input section for text information. Answerers can input answers to questions about a predetermined theme in the answer column. The format of the answer column is determined according to the conditions specified in the answer column data file (summarized) 346a and the answer column data file (detailed) 346b, and the second format data that meet the conditions is sent to the answerers.
<Transceiver>
The server 11 can exchange various data with the participant (evaluator, answerer) terminal 12, the project administrator terminal 13, the server administrator terminal 15, and the computer network 14 through the transceiver 310.
For example, the transceiver 310 may be capable of:
<Control Unit>
In the present embodiment, the control unit 320 of the server 11 comprises an authentication processing part 321, an evaluator allocation part 322, an evaluation input data extraction part 323, an information input data extraction part 324, a data registration part 325, an evaluation analysis part 326, an evaluation analysis data extraction part 327, a time limit judgement part 328, an evaluation number judgement part 329, and an answer number judgement part 330. Each part can perform the desired calculation based on the program.
(Authentication Processing Part)
The authentication processing part 321 can authenticate the participant ID and password based on the access request from the participant (evaluator, answerer) terminal 12. For example, the access request from the participant terminal 12 can be executed by inputting the participant ID and password and clicking a login button on a screen of a top page on the participant terminal 12 as shown in
In addition, the authentication processing part 321 may authenticate an organization ID and password based on an access request from the project administrator terminal 13. The organization ID and password may be given in advance by the server administrator. The authentication processing may be executed by the authentication processing part 321, which refers to the project administrator account file 353 and determines whether or not the input organization ID and password match the data stored in the project administrator account file 353. If the input organization ID and password match the stored data, the screen data of the project administrator page as shown in
In addition, the authentication processing part 321 may authenticate a server administrator ID and password based on an access request from the server administrator terminal 15. The server administrator ID and password may be set in advance by the server administrator himself/herself. The authentication processing may be executed by the authentication processing part 321, which refers to the server administrator account file 354 and determines whether or not the input server administrator ID and password match the data stored in the server administrator account file 354. If the input server administrator ID and password match the stored data, the screen data of the server administrator page (for example, the administration screen shown in
(Data Registration Part)
The data registration part 325 may register the participants (evaluators, answerers). For example, when a project administrator, such as a company to which the participants belong, logs in using the project administrator terminal 13 according to the above procedures, a project administrator screen as shown in
In addition, the data registration part 325 may register the project implementation conditions including collection and evaluation sessions. For example, when the project administrator clicks a button of “Collection/evaluation session question setting” on the project administrator screen as shown in
After inputting the information about the questions in the collection session and the information about the questions in the evaluation session, when the administrator clicks the “Finish creation” button, as shown in
In addition, the question titles and the like can be input simultaneously on the question setting screen of the collection session as shown in
In this way, the information about the questions of the collection session transmitted to the server 11 is received by the transceiver 310 of the server 11, and the data registration part 325 may store the received information in the question data file 345, the answer column data file (summarized) 346a, and the answer column data file (detailed) 346b in association with identifiers such as the question ID and the answer column ID. Identifiers such as the question ID and the answer column ID may be manually assigned by the server administrator individually, or may be automatically allocated according to predetermined rules when the server 11 stores the information about the implementation conditions of the collection session in the question data file 345, the answer column data file (summarized) 346a, and the answer column data file (detailed) 346b.
Further, the information about the questions of the evaluation session transmitted to the server 11 is received by the transceiver 310 of the server 11, and the data registration part 325 may store the received information in the question data file 345 and the evaluation axis data file 344 in association with identifiers such as the question ID, the evaluation axis ID, and the answer column ID. The evaluation axis ID may be manually assigned by the server administrator individually, or may be automatically allocated according to predetermined rules when the server 11 stores the information about the questions of the evaluation session in the question data file 345 and the evaluation axis data file 344.
Further, when the project administrator clicks the “Project condition setting” button on the project administrator screen as shown in
In addition, the data registration part 325 can register the project administrator. When the server administrator (in other words, the provider of the online evaluation system) logs in using the server administrator terminal 15 according to the above procedure, the server administrator terminal 15 displays a server administrator screen as shown on the left side of
In addition, the data registration part 325 can register the answer data including the information on the predetermined theme transmitted by the answerer in the collection session. For example, when the answerer terminal 12 displays the screen for answerers in the collection session as shown in
The number of times the answerer can input the information to be evaluated in the answer column and transmit it may be appropriately set by the project administrator according to the purpose of the collection session. For example, it may be set such that it can be transmitted only once, or it may be set such that it can be transmitted multiple times when the purpose is to collect many kinds of ideas. In addition, it may be possible to collectively transmit the information as a plurality of evaluation targets input in a plurality of answer columns.
In addition, the data registration part 325 can register the evaluation by the evaluator. For example, when the evaluator terminal 12 displays the evaluator screen in the evaluation session as shown in
(Information Input Data Extraction Part)
The information input data extraction part 324 is capable of performing a step 2A comprising: extracting the question data from the question data file 345 and the answer column data file (summarized) 346a; extracting the second format data that match the conditions stored in the answer column data files 346a and 346b from the second format data file 362; and transmitting them from the transceiver 310 via the computer network 14, all at once or individually, to the answerer terminals 12 of a plurality of answerers flagged as answerers in the collection session in the session participant registration data file 342. The extraction and transmission may be triggered by the transceiver 310 receiving an instruction to start a collection session transmitted from the project administrator terminal 13, or by the transceiver 310 individually receiving a request to start a collection session from an answerer terminal 12. Further, after receiving an instruction or request to start a collection session, the information input data extraction part 324 can change the status in the session participant registration data file 342 or the like to a status indicating that the collection session has started and store the status.
(Time Limit Judgement Part)
The time limit judgement part 328 may, for example, in the collection session (or the evaluation session), use the timer 207 which is built in the server 11 to judge whether or not the time when the transceiver 310 receives the answer data transmitted from the answerer terminal 12 (or the evaluation result data transmitted from the evaluator terminal 12) is within a time limit, based on the time information such as the project ID, the start date and time of the collection session (or evaluation session), the end date and time of the collection session (or evaluation session), the answer (or evaluation) time limit, and the like stored in the project data file 343.
As a result of the judgement, if it is judged that the time limit is met, the time limit judgement part 328 may instruct the data registration part 325 to assign an answer ID (or evaluation ID) to the answer data (or evaluation data), and store them in the answer data file 348a, 348b (or evaluation result data file 349) in association with the answerer ID of the answerer (or evaluator ID of the evaluator) who has transmitted the answer data (or evaluation data).
On the other hand, as a result of the judgement, if it is judged that the time limit has passed, it is possible to refuse the transmission of the answer data from the answerer terminal 12 (or the evaluation data from the evaluator terminal 12) or the reception thereof by the server 11. In addition, regardless of whether or not the answer data from the answerer terminal 12 (or the evaluation data from the evaluator terminal 12) is received, if it is judged that a predetermined time limit has passed, the time limit judgement part 328 may transmit, from the transceiver 310, a notification of the end of the collection session (or the evaluation session) in a displayable form to the answerer terminal 12 (or the evaluator terminal 12) and the project administrator terminal 13, and refuse to receive the answer data (or the evaluation data) that fails to meet the time limit. In addition, in order to record that the collection session (or the evaluation session) has ended, the time limit judgement part 328 of the server 11 can change the status in the session participant registration data file 342 or the like to “collection session (or evaluation session) ended”. Furthermore, the time limit judgement part 328 may inform the evaluator allocation part 322 that the collection session has ended.
(Evaluator Allocation Part)
The end of the collection session is confirmed when the status in the session participant registration data file 342 and the like becomes “collection session ended” for all the participants of the collection session, or when a notification that the collection session has ended is received from the time limit judgement part 328. After this confirmation, when the transceiver 310 receives an evaluation session start instruction transmitted from the project administrator terminal 13, the evaluator allocation part 322 can perform a step 1A comprising allocating evaluators who should evaluate the information (the answer content) in the answer data stored in the answer data file (detailed) 348b, from among a plurality of evaluators flagged as evaluators for the current evaluation session in the session participant registration data file 342. Alternatively, after the confirmation, the evaluator allocation part 322 may automatically perform the step 1A without waiting for the evaluation session start instruction from the project administrator terminal 13. This makes it possible to save evaluation time.
When the project administrator prepares a plurality of evaluation targets in advance, and the data related to a plurality of evaluation targets is stored in the storage unit 340 of the server 11, a collection session may not be performed. In that case, when the evaluator allocation part 322 receives the evaluation session start instruction transmitted from the project administrator terminal 13, it may allocate evaluators who should evaluate the data related to the evaluation targets stored in the data storage unit, from among a plurality of evaluators flagged as evaluators for the current evaluation session in the session participant registration data file 342.
The population of the evaluation targets can be appropriately selected by the project administrator, and there are no particular restrictions. The evaluation targets may be limited to the information collected by the current collection session, or other information such as the information collected by a collection session in the past may be added to the evaluation targets. Alternatively, information collected from neither the current nor past collection sessions may be evaluated. In this embodiment, since the evaluator's evaluation ability is calculated independently of the evaluator's idea creativity, the degree of freedom of the evaluation target is high.
The evaluators may be allocated according to a predetermined method, and there is no particular limitation. For example, all evaluators may evaluate all the information obtained in the collection session (except for information provided by the evaluator himself/herself) (total evaluation). If there is a lot of information to be evaluated, in order to reduce the evaluation burden of each evaluator, it is possible to obtain a random number generated from the random number generator 206 built in the server 11, and determine the information that each evaluator should evaluate from the population of evaluation targets stored in the evaluation target data storage part such as the answer data file (detailed) 348b using the random number (random shuffle evaluation). When performing random shuffle evaluation, the evaluator allocation part 322 may determine which evaluator evaluates which information by allocating the identifiers of the evaluation targets, such as the answer IDs, to the required number of evaluators selected from the evaluator IDs using the random number.
From the viewpoint of analyzing the degree of strictness and connoisseurship of the evaluators, it is necessary that one evaluator evaluates a plurality of, preferably 10 or more, and more preferably 20 or more evaluation targets. However, if the number of evaluation targets to be evaluated by one evaluator becomes excessive, the burden on the evaluators will increase. Therefore, from the viewpoint of reducing the burden on the evaluators, the number of evaluation targets evaluated by one evaluator is preferably 100 or less, and more preferably 50 or less.
From the viewpoint of analyzing the evaluation ability score corresponding to the connoisseurship of the evaluators, the number of evaluators for one evaluation target needs to be two or more, preferably 5 or more, and more preferably 10 or more. There is no particular upper limit to the number of evaluators for one evaluation target, but it is desirable that the number of evaluation targets evaluated by one evaluator stay within the above range.
Once the evaluators to evaluate each evaluation target are determined, the evaluator allocation part 322 can store the evaluator ID, the identifier of the evaluation target such as the answer ID to be evaluated, the required number of evaluations, the number of completed evaluations, and the like in association with each other for each evaluator in the evaluation progress management file 355 for managing the progress of evaluation by the evaluators.
An example of a procedure for determining the evaluators by the evaluator allocation part 322 will be described. The evaluator allocation part 322 may count the total number of evaluation targets based on, for example, the number of answer IDs in which the information as the evaluation target is stored, and calculate the maximum number of evaluation targets to be allocated to each evaluator using the following formula. The calculation result may be rounded up to an integer.
Maximum allocation number=(total number of evaluation targets)×(number of evaluators for one evaluation target)/(total number of evaluators)
The number of evaluators to evaluate one evaluation target may follow the “number of evaluators allocated to one evaluation target” stored in the project data file 343.
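For example, if there are 48 evaluation targets, 10 evaluators are allocated to one evaluation target, and there are 20 evaluators in total, the maximum allocation number is 48 × 10/20 = 24; that is, each evaluator is allocated at most 24 evaluation targets (these numbers are merely illustrative).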
It is preferable that the evaluator allocation part 322 refer to the answer data file (summarized) 348a, and, when the answerer ID of the answerer who has transmitted the information as the evaluation target matches the evaluator ID of the evaluator selected by a random number to evaluate that information, cancel the selection and perform the selection again with random numbers. In addition, when a specific evaluator is selected a number of times exceeding the maximum allocation number obtained above, it is also preferable that the evaluator allocation part 322 cancel the selection and perform the selection again with random numbers. If there are enough evaluators, with such a way of selecting evaluators, every evaluator can be allocated either the “maximum allocation number” or the “maximum allocation number − 1” of evaluation targets to be evaluated, as illustrated in the sketch below.
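The following minimal sketch (in Python) illustrates the allocation procedure described above, assuming, as noted, that there are enough evaluators for the re-selection loop to terminate; the function and variable names are hypothetical, and random.Random stands in for the random number generator 206 only to keep the sketch self-contained.

import math
import random

def allocate_evaluators(targets, evaluators, k, seed=None):
    # targets: dict mapping answer ID -> answerer ID
    # evaluators: list of evaluator IDs
    # k: number of evaluators allocated to one evaluation target
    rng = random.Random(seed)
    # Maximum allocation number, rounded up to an integer as described above.
    max_alloc = math.ceil(len(targets) * k / len(evaluators))
    load = {e: 0 for e in evaluators}   # evaluation targets allocated so far
    allocation = {}                     # answer ID -> list of evaluator IDs
    for answer_id, answerer_id in targets.items():
        chosen = []
        while len(chosen) < k:
            e = rng.choice(evaluators)
            # Cancel the selection and re-select when the evaluator is the
            # answerer of this target, is already chosen for it, or has
            # already reached the maximum allocation number.
            if e == answerer_id or e in chosen or load[e] >= max_alloc:
                continue
            chosen.append(e)
            load[e] += 1
        allocation[answer_id] = chosen
    return allocation

With enough evaluators, every evaluator's final load is either the maximum allocation number or one fewer, as stated above.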
When allocating evaluation targets to evaluators, the order of the evaluation targets allocated to the evaluators may be randomized such that the order of allocation differs from the generation order (time series) of the evaluation targets collected in the collection session. Further, in order to equalize the allocation among the evaluators, the allocation may be performed in ascending order such that an evaluation target is allocated first to the evaluator with the smallest number of allocated evaluation targets at that time. Further, after the evaluators to evaluate each evaluation target are determined, the order of presenting the allocated evaluation targets to the evaluators may be randomized. By performing one or more of these procedures, it is possible to prevent bias in the evaluation targets and the evaluators, and it is possible to improve the reliability of the evaluation results.
Further, the evaluator allocation part 322 may be configured to change the status in the session participant registration data file 342 and the like to the status indicating that the evaluation session has started and store it at an appropriate timing such as receiving an evaluation session start instruction transmitted from the project administrator terminal 13.
(Evaluation Input Data Extraction Part)
According to the determination of the evaluators to evaluate the information by the evaluator allocation part 322, the evaluation input data extraction part 323 is capable of performing a step 1B comprising: extracting the data related to the evaluation targets, such as the answer data including the information to be evaluated by each evaluator, from the evaluation target data storage part such as the answer data file (detailed) 348b, based on the identifier of the evaluation target such as the answer ID and the evaluator ID stored in the evaluation progress management file 355; extracting the question data including question texts related to predetermined themes in the evaluation session from the question data file 345; extracting the first format data for evaluation input including the selective evaluation input section based on at least one evaluation axis from the first format data file 361, based on the conditions related to the evaluation axis stored in the evaluation axis data file 344; and transmitting the data related to the evaluation targets such as the answer data, the question data, and the first format data from the transceiver 310 to the corresponding evaluator terminal 12 via the computer network 14. When transmitting the data related to the evaluation targets such as the answer data, all the data to be evaluated by each evaluator may be transmitted at once, or the data may be divided and transmitted.
At this time, the evaluation input data extraction part 323 may extract other information such as evaluation input conditions in the evaluation axis data file 344, question texts related to the predetermined theme in the collection session in the question data file 345, and question texts stored in the answer column data file (summarized) 346a, and transmit them together in a displayable form.
(Evaluation Number Judgement Part)
When the server 11 receives the evaluation result data transmitted from the evaluator terminal 12 at the transceiver 310, the evaluation number judgement part 329 of the server 11 increases the number of completed evaluations by one in the evaluation progress management file 355 in association with the evaluator ID of the evaluator who has transmitted the evaluation. The evaluation number judgement part 329 can grasp the progress of the evaluation session of this evaluator by comparing the number of completed evaluations and the required number of evaluations.
When the data related to the evaluation targets such as the answer data is divided and transmitted to each evaluator, the evaluation number judgement part 329 judges whether or not this evaluator has reached the required number of evaluations according to the above determination. If it is judged that the required number of evaluations has not been reached, the evaluation input data extraction part 323 transmits data related to the evaluation targets, such as unevaluated answer data, together with the first format data in a displayable form from the transceiver 310 via the computer network 14 to the corresponding evaluator terminal 12.
When the evaluation number judgement part 329 judges that the number of completed evaluations of a certain evaluator has reached the required number of evaluations, it may transmit the evaluation session end screen and/or the progress information that the evaluation session has ended from the transceiver 310 to the evaluator terminal 12 of the evaluator and the project administrator terminal 13. At this time, in order to record that the evaluation session has ended, the evaluation number judgement part 329 may change the status in the session participant registration data file 342 or the like to “evaluation session ended”.
(Answer Number Judgement Part)
When the server 11 receives the answer data transmitted from the answerer terminal 12 at the transceiver 310, the answer number judgement part 330 of the server 11 increases the number of completed answers by one in the answer progress management file 356 in association with the answerer ID of the answerer who has transmitted the answer data. The answer number judgement part 330 can grasp the progress of the collection session of this answerer by comparing the number of completed answers and the number of required answers.
In cases of dividing the data including information necessary for answer input, such as the question data, and transmitting it to each answerer, the answer number judgement part 330 judges whether or not the answerer has reached the required number of answers in accordance with the above determination. If it is judged that the required number of answers has not been reached, the information input data extraction part 324 transmits the data including information necessary for answer input, such as unanswered question data, together with the second format data in a displayable form from the transceiver 310 via the computer network 14 to the corresponding answerer terminal 12.
When the answer number judgement part 330 judges that the number of completed answers of a certain answerer has reached the required number of answers, it may transmit a collection session end screen and/or progress information that the collection session has ended from the transceiver 310 to the answerer terminal 12 of this answerer and the project administrator terminal 13. At this time, in order to record that the collection session has ended, the answer number judgement part 330 can change the status in the session participant registration data file 342 or the like to “collection session ended”.
(Evaluation Analysis Part)
The evaluation analysis part 326 is capable of analyzing the degree of strictness of the evaluation of each evaluator for each evaluation axis based on the evaluation input by each evaluator in the selective evaluation input section in the evaluation result data file 349. As a result of the analysis, the evaluation analysis part 326 corrects the evaluation such that the evaluation by the evaluator who gives a strict evaluation rises relatively and the evaluation by the evaluator who gives a lax evaluation decreases relatively, thereby calculating a corrected evaluation.
Among the evaluators, there are those who give lax evaluations and those who give strict evaluations, and the evaluation tendency differs depending on the evaluator. For this reason, if there are many evaluators who give excessively lax or strict evaluations, the evaluation results may differ greatly even for the same evaluation target depending on which evaluators give the evaluations. Therefore, by adjusting the degree of strictness of the evaluation, which may differ for each evaluator, it is possible to reduce the influence of the evaluators who give excessively lax or strict evaluations.
The method of adjusting the degree of strictness is not particularly limited as long as the evaluation is corrected such that the evaluation by the evaluator who gives a strict evaluation rises relatively and the evaluation by the evaluator who gives a lax evaluation decreases relatively. A specific method for adjusting the degree of strictness will be described for illustration. For example, assume that the result of evaluating 48 evaluation targets (for example, business ideas) by an Evaluator A in 3 grades according to a given evaluation axis is as follows:
When the degree of strictness is adjusted by the above method, if Evaluator B is a lax evaluator and gives a high evaluation (evaluation value=+1) to all 48 evaluation targets, the corrected evaluation value of the high evaluation by Evaluator B is 0 points. Further, if Evaluator C is a strict evaluator and gives a low evaluation (evaluation value=−1) to all 48 evaluation targets, the corrected evaluation value of the low evaluation by Evaluator C is also 0 points.
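For illustration, a minimal Python sketch of one such adjustment is shown below. It assumes the correction subtracts each evaluator's mean raw evaluation value, which reproduces the Evaluator B and Evaluator C outcomes described above; the actual adjustment method is not limited to this.

    # Strictness correction sketch: subtract each evaluator's mean raw
    # evaluation value (an assumption for illustration; the actual
    # adjustment method is not limited to this).
    def correct_for_strictness(raw_values):
        mean = sum(raw_values) / len(raw_values)
        return [v - mean for v in raw_values]

    # Evaluator B (lax): +1 for all 48 targets -> corrected value 0.
    print(correct_for_strictness([+1] * 48)[0])   # 0.0
    # Evaluator C (strict): -1 for all 48 targets -> corrected value 0.
    print(correct_for_strictness([-1] * 48)[0])   # 0.0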
The evaluation analysis part 326 may store the corrected evaluation value in the evaluation result data file 349 in association with the evaluator ID of each evaluator and the identifier of the evaluation target (example: the answer ID). The corrected evaluation value may be stored in a temporary file, and the temporary file for temporarily storing the corrected evaluation value is also a kind of evaluation result data storage part.
The evaluation analysis part 326 is capable of aggregating the evaluations of each evaluation target based on the corrected evaluation and the identifier of the evaluation target (example: the answer ID) stored in the evaluation result data file 349 to calculate a provisional score of each evaluation axis for each evaluation target. Then, the evaluation analysis part 326 may store the provisional score in the evaluation target score data storage part (example: the answer score data file 351) in association with the identifier of each evaluation target (example: the answer ID).
An example of a calculation method of the provisional score is shown below. For example, let us assume that four evaluators (A, B, C, D) evaluated an idea. Table 1 shows the evaluation values by each evaluator for the idea and the evaluation values after correction by analyzing the degree of strictness according to the above-mentioned method. In this case, the provisional score of this idea can be calculated as (0.56+0.12−0.54+0.70)/4=0.21 assuming that the evaluation ability of all the evaluators is the same. At this stage, the evaluation ability of the evaluators is unknown, so it is appropriate to consider the evaluation ability of the evaluators to be the same.
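As a minimal sketch, the above arithmetic can be reproduced as a plain average of the corrected evaluation values of Table 1:

    # Provisional score as the plain average of the corrected evaluation
    # values of Table 1, assuming equal evaluation ability.
    corrected = {"A": 0.56, "B": 0.12, "C": -0.54, "D": 0.70}
    provisional = sum(corrected.values()) / len(corrected)
    print(round(provisional, 2))  # 0.21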
The evaluation analysis part 326 is capable of comparing, for each evaluation axis, the corrected evaluation of each evaluation target associated with the evaluator ID stored in the evaluation result data file 349 with the provisional score of each evaluation target stored in the evaluation target score data storage part (example: the answer score data file 351), and aggregating the closeness between them for each evaluator to calculate the evaluation ability score of each evaluator. Then, the evaluation analysis part 326 may store the evaluation ability score in the evaluator score data file 350 in association with the evaluator ID of each evaluator.
Any statistical method may be used for aggregating the closeness between the corrected evaluation and the provisional score, and there are no particular restrictions. For example, methods include calculating Pearson's product-moment correlation coefficient, the Euclidean distance, the cosine similarity, or the polyserial correlation coefficient between the two. For illustration, a method of calculating the correlation coefficient between the corrected evaluation and the provisional score will be described. Table 2 shows the provisional scores for eight ideas from Idea 1 to Idea 8, the evaluation given by Evaluator A for each idea, and the corrected evaluation value obtained by analyzing the degree of strictness of Evaluator A. The correlation coefficient between the “Provisional score” and the “Corrected evaluation value” calculated from Table 2 is 0.60. It can be said that the higher the correlation coefficient between them, the higher the closeness between them, and the higher the evaluation ability score that can be acquired. By comparing the closeness among the evaluators participating in the evaluation session, it is possible to perform a relative rating of the evaluation ability among the evaluators.
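For illustration, a minimal sketch of this correlation-based closeness calculation is shown below; the eight value pairs are hypothetical stand-ins, since the contents of Table 2 are not reproduced here:

    from math import sqrt

    def pearson(x, y):
        # Pearson product-moment correlation coefficient.
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sx = sqrt(sum((a - mx) ** 2 for a in x))
        sy = sqrt(sum((b - my) ** 2 for b in y))
        return cov / (sx * sy)

    # Hypothetical provisional scores of Ideas 1-8 and Evaluator A's
    # corrected evaluation values for the same ideas (not the Table 2 data).
    provisional = [0.21, -0.10, 0.45, 0.05, -0.30, 0.60, 0.15, -0.05]
    corrected_a = [0.56, 0.12, 0.56, -0.44, -0.44, 0.56, -0.44, 0.12]
    print(round(pearson(provisional, corrected_a), 2))  # closeness of Evaluator A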
The evaluation ability score may be an index that can relatively rate the evaluation ability among evaluators, and there is no particular limitation on the expression method. For example, the above-mentioned correlation coefficient itself may be used as the evaluation ability score. Further, a parameter derived from the correlation coefficient based on a predetermined standard may be used as the evaluation ability score. For example, evaluators may be rated in descending order of correlation coefficient, and evaluation ability scores may be given according to predetermined criteria.
An example of a method of rating evaluators in descending order of correlation coefficient and weighting the evaluations will be described. Assuming that the total number of evaluators is N for each evaluation axis, the evaluation analysis part 326 weights the evaluations by the evaluator ranked k (k=1 to N) in evaluation ability according to the following formula.
Weight = 1 + sin{(1 − 2 × (k − 1)/(N − 1)) × π/2}
By weighting in this way, a weighting coefficient (weight) can be assigned to each evaluator for each evaluation axis. The weighting coefficient may be adopted as the evaluation ability score. In this case, the evaluation by each evaluator initially had a voting value of one vote equally, but the evaluation by the highest-ranked evaluator comes to have a voting value of two votes, and the evaluation by the lowest-ranked evaluator comes to have a voting value of zero votes.
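A minimal sketch of this rank-based weighting is shown below:

    import math

    def rank_weight(k, n):
        # Weight for the evaluator ranked k (1 = highest) out of n,
        # per the formula above.
        return 1 + math.sin((1 - 2 * (k - 1) / (n - 1)) * math.pi / 2)

    n = 5
    for k in range(1, n + 1):
        print(k, round(rank_weight(k, n), 2))
    # Rank 1 -> 2.0 (two votes), middle rank -> 1.0, rank n -> 0.0.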
The above-mentioned provisional score was calculated under the assumption that all evaluators have the same evaluation ability because the evaluators' evaluation ability was unknown. However, in order to give an appropriate evaluation to the evaluation target, it is appropriate to give a greater weighting to the evaluations from the evaluators with higher connoisseurship.
Therefore, the evaluation analysis part 326 can calculate a corrected score of each evaluation target for each evaluation axis by aggregating the evaluations of each evaluation target based on the corrected evaluation, the evaluator ID of the evaluators and the identifier of the evaluation target (example: the answer ID) stored in the evaluation result data file 349, and the evaluation ability score of each evaluator stored in the evaluator score data file 350, on condition that a greater weighting is given to the evaluation by the evaluator with a higher evaluation ability score. Then, the evaluation analysis part 326 can store the corrected score in the evaluation target score data storage part (example: the answer score data file 351) in association with the identifier of each evaluation target (example: the answer ID).
The specific calculation method of the corrected score may be appropriately determined so as to satisfy the above condition. An example of how to calculate the corrected score is shown below. For example, let us assume that four evaluators (A, B, C, D) evaluated an idea. Table 3 shows the evaluation by each evaluator for the idea, the corrected evaluation value obtained by analyzing the degree of strictness according to the above-mentioned method, and the evaluation ability score of each evaluator. In this case, the provisional score of the idea is 0.21 as described above. On the other hand, when a method of weighting by taking the weighted average of the corrected evaluations of the evaluators using the evaluation ability score of each evaluator is adopted as the method of calculating the corrected score, the corrected score of the idea becomes 0.29. A value obtained by further performing arbitrary statistical processing on this value may be defined as the corrected score.
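For illustration, a minimal sketch of the weighted-average calculation is shown below; the evaluation ability scores used as weights are hypothetical, since the contents of Table 3 are not reproduced here, so the result does not necessarily match the 0.29 quoted above:

    # Corrected score as the weighted average of the corrected evaluation
    # values (Table 1), weighted by hypothetical evaluation ability scores.
    corrected = {"A": 0.56, "B": 0.12, "C": -0.54, "D": 0.70}
    ability = {"A": 1.6, "B": 1.0, "C": 0.4, "D": 1.0}  # hypothetical weights

    weighted_sum = sum(corrected[e] * ability[e] for e in corrected)
    corrected_score = weighted_sum / sum(ability.values())
    print(round(corrected_score, 2))  # 0.38 with these hypothetical weights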
The corrected score obtained in the above procedure, or a statistic calculated based on the corrected score, may be used as the final score of the evaluation target. Alternatively, the evaluation analysis part 326 may regard the corrected score of each evaluation target as the provisional score and repeat the step 1G and the step 1H one or more times. By repeating the calculation of the evaluation ability score of the evaluators (step 1G) and the calculation of the score of the evaluation targets with weighting based on each evaluator's evaluation ability (step 1H) once or more, preferably 10 times or more, more preferably 100 times or more, it is possible to obtain a calculation result with higher consistency between the evaluation ability score of each evaluator and the score of each evaluation target, which are mutually reflected in the calculation.
It is desirable that the steps 1G and 1H be repeated until either or both of the evaluation ability score and the corrected score of each evaluation target converge. This is because, by repeating the step 1G and the step 1H until either, preferably both, of the evaluation ability score and the corrected score of each evaluation target converge, there is an advantage that the calculated score reaches a solution with the maximum explanatory power and consistency. However, the evaluation ability score or the corrected score of each evaluation target may not converge even if the steps 1G and 1H are repeated (these may diverge, oscillate periodically, or the like). Therefore, a maximum number of repetitions (for example, a value in the range of 10 to 100 times) may be set in advance, and if neither the evaluation ability score nor the corrected score of each evaluation target converges by then, the final evaluation ability score and the corrected score may be determined based on the calculation results obtained by repeating the maximum number of times. In addition, if either or both of the evaluation ability score and the corrected score of each evaluation target converge before reaching the predetermined maximum number of repetitions, the step 1G need not be repeated any further in order to shorten the calculation time.
Therefore, in one embodiment, when either or both of the following conditions (a) and (b) are satisfied, the evaluation analysis part 326 may stop repeating the step 1G even if the preset maximum number of repetitions has not been reached. In that case, the evaluation analysis part 326 performs a final step 1H after the final step 1G, so that the repetition of the step 1G and the step 1H is completed.
If the evaluation ability score does not converge, the evaluation ability score to be finally adopted does not need to be the latest evaluation ability score after the predetermined number of repetitions. The average value of several evaluation ability scores (for example, the evaluation ability scores calculated in the last few repetitions, for example, the last 2 to 6) may be adopted as the final evaluation ability score. Similarly, if the corrected score does not converge, the corrected score to be finally adopted does not need to be the latest corrected score after the predetermined number of repetitions. The average value of several corrected scores (for example, the corrected scores calculated in the last few repetitions, for example, the last 2 to 6) may be adopted as the final corrected score.
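For illustration, a minimal sketch of repeating the step 1G and the step 1H with a convergence test and a preset maximum number of repetitions is shown below; the closeness measure (a Pearson correlation shifted to be non-negative) and the convergence test are assumptions for illustration, not the only possible choices:

    from math import sqrt

    def pearson(x, y):
        # Pearson product-moment correlation coefficient (as in the earlier sketch).
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sx = sqrt(sum((a - mx) ** 2 for a in x))
        sy = sqrt(sum((b - my) ** 2 for b in y))
        return cov / (sx * sy) if sx and sy else 0.0

    def iterate_scores(corrected, max_reps=100, tol=1e-6):
        # corrected[evaluator][target] = corrected evaluation value.
        evaluators = list(corrected)
        targets = list(next(iter(corrected.values())))
        # Provisional scores: plain averages (equal ability assumed at first).
        scores = {t: sum(corrected[e][t] for e in evaluators) / len(evaluators)
                  for t in targets}
        ability = {e: 1.0 for e in evaluators}
        for _ in range(max_reps):
            # Step 1G: evaluation ability = closeness to the current scores,
            # shifted by +1 so the weights stay non-negative (an assumption).
            ability = {e: 1 + pearson([corrected[e][t] for t in targets],
                                      [scores[t] for t in targets])
                       for e in evaluators}
            # Step 1H: weighted average of the corrected evaluations.
            total = sum(ability.values())
            new_scores = {t: sum(corrected[e][t] * ability[e]
                                 for e in evaluators) / total
                          for t in targets}
            # Stop early once the target scores converge.
            if max(abs(new_scores[t] - scores[t]) for t in targets) < tol:
                return ability, new_scores
            scores = new_scores
        return ability, scores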
The evaluation target data storage part (example: the answer score data file 351) may also store data related to a plurality of evaluation targets different from the plurality of evaluation targets used in the current evaluation session. The data related to a plurality of evaluation targets different from the plurality of evaluation targets used in the current evaluation session include, for example, the answer data collected by collection sessions in the past, the data on evaluation targets separately prepared by the project administrator, and the like.
In this case, the evaluation analysis part 326 can calculate the rarity score of each evaluation target in the current evaluation session by calculating the similarity between each of the plurality of evaluation targets in the current evaluation session and the other evaluation targets used in the current evaluation session and/or the different evaluation targets, and aggregating the calculated similarity. Then, it may store the rarity score in the evaluation target score data storage part (example: the answer score data file 351) in association with the identifier of each evaluation target (example: the answer ID).
The similarity between the evaluation targets may be calculated by a known method according to the format of the evaluation targets. For example, when the evaluation target is expressed in the form of a multiple-choice type such as numerical values or options, a method using a correlation coefficient can be mentioned. When the evaluation target is expressed in a text format using a language, there is a method of calculating the similarity between the evaluation targets by performing context analysis such as parsing and semantic analysis by natural language processing for each evaluation target. As a method of natural language processing, there is a method in which the evaluation target (text data) is morphologically analyzed, decomposed into words, each word is vectorized (distributed expression), and the evaluation target is vectorized by using a technique such as LSTM or Average Pooling. When the evaluation targets are vectorized, the similarity between the evaluation targets is calculated based on the Euclidean distance and/or the cosine similarity. The similarity is represented by 0 (same) to ∞ (totally different) for the Euclidean distance and 1 (same) to −1 (opposite) for the cosine similarity.
For example, if the evaluation target is a business idea, N business ideas (N can be, for example, 5 to 20) similar to the business idea are searched for in the evaluation target data storage part in descending order of similarity, and the average value of the reciprocals of the similarities of the found highly similar business ideas may be used as the rarity score.
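For illustration, a minimal sketch of such a rarity calculation is shown below; it assumes the evaluation targets have already been vectorized by the natural language processing described above and that the similarities of the found neighbors are positive:

    from math import sqrt

    def cosine_similarity(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

    def rarity_score(target_vec, other_vecs, n=5):
        # Average reciprocal similarity of the n most similar other targets
        # (assumes the top-n similarities are positive).
        sims = sorted((cosine_similarity(target_vec, v) for v in other_vecs),
                      reverse=True)[:n]
        return sum(1 / s for s in sims) / len(sims)

    # Usage: rarity_score(vec_of_new_idea, [vecs of the other stored ideas])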
In one embodiment, the evaluation analysis part 326 is capable of performing a step 2D comprising calculating a score of the answerer for at least one evaluation axis, based on data related to the evaluation targets including the corrected score itself of each evaluation target for each evaluation axis and/or a statistic calculated based on the corrected score, and the identifier of the answerer stored in the evaluation target score data storage part (example: the answer score data file 351), and storing the score of the answerer in the answerer score data storage part (example: the answerer score data file 352).
In the second embodiment, as in the first embodiment, after the degree of strictness of the evaluators is analyzed and the evaluation is corrected (step 1E), the score of the evaluation target and the evaluation ability are calculated. However, the method of data processing after the step 1E is different from that of the first embodiment. In the first embodiment, the evaluation ability of an evaluator is calculated by also taking into account the evaluation results given by the evaluator himself/herself whose evaluation ability is to be calculated, and the score of the evaluation target is calculated based on the evaluation ability of each evaluator calculated in this way. Although this method has an advantage that data processing can be performed in a short time, it introduces noise because the evaluation result by the evaluator himself/herself whose evaluation ability is to be calculated is considered. That is, when the evaluation result by the evaluator himself/herself is taken into consideration, the evaluation ability of the evaluator is calculated to be higher. Therefore, by calculating the evaluation ability of an evaluator based only on the evaluation results by evaluators other than the evaluator himself/herself, more reliable results can be obtained (because it is possible to prevent an evaluator who has once acquired a high evaluation ability from acquiring an ever higher evaluation ability each time the calculation is repeated).
For example, let us assume that four evaluators (A, B, C, D) evaluated a certain evaluation target, and only Evaluator A gave a low evaluation (×, a cross) while the remaining three gave a high evaluation (◯, a circle). It can be understood that, when calculating the evaluation ability of Evaluator A, calculating it based only on the evaluation results by the remaining three people, rather than also considering the evaluation result by Evaluator A, will lower the evaluation ability of Evaluator A, and that this is the more reliable result. Hereinafter, a specific example of data processing according to the second embodiment will be described.
(1) Analysis of Degree of Strictness of Evaluator and Correction of Evaluation (Step 1E)
Since the step 1E in the second embodiment is the same as that in the first embodiment, the description thereof will be omitted.
(2) Calculation of Provisional Score of Evaluation Targets (Step 1F)
Assuming that the number of the evaluators is n (n is an integer of 2 or more), the evaluation analysis part 326 is capable of performing a step comprising, for each of the first to nth evaluator, without considering the evaluation of the evaluation target by the kth evaluator (k is an integer from 1 to n), aggregating the evaluations of each evaluation target based on the corrected evaluation by the evaluators other than the kth evaluator and the identifier of the evaluation target stored in the evaluation result data storage part to calculate a provisional score of each evaluation target for each evaluation axis. Then, the evaluation analysis part 326 is capable of performing a step comprising, for each of the first to nth evaluator, storing the provisional score in the evaluation target score data storage part (example: the answer score data file 351) in association with the evaluator ID of the kth evaluator and the identifier of each evaluation target (example: the answer ID).
An example of the calculation method of the provisional score is shown below. For example, let us assume that four evaluators (A, B, C, D) evaluated an idea. The evaluation value of the idea by each evaluator and the corrected evaluation value obtained by analyzing the degree of strictness according to the above-mentioned method are as shown in Table 1 described above. Assuming that the evaluation ability of all evaluators is the same, the provisional score of the idea for calculating the evaluation ability of Evaluator A is calculated as (0.12−0.54+0.70)/3=0.09 with the evaluation result by Evaluator A excluded. A provisional score can be calculated for Evaluators B to D in the same manner. The results are shown in Table 4. It can be seen that a large difference arises when the provisional score is calculated excluding the evaluation by the evaluator himself/herself. At this stage, the evaluation ability of the evaluators is unknown, so it is appropriate to consider the evaluation ability of the evaluators to be the same.
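A minimal sketch of this leave-one-out calculation, using the corrected evaluation values of Table 1, is shown below (the contents of Table 4 are not reproduced here, but these figures follow from Table 1):

    # Leave-one-out provisional scores: the score used for evaluator k
    # excludes k's own corrected evaluation (Table 1 values).
    corrected = {"A": 0.56, "B": 0.12, "C": -0.54, "D": 0.70}

    loo = {k: sum(v for e, v in corrected.items() if e != k) / (len(corrected) - 1)
           for k in corrected}
    print({k: round(v, 2) for k, v in loo.items()})
    # {'A': 0.09, 'B': 0.24, 'C': 0.46, 'D': 0.05}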
(3) Calculation of Provisional Evaluation Ability Score of Evaluators (Step 1G1)
Next, the provisional evaluation ability of the evaluators is calculated. The reason for calling it “provisional evaluation ability” is that the evaluation ability calculated in the step 1G1 is not the final evaluation ability. Specifically, the evaluation analysis part 326 is capable of comparing, for each evaluation axis, the corrected evaluation of each evaluation target associated with the evaluator ID of the kth (k is an integer from 1 to n) evaluator stored in the evaluation result data file 349 with the provisional score of each evaluation target stored in the evaluation target score data storage part (example: the answer score data file 351) in association with the evaluator ID of the kth evaluator and the identifier of each evaluation target (example: the answer ID), and aggregating the closeness between them for each evaluator to calculate a provisional evaluation ability score of the kth evaluator. Then, the evaluation analysis part 326 is capable of performing a step comprising, for each of the first to nth evaluator, storing the provisional evaluation ability score in the evaluator score data file 350 in association with the evaluator ID of the kth evaluator.
Any statistical method may be used for aggregating the closeness between the corrected evaluation and the provisional score, and there are no particular restrictions. For example, methods include calculating Pearson's product-moment correlation coefficient, the Euclidean distance, the cosine similarity, or the polyserial correlation coefficient between the two. The specific aggregation method is as illustrated in the first embodiment. However, the second embodiment is different from the first embodiment in that the provisional score of each evaluation target for calculating the evaluation ability is different for each evaluator.
Further, the provisional evaluation ability score may be an index that can relatively rate the provisional evaluation ability among evaluators, and the expression method thereof is not particularly limited. The specific expression method is as illustrated in the first embodiment.
(4) Calculation of Corrected Score of Evaluation Targets (Step 1H1)
The above-mentioned provisional score is calculated under the assumption that all evaluators have the same evaluation ability because the evaluation ability of the evaluators is unknown. However, in order to give an appropriate evaluation to the evaluation target, it is appropriate to give a greater weighting to the evaluations by the evaluators with higher connoisseurship. Since the corrected score will be used to calculate the final evaluation ability of each evaluator in the next step, the corrected score is calculated based on the evaluation results from the other evaluators, excluding the evaluator whose final evaluation ability is to be calculated. Therefore, the second embodiment is different from the first embodiment in that the corrected score of each evaluation target is different for each evaluator.
Specifically, the evaluation analysis part 326 is capable of performing a step comprising, for each of the first to nth evaluator, without considering the evaluation of the evaluation target by the kth evaluator (k is an integer from 1 to n), aggregating the evaluations of each evaluation target based on the corrected evaluation by the evaluators other than the kth evaluator, the evaluator ID of the evaluators and the identifier of the evaluation target (example: the answer ID) stored in the evaluation result data file 349, and the provisional evaluation ability score of the evaluators other than the kth evaluator stored in the evaluator score data file 350, to calculate a corrected score of each evaluation target for each evaluation axis, on condition that a greater weighting is given to the evaluation by the evaluator with a higher provisional evaluation ability score. Then, the evaluation analysis part 326 is capable of performing a step comprising, for each of the first to nth evaluator, storing the corrected score in the evaluation target score data storage part (example: the answer score data file 351) in association with the evaluator ID of the kth evaluator and the identifier of each evaluation target (example: the answer ID).
The specific calculation method of the corrected score may be appropriately determined so as to satisfy the above conditions. An example of how to calculate the corrected score is shown below. For example, let us assume that four evaluators (A, B, C, D) evaluated an idea. Table 5 shows the evaluation of the idea by each evaluator, the corrected evaluation value obtained by analyzing the degree of strictness according to the above-mentioned method, the provisional score of the idea, and the provisional evaluation ability score of each evaluator. When a method of weighting by taking the weighted average of the corrected evaluations of the evaluators using the provisional evaluation ability score of each evaluator is adopted as the method of calculating the corrected score, the corrected score of the idea becomes as shown in Table 5. A value obtained by further performing arbitrary statistical processing on this value may be defined as the corrected score.
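For illustration, a minimal sketch of the leave-one-out weighted average is shown below; the provisional evaluation ability scores used as weights are hypothetical, since the contents of Table 5 are not reproduced here:

    # Leave-one-out corrected score: the weighted average used for
    # evaluator k excludes k's own corrected evaluation (Table 1 values).
    corrected = {"A": 0.56, "B": 0.12, "C": -0.54, "D": 0.70}
    ability = {"A": 1.6, "B": 1.0, "C": 0.4, "D": 1.0}  # hypothetical weights

    def loo_corrected_score(k):
        others = [e for e in corrected if e != k]
        total = sum(ability[e] for e in others)
        return sum(corrected[e] * ability[e] for e in others) / total

    print({k: round(loo_corrected_score(k), 2) for k in corrected})
    # {'A': 0.25, 'B': 0.46, 'C': 0.48, 'D': 0.27} with these hypothetical weights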
(5) Calculation of Final Evaluation Ability Score of Evaluators (Step 1G2)
Next, the evaluation analysis part 326 calculates the final evaluation ability score of each evaluator based on the corrected score. Specifically, the evaluation analysis part 326 is capable of performing a step 1G2 comprising, for each of the first to nth evaluator, comparing, for each evaluation axis, the corrected evaluation of each evaluation target associated with the evaluator ID of the kth (k is an integer from 1 to n) evaluator stored in the evaluation result data file 349 with the corrected score of each evaluation target stored in the evaluation target score data storage part (example: the answer score data file 351) in association with the evaluator ID of the kth evaluator and the identifier of each evaluation target (example: the answer ID), aggregating the closeness between them for each evaluator to calculate a final evaluation ability score of the kth evaluator, and storing the final evaluation ability score in the evaluator score data storage part in association with the evaluator ID of the kth evaluator.
Any statistical method may be used for aggregating the closeness between the corrected evaluation and the corrected score, and there are no particular restrictions. For example, methods include calculating Pearson's product-moment correlation coefficient, the Euclidean distance, the cosine similarity, or the polyserial correlation coefficient between the two. The specific aggregation method is as illustrated in the first embodiment. However, the second embodiment is different from the first embodiment in that the corrected score of each evaluation target for calculating the final evaluation ability is different for each evaluator.
Further, the final evaluation ability score may be an index that can relatively rate the final evaluation ability among evaluators, and there is no particular limitation on the expression method. The specific expression method is as illustrated in the first embodiment.
(6) Calculation of Final Score of Evaluation Targets (Step 1H2)
Next, the evaluation analysis part 326 calculates the final score of the evaluation target by using the final evaluation ability score of each evaluator. When calculating the corrected score, the final evaluation ability score of each evaluator has not yet been determined, so the corrected score of each evaluation target is calculated for each evaluator. On the other hand, when calculating the final score, since the final evaluation ability score of each evaluator has already been determined, a single final score of each evaluation target for each evaluation axis can be calculated by aggregating the evaluations of each evaluation target based on the final evaluation ability score of each evaluator.
Specifically, the evaluation analysis part 326 can calculate a final score of each evaluation target for each evaluation axis by aggregating the evaluations of each evaluation target based on the corrected evaluation, the evaluator ID of the evaluator and the identifier of the evaluation target (example: the answer ID) stored in the evaluation result data file 349, and the final evaluation ability score of each evaluator stored in the evaluator score data file 350, on condition that a greater weighting is given to the evaluation by the evaluator with a higher final evaluation ability score. Then, the evaluation analysis part 326 can store the final score in the evaluation target score data storage part (example: the answer score data file 351) in association with the identifier of each evaluation target (example: the answer ID).
The specific calculation method of the final score may be appropriately determined so as to satisfy the above conditions. An example of how to calculate the final score is shown below. For example, let us assume four evaluators (A, B, C, D) evaluated an idea. Table 6 shows the evaluation of the idea by each evaluator, the corrected evaluation value obtained by analyzing the degree of strictness according to the above-mentioned method, and the final evaluation ability score of each evaluator. As a method of calculating the final score, when a method of giving weighting by weighted-averaging the corrected evaluations by the evaluators with the final evaluation ability score of each evaluator is adopted, the final score of the idea is as shown in the Table 6. A value after further performing arbitrary statistical processing on this value may be defined as the final score.
(7) Repetition of Steps 1G1 and 1H1
The final score obtained by the above procedure is obtained after performing the step 1G1 and the step 1H1 once. Alternatively, the evaluation analysis part 326 may regard the corrected score of each evaluation target as the provisional score, and repeat the step 1G1 and the step 1H1 one or more times. By repeating the calculation of the provisional evaluation ability score of the evaluators (step 1G1) and the calculation of the corrected score of the evaluation targets with weighting based on each evaluator's provisional evaluation ability (step 1H1) once or more, preferably 10 times or more, more preferably 100 times or more, it is possible to obtain a calculation result with higher consistency between the evaluation ability score of each evaluator and the score of each evaluation target, which are mutually reflected in the calculation.
It is desirable that the steps 1G1 and 1H1 be repeated until either or both of the provisional evaluation ability score and the corrected score of each evaluation target converge. This is because, by repeating the step 1G1 and the step 1H1 until either, preferably both, of the provisional evaluation ability score and the corrected score of each evaluation target converge, there is an advantage that the calculated score reaches a solution with the maximum explanatory power and consistency. However, the provisional evaluation ability score or the corrected score of each evaluation target may not converge even if the steps 1G1 and 1H1 are repeated (these may diverge, oscillate periodically, or the like). Therefore, a maximum number of repetitions (for example, a value in the range of 10 to 100 times) may be set in advance, and if neither the provisional evaluation ability score nor the corrected score of each evaluation target converges by then, the final evaluation ability score and the final score may be determined based on the calculation results obtained by repeating the maximum number of times. In addition, if either or both of the provisional evaluation ability score and the corrected score of each evaluation target converge before reaching the predetermined maximum number of repetitions, the step 1G1 need not be repeated any further in order to shorten the calculation time.
Therefore, in one embodiment, when either or both of the following conditions (a) and (b) are satisfied, the evaluation analysis part 326 may stop repeating the step 1G1 even if the preset maximum number of repetitions has not been reached. In that case, the evaluation analysis part 326 performs a final step 1H1 after the final step 1G1, so that the repetition of the step 1G1 and the step 1H1 is completed.
Further, if the provisional evaluation ability score does not converge, the provisional evaluation ability score to be finally adopted does not need to be the latest provisional evaluation ability score after the predetermined number of repetitions. The average value of several provisional evaluation ability scores (for example, the provisional evaluation ability scores calculated in the last few repetitions, for example, the last 2 to 6) may be adopted as the final provisional evaluation ability score. Similarly, if the corrected score does not converge, the corrected score to be finally adopted does not need to be the latest corrected score after the predetermined number of repetitions. The average value of several corrected scores (for example, the corrected scores calculated in the last few repetitions, for example, the last 2 to 6) may be adopted as the final corrected score.
The evaluation analysis part 326 uses the obtained final provisional evaluation ability score to perform the final step 1H1 to calculate the final corrected score, and after that, by performing the step 1G2 and the step 1H2, the final evaluation ability score of the evaluator and the final score of the evaluation target are calculated.
(8) Calculation of Rarity Score of Evaluation Targets (Step 1J)
The evaluation target data storage part (example: the answer score data file 351) may also store data related to a plurality of evaluation targets different from the plurality of evaluation targets used in the current evaluation session. The data related to such different evaluation targets include, for example, answer data collected by collection sessions in the past, data on evaluation targets separately prepared by the project administrator, and the like.
In this case, the evaluation analysis part 326 can calculate the rarity score of each evaluation target in the current evaluation session, by calculating similarity between each of the plurality of evaluation targets in the current evaluation session and the other evaluation targets used in the current evaluation session and/or the different evaluation targets, and aggregating the similarity. Then, it may store the rarity score in the evaluation target score data storage part (example: the answer score data file 351) in association with the identifier of each evaluation target (example: the answer ID). Since a specific example of the method of calculating the similarity between evaluation targets is the same as that of the first embodiment, the description thereof will be omitted.
(9) Calculation of Answerer Scores (Step 2D)
In one embodiment, the evaluation analysis part 326 is capable of performing a step 2D comprising calculating a score of the answerer for at least one evaluation axis, based on data related to the evaluation targets including the final score itself of each evaluation target for each evaluation axis and/or a statistic calculated based on the final score stored in the evaluation target score data storage part (example: the answer score data file 351), and the identifier of the answerer, and storing the score of the answerer in the answerer score data storage part (example: the answerer score data file 352).
(Evaluation Analysis Data Extraction Part)
The evaluation analysis data extraction part 327 is capable of extracting various evaluation analysis data stored in the evaluation result data file 349, the answerer score data file 352, the evaluator score data file 350, and the evaluation target score data storage part (example: the answer score data file 351), and transmitting the evaluation analysis data from the transceiver 310 to the project administrator terminal 13 in a displayable form via the computer network 14. For example, the evaluation analysis data extraction part 327 is capable of performing a step 1I comprising extracting either or both of the following data (1) and (2), and transmitting them from the transceiver 310 to the administrator terminal 13 via the network:
Further, the evaluation analysis data extraction part 327 is capable of performing a step 1K comprising extracting data related to the evaluation target, including the rarity score itself of each evaluation target and/or a statistic calculated based on the rarity score stored in the evaluation target score data storage part (example: the answer score data file 351), and transmitting them from the transceiver 310 to the administrator terminal 13 via the network.
Further, the evaluation analysis data extraction part 327 is capable of performing a step 2E comprising transmitting data related to the answerer, including the score itself of each answerer for each evaluation axis and/or a statistic calculated based on the score stored in the answerer score data file 352, from the transceiver 310 to the administrator terminal 13 via the network.
The statistic includes, for example, an arithmetic mean value, a total value, a coefficient of variation, a rank, a standard deviation, and the like, but is not limited thereto.
[Participant (Evaluator, Answerer) Terminal]
The participant terminal 12 may also have the hardware configuration of the computer 200 described above. In the storage device 202 of the participant terminal 12, in addition to programs such as a web browser, browser data and data transmitted to/from the server 11 can be stored temporarily or non-transitorily. The participant terminal 12 can input login information, information as an evaluation target, evaluation of the evaluation target, and the like by using the input device 204. The participant terminal 12 can display a login screen, a screen for inputting information as an evaluation target, a screen for inputting evaluation, evaluation analysis results (evaluator score data, answer score data, answerer score data and the like), and the like with the output device 203. The participant terminal 12 can communicate with the server 11 via the computer network 14 with the communication device 205. For example, it can receive a login screen, information on an evaluation target, format data for inputting information as an evaluation target, format data for inputting evaluation, evaluation analysis data, and the like from the server 11, and can transmit login information, answer data including the information as an evaluation target, evaluation result data, and the like to the server 11.
[Project Administrator Terminal]
The project administrator terminal 13 may also have the hardware configuration of the computer 200 described above. In the storage device 202 of the project administrator terminal 13, in addition to programs such as a web browser, browser data and data transmitted to/from the server 11 can be stored temporarily or non-transitorily. The project administrator terminal 13 can input participant account information, login information, project implementation conditions, session start instructions, and the like by using the input device 204. The project administrator terminal 13 can display participant account data, a login screen, a screen for inputting project implementation conditions, a screen for inputting evaluation target information, a screen for inputting evaluation, evaluation analysis results (evaluator score data, answer score data, answerer score data, and the like), and the like with the output device 203. The project administrator terminal 13 can communicate with the server 11 via the computer network 14 with the communication device 205. The project administrator terminal 13 can receive, for example, a login screen, participant account data, answer data including the information as an evaluation target, evaluation result data, evaluation analysis data, evaluation progress data, and the like from the server 11, and can transmit project implementation condition data (including an evaluation start instruction), participant account data, login data, and the like to the server 11.
[Server Administrator Terminal]
The server administrator terminal 15 may also have the hardware configuration of the computer 200 described above. In the storage device 202 of the server administrator terminal 15, in addition to programs such as a web browser, browser data and data transmitted to/from the server 11 can be stored temporarily or non-transitorily. The server administrator terminal 15 can input server administrator account data, project administrator account data, login information, and the like by using the input device 204. The server administrator terminal 15 can display server administrator account data, project administrator account data, a login screen, participant account data, a screen for inputting project implementation conditions, a screen for inputting the information as an evaluation target, a screen for inputting evaluation, evaluation analysis results (evaluator score data, answer score data, answerer score data, and the like), and the like with the output device 203. The server administrator terminal 15 can communicate with the server 11 via the computer network 14 with the communication device 205. The server administrator terminal 15 can receive, for example, a login screen, server administrator account data, project administrator account data, participant account data, answer data including the information as an evaluation target, evaluation result data input by the evaluators, evaluation analysis data, evaluation progress data, and the like from the server 11, and can transmit server administrator account data, project administrator account data, login data, and the like to the server 11.
<2. Flow for Online Evaluation>
Next, a procedure of the method for online evaluation by the above-mentioned system will be described with reference to a flowchart for illustration.
(2-1 Setting Project Implementation Conditions)
If the login is successful, the administration screen is displayed on the project administrator terminal 13 (example:
When the registration of the project implementation conditions is completed, the server 11 transmits a screen for notifying the project implementation conditions to the project administrator terminal 13 (S108). The project administrator can confirm the registered information on the screen. Next, the project administrator inputs information about the participants (evaluators, answerers) who participate in the collection session and the evaluation session on the administration screen, and transmits the information to the server 11 (S109). When the server 11 receives the data including the information about the participants, the data registration part 325 stores the data in the participant account file 341, the session participant registration data file 342, and the like (S110).
(2-2 Collection Session)
Next, the information input data extraction part 324 of the server 11 extracts question data, including question texts related to a predetermined theme and specific question texts that describe the contents to be provided by the answerers and the like, from the question data file 345 and the answer column data file (summarized) 346a, extracts second format data that match the conditions stored in the answer column data files 346a and 346b from the second format data file 362, and transmits them to the answerer terminals 12 of a plurality of answerers who are flagged as answerers of the collection session in the session participant registration data file 342 (S112). In this way, the answerer terminal 12 displays a screen for inputting information (answer content) as shown in
After the information input screen is displayed on the answerer terminal 12, the answerer inputs the information (answer content) for the question on the screen and clicks the “Transmit” button. Accordingly, the answer data are transmitted from the answerer terminal 12 to the server 11 (S114). When the server 11 receives the answer data, the time limit judgement part 328 judges whether or not the answer data have been received within the time limit (S115). When it is judged that they are within the time limit, the data registration part 325 of the server 11 assigns an answer ID to the answer data, and stores the answer data in the answer data files 348a and 348b in association with the answerer ID and the like of the answerer who has transmitted the answer data (S116).
Next, if a maximum number of answers is set, each time the answer number judgement part 330 of the server 11 receives one answer data from the answerer terminal 12, it increases the number of completed answers in the answer progress management file 356 corresponding to the answerer ID of the answerer by one, and judges whether or not this answerer has reached the maximum number of answers (S117). As a result, when it is judged that the maximum number of answers has not been reached, the information input data extraction part 324 transmits data including information necessary for answer input, such as unanswered question data, to the corresponding answerer terminal 12 (S112). In this way, the question data are repeatedly transmitted to the answerer terminal 12 until the maximum number of answers is reached.
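For illustration, a minimal sketch of this per-answer progress check is shown below; the function and variable names are hypothetical stand-ins, and the actual server records the progress in the answer progress management file 356 rather than in memory:

    # Hypothetical stand-ins for the server-side actions (S112, S116, S119).
    def store_answer(data): print("stored:", data)
    def send_question(answerer_id, q): print("question to", answerer_id, ":", q)
    def send_session_end(answerer_id): print("session end ->", answerer_id)

    MAX_ANSWERS = 10
    progress = {}  # answerer_id -> number of completed answers

    def on_answer_received(answerer_id, answer_data, unanswered):
        store_answer(answer_data)                               # S116
        progress[answerer_id] = progress.get(answerer_id, 0) + 1
        if progress[answerer_id] < MAX_ANSWERS and unanswered:  # S117
            send_question(answerer_id, unanswered.pop(0))       # back to S112
        else:
            send_session_end(answerer_id)                       # S118-S119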
Alternatively, in S112, the server 11 may collectively transmit question data including questions necessary for each answerer to input information (answer contents) to each answerer terminal 12. Further, the answerer terminal 12 may be able to collectively transmit the answer data to the server 11 in S114. In this case, the server 11 can receive all the answer data from the answerer at once, and it is not necessary to repeat S112.
On the other hand, when the answer data are received after the time limit has passed, or when it is judged that the time limit has passed regardless of whether or not the answer data have been received from the answerer terminal 12, or when the answer number judgement part 330 of the server 11 judges that the maximum number of answers has been reached, the time limit judgement part 328 of the server 11 records that the collection session has ended, and changes the status in the session participant registration data file 342 and the like to “Collection session ended” (S118). Further, it transmits a collection session end screen or progress information that the collection session has ended to the answerer terminal 12 and the project administrator terminal 13 (S119). As a result, the answerer terminal 12 displays a screen indicating that the collection session has ended (S120), and the project administrator terminal 13 displays progress information indicating that the collection session has ended (S121).
(2-3 Evaluation Session)
When the evaluator allocation part 322 of the server 11 receives the instruction to start the evaluation session, it allocates evaluators who should evaluate the information (answer content) in each of the answer data stored in the answer data file (detailed) 348b, from among a plurality of evaluators who have been flagged as evaluators for this evaluation session in the session participant registration data file 342. Then, for each evaluator, the evaluator allocation part 322 stores the evaluator ID, the answer ID to be evaluated, the required number of evaluations, and the like in association with each other in the evaluation progress management file 355 for managing the progress of each evaluation by the evaluator (S123).
In addition, the allocation process by the evaluator allocation part 322 is not limited to being started by the instruction for starting the evaluation session from the project administrator terminal 13, and may be started by any instruction for starting the evaluator allocation process. For example, the allocation process may be executed upon receiving an instruction from the project administrator terminal 13 dedicated to assigning evaluators, may be executed according to other instructions, or may be executed when the status is changed to indicate the end of the collection session.
The evaluation input data extraction part 323 of the server 11 extracts the answer data including the information (answer content) to be evaluated by each evaluator, based on the answer ID and the evaluator ID stored in the evaluation progress management file 355, extracts the question data including the question texts related to the predetermined theme and the specific question texts describing the contents to be provided by the answerer from the question data file 345 and the answer column data file (summarized) 346a, and extracts the first format data for evaluation input including a selective evaluation input section from the first format data file 361, based on the conditions related to evaluation axes stored in the evaluation axis data file 344, and transmits them to the corresponding evaluator terminal 12 (S124). As a result, the evaluator terminal 12 displays the evaluation input screen as shown in
The evaluator clicks the button for evaluating the information (answer content) on the screen (example: “I do not agree very much”, “I can agree”, “I can agree very much”), and then clicks the “Transmit” button. Accordingly, the evaluation result data are transmitted from the evaluator terminal 12 to the server 11 (S126). When the server 11 receives the evaluation result data, the time limit judgement part 328 judges whether or not the evaluation result data have been received within the time limit (S127). When it is judged that they are within the time limit, the data registration part 325 of the server 11 assigns an evaluation ID to the evaluation result data, and stores the evaluation result data in the evaluation result data file 349 in association with the evaluator ID, the evaluation ID, and the like of the evaluator who has transmitted the evaluation result data (S128).
Next, if a required number of evaluations is set, each time the evaluation number judgement part 329 of the server 11 receives one piece of evaluation result data from the evaluator terminal 12, it increases the number of completed evaluations in the evaluation progress management file 355 corresponding to the evaluator ID of the evaluator by one, and judges whether or not this evaluator has reached the required number of evaluations (S129). As a result, when it is judged that the required number of evaluations has not been reached, the evaluation input data extraction part 323 transmits data including information necessary for inputting evaluation, such as unevaluated answer data, together with the first format data in a displayable form from the transceiver 310 via the computer network 14 to the corresponding evaluator terminal 12 (S124). In this way, the answer data are repeatedly transmitted to the evaluator terminal 12 until the required number of evaluations is reached.
Alternatively, in S124, the server 11 may collectively transmit answer data including information (answer content) to be evaluated by each evaluator to each evaluator terminal 12. Further, the evaluator terminal 12 may be able to collectively transmit the evaluation result data to the server 11 in S126. In this case, the server 11 can receive all the evaluation result data from the evaluator at once, and it is not necessary to repeat S124.
On the other hand, when the evaluation result data are received after the time limit has passed, or when it is judged that the time limit has passed regardless of whether or not the evaluation result data have been received from the evaluator terminal 12, or when the evaluation number judgement part 329 of the server 11 judges that the required number of evaluations has been reached, the time limit judgement part 328 of the server 11 records that the evaluation session has ended, and changes the status in the session participant registration data file 342 and the like to “Evaluation session ended” (S130). Further, it transmits an evaluation session end screen or progress information that the evaluation session has ended to the evaluator terminal 12 and the project administrator terminal 13 (S131). As a result, the evaluator terminal 12 displays a screen indicating that the evaluation session has ended (S132), and the project administrator terminal 13 displays progress information indicating that the evaluation session has ended (S133).
(2-4 Evaluation Analysis)
The evaluation analysis part 326 of the server 11 generates, for example, the following evaluation analysis data based on the evaluation result data and the like stored in the evaluation result data file 349.
The evaluation analysis data are stored in the evaluation result data file 349, the evaluator score data file 350, the answer score data file 351 and the answerer score data file 352 according to the type of data (S135). The evaluation analysis data extraction part 327 of the server 11 extracts the evaluation analysis data and transmits them to the project administrator terminal 13 (S136). When the evaluation analysis data are received, a screen showing the evaluation analysis result is displayed on the screen of the project administrator terminal 13 (S137). The evaluation analysis results of all participants and all evaluation targets can be displayed on the project administrator terminal 13.
The evaluation analysis data may be transmitted to the participant terminals 12 in addition to the project administrator terminal 13. The evaluation analysis data to be transmitted to the participant terminals 12 can be set in advance by the administrator. Examples include the score of the evaluation targets provided by the participant himself/herself, the score of the evaluation targets evaluated by the participant himself/herself, the answerer score of the participant himself/herself, the evaluation ability score of the participant himself/herself, and the like. Upon receiving the evaluation analysis data, the participant terminal 12 displays a preset screen showing the evaluation analysis result (S138).