This application claims the benefit of U.S. Provisional Application No. 62/081,030, filed on Nov. 18, 2014, the content of which is incorporated herein by reference.
The present invention relates generally to the field of providing a computer-implemented system and method that provides a micro-transaction based, crowd-sourced expert system marketplace in which answers from human experts are provided within a guaranteed period of time to electronically submitted questions by human users. The invention is a computer-implemented system and method that provides a micro-transaction based marketplace in which electronically submitted questions in text or picture form from human users are parsed, categorized and routed in real-time and then transmitted electronically to the best available human experts with a response guaranteed within a specified time.
The problem the present invention addresses is fully automating, in a computer-implemented system, the process by which questions electronically submitted by human users are parsed, categorized and then routed to the best available human experts, who answer the question within a guaranteed period of time. The present invention also addresses the computer-implemented processes by which experts are ranked in their areas of expertise. Finally, the present invention addresses the computer-implemented processes by which micro-transaction based currency exchange provides the necessary economic incentive for subject matter experts to participate.
The present invention focuses on helping human beings who are trying to learn something or study for something to get help from other human beings within a guaranteed time period. These potential users may need help in solving a specific problem, or for more general learning. If a person has a question on a subject or a problem, the ideal solution is one where they can get the best possible answer to their question, instantly, from the best expert in the field.
This ideal is very hard to accomplish given that finding the best expert is not easy in the first place, and even when such an expert can be found, reaching them in a timely fashion is harder still. The computer-implemented system of the present invention provides for finding, within a very short and in fact guaranteed amount of time, an expert who will answer a question on any subject.
Current approaches that attempt to provide online answers to questions have many problems. There are three approaches that exist currently:
1. Web Search: Many people use web search engines like Google or Bing to submit a generic query such as “English Grammar Help” or a specific question like “Is divine a noun or an adjective?” The approach used by search engines to answer this question is well known. The engines have already crawled and indexed all websites containing relevant information, and those websites/links that are most relevant to the submitted query are returned as a search result. With this approach there is some possibility that a grammar expert has created a relevant website, but there is no guarantee of it. Also, the specific question may not be answered directly for the user; perusal of several links and some reading may eventually yield the answer. In this case the answer is “Both. It's usually an adjective, as in the divine Lord, but sometimes it's used as a noun, as in Lord, the Divine”. While search results are instant, a specific answer from an expert cannot be guaranteed.
2. Online Tutoring: Sometimes students use websites to find tutors who are “experts” in a subject. In this case it is possible to go to a website such as tutor.com, and ask for a tutor in English, and set up an appointment for a web videoconferencing session, and ask one or more questions during the session. In this approach, it is relatively certain you are getting some sort of expertise, but it is not instantly available: appointments may take an hour or a day to obtain.
3. Online Q&A site: Sometimes people post questions on Q&A sites like Yahoo Answers or Quora. In this case you don't know when you will get answers, or whether they are from experts at all.
In summary, the problem of getting expert answers to a wide variety of questions in a very short, guaranteed timeframe, via a computer-implemented system has not been solved.
The present invention relates generally to the field of providing a computer-implemented system and method that provides a micro-transaction based, crowd-sourced expert system in which answers from human experts are provided within a guaranteed period of time to electronically submitted questions by human users. The invention is a computer-implemented system and method that provides a micro-transaction based marketplace in which electronically submitted questions in text or picture form from human users are parsed, categorized and routed in real-time to the best available experts and a response is guaranteed within a specified time.
The approach of the present invention is to create a digital marketplace of users and experts. Users can post questions on any topic and experts are instantly notified of questions that they may be interested in, and these experts have a monetary incentive to immediately answer the question if they can. Users are able to pay for answers, using micro-transactions. If for some reason, a human expert is not available, the system will use its knowledge base to return to the user a previous answer from a human expert to a similar question, within the guaranteed window of time.
In order for the marketplace to work, there has to be a minimal yet sufficient number of users and experts to create a steady volume of questions and answers so that the algorithms of the present invention can operate correctly. This is often known as the “bootstrap” problem. The present invention bootstraps the marketplace by creating a dedicated cadre of experts willing to answer questions in the N minute guaranteed Service Level Agreement (SLA) window simply by directly providing an economic incentive to these experts, even if the users are not yet willing to provide that economic incentive in the bootstrap phase.
Monetary incentives are provided in the form of a virtual currency called “credits” that users get free from the system in the bootstrap phase, and are transferred to experts when they provide acceptable answers to questions. During the bootstrap phase, experts are reimbursed by the system, for the credits they've collected, by setting a standardized exchange rate.
On the other side of the marketplace, the present invention uses marketing techniques to collect users. In one embodiment, for example, users are high school students who need math help. The present invention advertises to users on social media channels such as Facebook. The present invention gathers from these social media channels a critical mass of users in the bootstrap phase. When they join the marketplace they are automatically given several thousand credits for free, to spend on asking questions. This creates a free flow of questions and answers between users and experts until a sufficient scale is reached for the algorithms described below to start working. After the bootstrap phase, users do not get credits for free; credits must be paid for. Payment methods include but are not limited to in-app purchases or credit cards in the mobile app of the present invention.
Each expert in the system has a set of topics/subjects that they will offer answers for. Experts are ranked against each topic/subject in their profiles. The ranking score of an expert for each subject is calculated based on a number of factors including but not limited to:
ExpertScore = w1×(Cmax−C)/Cmax + w2×(Rmax−R)/Rmax + w3×PC + w4×PSLA + w5×PR
If two experts have the same ExpertScore, the expert who has a better volume of answers will be ranked higher when comparing two experts. If two experts have the same ExpertScore and Volume then one of them is picked randomly.
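For purposes of illustration only, the ranking and tie-breaking rule just described might be sketched as follows; the Expert class and rank_experts function are hypothetical names, not part of the claimed system.

```python
import random
from dataclasses import dataclass

@dataclass
class Expert:
    name: str
    expert_score: float   # ExpertScore for the subject, per the formula above
    answer_volume: int    # total number of answers given for the subject

def rank_experts(experts):
    """Order experts for a subject: higher ExpertScore first, higher answer
    volume breaks score ties, and a random draw breaks ties on both."""
    return sorted(
        experts,
        key=lambda e: (e.expert_score, e.answer_volume, random.random()),
        reverse=True,
    )
```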
In one embodiment of the present invention questions can come in multiple forms including but not limited to: text in one embodiment, or, in another embodiment, a photo of a page in a textbook with the question highlighted.
Example 1: “How do you prove the Pythagorean Theorem?” is a text question.
Example 2: Photo from a high school math textbook with the highlighted question
Regardless of the form of the question submitted, in one embodiment of the present invention both automated and human techniques are used to categorize the questions.
In the human form, one technique is to ask the user to tag the question as a geometry or algebra or trigonometry question. If the user tags it incorrectly, an expert may change the tag to correct it.
Another way to categorize the questions is through interpretation, which uses automated techniques described below. To “interpret” a question implies that the system has some degree of understanding of the “meaning” of the question.
Example: “Do you use the word “sleeped” or “slept” for the past tense of sleep?” requires an algorithm that can deduce the meaning of the question. In one embodiment of the present invention natural language parsing techniques and analysis are used to classify the question into a metadata structure and assign it meaning, beyond what user-submitted categorization would offer.
So in the example “Do you use the word ‘sleeped’ or ‘slept’ for the past tense of sleep?” the categorization of the question provided by the user may have specified “English Grammar” but the automated technique may add the interpretation “tense related question”. These techniques work for text questions, but may not work for photo questions. Hence interpretation is an additional possible parameter, beyond categorization, that is used by the system of the present invention.
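Purely as an illustrative sketch, the combined metadata for the “sleeped or slept” example might be represented as follows; the field names are hypothetical and simply mirror the categorization and interpretation parameters described above.

```python
# Hypothetical metadata structure for a parsed question; field names are
# illustrative only and are not part of the claimed system.
question_meta = {
    "text": "Do you use the word 'sleeped' or 'slept' for the past tense of sleep?",
    "form": "text",                               # "text" or "photo"
    "user_category": "English Grammar",           # tag supplied by the user
    "interpretation": "tense related question",   # added by automated parsing
}
```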
Once the system has both a human categorization and an automated interpretation it is ready to route the question to the best expert.
In one embodiment of the present invention the system uses a combination of ExpertScore (described above), Question Categorization, Question Interpretation, and Expert Self Classification to determine the best expert/s to send the question to. The algorithm creates an ordered list of experts ranked 1 to N: effectively providing an estimate of the best candidates for answering the question.
The list is further divided into levels. All experts in the list are segmented into different levels for each subject/topic in their profiles. Experts at higher levels have higher priority in getting new questions. All experts start with Expert-Level 0, then the level will be increased by 1 whenever they achieve a block of 100 answers in which 90 or more answers meet the SLA and have “Yes” votes (i.e. qualified answers) from users. If the number of qualified answers in the next block is less than 70, their Expert-Level will be decreased by 1.
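For purposes of illustration only, the level bookkeeping just described might be sketched as follows, assuming a hypothetical helper called once per completed block of 100 answers; clamping the level at 0 is an assumption, since the description does not state a floor.

```python
def update_expert_level(current_level, qualified_in_block):
    """Adjust an expert's level after each block of 100 answers. An answer is
    'qualified' if it met the SLA and received a 'Yes' vote from the user.
    90 or more qualified answers in the block raise the level by 1; fewer
    than 70 lower it by 1. (Clamping at 0 is an assumption; the description
    does not state a floor.)"""
    if qualified_in_block >= 90:
        return current_level + 1
    if qualified_in_block < 70:
        return max(0, current_level - 1)
    return current_level
```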
In one embodiment of the present invention, the routing algorithm works as follows. An expert is considered active if he/she has been active on the platform in the last 7 days and is not busy answering other questions at the current time. Whenever a new question is posted, its tag will be used to locate the best possible experts. Starting at the highest expert level for that tag, the routing system finds, as one example, 5 experts with the highest ranking and routes the question to all of them in parallel. Any of those experts are able to claim the question, on a first come, first served basis. Once an expert claims the question, the expert has to give the answer within the N minute SLA guarantee window.
If no one claims the question within X seconds, the system will find another set of, for example, 15 experts in the same level or lower, depending on the availability at that level, and give them 20 seconds to claim. If still no one claims the question, the system will find the next 30 experts and give them 10 seconds to claim. The platform needs to have enough experts to ensure at least one claim for each question after three rounds of routing.
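As a hedged sketch only, the three routing rounds might be implemented as follows; the helper functions are hypothetical hooks into the ranking and notification components, and a placeholder constant stands in for the unspecified X-second first-round window.

```python
# (batch_size, claim_window_seconds) for the three routing rounds described
# above. FIRST_ROUND_WINDOW is a placeholder for the unspecified "X seconds".
FIRST_ROUND_WINDOW = 30  # illustrative assumption only
ROUNDS = [(5, FIRST_ROUND_WINDOW), (15, 20), (30, 10)]

def route_question(question, find_top_experts, offer_and_wait):
    """Offer the question to successively larger batches of experts until one
    claims it. `find_top_experts(question, n)` and
    `offer_and_wait(experts, question, seconds)` are hypothetical hooks into
    the ranking and notification components; claiming is first come, first served."""
    for batch_size, window in ROUNDS:
        experts = find_top_experts(question, batch_size)
        claimer = offer_and_wait(experts, question, window)
        if claimer is not None:
            return claimer
    return None  # no claim after three rounds; the knowledge-base safety net applies
```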
When an answer is posted, the system will prompt the user who asked the question to rate the answer. If the answer is accepted, the expert will get credits for the answer. If not, the user will have the option to repost the question for free to find other experts.
If an expert claims a question but is not able to give an answer within 10 minutes, the question will be marked as new and available for others to claim. In that case, the question is counted as unqualified for that expert in the level calculation.
Each time an answer is marked acceptable by a user, the question and its answer are added to the knowledge base, along with the categorization and interpretation, who asked it, and who answered it.
Some questions, which come as photos, are not easy to parse and require the photo to be further broken down into a text phrase or question for the system parser to work. The system uses a human-assisted photo interpretation approach to accomplish this. The system uses the incentive of credits to have any user who wants to earn some extra credits translate a photo question into a text question. The incentive for users is that they can ask future questions for free with these credits. The incentive for experts is that they can cash these credits in for money. Once the categorization for the submitted photo question is done, it can be added to the knowledge base with its interpretation.
The system is designed to meet the SLA guarantee, regardless of whether an expert is currently available. In one embodiment of the present invention, this is accomplished by using the knowledge base as the “safety net” to provide at least a relevant answer to the question, if not a precise answer to the question. If no expert is willing to claim the question and answer it within the SLA window, then a very short time before the window expires, the system chooses the most relevant similar question and answer from the knowledge base, based on categorization and interpretation. Note that the answer provided is NOT an automatically system-generated answer: it is a human expert answer to a similar question in the past. Thus the user is always getting a human expert answer.
Because this answer costs the system nothing to produce, the system provides it for significantly fewer credits than a real-time expert answer, and in some instances it may even be provided for free.
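For illustration only, the knowledge-base safety net might be sketched as follows; the similarity function, the dictionary layout of knowledge-base entries, and the fallback margin are hypothetical placeholders.

```python
def sla_safety_net(question, knowledge_base, similarity, sla_deadline, now):
    """If no expert has claimed the question and the SLA deadline is near,
    return the stored human-expert answer to the most similar past question.
    `similarity(a, b)` compares categorization/interpretation metadata and,
    like the dictionary layout of knowledge-base entries, is a hypothetical
    placeholder."""
    FALLBACK_MARGIN_SECONDS = 30  # illustrative; "a very short time before the window expires"
    if now < sla_deadline - FALLBACK_MARGIN_SECONDS:
        return None  # still waiting for a live expert to claim the question
    best_match = max(knowledge_base, key=lambda past: similarity(question, past), default=None)
    # The returned answer was originally written by a human expert, so the user
    # still receives a human answer, at a reduced (or zero) credit cost.
    return best_match["answer"] if best_match else None
```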
All the existing alternatives described for providing immediate expert answers to questions have problems. Thus, there is a need for a computer-implemented system and method that automates the process such that an online user can get an answer to their question from a human expert in a guaranteed period of time.
Therefore the approach of the present invention is designed to automate the process of providing expert human answers to online submitted questions by providing:
1. Economic Incentive
2. Expert Ranking
3. Guaranteed Response Time
Embodiments of the present invention provide a computer-implemented system and method that provides a micro-transaction based, crowd-sourced expert system in which answers from human experts are provided within a guaranteed period of time to electronically submitted questions by human users. The invention is a computer-implemented system and method that provides a micro-transaction based marketplace in which electronically submitted questions in text or picture form from human users are parsed, categorized and routed in real-time to the best available experts and a response is guaranteed within a specified time.
In one embodiment of the present invention the computer-implemented system is a client-server system. The users of the systems include but are not limited to those users of the system that submit questions to be answered and those users of the system that provide expert answers. The users of the system may use a wide variety of client devices or interfaces to access the server system of the present invention, including but not limited to web browsers, mobile apps and other client devices or interfaces.
As an overview of the present invention, the following description shows an example of how the marketplace of users and experts is bootstrapped and then shows the flow of question submittal, question parsing, question routing, expert selection, and expert answer response in one embodiment of the present invention using the system and method of the present invention.
In one embodiment of the present invention, the initial marketplace of users and experts is bootstrapped and established through the following steps:
1. Virtual credits are provided free to potential users and are transferred to experts when they provide acceptable answers to submitted questions.
2. During the bootstrap period, virtual credits earned by the experts are reimbursed by the system at a standardized exchange rate.
3. Marketing techniques are used to collect users. For example, in one embodiment of the present invention, high school students who need math help are targeted on social media such as Facebook.
4. Users who join during the initial bootstrap period are given several thousand credits for free to spend on asking questions.
5. During the bootstrap phase, this process creates a free flow of questions and answers until a sufficient scale is established for the paid phase of the marketplace to begin.
6. After the bootstrap phase, users do not get credits for free. They must be paid for. In one embodiment of the present invention this is accomplished with in-app purchases in the mobile app.
In one embodiment of the present invention, an overview of the flow of question submittal and question routing processing includes but is not limited to the following steps (a simplified sketch of this flow follows the list):
1. Using either a web browser or a mobile app of the present invention the user submits a question.
2. Submitted question is parsed and classified.
3. Question is routed to the currently available set of experts with the highest ranking, for the type of question submitted.
4. The first expert to claim the question has a fixed time limit to respond.
5. Expert responds and the answer is persisted in the knowledge base.
6. Expert answer is returned to user.
7. User provides a rating of the returned expert answer.
8. Expert's ranking is computed based on user rating and response time.
9. Fraud and test cheating are monitored for within the questions submitted.
10. User is charged a micro-transaction fee for every accepted answer within the SLA time period.
11. Expert is credited a micro-transaction fee for every accepted answer given within the SLA time period.
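For purposes of illustration only, the numbered flow above can be read as the following pipeline sketch; every function and attribute named here is a hypothetical placeholder for the corresponding component, not the claimed implementation.

```python
def handle_question(user, raw_question, components):
    """Illustrative end-to-end pipeline mirroring steps 1-11 above.
    `components` is a hypothetical container for the parsing, routing, SLA,
    knowledge-base, ranking, fraud-check, and billing services."""
    c = components
    question = c.parse_and_classify(raw_question)                  # steps 1-2
    c.check_for_fraud_or_cheating(question)                        # step 9
    expert = c.route_to_top_experts(question)                      # steps 3-4
    answer = c.await_answer_within_sla(expert, question)           # step 5
    c.knowledge_base.store(question, answer)                       # step 5
    rating = c.return_answer_and_collect_rating(user, answer)      # steps 6-7
    c.update_expert_ranking(expert, rating, answer.response_time)  # step 8
    if rating.accepted and answer.within_sla:
        c.charge_user_micro_fee(user, question)                    # step 10
        c.credit_expert_micro_fee(expert, question)                # step 11
    return answer
```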
Referring to
Referring to
Still referring to
Still referring to
The question routing component 126 of the server system persists the question meta-structure and its current state into the question base database 140. The question base database 140 keeps the canonical meta-structure for all active questions. The question meta-structure includes a “state” element reflecting the current state of the question. Values for the “state” of an active question include but are not limited to values such as queued, routed, potentially claimed, answering, initial answered, flagged, skipped, pending, claimed but not answered, timeout, micro-session, micro-session completed, micro-session incomplete, limbo, KB waiting, KB answered, Ops answered, rated, dead, etc.
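As an illustrative sketch only, the state values listed above could be captured as an enumeration such as the following; the actual system may use additional or different values, since the list above is open-ended.

```python
from enum import Enum, auto

class QuestionState(Enum):
    """Illustrative enumeration of the question states named above; the
    actual system may use additional or different state values."""
    QUEUED = auto()
    ROUTED = auto()
    POTENTIALLY_CLAIMED = auto()
    ANSWERING = auto()
    INITIAL_ANSWERED = auto()
    FLAGGED = auto()
    SKIPPED = auto()
    PENDING = auto()
    CLAIMED_NOT_ANSWERED = auto()
    TIMEOUT = auto()
    MICRO_SESSION = auto()
    MICRO_SESSION_COMPLETED = auto()
    MICRO_SESSION_INCOMPLETE = auto()
    LIMBO = auto()
    KB_WAITING = auto()
    KB_ANSWERED = auto()
    OPS_ANSWERED = auto()
    RATED = auto()
    DEAD = auto()
```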
Based on the category of the question, the question routing component 126 gets a list of the highest ranked free experts from the expert ranking component 136. The expert ranking component 136 accesses the expert index database 128 for the past rankings of experts. The expert index database 128 holds a profile for each human expert, as well as their ranking and any anti-test-cheating penalties applied to their ranking.
Still referring to
The SLA manager component 138 distributes the question to the first available expert 102 from the list that claims the question. The SLA manager component 138 monitors that expert's response.
The SLA manager component 138 in one embodiment of the present invention monitors the quality and timeliness of responses provided by experts 102. The SLA manager component 138 passes this quality and timeliness information to the expert index 128. The expert ranking component 136 uses all the information in the expert index 128 to rank the experts 102. The expert ranking component 136 ranks the experts 102 by using criteria including but not limited to claim time in seconds, response time in seconds, percentage of claims made for available questions, percentage of answers meeting SLA, percentage of answers having high ratings, volume of answers, etc.
Still referring to
ExpertScore = w1×(Cmax−C)/Cmax + w2×(Rmax−R)/Rmax + w3×PC + w4×PSLA + w5×PR
Where, in one embodiment of the present invention:
C is the expert's average claim time in seconds,
Cmax is the maximum time an expert is allowed to claim a question (default=30 secs),
R is the expert's average response time in seconds,
Rmax is the maximum time an expert is allowed to answer a question from the time it is claimed (e.g. default=10 mins),
PC is the percentage of claims made for available questions,
PSLA is the percentage of answers meeting the SLA,
PR is the percentage of answers having high ratings, and
w1, w2, w3, w4, and w5 are weighting parameters.
For example, in one embodiment of the present invention, if an expert has an average claim time of 7 seconds, an average response time of 531 seconds, a percentage of claims/available questions of 89%, a percentage of answers meeting SLA of 90%, and a percentage of answers having yes votes of 17%, then the expert score is 2.84 (taking all weighting parameters as 1). If two experts 102 have the same expert score, then the expert 102 who has the higher volume will be ranked higher.
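For purposes of illustration only, the worked example above can be checked with the following minimal sketch; the function name is hypothetical, the weights are assumed to all equal 1 (consistent with the stated result of 2.84), and Cmax and Rmax use the stated defaults of 30 seconds and 10 minutes.

```python
def expert_score(c_avg, r_avg, pc, psla, pr,
                 c_max=30.0, r_max=600.0,
                 w1=1.0, w2=1.0, w3=1.0, w4=1.0, w5=1.0):
    """ExpertScore per the formula above. c_avg and r_avg are average claim
    and response times in seconds; pc, psla, pr are fractions in [0, 1].
    Unit weights are an assumption used to reproduce the worked example."""
    return (w1 * (c_max - c_avg) / c_max
            + w2 * (r_max - r_avg) / r_max
            + w3 * pc + w4 * psla + w5 * pr)

# Worked example from the text: 7 s average claim time, 531 s average response
# time, 89% claim rate, 90% SLA rate, 17% "yes" votes.
print(round(expert_score(7, 531, 0.89, 0.90, 0.17), 2))  # -> 2.84
```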
Still referring to
Still referring to
Still referring to
Still referring to
Where as
Referring to
Still referring to
Still referring to
Still referring to
If a user initiates a micro-session the state of the question is changed to the micro-session state 236. While in the micro-session state messages can be exchanged between the user and the expert. When the micro-session completes successfully the state of the question is changed to the micro-session completed state 228. From here after the user rates the answers to their questions, the state of the question is changed to the rated state 222, which is an endpoint state.
However, if either the user or expert does not engage in the micro-session the state of the question is changed to the timeout state 238. From there the state of the question is changed to the micro-session incomplete state 230. From there when the user provides a rating to the answer(s) from the expert the state of the question is changed to the rated state 222 which is an endpoint state.
If the user and expert do engage in the micro-session but there is a problem in communications between the two, the state of the question is changed to the limbo state 240. From there, after a period of 3 minutes, the system will change the state of the question to the timeout state 238. From there the state of the question will follow the state changes previously described above.
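For illustration only, the micro-session transitions described in the preceding paragraphs can be summarized as the following hypothetical transition table, using the state names and reference numerals from the description.

```python
# Hypothetical transition table for the micro-session portion of the question
# lifecycle, as described above (current state -> event -> next state).
MICRO_SESSION_TRANSITIONS = {
    "micro-session (236)": {
        "completed successfully": "micro-session completed (228)",
        "no engagement": "timeout (238)",
        "communication problem": "limbo (240)",
    },
    "limbo (240)": {
        "3 minutes elapsed": "timeout (238)",
    },
    "timeout (238)": {
        "always": "micro-session incomplete (230)",
    },
    "micro-session completed (228)": {
        "user rates answer": "rated (222)",   # endpoint state
    },
    "micro-session incomplete (230)": {
        "user rates answer": "rated (222)",   # endpoint state
    },
}
```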
Still referring to
Still referring to
If a similar question cannot be found in the knowledge base database (114 of
Still referring to
Still referring to
Referring to
When a question is routed to an expert, the state is changed to the receiving state 316. If an expert in the receiving state 316 skips or flags a question, the state of the expert reverts to the free state 314. If an expert in the receiving state 316 claims a question, then the state is changed to the claiming state 318. If from the claiming state 318 the expert becomes unavailable, the state is changed to the unavailable state 320, which is an endpoint state.
However, if an expert in the claiming state 318 begins answering the question, the state is changed to the answering state 324. From the answering state 324 the expert can become unavailable, which causes the state to change to the unavailable state 320, which is an endpoint state.
From the answering state 324 the expert, upon answering the question, transitions to the free state 314. From the answering state 324 the user can request a micro-session, in which case the state of the expert transitions to the chatting state 328. If the user leaves a micro-session, the state of the expert transitions to the waiting state 326 and then, after a timeout period, back to the free state 314. Alternatively, when the micro-session is complete the expert transitions back to the free state 314. At any time in the free state 314, the expert can log out, which transitions their state to the logged-out state 322.
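Similarly, for illustration only, the expert-side transitions just described can be summarized as the following hypothetical transition table; it restates the description and is not a complete specification.

```python
# Hypothetical transition table for the expert states described above
# (current state -> event -> next state).
EXPERT_STATE_TRANSITIONS = {
    "free (314)": {
        "question routed": "receiving (316)",
        "logs out": "logged-out (322)",
    },
    "receiving (316)": {
        "skips or flags question": "free (314)",
        "claims question": "claiming (318)",
    },
    "claiming (318)": {
        "becomes unavailable": "unavailable (320)",   # endpoint state
        "begins answering": "answering (324)",
    },
    "answering (324)": {
        "becomes unavailable": "unavailable (320)",   # endpoint state
        "answers question": "free (314)",
        "user requests micro-session": "chatting (328)",
    },
    "chatting (328)": {
        "user leaves micro-session": "waiting (326)",
        "micro-session complete": "free (314)",
    },
    "waiting (326)": {
        "timeout": "free (314)",
    },
}
```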