Computer scientists and engineers have long tried to create computers that mimic the mammalian brain. Such efforts have met with limited success. While the brain contains a vast, complex and efficient network of neurons that operate in parallel and communicate with each other via dendrites, axons and synapses, virtually all computers to date employ the traditional von Neumann architecture and thus contain some variation of a basic set of components (e.g., a central processing unit, registers, a memory to store data and instructions, external mass storage, and input/output devices). Due at least in part to this relatively simple architecture, von Neumann computers are adept at performing calculations and following specific, deterministic instructions, but—in contrast to the biological brain—they are generally inefficient; they adapt poorly to new, unfamiliar and probabilistic situations; and they are unable to learn, think, and handle data that is vague, noisy, or otherwise imprecise. These shortcomings substantially limit the traditional von Neumann computer's ability to make meaningful contributions in the oil and gas and other industries.
Accordingly, there are disclosed in the drawings and in the following description various embodiments of a cognitive computing meeting facilitator that may be used in numerous applications, including the oil and gas context. In the drawings:
It should be understood, however, that the specific embodiments given in the drawings and detailed description thereto do not limit the disclosure. On the contrary, they provide the foundation for one of ordinary skill to discern the alternative forms, equivalents, and modifications that are encompassed together with one or more of the given embodiments in the scope of the appended claims.
Disclosed herein are methods and systems for facilitating meetings using cognitive computers. Cognitive computers—also known by numerous similar terms, including artificial neural networks, neuromorphic and synaptronic systems, and, in this disclosure, neurosynaptic systems—are modeled after the mammalian brain. In contrast to traditional von Neumann architectures, neurosynaptic systems include extensive networks of electronic neurons and cores operating in parallel with each other. These electronic neurons function in a manner similar to that in which biological neurons function, and they couple to electronic dendrites, axons and synapses that function like biological dendrites, axons and synapses. By modeling processing logic after the biological brain in this manner, cognitive computers—unlike von Neumann machines—are able to support complex cognitive algorithms that replicate the numerous advantages of the biological brain, such as adaptability to ambiguous, unpredictable and constantly changing situations and settings; the ability to understand context (e.g., meaning, time, location, tasks, goals); and the ability to learn new concepts.
Key among these advantages is the ability to learn, because learning fundamentally drives the cognitive computer's behavior. In the cognitive computer—just as with biological neural networks—learning (e.g., Hebbian learning) occurs due to changes in the electronic neurons and synapses as a result of prior experiences (e.g., a training session with a human user) or new information. These changes, described below, affect the cognitive computer's future behavior. In a simple example, a cognitive computer robot with no prior experience or software instructions with respect to coffee preparation can be introduced to a kitchen, shown what a bag of ground coffee beans looks like, and shown how to use a coffee machine. After the robot is trained, it will be able to locate materials and make a cup of coffee on its own, without human assistance. Alternatively, the cognitive computer robot may simply be asked to make a cup of coffee without being trained to do so. The computer may access information repositories via a network connection (e.g., the Internet) and learn what a cup is, what ground coffee beans are, what they look like and where they are typically found, and how to use a coffee machine—for example, by means of a YOUTUBE® video. A cognitive computer robot that has learned to make coffee in other settings in the past may engage in a conversation with the user to ask a series of specific questions, such as to inquire about the locations of a mug, ground coffee beans, water, the coffee machine, and whether the user likes sugar and cream with his coffee. If, while preparing the coffee, a wet coffee mug slips from the robot's hand and falls to the floor, the robot may infer that a wet mug is susceptible to slipping and it may grasp a wet mug a different way the next time it brews a cup of coffee.
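Although the disclosure does not prescribe any particular learning rule, the Hebbian principle mentioned above can be illustrated with a minimal rate-based sketch. The learning rate, weight bounds, and activity values below are assumptions made only for illustration:

```python
# Minimal rate-based Hebbian update: a synapse is strengthened when its
# pre- and post-synaptic neurons are active together. The learning rate
# and weight bounds are illustrative assumptions, not disclosed values.
def hebbian_update(weight, pre_activity, post_activity,
                   learning_rate=0.01, w_min=0.0, w_max=1.0):
    """Return the synaptic weight after one Hebbian learning step."""
    weight += learning_rate * pre_activity * post_activity
    return max(w_min, min(w_max, weight))  # clamp to keep the weight bounded

# Example: repeated co-activation gradually strengthens the synapse,
# analogous to the robot refining its behavior after repeated experience.
w = 0.2
for _ in range(10):
    w = hebbian_update(w, pre_activity=1.0, post_activity=1.0)
print(f"weight after training: {w:.2f}")  # 0.30
```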
The marriage between neurosynaptic architecture and cognitive algorithms represents the next step beyond artificial intelligence and can prove especially useful in the oil and gas industry, although the techniques disclosed herein find application in many different contexts and industries. This disclosure describes the use of the cognitive computer's neurosynaptic technology (and associated cognitive algorithms) to intelligently facilitate meetings (e.g., meetings between oil and gas personnel). The cognitive computer is an active participant in the meeting and behaves in a manner similar to the human participants. For instance, the cognitive computer listens to the discussion, views presentations, reads documents, asks questions and provides statements or suggestions. In this way, the cognitive computer is substantially more useful in such meetings than a traditional von Neumann computer. The cognitive computer can be more useful than even humans because it has instant access to a vast array of resources stored in one or more information repositories, such as any and all material accessible via the Internet/World Wide Web; journals, articles, books, white papers, reports and all other such documents; speeches; presentations; video and audio files; and any and all other information that a cognitive computer could potentially access. The cognitive computer adds value to the meeting by drawing on these resources to generate its questions, answers, statements and suggestions. The cognitive computer additionally provides arguments supporting and opposing each of its answers or suggestions and engages in conversations with human meeting participants about its answers, suggestions or any other aspect of the meeting agenda. The cognitive computer performs all of these actions intelligently and with minimal or no human assistance using its neurosynaptic architecture and cognitive algorithms.
In addition to being an active participant in the meeting, the cognitive computer functions in an executive capacity by having access to controls for numerous remotely located machines. For instance, the cognitive computer can access and control or at least communicate with other personal computers (e.g., laptops, notebooks), drilling equipment, logging equipment, safety equipment, and other, similar devices. Further, the cognitive computer performs in a secretarial capacity by memorializing the meeting. The cognitive computer may perform this task by generating minutes and other records of the meeting (including what was said during the meeting and who said it (e.g., using commercially available voice recognition software)); tagging such records with relevant keywords or phrases to facilitate location of the records in the future; and updating the resources to which it has access with any relevant information from the meeting (e.g., the tagged records). The cognitive computer also may send copies of the records to one or more persons or entities, such as the meeting participants. The cognitive computer performs these and other actions automatically, intelligently, intuitively and with minimal or no human assistance using its neurosynaptic architecture and cognitive algorithms.
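As a rough, non-authoritative illustration of the tagging step, the following sketch extracts candidate keywords from a transcript by simple frequency counting; an actual embodiment would rely on the cognitive algorithms described herein rather than this heuristic:

```python
from collections import Counter
import re

# Illustrative-only stop-word list and frequency heuristic; the disclosed
# cognitive computer would tag records using its learned algorithms instead.
STOP_WORDS = {"the", "a", "an", "and", "of", "to", "in", "is", "that", "for", "were"}

def tag_meeting_record(transcript: str, num_tags: int = 5) -> list[str]:
    """Extract candidate keyword tags from a meeting transcript."""
    words = re.findall(r"[a-z]+", transcript.lower())
    counts = Counter(w for w in words if w not in STOP_WORDS and len(w) > 3)
    return [word for word, _ in counts.most_common(num_tags)]

minutes = ("The team discussed fracturing plans for the new well. "
           "Fracturing pressures and well placement were debated.")
print(tag_meeting_record(minutes))  # e.g. ['fracturing', 'well', ...]
```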
In some cases, the cognitive computer may manage the meeting, meaning that—in addition to the other duties described above—it sets the agenda, initiates discussions, keeps the meeting focused on the agenda and provides reminders when the discussion strays off topic, and distributes assignments to each participant. The scope of disclosure is not limited to this or any other specific set of tasks or roles within a meeting. On the contrary, the cognitive computer has the ability to perform virtually any task that it has been trained to perform.
In an illustrative application, a cognitive computer may be present during a meeting of humans and/or other cognitive computers and may automatically and intuitively identify the meeting agenda by receiving input from the meeting (e.g., listening to the conversation between participants; viewing presentations using a camera; listening to participants using a microphone), by actively asking questions, by receiving a meeting agenda document, or the like. For instance, during a meeting convened between drilling engineers to discuss placement of a new well, the cognitive computer may collect information (e.g., by listening to the conversation between the engineers and viewing presentation materials displayed on a television screen) and may automatically and without prompting determine, using its cognitive algorithms and prior learning experiences, that a new well is being planned and understand all details pertaining to the potential new well.
As the meeting progresses, the cognitive computer is an active participant, asking questions, answering questions and making statements and suggestions. For example, a human participant may ask the cognitive computer to produce a map of a particular oilfield, and the cognitive computer may oblige by accessing relevant resources and displaying the map on a television screen in the meeting room. When asked for a recommendation on an optimal drilling site for a new well in that oilfield, the cognitive computer accesses any number of resources—such as those that include formation properties, time constraints, personnel constraints and financial constraints—to generate a recommendation. The cognitive computer may also generate arguments supporting and opposing its recommendation, as well as a ranked list of alternative recommendations. The ranking algorithm may have been programmed directly into the computer, or the computer may have been trained to use the algorithm, or some combination thereof. The cognitive computer may have automatically modified its ranking algorithm based on past user recommendation selections and subsequent outcomes so that the recommendation most likely to be selected by the user is ranked highest and is most likely to produce the best outcome for the user.
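One hedged sketch of such an adaptive ranking is shown below; the feature names, weights, and update rule are invented for illustration and are not mandated by the disclosure:

```python
# Hypothetical sketch: each candidate drilling site is scored by a weighted
# sum of features; when the user selects a site and an outcome score comes
# back, the weights shift toward the features of sites that were chosen and
# performed well. All feature names and numbers are illustrative assumptions.
def score(site: dict[str, float], weights: dict[str, float]) -> float:
    return sum(weights[f] * site.get(f, 0.0) for f in weights)

def rank(sites, weights):
    return sorted(sites, key=lambda s: score(s, weights), reverse=True)

def update_weights(weights, selected_site, outcome, lr=0.1):
    """Nudge weights toward the selected site's features, scaled by outcome."""
    for f in weights:
        weights[f] += lr * outcome * selected_site.get(f, 0.0)
    return weights

weights = {"porosity": 0.5, "proximity": 0.3, "cost": -0.2}
sites = [{"porosity": 0.9, "proximity": 0.2, "cost": 0.7},
         {"porosity": 0.6, "proximity": 0.8, "cost": 0.3}]
ranked = rank(sites, weights)
weights = update_weights(weights, ranked[0], outcome=1.0)  # good outcome
```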
The computer may also engage in conversations with a meeting participant or other entity (e.g., another cognitive computer) about the recommendations, the arguments pertaining to the recommendations, or any item on the meeting agenda in general. For example, a meeting participant may rebut the cognitive computer's arguments supporting a particular suggestion and, in turn, the cognitive computer may rebut the participant's arguments with facts gleaned from any available resource, having been trained to engage in such fact-based conversations in the past. The computer may, for example, explain that although other wells in the field have historically underperformed, the formations abutting those wells were sub-optimally fractured. Based on the participant's responses, the cognitive computer may learn for future use the types of facts and arguments the participant finds most persuasive.
The foregoing example is merely illustrative. The cognitive computer is able to handle virtually any task that it has been trained to perform, regardless of whether that training is provided by another entity or whether the cognitive computer has accessed resources to help it train itself to at least some extent. Numerous such interactions may occur during the course of a single meeting, and the cognitive computer handles some or all such actions using the computer's probabilistic, cognitive algorithms and prior learning experiences. After the meeting is complete, the cognitive computer updates its resources in accordance with information collected during the meeting, thereby improving the accuracy and reliability of the data in the resources. The cognitive computer also generates a summary (e.g., minutes) of the meeting as well as any other such relevant information, and provides the summary and other relevant information to one or more of the meeting participants—for instance, through e-mail.
As explained above with respect to
Each synaptic component 140 includes an excitatory/inhibitory signal generator 146, a weight signal generator 148 associated with the corresponding synapse, and a pulse generator 150. The pulse generator 150 receives a clock signal 152 and a spike input signal 154, as well as a weight signal 151 from the weight signal generator 148. The pulse generator 150 uses its inputs to generate a weighted spike signal 158—for instance, the spike input signal 154 multiplied by the weight signal 151. The width of the weighted spike signal pulse reflects the magnitude of the weighted signal, and thus the magnitude that will contribute to or take away from the membrane potential of the electronic neuron. The weighted signal for the synapse corresponding to the synaptic component 140 is provided to the core component 142, and similar weighted signals are provided from synaptic components 140 corresponding to other synapses from which the electronic neuron receives input. For each weighted signal that the core 142 receives from a synaptic component 140, the core 142 also receives a signal 156 from the excitatory/inhibitory signal generator 146 indicating whether the weighted signal 158 is an excitatory (positive) or inhibitory (negative) signal. An excitatory signal pushes the membrane potential of the electronic neuron toward its action potential threshold, while an inhibitory signal pulls the membrane potential away from the threshold. As explained, the neurosynaptic learning process involves the adjustment of synaptic weights. Such weights can be adjusted by modifying the weight signal generator 148.
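For readers who prefer a software analogue, the behavior of the synaptic component 140 can be sketched as follows, assuming integer-valued weights and a Boolean excitatory/inhibitory flag:

```python
from dataclasses import dataclass

@dataclass
class SynapticComponent:
    """Software analogue of synaptic component 140 (illustrative only)."""
    weight: int        # value produced by weight signal generator 148
    excitatory: bool   # sign from excitatory/inhibitory signal generator 146

    def weighted_spike(self, spike_in: int) -> int:
        """Mimic pulse generator 150: weighted spike 158 = input x weight,
        with signal 156 indicating excitatory (+) or inhibitory (-)."""
        magnitude = spike_in * self.weight
        return magnitude if self.excitatory else -magnitude

syn = SynapticComponent(weight=3, excitatory=True)
print(syn.weighted_spike(1))  # 3: pushes membrane potential toward threshold
```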
The core component 142 includes a membrane potential counter 160 and a leak-period counter 162. The membrane potential counter receives the weighted signal 158 and the excitatory/inhibitory signal 156, as well as the clock 152 and a leak signal 164 from the leak-period counter 162. The leak-period counter 162, in turn, receives only clock 152 as an input.
In operation, the membrane potential counter 160 maintains a counter, initially set to zero, that is incremented when excitatory, weighted signals 158 are received from the synaptic component 140 and decremented when inhibitory, weighted signals 158 are received from the synaptic component 140. When no synapse pulse is applied to the core component 142, the leak-period counter signal 164 causes the membrane potential counter 160 to gradually decrement at a predetermined, suitable rate. This action mimics the leak experienced in biological neurons during a period in which the neuron receives no excitatory or inhibitory signals. The membrane potential counter 160 outputs a membrane potential signal 166 that reflects the present value of the counter 160. This membrane potential signal 166 is provided to the comparator component 144.
The comparator component 144 includes a threshold signal generator 168 and a comparator 170. The threshold generator 168 generates a threshold signal 169, which reflects the threshold at which the electronic neuron 130 generates a spike signal. The comparator 170 receives this threshold signal 169, along with the membrane potential signal 166 and the clock 152. If the membrane potential signal 166 reflects a counter value that is equal to or greater than the threshold signal 169, the comparator 170 generates a spike signal 172, which is subsequently output via an axon of the electronic neuron. As numeral 174 indicates, the spike signal is also provided to the membrane potential counter 160, which, upon receiving the spike signal, resets itself to zero.
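Taken together, the core component 142 and comparator component 144 implement what amounts to a discrete leaky integrate-and-fire neuron. The following sketch models that behavior; the threshold and leak-period values are illustrative assumptions:

```python
class ElectronicNeuronCore:
    """Discrete leaky integrate-and-fire model of core 142 / comparator 144.
    The threshold and leak period are illustrative assumptions."""

    def __init__(self, threshold: int = 10, leak_period: int = 4):
        self.potential = 0          # membrane potential counter 160
        self.threshold = threshold  # threshold signal 169
        self.leak_period = leak_period
        self.clock = 0              # drives leak-period counter 162

    def tick(self, weighted_inputs: list[int]) -> bool:
        """Advance one clock cycle; return True if a spike 172 is emitted."""
        self.clock += 1
        if weighted_inputs:
            # Excitatory inputs increment, inhibitory inputs decrement.
            self.potential += sum(weighted_inputs)
        elif self.clock % self.leak_period == 0 and self.potential > 0:
            self.potential -= 1  # leak toward rest when no synapse pulses arrive
        if self.potential >= self.threshold:
            self.potential = 0   # spike signal 174 resets the counter to zero
            return True
        return False

neuron = ElectronicNeuronCore()
for _ in range(4):
    fired = neuron.tick([3])     # four excitatory pulses of weight 3
print(fired)                     # True: 12 >= threshold of 10, neuron spiked
```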
The synapse array 408 also couples to neurons 412. The neurons 412 may be a single-row, multiple-column array of neurons, or, alternatively, the neurons 412 may be a multiple-row, multiple-column array of neurons. In either case, dendrites of the neurons 412 couple to axons 410 in the synapse array 408, thus facilitating the transfer of spikes from the axons 410 to the neurons 412 via dendrites in the synapse array 408. The spike router 424 receives spikes from off-core sources, such as the core 406 or off-chip neurons. The spike router 424 uses spike packet headers to route the spikes to the appropriate neurons 412 (or, in some embodiments, on-core neurons directly coupled to axons 410). In either case, bus 428 provides data communication between the spike router 424 and the core 404. Similarly, neurons 412 output spikes on their axons and bus 430 provides the spikes to the spike router 424. The core 406 is similar or identical to the core 404. Specifically, the core 406 contains axons 416, neurons 418, and a synapse array 414. The axons 416 couple to a spike router 426 via bus 432, and neurons 418 couple to the spike router 426 via bus 434. The functionality of the core 406 is similar or identical to that of the core 404 and thus is not described. A bus 436 couples the spike routers 424, 426 to facilitate spike routing between the cores 404, 406. A bus 438 facilitates the communication of spikes on and off of the chip 402. The architectures shown in
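The packet-based routing performed by the spike routers 424, 426 can be sketched as follows; the packet layout (a core identifier plus an axon index) is an assumption made for illustration, not the disclosed wire format:

```python
from collections import defaultdict

class Core:
    """Stand-in for a neurosynaptic core such as core 404 or 406."""
    def __init__(self, num_axons: int):
        self.num_axons = num_axons
        self.axon_spikes = defaultdict(int)

    def inject(self, axon_index: int) -> None:
        self.axon_spikes[axon_index] += 1   # spike enters the synapse array

class SpikeRouter:
    """Routes spike packets by header, as described for spike router 424."""
    def __init__(self):
        self.cores: dict[int, Core] = {}
        self.peers: list["SpikeRouter"] = []  # routers linked by a bus (436)

    def route(self, packet: dict) -> None:
        core = self.cores.get(packet["core_id"])
        if core is not None:
            core.inject(packet["axon_index"])  # deliver to a local core
        else:
            for peer in self.peers:            # forward off-core via bus
                peer.route(packet)             # (a real router avoids cycles)

router = SpikeRouter()
router.cores[404] = Core(num_axons=256)
router.route({"core_id": 404, "axon_index": 17})
```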
Various types of software may be written for use in cognitive computers. One programming methodology is described below, but the scope of disclosure is not limited to this particular methodology. Any suitable, known software architecture for programming neurosynaptic processing logic is contemplated and intended to fall within the scope of the disclosure. The software architecture described herein entails the creation and use of programs that are complete specifications of networks of neurosynaptic cores, along with their external inputs and outputs. As the number of cores grows, creating a program that completely specifies the network of electronic neurons, axons, dendrites, synapses, spike routers, buses, etc. becomes increasingly difficult. Accordingly, a modular approach may be used, in which a network of cores and/or neurons encapsulates multiple sub-networks of cores and/or neurons; each of the sub-networks encapsulates additional sub-networks of cores and/or neurons, and so forth. In some embodiments, the CORELET® programming language, library and development environment by IBM® may be used to develop such modular programs.
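The modular encapsulation described above can be illustrated with the following sketch. This is not the CORELET® API; it merely demonstrates, in generic Python, how a network can encapsulate sub-networks that expand into a complete specification:

```python
# Illustration of the modular approach only; the class and method names are
# invented for this sketch. A network exposes external inputs/outputs and may
# encapsulate sub-networks, which in turn encapsulate their own sub-networks.
class CoreNetwork:
    def __init__(self, name: str, inputs: list[str], outputs: list[str]):
        self.name = name
        self.inputs = inputs
        self.outputs = outputs
        self.subnetworks: list["CoreNetwork"] = []

    def add(self, subnetwork: "CoreNetwork") -> None:
        self.subnetworks.append(subnetwork)

    def flatten(self) -> list[str]:
        """Expand the hierarchy into the complete list of leaf networks,
        i.e., the full specification the composite program stands for."""
        if not self.subnetworks:
            return [self.name]
        return [leaf for sub in self.subnetworks for leaf in sub.flatten()]

vision = CoreNetwork("vision", ["camera"], ["features"])
vision.add(CoreNetwork("edge_detect", ["camera"], ["edges"]))
vision.add(CoreNetwork("motion", ["camera"], ["flow"]))
print(vision.flatten())  # ['edge_detect', 'motion']
```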
The remainder of this disclosure describes the use of hardware and software cognitive computing technology to facilitate meetings. As explained above, any suitable cognitive computing hardware or software technology may be used to implement such techniques. This cognitive computing technology may include none, some or all of the hardware and software architectures described above. For example, the meeting facilitation techniques described below may be implemented using the CORELET® programming language or any other software language used in conjunction with cognitive computers. The foregoing architectural descriptions, however, are non-limiting. Other hardware and software architectures may be used in lieu of, or to complement, any of the foregoing technologies. Any and all such variations are included within the scope of the disclosure.
The cognitive computer 702 communicates with any number of remote information repositories 710 via the network interface 708. The quantity and types of such information repositories 710 may vary widely, and may include, without limitation, other cognitive computers; databases; distributed databases; sources that provide real-time data pertaining to oil and gas operations, such as drilling, fracturing, cementing, or seismic operations; servers; other personal computers; mobile phones and smart phones; websites and generally any resource(s) available via the Internet, World Wide Web, or a local network connection such as a virtual private network (VPN); cloud-based storage; libraries; and company-specific, proprietary, or confidential data. Any other suitable source of information with which the cognitive computer 702 can communicate is included within the scope of disclosure as a potential information repository 710. The cognitive computer 702—which, as described above, has the ability to learn, process imprecise or vague information, and adapt to unfamiliar environments—is able to receive an oilfield operations indication (e.g., via one or more input interfaces 704) and intelligently determine one or more recommendations based on the oilfield operations indication and associated information; prior learned knowledge and training; scenarios generated using oilfield operations models; and resources accessed from information repositories. The software stored on the cognitive computer 702 is probabilistic (i.e., non-deterministic) in nature, meaning that its behavior is guided by probabilistic determinations regarding the various possible outcomes of each oilfield operations model scenario and each recommendation available in a given oilfield operations indication.
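The probabilistic determinations described above can be viewed as an expected-utility calculation over oilfield operations model scenarios. A minimal sketch follows; every probability and utility value in it is invented for illustration:

```python
# Hedged sketch: each candidate recommendation is evaluated against a set of
# oilfield-model scenarios, weighting each scenario's outcome utility by its
# probability. All numbers below are invented for illustration.
def expected_utility(recommendation: str,
                     scenarios: list[tuple[float, dict[str, float]]]) -> float:
    """Sum of P(scenario) * utility(recommendation | scenario)."""
    return sum(p * outcomes[recommendation] for p, outcomes in scenarios)

scenarios = [
    (0.6, {"drill_site_A": 0.8, "drill_site_B": 0.5}),  # base-case model run
    (0.3, {"drill_site_A": 0.2, "drill_site_B": 0.6}),  # low-pressure scenario
    (0.1, {"drill_site_A": 0.1, "drill_site_B": 0.9}),  # worst-case scenario
]
best = max(["drill_site_A", "drill_site_B"],
           key=lambda r: expected_utility(r, scenarios))
print(best)  # drill_site_B in this invented example (0.57 vs 0.55)
```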
In addition to the participants, the meeting environment 800 includes multiple input/output devices with which the participants may interact with each other, with the cognitive computing participant 810, and with other computers or servers with which the input devices can communicate. For example, the meeting environment 800 includes laptop computers 812A-812D—one for each human participant. Such computers facilitate communication between the participants, including the cognitive computing participant 810. For instance, input provided by one of the human participants 802, 804, 806, 808 may be sent directly to all participants, some participants, or just one participant (e.g., just the cognitive computing participant 810). Similarly, the cognitive computing participant 810 may provide output that is available on all, some, or just one of the laptop computers 812A-812D. The computers also facilitate communications with entities other than meeting participants—e.g., the Internet and World Wide Web, computers or non-participants located in various geographic areas, and other such entities.
The environment 800 also includes microphones 814A, 814B. In some cases, such as in the environment 800, a single microphone may be shared by multiple participants, and in other cases, each participant may have his or her own microphone. In some cases, a microphone may be positioned in the environment 800 so that it receives speech output by the cognitive computing participant 810. The cognitive computing participant 810 may use the microphones 814A, 814B to record some or all of the meeting. Alternatively or in addition, the microphones 814A, 814B may be used to teleconference with one or more participants who are not present in the conference room depicted in meeting environment 800.
The meeting environment 800 may include other types of input and output devices. For example, the environment 800 may include one or more smart phones 816; one or more touch screen tablets 818; one or more cameras 820; one or more wearable devices 822 (e.g., augmented reality devices such as GOOGLE GLASS®); one or more printers 824; one or more displays 826; and one or more speakers 828. With the exception of the printer 824, display 826, and speaker 828, each of these devices is able to capture various types of input and provide that input to one or more entities, including all, some, one or none of the participants, as described above with respect to the laptop computers 812A-812D. In addition, the camera 820 may be used to capture information and provide it to one or more participants or entities. For example, multiple cameras 820 may be used to identify the human participants attending the meeting by capturing an image of each participant and comparing those images to images stored in a database. In another example, a camera 820 may capture the facial expressions of a human participant and provide the images to the cognitive computing participant 810, which, in turn, is trained to interpret the facial expression images to determine the emotions of the human participant (e.g., with the assistance of commercially available facial recognition software). The cognitive computing participant 810 may determine, for instance, that the facial expressions of the human participant indicate confusion regarding a topic being discussed, and the cognitive computing participant 810 may offer that human participant additional assistance. The display 826 may couple to any electronic device in or outside of the meeting environment 800, including the cognitive computing participant 810, thus enabling various entities to display presentations, photos, videos and the like on the display 826. The speakers 828 output sound produced by, e.g., one or more of the participants (whether located in the meeting room or in a separate geographic area). The scope of disclosure is not limited to the specific input/output devices depicted in
In some embodiments, the cognitive computing participant 810 is the leader of the meeting and, thus, it sets the agenda. For instance, the cognitive computing participant 810 may periodically and unilaterally review its resources and, during such review, it may determine that a meeting should be called to discuss a particular topic. In such cases, the cognitive computing participant 810 uses its resources to determine which human participants and cognitive computing participants to invite, and it sends them invitations (e.g., MICROSOFT OUTLOOK® calendar invitations) specifying the meeting date, time and location. The cognitive computing participant 810 may include additional, relevant information in the invitation (e.g., particular instructions for specific participants). In addition, the cognitive computing participant 810 may reserve meeting rooms using relevant corporate software. Once the meeting begins, the cognitive computing participant may begin the meeting with a background explanation of the reason for the meeting and any and all other information that may be useful to explain the purpose of the meeting. In doing so, it may produce a written agenda that it e-mails to the participants or displays on the display 826. During the course of the meeting, the cognitive computing participant 810 acts as a facilitator, ensuring that the meeting remains on track and does not stray to tangential topics, and further ensuring that all relevant laws and policies are complied with during the meeting (e.g., information technology policies, government regulations, intellectual property laws).
Once the agenda has been determined, the meeting progresses to discussion of the agenda topics (step 906). In step 906, the cognitive computing participant interacts with the other participants and enhances the meeting by combining access to a vast array of resources with its ability to think in a manner similar to the mammalian brain. This step 906 is now described with respect to the method 906 of
The method 906 proceeds with the cognitive computing participant determining whether the received input is a question or a statement (step 952). If the input is a question, the cognitive computing participant performs steps 954, 956, 958 and 960; otherwise, if the input is a statement, the cognitive computing participant performs steps 962, 964, 966, 968 and 960. Assuming that the input is a question, the method 906 comprises the cognitive computing participant asking one or more follow-up questions of the other participants (step 954). For example, if human participant 802 asks what fracturing plan the team agreed to at the previous meeting, the cognitive computing participant may ask human participant 802 to specify the well to which the human participant 802 is referring if the identity of the well is not apparent from the preceding conversation.
Still assuming that the input is a question, the cognitive computing participant then accesses one or more resources to obtain relevant information that assists the cognitive computer in answering the question, and it may ask additional questions of the other participants as necessary (step 956). As explained above, the resources to which the cognitive computing participant has access are vast and can include, without limitation, any material available via the Internet or World Wide Web; books; journals; patents; patent applications; white papers; newspapers; magazines and periodicals; proprietary data and local data (e.g., coupled to the cognitive computing participant via a universal serial bus port; accessible on a company intranet) that form a knowledge corpus; other machines (both von Neumann and cognitive-based) with which the cognitive computing participant can interact; and virtually any other information in any form and in any language to which the cognitive computing participant may have access. Thus, for example, to answer the question regarding what fracturing plan the team agreed to at the previous meeting or what suggestions were made, the cognitive computing participant may access minutes or reports that it generated at the previous meeting. The method 906 then comprises the cognitive computing participant answering the human participant 802 accordingly (step 958) and updating the resources to which it has access based on the interaction (e.g., updating meeting minutes to reflect the question and answer) (step 960). The scope of disclosure is not limited to such simple tasks, however. On the contrary, as explained above, the cognitive computing participant uses a neurosynaptic architecture to execute cognitive, probabilistic algorithms that enable it to use relevant resources to perform complex probabilistic or deterministic data analyses, run simulations and oilfield operations models, and perform other such multifaceted operations—essentially, any and all actions that it has been trained to perform or that it can unilaterally learn to perform using the resources to which it has access.
If, however, the cognitive computing participant determines at step 952 that a statement was made, the method 906 comprises the cognitive computing participant assessing the statement and asking questions to gather more information, if necessary (step 962). The method 906 next includes the cognitive computing participant accessing its resources to determine whether it can add value by making a statement or suggestion (step 964). The cognitive computing participant may also ask additional questions as it accesses the resources, as necessary. For instance, during a discussion about a novel technology that the human participants have invented, the human participant 804 may tell the human participant 808 that she thinks their technology has already been patented in the United States by a particular company. The cognitive computing participant hears this discussion and determines that it can add value to the discussion by accessing its resources to verify the statement made by human participant 804. Thus, the cognitive computing participant proactively accesses the patent databases of various countries, generates search terms appropriate for the technology being discussed, and enters the search terms into the patent databases in an attempt to identify the most relevant patents and patent applications. The cognitive computing participant may find five relevant patents and may display a ranked list of the patents, with the top-ranked patent being the patent that human participant 804 was referencing. The cognitive computing participant also may summarize each of the five patents, explain its opinion on whether the patents disclose the technology being discussed and to what degree, and offer suggestions on how to proceed (e.g., by describing the ways in which the participants' invention and the five patents differ). When it provides suggestions, the cognitive computing participant may provide arguments supporting and opposing each suggestion, thus enabling the human participants to make better-informed decisions and facilitating conversation between the human participants and the cognitive computing participant. The cognitive computing participant may provide all such information in the form of an e-mail, voice, a presentation, some other communication technique, or a combination thereof. Based on these results, the human participants may decide that their invention has not been patented and they may choose to move forward with filing one or more patent applications describing the invention. As explained above in detail, the cognitive computing participant performs these actions by executing its cognitive, probabilistic algorithms.
The method 906 subsequently includes the cognitive computing participant determining whether it has a statement or suggestion to make to the rest of the participants in the meeting (step 966). If so, it makes the statement or suggestion (step 968), for example, by voice, email, audio, video, images, etc. In either case, the cognitive computing participant updates one or more resources based on these interactions (step 960), and control of the method 906 again returns to step 951.
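The branch structure of method 906 (steps 952 through 968, with the resource update of step 960 on both paths) can be summarized in the following control-flow sketch. The classifier and helper functions are stubs standing in for the cognitive algorithms; only the branching mirrors the description above:

```python
# Control-flow sketch of method 906. The classify/lookup helpers are stubs
# standing in for the cognitive algorithms; only the branch structure
# (steps 952-968 and the update at step 960) mirrors the description above.
def handle_input(utterance: str, resources: list[str]) -> str | None:
    kind = "question" if utterance.rstrip().endswith("?") else "statement"  # step 952
    if kind == "question":
        follow_ups = ask_follow_up_questions(utterance)        # step 954
        info = access_resources(utterance, resources)          # step 956
        response = answer(utterance, info, follow_ups)         # step 958
    else:
        assess_and_gather(utterance)                           # step 962
        info = access_resources(utterance, resources)          # step 964
        response = suggest(utterance, info) if can_add_value(info) else None  # 966/968
    resources.append(f"minutes: {utterance} -> {response}")    # step 960
    return response

# Stub implementations so the sketch runs end to end.
def ask_follow_up_questions(u): return []
def access_resources(u, res): return [r for r in res if "minutes" in r]
def answer(u, info, f): return f"Answer based on {len(info)} record(s)."
def assess_and_gather(u): pass
def can_add_value(info): return bool(info)
def suggest(u, info): return "Suggestion drawn from prior minutes."

resources = ["minutes: previous fracturing plan approved"]
print(handle_input("What fracturing plan did we agree to?", resources))
```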
As previously explained,
At least some embodiments herein are directed to a system for facilitating meetings that comprises: neurosynaptic processing logic; and one or more information repositories accessible to the neurosynaptic processing logic, wherein, during a meeting of participants that includes the neurosynaptic processing logic, the neurosynaptic processing logic accesses resources from the one or more information repositories to perform a probabilistic analysis, and wherein, based on said probabilistic analysis, the neurosynaptic processing logic answers a question from one or more of the participants, asks a question of the participants, makes a statement to the participants, or provides a suggestion to the participants. Some or all such embodiments may be supplemented using one or more of the following concepts, in any order and in any combination: wherein the neurosynaptic processing logic accesses said resources based on input collected from one or more of the participants; wherein, without human assistance, the neurosynaptic processing logic generates an argument in favor of or opposing said suggestion; wherein the neurosynaptic processing logic generates a record of at least part of said meeting; wherein the record includes information selected from the group consisting of: names of the participants; input provided by each of said participants during the meeting; links to materials presented or distributed during the meeting; copies of materials presented or distributed during the meeting; keywords and phrases relating to said meeting; and security clearance requirements to access the record; wherein said accessed resources include documents identifying intellectual property rights, and wherein, based on said probabilistic analysis, the neurosynaptic processing logic provides to one or more of said participants a subset of said documents that the logic determines to be relevant to said meeting; wherein the neurosynaptic processing logic executes a decision that is made during the meeting; wherein said meeting participants include oil and gas industry personnel; wherein the participants are human participants, other cognitive computer participants, or a combination of human participants and cognitive computer participants; wherein the neurosynaptic processing logic interacts with one or more of the participants based on facial expressions of said one or more of the participants; wherein the neurosynaptic processing logic receives input from at least one of the participants via a wearable device.
At least some embodiments described herein are directed to a cognitive computer for facilitating meetings, comprising: a plurality of neurosynaptic cores operating in parallel, each neurosynaptic core coupled to at least one other neurosynaptic core and comprising multiple electronic neurons, electronic dendrites and electronic axons, at least some of said electronic dendrites and electronic axons coupling to each other in a synapse array; and a network interface coupled to at least one of the plurality of neurosynaptic cores, the network interface providing access to resources in one or more information repositories, wherein the plurality of neurosynaptic cores accesses said resources via the network interface to interact with one or more participants in a meeting. Some or all such embodiments may be supplemented using one or more of the following concepts, in any order and in any combination: wherein said meeting occurs at least partially online; wherein, to interact with said one or more participants, the plurality of neurosynaptic cores answers a question from one or more of the participants, asks a question of the participants, makes a statement to the participants, or provides a suggestion to the participants; wherein said question is regarding a prior decision made by at least one of said one or more participants or a prior suggestion made by at least one of said one or more participants, said prior decision and said prior suggestion made during said meeting or during a different meeting; wherein said participants include human participants, cognitive computer participants, or both; wherein the plurality of neurosynaptic cores generates a record of at least part of said meeting; wherein the meeting is between oil and gas industry personnel.
At least some embodiments are directed to a method for facilitating meetings, comprising: conducting a meeting between one or more human participants and a cognitive computer that includes a plurality of neurosynaptic cores; the cognitive computer observing interactions between the one or more human participants; the cognitive computer accessing resources from one or more information repositories to perform a probabilistic analysis based on said observation; and the cognitive computer using the probabilistic analysis to make a statement, offer a suggestion, ask a question, or answer a question during the meeting. Some or all such embodiments may be supplemented using the following concept: wherein observing interactions includes one or more actions selected from the group consisting of: listening to said interactions using a microphone; watching a presentation using a camera; reading a report using the camera; observing a facial expression using the camera; receiving input from a keyboard; receiving input from a touch screen; receiving input from a mouse or touchpad; and receiving input from a wearable device.
Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/US15/39118 | 7/2/2015 | WO | 00