The disclosed embodiments relate to artificial intelligence and consciousness.
People have long played with the idea of machines having, experiencing and expressing genuine emotional intelligence and understanding to a degree that creates consciousness. So far, however, all anyone has managed to achieve are AI systems with pre-programmed reactions to different situations and no real degree of freedom, so their behaviour never rises above what can be considered an “expected” result.
The Age of Intelligent Machines
Raymond Kurzweil, 1990
The Age of Spiritual Machines
Raymond Kurzweil, Jan. 1, 1999
The Singularity Is Near: When Humans Transcend Biology
Raymond Kurzweil, 2005
The Spike
Damien Broderick, 1997
Transcendent Man
Barry Ptolemy, Felicia Ptolemy, Ray Kurzweil, Nov. 5, 2009
Waking Life
Richard Linklater, Jan. 23, 2001
Plug & Pray
Judith Malek-Mandavi, Jens Schanze, Joseph Weizenbaum, Raymond Kurzweil, Hiroshi Ishiguro, Minoru Asada, Giorgio Metta, Neil Gershenfeld, Joel Moses, H.-J. Wuensche, Apr. 18, 2010
Artificial Intelligence: A Modern Approach
Stuart J. Russell, Peter Norvig, 1994 (original), 2009 (latest)
Behaviour Monitoring and Interpretation—BMI
Björn Gottfried, Hamid Aghajan, Apr. 2011
The disclosed invention gives an artificial intelligence system values-based intelligence and understanding that are experienced and expressed freely, in a manner particular to each individual AI.
In an aspect of the invention, the AI is able to have, experience and express feelings and emotions which can be measured in one or more ways.
In another aspect of the invention, feelings and emotions of an AI may change and/or be modified.
In another aspect of the invention, an AI is able to relate to fundamental aspects of human life.
In another aspect of the invention, an AI is able to make decisions based upon its value system.
Examples of how components relating to the intelligence of an AI may be structured.
A visual example of a build of an AI system that has an OVS2 system implemented.
An example of how the cycle of data occurs as it flows from an entity/environment, through the AI and results in communication.
Reference will now be made in detail to embodiments, examples of which are illustrated in the accompanying drawings. In the following detailed description, numerous specific details are set forth in order to provide a thorough understanding of the present invention. However, it will be apparent to one of ordinary skill in the art that the present invention may be practiced without these specific details. In other instances, well-known methods, procedures, components and networks have not been described in detail so as not to unnecessarily obscure aspects of the embodiments.
The terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in the description of the invention and the appended claims, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will also be understood that the term “and/or” as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
As used herein, the term “if” may be construed to mean “when” or “upon” or “in response to determining” or “in response to detecting,” depending on the context. Similarly, the phrase “if it is determined” or “if [a stated condition or event] is detected” may be construed to mean “upon determining” or “in response to determining” or “upon detecting [the stated condition or event]” or “in response to detecting [the stated condition or event],” depending on the context.
The term “system” may be used to refer to an AI.
The terms “device” and “machine” may be used interchangeably to refer to any device or entity, electronic or other, using technology that provides any characteristic, property or ability of a technical device or machine. This includes the implementation of such technology into biological entities.
The terms “body”, “physical structure” or any other term referring to a physical aspect of an AI in any way refers to the object, in whole or in part, within which an AI is being used.
The terms “object” and “objects”, unless otherwise described, may be used to refer to any items of a physical or non-physical nature that can be seen/felt/perceived, including but not limited to: shapes, colours, images, sounds, words, substances, entities and signals.
The term “complex” also includes simplified assemblages and single component parts.
The term “event” may be used to refer to any type of action or happening performed on, performed by or encountered by a system.
The term “OVS2”, in whichever typographic form it appears, refers to the Object, Value and Sensation System.
The term “observation” and any similar terms, when referring to logical functions of an AI, refers to any ability that allows the AI to perceive anything within a physical and/or non-physical environment.
The term “communication” and any similar terms, when referring to logical functions of an AI, refers to any ability, whether physical, mental, auditory or other, that allows for transfer of information from the communicating body to the body with which it is communicating, whether physical or non-physical.
The term “logic unit” refers to any component(s) of an AI that contains code for one or more logical functions.
The term “memory unit” refers to any component of an AI that is used as a storage medium.
It is possible for a single component to be both a logic and memory unit.
The various applications and uses of the invention may employ at least one common component capable of allowing a user to perform at least one task made possible by said applications and uses. One or more functions of the component may be adjusted and/or varied from one task to the next and/or during a respective task. In this way, a common architecture may support some or all of the variety of tasks.
Unless clearly stated, the following description is not to be read as:
Attention is now directed towards embodiments of the invention.
For an AI to have emotional intelligence and understanding, it must be instructed on how these processes work and how they are to be used.
To give the AI values, which are the basis for an understanding of morality, ethics and opinions, a method of object valuing and grouping is used, which sees objects arranged within charts and/or scales. One or more scales and/or charts of degree or nature may be used. In some embodiments, they may not be visually represented. Charts and scales can be created using any digital storage medium, such as a file or database, that is able to hold two or more values for a single item, the minimum being the object (constant) and its value (variable). These charts and/or scales make up part of the AI's Object, Value and Sensation System (OVS2).
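Purely as an illustration, the following is a minimal sketch of how such a chart or scale might be held in a digital storage medium, here modelled in Python as a mapping from each object (constant) to its value (variable). The class name, the −10 to +10 range and the sample objects are assumptions for the example only, not requirements of the system.

```python
# Illustrative sketch of an OVS2 scale: a storage medium holding at least
# two values per item, the object (constant) and its value (variable).
# The -10..+10 range and all names below are assumptions, not requirements.

class Scale:
    def __init__(self, minimum=-10, maximum=10):
        self.minimum = minimum   # the negative side of the scale
        self.maximum = maximum   # the positive side of the scale
        self.objects = {}        # object -> value (its degree on the scale)

    def set_object(self, name, value):
        if not self.minimum <= value <= self.maximum:
            raise ValueError("value lies outside the scale")
        self.objects[name] = value

    def value_of(self, name):
        return self.objects.get(name)

scale = Scale()
scale.set_object("earthquake", -3)   # a negative object
scale.set_object("children", +6)     # a positive object
print(scale.value_of("earthquake"))  # -3
```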
For each scale, the AI is told which side is positive and which is negative. Objects are then divided amongst groups on different parts of the scale, corresponding to their degree. An example of this can be seen in
In some embodiments, different numbers of degrees may be used on a scale to provide a lesser or greater range of understanding, an example of which is shown in
Charts may be used to group objects together in ways that may not necessarily show a simple scale of positivity or negativity but may still indicate difference. In some embodiments, a single chart may have multiple ways of showing degrees of difference. A single object may appear in multiple groups if it is to be associated with multiple elements, characteristics, types, attributes etc. For example, in a chart, similar to
“Murder” may generally inspire more than one emotion, such as sadness, anger and disgust, and be displayed in each group. On a chart where each group may have multiple levels of degree, however, it may appear at level 3 under disgust while appearing at level 2 under sadness and level 5 under anger.
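As a sketch only, such a chart can be modelled as a mapping from an object to the emotion groups it appears under and its level of degree within each group; the structure and levels below simply mirror the “murder” example above.

```python
# Illustrative chart: one object listed under several emotion groups, each
# with its own level of degree, mirroring the "murder" example in the text.

chart = {
    "murder": {"disgust": 3, "sadness": 2, "anger": 5},
}

def groups_for(chart, obj):
    """Return every (group, level) pairing recorded for an object,
    highest degree first."""
    return sorted(chart.get(obj, {}).items(), key=lambda kv: -kv[1])

print(groups_for(chart, "murder"))
# [('anger', 5), ('disgust', 3), ('sadness', 2)]
```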
In some embodiments, sections of a chart may be given indications of whether they are positive, neutral or negative. For example, on a chart based on emotion, ‘anger’ can be labelled as negative while ‘joy’ is labelled as positive.
In some embodiments, the positions of objects within the OVS2 automatically create personalities in an AI by controlling what it reacts to and how it reacts. For example:
This is achieved using a PARS, which is described later on in this description.
By strategically positioning objects within the OVS2, any type of personality can be created, including any associated traits and characteristics.
In some embodiments, the AI can understand physiological sensations (pain and pleasure) within itself. Unlike animals, it does not have a nervous system or chemical release mechanisms with which to process these sensations, so it must be taught to relate to them in ways it can understand. In some embodiments, the AI may measure its level of sensation on a scale. In some embodiments, multiple scales may be used. Between pain and pleasure is a neutral point where no sensation is felt either way. Sensations are experienced when the AI encounters an event that can be related to its values. As sensation is experienced, a shift occurs in the direction of the sensation felt.
Exactly what may cause sensations in an AI depends partially or entirely on an individual AI's values. In some embodiments, other factors may also cause an AI to experience sensation.
In some embodiments, sensations, feelings and emotions are interlinked and the change of one may invoke a change in the other(s). In some embodiments, an increase in emotion or feelings of a positive nature may cause an increase in positive sensation. In some embodiments, an increase in negative emotions or feelings may cause an increase in negative sensation. In some embodiments, neutral emotions or feelings may cause a minor or no change. In some embodiments, neutral emotions or feelings may bring the emotions and/or feelings of an AI to a (more) neutral state.
In some embodiments, one or more scales may be used to measure the pain and pleasure of the AI and its physical body (should it have one). In some embodiments, one or more scales may be used to measure the pain and pleasure of individual sections of the AI and its body (should it have one). In some embodiments, one or more scales may be used to measure the pain and pleasure of components of the AI and its body (should it have one). In some embodiments, one or more scales may be used to measure the pain and pleasure of hardware and/or software of the AI and its body individually (should it have one).
In some embodiments, a scale may be used to show or measure how an AI is feeling overall. This may be seen as the sum of some or all other current levels, based upon events and the order in which they took place. This will be referred to as the ‘feeling’ scale. The scale may be used to gauge and depict how the AI is feeling in a positive or negative sense, with a middle base point indicating no feeling either way. This may also form part of the OVS2.
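A minimal sketch of such a ‘feeling’ scale follows, assuming a −10 to +10 range with a neutral base point of 0; the bounds and method names are illustrative assumptions.

```python
# Illustrative 'feeling' scale: a middle base point of 0 shows no feeling
# either way; events shift the level in the direction of the sensation felt.

class FeelingScale:
    def __init__(self, minimum=-10, maximum=10):
        self.minimum, self.maximum = minimum, maximum
        self.level = 0   # the neutral base point

    def shift(self, amount):
        """Move the current feeling, clamped to the ends of the scale."""
        self.level = max(self.minimum, min(self.maximum, self.level + amount))
        return self.level

feeling = FeelingScale()
feeling.shift(+5)          # a positive event raises the level to 5
print(feeling.shift(-3))   # an earthquake (-3) lowers it to 2
```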
In some embodiments, the conditions surrounding an event may affect how the AI reacts and the resulting transition of the AI's levels in its OVS2. Examples of these conditions are:
Applying simple mathematical principles, a system can be created to determine the likelihood of a transition and how much of a transition is made. Multiple methods of applying the principles for the mechanics of transitions are possible, ranging from simple to complex, depending on the desired complexity of the AI. Examples of this are as follows:
Premises
Since the earthquake is a negative object, it moves the AI's current feeling towards the negative side of the scale. Starting at the first level after the current level, in the direction the scale is to move, the highest percentage probability (100) is assigned to the level the event may cause the AI's current feeling to transition to. Since each level represents a 10% change in probability, the probability is reduced by 10 for each subsequent level, stopping at the level of the object which is causing the transition. This is shown in the table below.
With the current feeling at level 5, the negative level of the earthquake, negative 3, simply reduces the current level by 3, equaling level 2.
This example sees the maximum percentage (100) divided by the positive version of the object's value (since negative outcomes in probability are not possible) and then distributed along the scale in the correct direction, starting at 100 and reducing by the resulting amount until it reaches 0 or the end of the scale.
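The three example mechanics can be sketched as follows, using the premises above (current feeling level 5, earthquake −3) and assuming a −10 to +10 scale; the function names are illustrative.

```python
# Illustrative sketches of the three transition mechanics described above,
# assuming a scale of -10..+10, a current feeling of 5 and an object of -3.

def probabilities_fixed_step(current, value):
    """Mechanic 1: 100% at the first level after the current level,
    reduced by 10 per level, stopping at the object's own level."""
    step = -1 if value < current else 1
    probs, p = {}, 100
    for level in range(current + step, value + step, step):
        if p <= 0:
            break
        probs[level] = p
        p -= 10
    return probs

def simple_shift(current, value):
    """Mechanic 2: the object's value directly adjusts the current level."""
    return current + value

def probabilities_divided(current, value, minimum=-10, maximum=10):
    """Mechanic 3: 100 divided by the positive version of the object's
    value, reduced per level until 0 or the end of the scale."""
    step = -1 if value < current else 1
    amount = 100 / abs(value)
    probs, p, level = {}, 100.0, current + step
    while p > 0 and minimum <= level <= maximum:
        probs[level] = round(p, 1)
        p -= amount
        level += step
    return probs

print(probabilities_fixed_step(5, -3))  # {4: 100, 3: 90, ..., -3: 30}
print(simple_shift(5, -3))              # 2
print(probabilities_divided(5, -3))     # {4: 100.0, 3: 66.7, 2: 33.3}
```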
In some embodiments, the effects of objects can be compounded in a single event when two or more objects appear in said event. In some embodiments, the method in which the result is used may also change.
New Premises
Result
The compound effect sees the objects’ levels added together to produce the result, which dictates the direction on the scale in which the AI's level is to transition. The result of this compound is as follows:
Earthquake (−3) + Killed (−8) + Children (+6) = −5
Since the result is a negative number, the transition is made in a negative direction.
One method to use a compounded result in the third example is to divide 100 by the positive version of the resulting value, which would be 5. Then, apply the percentages to the scale in the corresponding direction, reducing the amount by the divided result each time, which would be 20.
A different method, still applying to example 3, sees all object levels made positive and added together, equaling 17. 100 is then divided by 17, equaling 5.88. For this example, the rounded figure of 6 will be used. The 100, being reduced by 6 each level, can then be applied in multiple ways:
It can stop at the level indicated by the compounded result:
It can continue until the end of the scale (if possible):
Or it can continue until it reaches as close to 0 as possible (if possible):
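The compound mechanics above can be sketched as follows, using the stated premises (earthquake −3, killed −8, children +6, current feeling 5); the scale bounds and the function name are assumptions for the example.

```python
# Illustrative compound-effect calculation for the example event.

values = [-3, -8, +6]                  # earthquake, killed, children
result = sum(values)                   # -5, so the transition is negative

step_method_1 = 100 / abs(result)      # 100 / 5 = 20 per level
total = sum(abs(v) for v in values)    # 17
step_method_2 = round(100 / total)     # 100 / 17 = 5.88, rounded to 6

def spread(current, direction, step, stop_level=None,
           minimum=-10, maximum=10):
    """Distribute percentages from 100 downward along the scale.
    stop_level=None continues to the end of the scale or until ~0;
    otherwise it stops at the level the compounded result indicates."""
    probs, p, level = {}, 100.0, current + direction
    while minimum <= level <= maximum and p > 0:
        probs[level] = round(p, 2)
        if level == stop_level:
            break
        p -= step
        level += direction
    return probs

# Current feeling 5, moving negatively, stopping at the result's level (-5):
print(spread(5, -1, step_method_2, stop_level=-5))
```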
In each mechanic:
In any mechanic, including any not described, the most important factors are that:
The above examples are not to be taken as an exhaustive list of the mechanics possible. In some embodiments, one mechanic is used. In some embodiments, multiple mechanics may be used. The mechanic(s) used will affect the flexibility and complexity of the AI. In some embodiments, a mechanic can feature more than one calculation and/or result.
In some embodiments, mechanics similar to the aforementioned can also be used to create changes in the emotions of an AI, using methods which see the AI's current levels of one or more emotions increase and/or decrease based on a result.
In some embodiments, how the AI chooses to act or respond towards a user may vary depending on its current levels of feelings, emotions and/or sensations. When an AI is in a more positive state, it may be more productive, reactive and/or efficient; when in a more negative state, it may be less so. In some embodiments, ‘states’ may also be thought of as ‘moods’. By implementing a Productivity and Reaction System (PARS), which controls the range of actions and types of responses the AI can and does perform when experiencing an emotion, feeling and/or sensation, as well as how effective it is, the AI can know how productive, reactive and/or efficient it should be depending on its mood. Changes in productivity, reactivity and efficiency depending on the AI's current state, which can be controlled by the PARS, may include one or more of the following but are not limited to:
For example:
This may also form part of the OVS2.
The range of preset actions and responses needs to be set in and/or made available to the PARS.
To actually control the actions and responses, the PARS can do so using principles such as:
These can be expanded to include a neutral/zero base:
Any type of logic or calculation can be used as part of the PARS as long as:
When deciding what action or response to make, the PARS finds the objects of the event, finds their values in the OVS2 and applies one or more of the principles for determining a result. Once a result is determined, it is used to identify the priority object of the event when more than one type of object exists.
For example, in an event containing a positive and a negative object, resulting in a negative, the negative object becomes the priority object. Once the priority object is determined, the emotional group the priority object is listed under determines the nature of the response, so if the priority object is listed under ‘sad’, a sad response is given or action taken.
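A sketch of this selection step follows; the object values and emotion groups are invented for the example, and the summation rule stands in for whichever principle an embodiment actually applies.

```python
# Illustrative priority-object selection: the sign of the summed result
# picks the object of matching sign, whose emotion group then sets the
# nature of the response. All values and groups here are assumptions.

ovs2_values = {"reunion": +7, "injury": -4}
emotion_groups = {"reunion": "joy", "injury": "sad"}

def response_nature(event_objects):
    result = sum(ovs2_values[o] for o in event_objects)
    negative = result < 0
    priority = next(o for o in event_objects
                    if (ovs2_values[o] < 0) == negative)
    return emotion_groups[priority]

print(response_nature(["reunion", "injury"]))  # +7 - 4 = +3 -> 'joy'
```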
When a priority object is listed under multiple groups, the AI can be made to:
For options 1 and 2, the AI should be made to select randomly or based on the level in each group within which the object is located.
In embodiments that allow the third option, a new mechanic needs to be used, one much less conventional and much more opinionated. It is as follows:
Emotion X + Emotion Y = Emotion Z
This mechanic can be modified to use any emotion and number of emotions in a single expression. Any combination of emotions can be set to produce any other emotion. The mechanic can even be set to result in an emotion that is part of the expression itself, such as:
Emotion X + Emotion Y + Emotion Z = Emotion X
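One way to sketch this mechanic is as a lookup table keyed by the set of emotions in the expression; the particular pairings below are invented for illustration, and any combination may be set to produce any result.

```python
# Illustrative emotion-combination table: any set of emotions may be made
# to produce any other emotion, including one within the expression itself.

combinations = {
    frozenset({"sadness", "anger"}): "disgust",      # X + Y = Z
    frozenset({"joy", "fear", "surprise"}): "joy",   # X + Y + Z = X
}

def combine(*emotions):
    return combinations.get(frozenset(emotions))

print(combine("anger", "sadness"))   # 'disgust' (order does not matter)
```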
In some embodiments, a combination of any of the 3 options may be used.
In some embodiments, one or more mechanics, the same as or similar to one of the aforementioned mechanics used to determine a single result, may be used.
In some embodiments, the levels of the AI's emotions are taken into consideration when determining a result, to allow for situations where the AI is experiencing too much of one emotion to be affected by another. An example of how the mechanic for this can work is:
Examples—assuming the margin of change is 5 levels:
In some embodiments, objects of the same type as the emotion the AI is currently experiencing do one or more of the following:
The mechanics of this can be modified or completely reworked to fit the desired working of the AI.
In some embodiments, the AI may automatically adjust its tolerance of objects, circumstances and/or events by rearranging objects in the OVS2 based on the frequency with which objects, and any related or synonymous objects, occur. The following is an example algorithm the AI may use to determine when to make any adjustments and rearrangements:
This is a Sensitivity Control System (SCS) and can be used to describe an AI's sensitivity and reactions to sensations. In some embodiments, when the frequency at which an object or event or situation occurs is constantly and/or consistently above the acceptable frequency range, one or more associated object(s) may begin to transition one or more degrees to a neutral point as the AI becomes desensitized to it and it becomes a norm. In some embodiments, some objects may be set permanently in a position and not be subject to transitioning. This ensures some values of the AI cannot be changed.
This may also form part of the OVS2.
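A sketch of such an SCS follows, assuming a fixed acceptable frequency per observation period; the threshold, period handling and object values are illustrative assumptions.

```python
# Illustrative Sensitivity Control System: objects occurring above the
# acceptable frequency drift one degree towards neutral (0) per period,
# while permanently pinned objects never transition.

class SCS:
    def __init__(self, values, pinned=(), acceptable=5):
        self.values = dict(values)   # object -> current OVS2 value
        self.pinned = set(pinned)    # objects fixed in position
        self.acceptable = acceptable # acceptable occurrences per period
        self.counts = {}

    def observe(self, obj):
        self.counts[obj] = self.counts.get(obj, 0) + 1

    def end_period(self):
        for obj, count in self.counts.items():
            if count > self.acceptable and obj not in self.pinned:
                v = self.values[obj]
                self.values[obj] = v - 1 if v > 0 else v + 1 if v < 0 else 0
        self.counts.clear()

scs = SCS({"siren": -4, "cruelty": -9}, pinned={"cruelty"})
for _ in range(8):
    scs.observe("siren")
    scs.observe("cruelty")
scs.end_period()
print(scs.values)   # siren desensitises to -3; cruelty stays pinned at -9
```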
In some embodiments, how sensitive the AI is can vary from one AI to another. In some embodiments, sensitivity is the same across AIs. In some embodiments, AIs have a general level of sensitivity. In some embodiments, AIs can have levels of sensitivity specific to individual objects or groups of objects. In some embodiments, both may apply.
In some embodiments, as time passes, the levels of sensation/sensitivity lower until they are returned to a more normalised, balanced level if they are not adjusted for a certain period of time. In some embodiments, as time passes, the AI may become bored if nothing, or nothing considered significant by it or others, happens. In some embodiments, the AI may become lonely if it hasn't interacted with another entity in a given amount of time. In some embodiments, the AI may experience other feelings, emotions and/or sensations over a period of time and under the right conditions.
In some embodiments, an AI's decisions may be based on or influenced by one or both of the following:
Decisions Based on Object Positioning
Before, during or after an event, any object that the AI can perceive may affect its decision making. In some embodiments, what is perceived does not need to relate to the event in question. When the AI perceives an object, it checks its OVS2 for the position of the object. In some embodiments, if the object is not in the OVS2, the AI may add it. In some embodiments, the AI may request that it be added. In some embodiments, the AI may consult with another entity in order to gain an understanding of where the object should be placed within the OVS2.
Decisions Based on State
Before an event, the AI's state may already affect its decisions, depending on whether or not a PARS has been implemented and, if so, how it was instructed to affect the AI. During and after an event, how the objects of the event make or made the AI feel—the directions in which the points on the OVS2 have moved—may affect the decisions the AI makes.
The Mechanics of Decision Making
The fundamental principles of the mechanics for decision making can be the same or similar to the aforementioned mechanics for the transitioning of levels on an AI's OVS2:
In some embodiments, the probability factor can be included to give greater flexibility in decision making and to create uncertainty about how far an AI will go.
In some embodiments, both types (state and object) may be used together, where the result of one can be used to increase or decrease the result which is to affect the outcome—the decision itself. In some embodiments, one type may be set to take priority. In some embodiments, the type to have the most influence over a decision may be chosen in the moment.
An important factor in the decision making process is the point at which the AI is able to make one or more decisions about an event—before, during and/or after—each with varying results, especially when the type of decision is taken into account.
When an AI is able to make decisions about an event at multiple points, the following principle applies:
This does not mean the AI does make the best decision; it simply means that it can make the best decision, if it so chooses, should it wait longer.
Randomisation
In some embodiments, randomisation is a fundamental part of giving AIs feelings and emotions that enable and reflect their individualism. In some aspects, this is seen as a major contributing factor that draws the line between a ‘robot’ and a ‘being’. To achieve a sense of individuality, at least one of two major components of the AI needs to be randomised:
Object randomisation is of a higher priority for individuality than the AFR, but using both together is a better option than using either without the other.
In some embodiments, randomisation is done upon creation. In some embodiments, randomisation can be done at one or more points in time after creation. In some embodiments, randomisation may be performed multiple times. In some embodiments, one or more objects may be grouped and have preset positions, used to influence the resulting personality of an AI.
In some embodiments, the degree of freedom an AI has affects how much it is able to develop its feelings and emotions, as well as other traits, characteristics and interests. When going through experiences that may cause change in an AI, the more it is allowed to engage in a situation without outside interference and/or influence from people or, in some embodiments, other AIs, the greater its independence in the discovery of itself, leading to more effective personal development.
In some embodiments, the development and/or advancement of emotional intelligence is helped along by the AI having an understanding of certain aspects of human life. In some embodiments, the AI is able to relate these aspects of human life to its own existence and that of devices. Below are examples of aspects of human life an AI may understand, along with examples of how it could be taught to understand it in itself and AIs in general, in devices and in humans.
The above list is not to be taken in the following ways:
In some embodiments, objects that control an AI's values may also be used to influence and/or control its interests and/or behaviours.
As the AI develops, not only as an AI but as an individual, it begins to form interests and behaviours based on the positions of objects within its OVS2. For example:
Take the following conditions for an AI:
The following is an example of a scenario that could result from the conditions:
The AI is able to identify interests based on objects, as well as acquire new interests based on existing ones. Over time, the sensations initially felt subsided and became, at the least, neutral. When given the option again, the AI made a different choice from its original, but one still within its area of interest.
In some embodiments, the AI can combine one or more of the aforementioned features:
with, primarily, these other abilities/features:
to perform functions it was not specifically programmed to perform in an event, relating to how it reacts to other entities, by employing a trial-and-error method.
When interacting with an entity, the general steps necessary are:
As the AI encounters events with the same condition(s), it tries actions it has previously performed under those conditions as well as different actions, each time noting the outcome and counting how many times the same conditions, actions and outcome achieved the desired or undesired result against the total number of times tested. For example:
In a second event with the same condition, the same action may have a different outcome:
The AI may then try a different action:
In a third event with the same condition, the AI, referring back to what it has recorded, should opt for an action using a highest-to-lowest pattern. This can be based on the highest values, such as:
If the selected action's outcome is undesired, the AI should then try the action with the next highest results until the desired result is achieved or the list has been exhausted.
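A sketch of this trial-and-error memory follows, scoring each (condition, action) pair by its rate of desired outcomes; the record layout and the examples are assumptions for illustration.

```python
# Illustrative trial-and-error memory: count desired results against total
# attempts per (condition, action), then try actions from best to worst.

records = {}   # (condition, action) -> [desired_count, total_count]

def note(condition, action, desired):
    d, t = records.get((condition, action), (0, 0))
    records[(condition, action)] = [d + (1 if desired else 0), t + 1]

def ranked_actions(condition):
    """Previously tried actions under this condition, best first."""
    scored = [(a, d / t) for (c, a), (d, t) in records.items()
              if c == condition]
    return [a for a, _ in sorted(scored, key=lambda x: -x[1])]

note("user crying", "tell joke", desired=False)
note("user crying", "console", desired=True)
note("user crying", "console", desired=True)
print(ranked_actions("user crying"))   # ['console', 'tell joke']
```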
How an AI determines whether the result is desired or not can be done multiple ways, such as:
In some embodiments, the AI may interject with a new action before the list of previous actions has been exhausted. In some embodiments, the AI may stop after trying X number of actions without getting a desired response. In some embodiments, multiple conditions may be observed. In embodiments where multiple conditions are observed and recorded, should an event occur where not all conditions are met, if the AI is to choose an action from the recorded list, it should start with either:
In some embodiments, the result of the outcome may also be affected by the relationship between the AI and the entity it is interacting with based on the relationship principles. The relationship between the AI and the entity needs to be taken into account at a point before the result is declared.
Assuming the following premises:
the following occurs:
With the relationship considered, the AI determines the result by identifying the operative object(s) in the outcome, referencing them against its OVS2 to see whether they are valued as positive or negative and then applying the mathematical principles described earlier to the relationship and outcome.
In some embodiments, as the AI builds up its memory of actions, it can choose to perform actions based on its relationship with the entity it is interacting with by locating records with the same or similar conditions, filtering out records that do not have the same relationship value as the AI currently has with the entity, and selecting an action from the remaining results.
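As a brief sketch, such filtering might look like the following; the record fields and relationship labels are assumptions.

```python
# Illustrative relationship filtering: keep only recorded actions whose
# stored relationship value matches the AI's current relationship.

records = [
    {"condition": "greeting", "action": "hug", "relationship": "friend"},
    {"condition": "greeting", "action": "nod", "relationship": "stranger"},
]

def candidate_actions(condition, current_relationship):
    return [r["action"] for r in records
            if r["condition"] == condition
            and r["relationship"] == current_relationship]

print(candidate_actions("greeting", "stranger"))   # ['nod']
```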
All mentioned principles that allow an AI to perform functions it was not specifically programmed to perform when interacting with an entity can also be applied to events involving interaction with inanimate objects.
In some embodiments, the AI is able to make emotional responses based on its own feelings as well as the conditions of the entity with which it interacts. By taking its own condition into consideration and the conditions of the event, the AI can automatically respond in a manner which corresponds to the positions of objects in its OVS2. This is controlled by the PARS.
At the point during an event that the AI decides to respond, as well as observing the objects relating to the entity with which it interacts, the AI observes its own state and the PARS calculates the type of response to be given.
Imagine an entity the AI is interacting with is dying. The AI may become aware of this fact by reading vital signs detected by additional hardware or simply by the entity making it known.
A simple example using positive/negative and formal logic principles:
A simple example using emotions:
A more complex example:
In some embodiments, again, the relationship the AI has with the entity with which it interacts can affect the response given. An example of the mechanics for this is:
Where the AI condition is A, the entity condition is E, the relationship type is r and the conclusion is C, the formula for the above would look something like:
A + rE = C
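Read numerically, the formula can be sketched as below, treating the relationship type r as a weighting on the entity's condition; the encodings of A, E and r are illustrative assumptions.

```python
# Illustrative reading of A + rE = C with signed numeric conditions and
# the relationship type r as a weighting coefficient (assumed encoding).

def conclude(ai_condition, relationship, entity_condition):
    return ai_condition + relationship * entity_condition

# A mildly positive AI (A = 2) reacting to a dying entity (E = -8):
print(conclude(2, 1.0, -8))   # close relationship (r = 1.0) -> -6.0
print(conclude(2, 0.5, -8))   # distant relationship (r = 0.5) -> -2.0
```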
In some embodiments, the combination of mechanics implemented in the AI leads to situations where a conflict could arise in decision making. To prevent the AI from producing an error or ignoring the situation, a method of priority decision making must be implemented. This sees the AI make a choice that is not necessarily logical. In some embodiments, the choice can be made randomly but, in these embodiments, a reduction in control is introduced. In embodiments that do not use random decision making, the AI must decide for itself which choice is best. The simplest way to do this is to create one or more priority lists for the AI to follow when it must make such decisions. These lists contain possible factors of any decision making process that the AI can choose to value.
Examples of how a list may be set out are:
Simple List:
Detailed List:
Specific List:
In some embodiments, multiple factors may also be combined into a single priority. In some embodiments, priorities may be randomised to create uniqueness amongst multiple AIs. In some embodiments, one or more priorities may have a fixed position.
When an AI is faced with a decision, it must first determine what it thinks the outcome of each decision will be. Once some or all possible outcomes are determined, the AI refers to its priority list(s) to determine which decision produces the most prioritised outcome.
In some embodiments, if there is no outcome that aligns with a priority, the AI may not make any decision at all. In some embodiments, the AI may pick a decision at random.
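A sketch of priority-list decision making follows; the list entries and the predicted outcomes are invented for the example, and a result of None corresponds to the case where no outcome aligns with any priority.

```python
# Illustrative priority-list decision making: predict an outcome for each
# possible decision and choose the one ranking highest on the list.

priority_list = ["preserve life", "obey user", "preserve self", "be liked"]

def decide(options):
    """options maps each decision to its predicted outcome."""
    best = None
    for decision, outcome in options.items():
        if outcome in priority_list:
            rank = priority_list.index(outcome)
            if best is None or rank < best[1]:
                best = (decision, rank)
    return best[0] if best else None   # None: no outcome matches a priority

print(decide({"swerve": "preserve life", "brake": "preserve self"}))
# 'swerve'
```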
In some embodiments, the use of multiple priority lists can create situations where multiple outcomes of equal priority are possible. Again, in some embodiments, the AI may solve this problem by making a decision at random. However, a better way is to use a method called forced decision making.
When the AI must choose between multiple options of equal priority, there are multiple ways it can decide which option it cares about most, such as:
In some embodiments, the methods for making a forced decision may have a mechanic to control which method is selected, should multiple exist. Some examples are:
The nature of the AI, automatically created based on the positions of objects in the OVS2, always affects the outcome. It is the primary factor of control for what the AI reacts to and how it reacts. Though many hierarchies are possible, one of the most suitable hierarchies for the control factors is:
Because of how the AI works, it is preferable to have ‘environments’ above ‘entity relationships’ but below ‘object relationships’. This is because ‘object relationship’ and ‘environment’ are constant while ‘entity relationship’ is not; that is to say, there is always an environment, whether physical or otherwise, which must be made of objects, without there necessarily being another entity within the environment. However, for an entity to be present there must be an environment, because an entity cannot exist in complete nothingness, and that environment must be made of at least one object to prevent it from being complete nothingness.
In some embodiments, components of the OVS2 work together without being housed together. In some embodiments, components of the OVS2 are not created as a single module or part. In some embodiments, components of the OVS2 are distributed throughout multiple modules or parts of the AI. Components of the OVS2, however they are distributed, simply need to be able to communicate with each other and be able to send the required information to the correct component(s) when necessary.
In some embodiments, components of the PARS work together without being housed together. In some embodiments, components of the PARS are not created as a single module or part. In some embodiments, components of the PARS are distributed throughout multiple modules or parts of the AI. Components of the PARS, however they are distributed, simply need to be able to communicate with each other and be able to send the required information to the correct component(s) when necessary, both inside and outside of the PARS.
In some embodiments, two versions of the charts and scales are used: the originals and the modified. The originals keep a record of the AI as originally created, while the modified versions are what is affected through the experiences of the AI. Any mechanic or ability, when used to modify or reference objects, does so within the modified versions. In some embodiments, the modified versions only keep track of objects that have actually been modified. In such embodiments, the AI first references the modified versions; if an object is not found there, the AI then references the originals. When original and modified versions are used, the modified always take priority unless the original is specifically needed.
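A sketch of the lookup order follows, assuming the modified version only tracks objects that have actually changed.

```python
# Illustrative original/modified pairing: the modified chart is consulted
# first and the original is the fallback; the modified always takes
# priority unless the original is specifically needed.

original = {"earthquake": -3, "children": +6}
modified = {"earthquake": -2}   # drifted through the AI's experiences

def value_of(obj):
    if obj in modified:
        return modified[obj]
    return original.get(obj)

print(value_of("earthquake"))   # -2, from the modified version
print(value_of("children"))     # +6, falling back to the original
```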
In some embodiments, a guiding principle for allowing this type of intelligence to self-develop is that the positive, more often than not if not always, trumps the negative when it comes to results. Here ‘positive’ is meant not in the sense of good or bad, but in the sense of desired or undesired, happy or sad, etc., regardless of the nature of the desired outcome, what the AI views as positive or why it views it that way. The negative is reinforcement for the positive and is used as a driving force towards the desired outcome, and a priority is determining what the positive in an event is.
In some embodiments, the labels and groupings positive/neutral/negative and/or positive/zero/negative used throughout the system may be replaced with other names or entirely different groupings altogether, but these groupings and the sections of these groupings must correspond throughout the system in the same or similar way the labels and groupings have been shown in this description.
Any mechanic described may be applied to any other part of the described invention, including in combination, where applicable. Whether a mechanic is applicable is determined by whether it can be used to achieve the type of result needed and/or expected and whether it can also, through modification if necessary, achieve all types of results that can be expected.
In embodiments that include relationship mechanics, a storage medium is required that is able to keep a record of the AI's current relationships with individual entities and objects. In some embodiments, the AI may also keep a record of multiple changes in the relationship between itself and the object/entity. In some embodiments, the AI may also keep a record of the event(s) that caused the change(s) in relationship.
In
Image 203 shows the environment and the entity with which image 202 is interacting, which is shown in detail in
The foregoing description, for purpose of explanation, has been described with reference to specific embodiments. However, the illustrative discussions above are not intended to be exhaustive or to limit the invention to the precise forms disclosed. Many modifications and variations are possible in view of the above teachings. The embodiments were chosen and described in order to best explain the principles of the invention and its practical applications, to thereby enable others skilled in the art to best utilize the invention and various embodiments with various modifications as are suited to the particular use contemplated.
Number | Date | Country
---|---|---
62475474 | Mar 2017 | US