The disclosed embodiments relate to artificial intelligence and consciousness.
Bringing the intelligent abilities of AI closer to those of humans has long been sought. A significant part of this is the ability for an AI to have both a conscious and a subconscious mind, allowing for actions that an AI both does and does not mean, intend or decide to do.
Creating, Qualifying and Quantifying Values-Based Intelligence and Understanding using Artificial Intelligence in a Machine—Patent Application Number GB1517146.5
System, Structure and Method for a Conscious, Human-Like Artificial Intelligence System in a Non-Natural Entity—Patent Application Number GB1409300.9
The Genome and Self-Evolution of AI—Patent Application Number GB1520019.9
ConceptNet5—conceptnet5.media.mit.edu—Referred to as “ConceptNet”
The disclosed invention gives an artificial intelligence system a dual-type control system that provides data paths for both conscious and subconscious mental abilities, which, in turn, result in actions that an AI both does and does not mean, intend or decide to do.
In an aspect of the invention, the AI has a dual-type control system responsible for its operation.
In another aspect of the invention, the AI is able to sort inputted data into multiple data streams for its own use.
In another aspect of the invention, the AI is able to perform actions without making the decision to perform said action.
A visual example of a build of an AI system that has an OVS2 system implemented.
An example of how the cycle of data occurs as it flows from an entity/environment, through the AI and results in interaction using a single control system.
Examples of how the cycle of data occurs as it flows from an entity/environment, through the AI and results in interaction using a dual-type control system.
Examples of how ranges of perception and focus interact with the conscious and subconscious mind.
Examples of how a dual-type control system can be used to facilitate further AI abilities.
Reference will now be made in detail to embodiments, examples of which are illustrated in the accompanying drawings. In the following detailed description, numerous specific details are set forth in order to provide a thorough understanding of the present invention. However, it will be apparent to one of ordinary skill in the art that the present invention may be practiced without these specific details. In other instances, well-known methods, procedures, components and networks have not been described in detail so as not to unnecessarily obscure aspects of the embodiments.
The terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in the description of the invention and the appended claims, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will also be understood that the term “and/or” as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
As used herein, the term “if” may be construed to mean “when” or “upon” or “in response to determining” or “in response to detecting,” depending on the context. Similarly, the phrase “if it is determined” or “if [a stated condition or event] is detected” may be construed to mean “upon determining” or “in response to determining” or “upon detecting [the stated condition or event]” or “in response to detecting [the stated condition or event],” depending on the context.
The term “system” may be used to refer to an AI.
The terms “device” and “machine” may be used interchangeably to refer to any device or entity, electronic or other, using technology that provides any characteristic, property or ability of a technical device or machine. This includes the implementation of such technology into biological entities.
The terms “body”, “physical structure” or any other term referring to a physical aspect of an AI in any way refers to the object, in whole or in part, within which an AI is being used.
The terms “object” and “objects”, unless otherwise described, may be used to refer to any items of a physical or non-physical nature that can be seen/felt/perceived, including but not limited to: shapes, colours, images, sounds, words, substances, entities and signals.
The term “complex” is to also include simplified assemblages or single component parts.
The term “event” may be used to refer to any type of action or happening performed on, performed by or encountered by a system.
The term “OVS2”, in whatever typographical variant it appears, refers to the Object, Value and Sensation System, as described in patent GB1517146.5.
The term “SCS” refers to the Sensitivity Control System, as described in patent GB1517146.5.
The term “SAC” refers to a set of Scales and/or Charts, as described in patent GB1517146.5.
The term “PARS” refers to the Productivity and Reaction System, as described in patent GB1517146.5.
The term “DTCS” refers to a Dual-Type Control System.
The term “observation” and any similar terms, when referring to logical functions of an AI, refers to any ability that allows the AI to perceive anything within a physical and/or non-physical environment.
The term “communication” and any similar terms, when referring to logical functions of an AI, refers to any ability, whether physical, mental, audial or other, that allows for transfer of information from the communicating body to the body with which it is communicating, whether physical or non-physical.
The terms “thought path” and “data path” may be used interchangeably when referring to the path through which data travels within the AI.
The term “conscious” refers to processes that an AI means, intends or decides to do.
The term “subconscious” refers to processes that an AI does not mean, intend or decide to do.
The term “logic unit” refers to any component(s) of an AI that contains code for one or more logical functions.
The term “memory unit” refers to any component of an AI that is used as a storage medium.
It is possible for a single component to be both a logic and memory unit.
The terms “perception range” and “perception scale” may be used interchangeably.
The terms “decision making” and “decision-making” may be used interchangeably.
The term “such as” is not to be taken as limiting to the one or more examples that follow.
Components of the DTCS, when described, may be referred to as the AI.
When referring to the state of the AI, what is meant is the AI's level(s) of feeling, including but not limited to one or more of the following: emotions, positivity, negativity and productivity.
The various applications and uses of the invention may be executed using at least one common component capable of allowing a user to perform at least one task made possible by said applications and uses. One or more functions of the component may be adjusted and/or varied from one task to the next and/or during a respective task. In this way, a common architecture may support some or all of the variety of tasks.
Unless otherwise stated, aspects, components and logic of the invention operate in the same way as stated in Patent Application Number GB1517146.5.
Unless clearly stated, the following description is not to be read as:
Attention is now directed towards embodiments of the invention.
As is shown in patent application GB1517146.5,
To create a DTCS that allows both conscious and subconscious communication, only one thing is absolutely required: two thought paths must exist at a specific point—one that interacts with decision-making logic that is controlled by the AI and one that doesn't. Beyond this, there are many ways in which a DTCS can be configured, with some being more ideal than others, depending on the desired capabilities, efficiency etc.
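As a minimal sketch of this single requirement, assuming data is tagged at observation and assuming hypothetical names (route, choose_action, REFLEX_TABLE) that are not components defined by this specification, the divergence point might look like:

    # Sketch only: two thought paths diverging at a single point.
    # The reflex mapping and all names are assumed for illustration.
    REFLEX_TABLE = {"loud_noise": "flinch"}

    def choose_action(data):
        # Stand-in for decision-making logic that the AI controls.
        return "investigate " + data["event"]

    def route(data):
        if data["tag"] == "conscious":
            return choose_action(data)          # path that interacts with decision-making logic
        return REFLEX_TABLE.get(data["event"])  # path that does not

    # e.g. route({"tag": "subconscious", "event": "loud_noise"}) returns "flinch"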
Single vs. Separate Thought Paths
If, at any point, only a single thought path is used, the efficiency of data flow is reduced, meaning it will take longer for an AI to perform multiple actions. Separate thought paths, especially throughout the entire process, allow the conscious and subconscious data processes to be performed simultaneously, improving efficiency from start to finish. A single thought path can be seen in
Single vs. Multiple SACs
Configurations with a single SAC mean an AI's feelings, opinions and bases for actions and reactions are the same consciously and subconsciously, so there is a significant chance of conscious and subconscious actions being the same or similar. Configurations with multiple SACs, however, with at least one unshared SAC in each thought path, allow an AI to have different feelings, opinions and bases for actions and reactions consciously and subconsciously, resulting in different actions depending on which thought path is in control.
Single vs. Multiple SCSs
Configurations with a single SCS prevent an AI from having independent sensitivities when dealing with data travelling along separate thought paths, regardless of how many SACs are in use, as shown in
In general, if an AI is to have both independent thought paths and different sensitivities depending on the nature of the thought, more than one of any component is preferable, combined with multiple thought paths and at least one of each component per thought path, an example of which is shown in
In some embodiments, a single component, such as an SCS or SAC, may be used with the effect of multiple components by using conditions that apply different measures to data, depending on whether the data was observed consciously or subconsciously.
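A sketch of this conditional approach, assuming data is tagged at observation time and assuming arbitrary sensitivity figures, might be:

    # Sketch: one SCS-like component acting as two by applying
    # different measures per tag. The figures are assumed examples.
    SENSITIVITY = {"conscious": 0.8, "subconscious": 0.3}

    def apply_sensitivity(data):
        return data["intensity"] * SENSITIVITY[data["tag"]]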
In some embodiments, a configuration may comprise:
An example of this is shown in
As previously mentioned, the AI may not have to wait for data to reach the communication component before terminating data, nor does the interaction monitor specifically have to send the termination command to the communication component. What is important is that any related data can be terminated once the interaction ends, before it can be communicated.
In some embodiments, a single piece or group of data may be given multiple IDs relative to multiple different interactions. In some embodiments, data may be duplicated to ensure each piece or group of data is only associated with one interaction memory.
Determining what is classed as conscious data and what is classed as subconscious data is based on how the data was observed.
In some embodiments, the perception range is enabled on a bi- or tri-axis. In some embodiments, within the center of focus (CoF) range is the main point of focus (MPoF)—the single point(s) that the AI is directly paying attention to and focusing on, shown by Figure Point 401. The two- or three-tier Principles of Perception (PoP)—the center of focus, peripheral perception (PerP) and, if implemented, main point of focus—may be applied to multiple methods of perception. Examples of how they may be interpreted and applied to different methods of perception include but are not limited to:
In some embodiments, the rules that define what is considered the MPoF, CoF and/or PerP are or can be different from what is stated above. The reason why peripheral perception can be accounted for in aspects that are not simply akin to humans is that physical and non-physical AI sensory capabilities can greatly differ depending on the capability of hardware used and how software is written to use said hardware.
In some embodiments, the shapes of the boundaries of any part of the perception range may differ. For example, the area for center of focus may be square/rectangular, going from the highest point of vision to the lowest and X distance left and right from a reference point.
In some embodiments, other/different factors may be taken into consideration when determining what part of the range perceived data is classed under, based on set rules. Examples include but are not limited to:
Factors and rules, such as those listed above, may be applied in conjunction with a perception scale, such as the one shown in
In some embodiments, multiple factors listed above may be taken into consideration in a single instance when deciding whether data should be considered CoF or PerP. When doing so, factors need to be prioritised—either on the fly or using preset priority lists—so the AI knows the order in which to process data when determining whether it should be CoF or PerP. Examples of the mechanics that can be used to set/determine priority are explained in patent GB1517146.5, including the mechanics for forced decision making.
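A sketch of such prioritised factor processing, assuming hypothetical factor functions and a preset priority list (all names and thresholds are assumptions), might be:

    # Sketch: classify perceived data as CoF or PerP by consulting
    # factors in priority order. Factors and thresholds are assumed.
    def by_distance(data):
        return "CoF" if data.get("distance", float("inf")) < 2.0 else None

    def by_motion(data):
        return "CoF" if data.get("moving") else None

    PRIORITY_LIST = [by_distance, by_motion]    # preset priority list

    def classify(data):
        for factor in PRIORITY_LIST:            # highest priority first
            result = factor(data)
            if result is not None:
                return result
        return "PerP"                           # no factor claimed focus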
In some embodiments, it is entirely possible to create an AI that uses a perception scale where no data need be registered as subconscious data input. This is primarily a hardware-dependent feature and secondarily a software-based one. For this example, the following parameters are true:
This feature can now be achieved in multiple ways, including but not limited to:
by:
In some embodiments, a range of perception may be divided into more than the two (center and peripheral) and three (main point of focus inclusive) parts described. In some embodiments, two-tier perception may consist of a different combination of parts. In such embodiments, this needs to be reflected in one or more aspects of the dual-type control system that works in conjunction/cooperation with the perception system. In some embodiments, multiple data paths may be included to correspond with each part of the perception range. In some embodiments, a single path may handle data for multiple parts of the perception range.
In some embodiments that use separate SACs for conscious and subconscious object storage, the object values of a subconscious SAC (SSAC) may, over time, influence the object values of the conscious SAC (CSAC). This allows how the AI values an object subconsciously to become consciously apparent without the AI having to perform a conscious process. To do so, a connection must be created between an SSAC and a CSAC.
A frequency for data transfer must also be set. Using a one-way connection, the SSAC transfers data about objects, such as their positions and values, to the CSAC. With a two-way connection, the SSAC may first read the current values/positions of objects within a CSAC before sending data. This may, at times, prove to be a more efficient process than a one-way connection, depending on how much data is being transferred and altered. For example, if only one object is being altered, it is more efficient to use a one-way connection, which sees the SSAC pass object data to the CSAC, where it is handled. The issue with a one-way connection, however, is that it is done blind, so the SSAC cannot see whether an object actually needs to be altered. If there are many objects that the SSAC wishes to alter, reading the current positions/values of objects in the CSAC first means the SSAC can remove data that it determines does not need to be altered before sending, reducing the workload of the CSAC. This is preferable because the CSAC response time is important in the overall decision-making process; the CSAC should not be burdened with tasks not relevant to the immediate interaction about which a decision is being made, while the subconscious part of the system can perform functions in its own time. In some embodiments, a two-way connection may be used with the inclusion of a conditional statement that chooses a method based on the number of object data entries that the SSAC wishes to alter. For example:
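A sketch of that conditional choice, assuming dictionary stores for both SACs and an arbitrary count threshold, might be:

    # Sketch: choose one-way or two-way transfer by update count.
    # The threshold and store layout are assumed for illustration.
    THRESHOLD = 5

    def transfer(ssac_updates, csac_store):
        if len(ssac_updates) <= THRESHOLD:
            # One-way: send blind; the CSAC handles everything.
            csac_store.update(ssac_updates)
        else:
            # Two-way: read the CSAC first and drop updates that
            # would change nothing, reducing the CSAC's workload.
            reduced = {obj: val for obj, val in ssac_updates.items()
                       if csac_store.get(obj) != val}
            csac_store.update(reduced)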
When data reaches the CSAC, it needs to be read so that the stored object data can be updated accordingly. There are different ways it can be handled, including but not limited to:
In some embodiments, processes required for subconscious functions continuously run as background processes. Because subconscious functions need to run without being manually executed by the AI and, for the most part at least, run at all times, the processes they use must always be ready and available. Processes solely for some (or all) conscious functions, however, can but need not run at all times, as these functions are called when needed. Having them run before they are needed reduces reaction time, which is always a bonus from a technical perspective, though not always from a behavioural one.
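One way to sketch this split, assuming operating-system threads stand in for the AI's processes and all function names are hypothetical, is to keep subconscious functions on an always-on background thread while conscious functions run only when called:

    # Sketch: subconscious work runs continuously in the background;
    # conscious functions execute on demand. Names are assumed.
    import threading
    import time

    def monitor_environment():
        pass                        # stand-in for subconscious observation

    def subconscious_loop():
        while True:                 # always ready and available
            monitor_environment()
            time.sleep(0.1)

    def conscious_function(task):
        return "handled " + task    # executed only when needed

    threading.Thread(target=subconscious_loop, daemon=True).start()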
In some embodiments, the AI is able to have multiple thoughts at once—that is, process multiple streams of data along a single type of thought path. This can be achieved in multiple ways, including but not limited to:
by, for example:
In some embodiments, an AI is able to establish a “train of thought”. By observing its own communication, a circuit is created using data paths within the AI itself that allows ideas—formed from a collection of objects—to be processed again. Repeatedly processing these ideas allows the AI to continuously develop them by, for example, taking a formed idea, evaluating the collection of objects, comparing them with previous memories—including previous ideas from the current train of thought—and making a decision about the newly formed/modified/refined idea. Observation of one's own communications may be done internally as well as externally—that is to say, the AI may observe both expressed and unexpressed ideas. Internalized (unexpressed) ideas only need to be written for the AI to observe them. This can be in raw code, as a database entry, as a file etc. In some embodiments, trains of thought can be created/continued by observing the ideas of other entities.
In some embodiments, as the train of thought progresses, the AI records memories of formed ideas. By doing so, it is able to ensure the ideas formed continue to progress in value. A basic example of how it works is:
After idea number 1 has been valued, it can begin to be reprocessed. With each successive cycle, one or more additional objects are added and the idea is valued based on the objects it contains—a mechanic explained in patent GB1517146.5—and it is declared progressive or not, based upon the current value compared to the previous. When an idea is deemed progressive, the AI may continue to attempt to further an idea. When an idea is deemed not progressive, the AI may remove the object that caused the reduction and try a different one. This may continue until progression is made.
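A sketch of this cycle, assuming a hypothetical value_of() function in place of the object-based valuation mechanic of GB1517146.5 and arbitrary object scores, might be:

    # Sketch of a progressive train of thought. Scores are assumed.
    SCORES = {"rose": 30, "gift": 25, "thorn": -10}

    def value_of(objects):
        return sum(SCORES.get(o, 0) for o in objects)

    def develop(idea, candidates):
        current_value = value_of(idea)
        for obj in candidates:
            trial = idea + [obj]            # add one object per cycle
            if value_of(trial) > current_value:
                idea, current_value = trial, value_of(trial)   # progressive: keep
            # otherwise the added object is removed and a different one tried
        return idea, current_value

    # e.g. develop(["rose"], ["thorn", "gift"]) gives (["rose", "gift"], 55)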
The point at which the AI chooses to stop the train of thought and use the latest progressive idea can occur at different times and be based on different rules, such as but not limited to:
The value of ‘X’ can be manually set by a human or AI, made a random number or automatically determined based on an algorithm used to find the number of ideas required for adequacy when determining efficiency, convenience, probability etc.
In some embodiments, an AI may record an idea in its memory for use beyond the immediate train of thought—primarily for future reference in comparisons and decision-making. The information within the memory will need to contain, at the very least, the objects of the idea. The information may also contain the value of the idea. The memories can then be used by the AI at a later point in time by searching through the list for the same or similar ideas to one it is having in the moment and comparing values (or any other properties it may have stored) to (help) determine whether or not it is an idea that may be worth pursuing. For example, if the idea previously wasn't worth pursuing with a value of 40 but, since then, the AI has changed how it values the objects of the idea, giving the idea a new value of 85, it would deem it worth pursuing if the threshold for pursuit was, say, 60.
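A sketch of such an idea memory, reusing the hypothetical value_of() from the earlier sketch and the pursuit threshold of 60 from the example above (record layout assumed):

    # Sketch: record ideas, then revalue them later against a threshold.
    PURSUIT_THRESHOLD = 60
    idea_memory = []

    def remember(objects, value):
        idea_memory.append({"objects": frozenset(objects), "value": value})

    def worth_pursuing(objects):
        for record in idea_memory:
            if record["objects"] == frozenset(objects):
                return value_of(objects) >= PURSUIT_THRESHOLD   # revalued now
        return None                                             # no memory found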
In some embodiments, an AI may “forget what it was thinking” or “lose its train of thought” altogether. This is not necessarily a ‘feature’ that can be coded but rather something that is declared in response to an event that causes function and/or data deficiency, such as:
In some embodiments, the AI is able to regain its train of thought. To do so, the AI would have needed to store current ideas of a train. When functionality has been restored, the AI simply refers back to the memory of ideas of the thought it wishes to regain and continues processing.
In some embodiments, an AI is able to have intuitive abilities. Two types of intuitive abilities are possible:
Data gathered using intuitive abilities is restricted to subconscious data paths, because the AI is not allowed to consciously decide what the detected information means. In embodiments that primarily use a single data path, the data must, when possible, travel along the path that avoids conscious decision-making logic.
Though it is accepted that intuition cannot use reason or logic, an exception can be found when implementing the ability in AI: one cannot be said to use reason or logic unless it is a conscious decision to do so. As intuited data travels via a subconscious data path, the AI cannot make the decision to use any type of reason or logic; the process is automatic and out of the AI's control. It therefore cannot be declared that the AI is using logic or reason to arrive at a conclusion based on intuited data, whether right or wrong, if the AI has not chosen to do so and is not actually aware of it.
In some embodiments, the PARS, in combination with memory data, objects and/or a method of observation, can be used to set specific intuited responses. These can be manually implemented by a human or AI, or automatically implemented by observing different responses to intuited events over time, recording outcomes and determining the most desired outcome based on the event that follows, efficiency, convenience, performance etc. The response that corresponds to the most desired outcome is then selected and implemented. When dealing with this automated aspect of the intuition function in particular, this is the only point where conscious observation can come into play, as the learning process may begin with the active observation of the outcome event. However, it is also possible to conduct the learning process using subconscious observation. The mechanics used can be similar to those described on pages 25-26 of patent GB1517146.5, where the AI tests for desired results based on actions and outcomes.
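A sketch of the automated selection, assuming outcomes are scored over time by a hypothetical desirability measure (event names, responses and scores are all assumed examples):

    # Sketch: learn the most desired response to an intuited event.
    from collections import defaultdict

    outcome_log = defaultdict(list)     # (event, response) -> scores

    def record(event, response, desirability):
        outcome_log[(event, response)].append(desirability)

    def select_response(event, candidates):
        def average(response):
            scores = outcome_log.get((event, response), [0])
            return sum(scores) / len(scores)
        return max(candidates, key=average)   # most desired outcome wins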
In some embodiments, an AI is able to have instinctive abilities and feelings. To do so requires:
In the first instance, pre-programmed instinctive abilities and feelings are easy to implement:
The second instance must combine the workings of the DTCS with the workings of the genome for the automatic implementation of instinctive abilities:
As implied, the effects of instinctive abilities and feelings activate automatically. This happens in (at least) two stages:
In some embodiments and/or in some situations, only one of stages 2 and 3 occurs.
Over time, instinctive feelings and reactions may change based on experiences. As object values and positions change and memories are created based on specific events, the value of events will change and, in turn, so will the data the PARS determines.
In some embodiments, instinctive reactions may be superseded/suppressed by the current and/or resulting state of the AI.
One way to ensure the correct instinctive action is taken in an event is to use a mechanic similar to the priority mechanic described in patent GB1517146.5. Using such a mechanic, the instinctive reaction based on the current event is prioritised higher than the reaction based on the AI's state and decisions.
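A sketch of that prioritisation, with arbitrary priority numbers and actions standing in for the GB1517146.5 priority mechanic:

    # Sketch: the event-based instinctive reaction outranks the
    # state-based reaction. Numbers and actions are assumed.
    reactions = [
        {"basis": "current event", "action": "take_cover", "priority": 10},
        {"basis": "AI state", "action": "continue_walking", "priority": 5},
    ]

    chosen = max(reactions, key=lambda r: r["priority"])["action"]  # "take_cover"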
In some embodiments, an AI is able to internally process streams of data used for the basis of mental imagery. To achieve this, a component needs to be connected to a memory unit in which visual object information is stored. This is shown in
For the AI to create mental imagery, an object database with specific reference IDs is required. A basic example of how this may look is as follows:
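One assumed illustration of such a database (reference IDs and object names are examples only):

    Reference ID | Object
    0001         | human
    0002         | rose
    0003         | sky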
This can be extended to work in conjunction with an integrated object relationship system, looking something like the following, for example:
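One assumed illustration, pairing reference IDs with a relationship description:

    Object A | Object B | Relationship
    0001     | 0002     | human holds rose
    0002     | 0003     | rose is beneath sky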
This system can be extended even further to include properties. Property field values define how the AI is able to imagine an object. For example:
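One assumed illustration, extending the table with a ‘Colour’ property column whose fields are shown in three styles (a single set value, a list of set values and an unrestricted ‘any’):

    Reference ID | Object | Colour
    0001         | human  | any
    0002         | rose   | red, white, yellow
    0003         | sky    | blue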
The above table includes a property column called ‘Colour’. Within this column are three different types of fields:
In some embodiments, only one field type is available. In some embodiments, one or more field types different from the examples listed are available. In some embodiments, the AI can be given the values for any properties by another entity. In some embodiments, the AI can add properties it can determine through one or more methods of observation. The values of fields, or the field types themselves, do not have to reflect reality, though they may; they are simply used to give the AI as much or as little creative freedom as one wishes.
Now, for the AI to create mental imagery—whether as code or as displayed images—two things are absolutely required:
Using the above, the AI is able to create rudimentary mental images in code alone, but other features can be implemented, such as classes, instances, IDs etc., to improve functionality and enhance the capabilities of the AI.
As an example of how this system can be developed to better the results, the following shows how this can be made possible:
The downside to a method such as this one is that, for any type of precision that is not random or luck, the AI needs to have numerous specific details about every object it contains within its memory—primarily the exact dimensions and positions of the one or more individual parts of an object that it deems important. Using the example set in the position description, the AI may need to know the height, width, possibly depth and position of the hand of human1 if it is to precisely place the rose object within it.
Though the above example is shown as a step-by-step method, the actions do not need to be performed in that order to accomplish the task. The code and coding methods/styles are also only examples and can be created and used in any way seen fit. Likewise, how the AI acquires values and pre-written code is irrelevant; they can be observed, constructed by the AI, written and implemented by a human, or obtained by whatever other means possible. What is important and relevant is that the AI is able to compose the code necessary to create the mental imagery from objects it knows of and/or creates. An easy way to do this is to keep code simple, as shown in the examples above, using “property/value” pairs, much like in CSS. The AI simply creates the parent instance and any child instances, depending on how many instances are required individually and in groups, and then selects values from those available for any properties it wishes to implement. As previously stated, the positioning of objects depends on whether the imagery is to be random or coherent, to any degree for either option.
With a few more lines of code for other objects, it is possible for the AI to create the complete set of code for a mental image—a somewhat coherent one, at least, as implied by the position description in the above example.
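A sketch of such composed code, with the property/value pairs represented here as Python dictionaries and all object names, properties and coordinates assumed for illustration:

    # Sketch: a mental image composed from known objects using
    # simple property/value pairs, much as described above.
    mental_image = {
        "human1": {"object": "human", "colour": "any",
                   "position": (120, 40)},
        "rose1":  {"object": "rose", "colour": "red",
                   "position": "human1.hand"},   # precision here needs the
                                                 # hand's stored dimensions
        "sky1":   {"object": "sky", "colour": "blue",
                   "position": "background"},
    }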
While the code may be written by the AI to control the display of objects, the objects still need a way of being displayed for visual communication and, if desired, confirmation of precision. For this, a visual canvas is required that actually displays the objects in image form. Along with the canvas must be the system to actually translate the code.
The translation and canvas system, referred to as the Mental Imagery Display System or “MIDS”, can be located within the AI, for example, as part of the vision centre or as a communication component, or within external devices. To then display mental imagery, the AI needs to connect to a visual medium and:
Mental imagery can be both a conscious and subconscious process with both conscious and subconscious results. The following is an example of how the process can work, based on the configuration of
Part 1—Data to the vision centre:
To start the process, data needs to reach the vision centre. As usual, the data path travelled depends on how the data was observed. The data may enter the vision centre at two types of points:
Part 2—Creation:
The creation process depends on the mechanic used, five of which are described:
One or more of the above mechanics may be used in an AI. In embodiments that use more than one mechanic, which mechanic is to be used can either be random or conditional. Both can also be possible, where a condition, if met, enables a random choice.
For mechanics that involve the state of the AI, post-reaction data can carry with it data relating to the state of the AI after passing through or interacting with the SAC/SCS component—something that pre-reaction data cannot do. It is possible, however, to have the state of the AI affect the creation process with pre-reaction data. To achieve this, the state of the AI, at any given time, must be stored somewhere that makes the information available to the vision centre prior to or along with the pre-reaction data. Storing the state within the vision centre and updating it whenever there is a state change is an efficient and reliable way of achieving this.
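A sketch of keeping the state locally available to the vision centre, updated on every state change (class and method names are assumed):

    # Sketch: the vision centre caches the AI's state so creation
    # from pre-reaction data can still be influenced by it.
    class VisionCentre:
        def __init__(self):
            self.current_state = {"emotion": "neutral"}

        def on_state_change(self, new_state):
            self.current_state = new_state      # pushed on change, not polled

        def create(self, pre_reaction_data):
            return {"data": pre_reaction_data,
                    "state": self.current_state}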
During creation, data travels between the vision centre and memory units to allow the vision centre to pull new objects based on the objects already in use. It is possible for a mass of data about multiple objects to be sent from a memory unit to the vision centre in one go, but this may prove to be a less efficient method, depending on both the hardware and software capabilities of the AI, as the flood of data may cause the vision centre, or the AI as a whole, to encounter performance reductions. Additional conditional mechanics are also needed in such a case to tell the AI when to choose from the mass of data and when to request new data from a memory unit, unless it is specified that the AI is to exhaust X amount of data from the mass before requesting new data. Overall, using mass data can reduce the creative freedom of the AI and limit what it is capable of creating in comparison to what can be achieved using single or manageable chunks of data at a time.
In some embodiments, if mental imagery is meant to be coherent, the way objects are selected may also depend upon the desired nature of the imagery. Using the object relationship system, the AI can select relationships that are in line with the overall nature (value) it is primarily using as a basis. For example, if the AI is using ‘anger’ as a basis, aside from using objects that have ‘angry’ as a value, the AI may also use objects that have angry relationships between each other, determined by examining the objects contained within said relationship's description. Object X and Object Y may both have values of ‘indifferent’ individually, but if the relationship between the two objects is ‘X murdered Y’ and the AI values ‘murder’ as ‘angry’, it can determine that, together, these two objects have an ‘angry’ value.
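A sketch of deriving a joint value from a relationship description, using the murder example above and assumed object values:

    # Sketch: two individually 'indifferent' objects gain an 'angry'
    # joint value through their relationship. Values are assumed.
    object_values = {"X": "indifferent", "Y": "indifferent",
                     "murder": "angry"}

    def relationship_value(description):
        # Examine the objects contained within the description.
        for word in description.lower().split():
            for obj, value in object_values.items():
                if obj.lower() in word and value == "angry":
                    return value
        return "indifferent"

    # relationship_value("X murdered Y") returns "angry"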
The overall nature of a mental image can be calculated based on the objects and relationships between objects used in the image. Again, example mechanics for calculating a value based on numerous objects can be found in patent GB1517146.5, but other mechanics may also be used, such as the mode or mean. It is possible to declare multiple natures simply by using more than just the most prominent or dominant values.
Part 3—Data from the vision centre:
Multiple paths may be taken when data is sent from the vision centre, for multiple reasons, including but not limited to:
In some embodiments, the vision centre may be set to activate and start processing data without being triggered by the incoming of data that is currently being processed but by automatic activation—either randomly or conditionally. To do so, the vision centre needs to request/pull data from a memory unit which acts as the first building block for the mental image. The creation process can then continue as described above.
Mental imagery techniques not only allow the AI to create mental images as part of a conscious thought process but, as subconscious processes, allow the AI to experience ‘dreams’—mental images the AI subconsciously creates that cannot be controlled. Though an AI cannot physiologically sleep, a similar effect can be achieved by shutting down conscious thought paths and processes while allowing subconscious functionality to remain active, with the vision centre then activating automatically. This process can be induced by having another entity manually activate the vision centre while conscious thought paths are shut down. Conscious thought paths may also be shut down either automatically or manually.
In some embodiments, data may bypass or pass through components, without interaction or effect, as it circulates the system. In some embodiments, some of the components shown or described are combined to create multifunctional components that can handle multiple types of tasks.
In some embodiments, data at a junction may be copied in order to allow the same data to travel multiple paths simultaneously, rather than circulating data back around to then travel a different path.
It is important to understand that what makes this invention a “dual-type control system” is not the different types of parts on a perception range or the number of data paths that exist, but the ability to perform actions that an AI both does and does not mean, intend or decide to do, with and without, respectively, the ability to use any decision-making logic controlled by the AI that directly affects which action is performed. Though the mechanics that cause a result in each type can be the same or similar, the differences are significant with just as significant implications:
An example of a situation to show the workings of each type is:
An AI observes gunfire while walking with a human.
Conscious and subconscious decision making need not have different results, as they do above; whether they do depends on the relationships and/or priorities and/or values of an AI, and/or the mechanics implemented for conscious and subconscious activity, and/or how data was observed, and/or how the AI questions an event. One result has the ability to change the entire outcome.
Data paths simply describe the type of data (conscious or subconscious, which can be established by data tagging or other methods) travelling between components.
In some embodiments, a system such as the one described can be implemented using a more hardware-focused approach. Two examples of this are:
Though it may not be shown in the included drawings, it will be apparent that components which require the use of memory connect to a memory unit or have memory implemented within them.
The foregoing description, for purpose of explanation, has been described with reference to specific embodiments. However, the illustrative discussions above are not intended to be exhaustive or to limit the invention to the precise forms disclosed. Many modifications and variations are possible in view of the above teachings, including new components and additional pathways between new and/or existing components. The embodiments were chosen and described in order to best explain the principles of the invention and its practical applications, to thereby enable others skilled in the art to best utilize the invention and various embodiments with various modifications as are suited to the particular use contemplated.

Claims
Number | Date | Country
62475819 | Mar 2017 | US