Various embodiments described herein relate to mobile robot devices, and more specifically to an intelligent autonomous mobile robot device.
Vacuum robots have relieved people from tedious floor cleaning chores. Advanced robot vacuum cleaners use cameras to determine positioning and/or to detect objects that are in or adjacent to the path of the robot vacuum cleaner. Robot vacuum cleaners may create a map of a room or house in order to clean efficiently. These maps tend to capture a layout of the room and/or fixed objects such as furniture. However, a map generated by current robot vacuum cleaners may not include objects that are accidentally dropped or temporarily placed on the floor by occupants of the house. Therefore, innovative solutions are needed to address the challenges of operating a robot vacuum cleaner in a house amid the normal daily behavior of its occupants.
Various embodiments of the present invention are directed to a method for operating a mobile robot. The method for operating a mobile robot includes determining that an object is in or adjacent to a first path of the mobile robot, searching a database for the object in or adjacent to the first path of the mobile robot, selecting an identified object in the database corresponding to the object, determining whether the identified object is valued, responsive to selecting the identified object in the database, and storing information associated with the object in the database, responsive to determining that the identified object is valued.
According to some embodiments, the method may include determining a second path for the mobile robot that is different from the first path, responsive to determining that the identified object is valued. The method may include removing the object, responsive to determining that the identified object is not valued. A location of the object and/or a timestamp of when the object was detected to be of value or removed for lack of value may be stored in the database. Selecting the identified object in the database corresponding to the object may include classifying one or more parameters associated with the object to generate a clustering score, and selecting the identified object in the database based on the clustering score. The one or more parameters may include a detection time, object location, object similarity, or object profile. Selecting the identified object in the database based on the clustering score may include comparing the clustering score to threshold values associated with respective candidate objects in the database, and selecting the identified object among the respective candidate objects based on the comparing the clustering score to the threshold values.
According to some embodiments, selecting the identified object in the database corresponding to the object may include classifying one or more parameters associated with the object to generate a clustering score, selecting a plurality of candidate objects in the database based on the clustering score, transmitting information related to the plurality of candidate objects to a user device, and receiving, from the user device, a selection of the identified object out of the plurality of candidate objects.
According to some embodiments, the method may include capturing first sensor information of an area prior to cleaning the area, capturing second sensor information of the area after cleaning the area, comparing the first sensor information and the second sensor information to determine if a change occurred, and identifying the object based on a difference between the first sensor information and the second sensor information, responsive to determining that the change occurred. The method may include capturing a first image of an area prior to cleaning the area, capturing a second image of the area after cleaning the area, comparing the first image and the second image to determine if a change in the area occurred, and identifying the object in the second image, responsive to determining that the change in the area occurred.
According to some embodiments, an alert may be generated, responsive to determining that the identified object is valued. The alert may be transmitted to a user device that is in communication with the mobile robot. An action associated with the identified object in the database may be identified, and the action that was identified may be performed on the object.
According to some embodiments, responsive to not finding the identified object in the database, the method may include determining that an event of the mobile robot includes the mobile robot not being hindered, and determining an action for the mobile robot, responsive to the event of the mobile robot not being hindered.
According to some embodiments, it may be determined that an event of the mobile robot includes the mobile robot being hindered. An action for the mobile robot may be determined, responsive to the event of the mobile robot being hindered. A second path may be determined for the mobile robot that is different from the first path. The second path for the mobile robot may not include a location where the mobile robot was hindered. In some embodiments, the second path for the mobile robot may include the location where the mobile robot is hindered by the object, such that a second direction of the second path to the location is different from a first direction of the first path to the location. The database may be searched for a previous occurrence of the event at a location of the object. Corrective action information corresponding to the first path and the location that hindered the mobile robot may be stored in the database, responsive to not finding the previous occurrence of the event at the location in the database.
Various embodiments of the present invention are directed to a mobile robot device. The mobile robot device includes a transceiver, one or more processors coupled to the transceiver, and a memory coupled to the one or more processors, the memory including a non-transitory computer-readable storage medium storing computer-readable program code therein that is executable by the one or more processors to perform various operations. The various operations include determining that an object is in a first path of the mobile robot device, searching a database for the object in or adjacent to the first path of the mobile robot device, selecting an identified object in the database corresponding to the object, determining whether the identified object is valued, responsive to selecting the identified object in the database, and storing information associated with the object in the database, responsive to determining that the identified object is valued. Selecting the identified object in the database corresponding to the object may include classifying one or more parameters associated with the object to generate a clustering score, comparing the clustering score to threshold values associated with respective candidate objects in the database, and selecting the identified object among the candidate objects based on the comparing the clustering score to the threshold values. In some embodiments, selecting the identified object in the database corresponding to the object may include classifying one or more parameters associated with the object to generate a clustering score, and selecting a plurality of candidate objects in the database based on the clustering score. The transceiver may be configured to transmit information related to the plurality of candidate objects to a user device. The transceiver may be configured to receive, from the user device, a selection of the identified object out of the plurality of candidate objects.
It is noted that aspects of the inventive concepts described with respect to one embodiment may be incorporated in a different embodiment although not specifically described relative thereto. That is, all embodiments and/or features of any embodiment can be combined in any way and/or combination. Other operations according to any of the embodiments described herein may also be performed. These and other aspects of the inventive concepts are described in detail in the specification set forth below.
The accompanying drawings are included to provide a further understanding of the present disclosure and are incorporated in and constitute a part of this application. These drawings illustrate certain example embodiments. In the drawings:
Various embodiments will be described more fully hereinafter with reference to the accompanying drawings. Other embodiments may take many different forms and should not be construed as limited to the embodiments set forth herein. Like numbers refer to like elements throughout.
Vacuuming and cleaning of floors in a house or an office building is often a tedious and time-consuming task. Robot vacuum cleaners have relieved people of at least a portion of this tedious cleaning chore. Some robot vacuum cleaners may use cameras to determine their location and/or detect walls or boundaries of the path of the vacuum cleaner. However, these systems may distinguish poorly between objects on the floor, with low accuracy and high rates of false or inaccurate identification. These robot vacuum cleaners may lift most objects in the path from the floor without discrimination, including accidentally dropped items such as jewelry or money, or articles of clothing that are placed on the floor. Small objects such as jewelry may be mixed with the dust in the vacuum's dust reservoir or dust bin and, as a result, may be difficult to spot by the user when the reservoir is emptied. Therefore, there is a need for robot vacuum cleaners to detect objects during cleaning prior to suctioning, avoid the objects, alert a user of a possible valuable object on the floor, and/or catalog objects that are detected prior to suctioning and/or after suctioning.
Additionally, robot vacuum cleaners may repeatedly collide with the same object that is not detected by the sensors of the robot vacuum cleaner. The robot vacuum cleaner may also repeatedly get stuck in the same area or on the same obstruction during different vacuuming cycles. Therefore, there is a need for robot vacuum cleaners to detect objects and events during operation, and to store location and temporal information related to the obstructions. This location and temporal information may be used by the robot vacuum cleaner to perform actions to avoid repeatedly getting stuck at the same location and/or to determine different paths or patterns of operation that avoid the obstruction or handle the obstruction in a different manner. For example, a robot vacuum cleaner may have gotten stuck on a small welcome rug during previous vacuuming sessions. During a subsequent vacuuming session, the robot vacuum cleaner may approach the welcome rug from a different approach angle to see if it can successfully navigate and clean the welcome rug from a different path direction. During a subsequent vacuuming session, the robot vacuum cleaner may perform operations such as increasing power to the wheels, since the texture of the welcome rug may be different from a smoother floor area surrounding the welcome rug. These various actions taken during different vacuuming sessions may be stored and used to determine future actions upon detecting the welcome rug.
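By way of illustration only, the following minimal Python sketch records such stuck events with their location, approach direction, and timestamp, and proposes a different approach angle and higher wheel power for the next session. The class and function names (StuckEvent, choose_retry_action) and the specific heading and power adjustments are assumptions for this sketch, not part of any described embodiment.

```python
# Illustrative sketch only: a minimal record of "stuck" events and a retry policy.
# StuckEvent and choose_retry_action are hypothetical names used for illustration.
from dataclasses import dataclass
from datetime import datetime
from typing import List

@dataclass
class StuckEvent:
    x: float                # map coordinates where the robot was hindered
    y: float
    heading_deg: float      # approach direction that led to the event
    timestamp: datetime
    wheel_power: float      # normalized drive power in use at the time

def choose_retry_action(history: List[StuckEvent]) -> dict:
    """Pick a different approach angle and higher wheel power than past failures."""
    if not history:
        return {"heading_deg": 0.0, "wheel_power": 0.5}
    tried_headings = {round(e.heading_deg) for e in history}
    # Try a heading roughly opposite to the most recent failed approach.
    new_heading = (history[-1].heading_deg + 180.0) % 360.0
    if round(new_heading) in tried_headings:
        new_heading = (new_heading + 45.0) % 360.0
    # Increase power modestly (e.g., for a textured rug), capped at full power.
    new_power = min(1.0, max(e.wheel_power for e in history) + 0.1)
    return {"heading_deg": new_heading, "wheel_power": new_power}

history = [StuckEvent(x=2.1, y=0.4, heading_deg=0.0, timestamp=datetime.now(), wheel_power=0.5)]
print(choose_retry_action(history))  # suggests a different heading and higher power
```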
Machine learning techniques may be applied to determine the action that is taken by the mobile vacuum cleaner. A deep neural network (DNN) that uses layers of nodes to develop a machine learning model based on input information may be used to determine the action that should be taken by the mobile vacuum cleaner. For example, the first several iterations of operating the mobile vacuum cleaner in a room may allow for training and/or development of the machine learning model. The machine learning model may find patterns in the training data corresponding to the target object, such as the welcome rug. For example, attributes such as different paths taken in traveling to the welcome rug, various speeds and power levels used by the mobile vacuum cleaner when cleaning the welcome rug, and/or other parameters may be collectively classified based on success in cleaning the welcome rug. The machine learning model that was developed based on previous operations of the mobile vacuum cleaner may be used during subsequent operation of the mobile vacuum cleaner to predict a subsequent action that can be taken in the given situation. Therefore, future iterations of operating the mobile vacuum cleaner may not get stuck on the welcome rug and may be able to successfully clean the welcome rug. Similar operations may be used to learn the locations of objects that may not be easily detectable. For example, if the mobile vacuum cleaner does not detect an object such as a furniture object that has a large portion at a higher level than the mobile vacuum cleaner, the mobile vacuum cleaner may become stuck on the legs of the furniture object. The machine learning (ML) module may learn that the mobile vacuum cleaner often gets stuck in a specific location in a room. The machine learning module may thus predict an action that avoids the particular area where the furniture object is located.
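As a hedged illustration of this kind of learning loop, the sketch below uses scikit-learn's MLPClassifier as a stand-in for the DNN described above; the feature set (approach angle, speed, wheel power) and the training examples are assumed values, not data from any actual device.

```python
# Minimal sketch of the learning loop described above, using an off-the-shelf
# multi-layer perceptron as a stand-in for the DNN; all values are illustrative.
import numpy as np
from sklearn.neural_network import MLPClassifier

# Each row: [approach_angle_deg, speed, wheel_power]; label 1 = cleaned the rug
# successfully, 0 = got stuck.  Hypothetical training examples from prior sessions.
X = np.array([
    [0.0,   0.3, 0.5],
    [90.0,  0.3, 0.5],
    [180.0, 0.2, 0.7],
    [180.0, 0.3, 0.8],
])
y = np.array([0, 0, 1, 1])

model = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0)
model.fit(X, y)

# Predict whether a proposed action is likely to succeed before attempting it.
candidate_action = np.array([[180.0, 0.25, 0.75]])
p_success = model.predict_proba(candidate_action)[0, 1]
print(f"Predicted probability of cleaning the rug successfully: {p_success:.2f}")
```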
The mobile vacuum cleaner may encounter various types of objects in a room. The mobile vacuum cleaner may classify one or more parameters associated with an object to generate a clustering score. An object in the database may be identified based on the clustering score. Parameters associated with the object may include a detection time, object location, object similarity, or object profile. The identified object may be selected from the database based on the clustering score by comparing the clustering score to threshold values associated with respective candidate objects in the database. The identified object may be selected from among the candidate objects based on the comparison of the clustering score to the threshold values.
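The following sketch illustrates one possible way to combine these parameters into a clustering score and compare it against per-candidate thresholds. The weighting of the parameters, the threshold values, and the rule of selecting the eligible candidate with the highest threshold are illustrative assumptions rather than a prescribed algorithm.

```python
# Illustrative sketch of the clustering-score comparison described above.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class CandidateObject:
    name: str
    threshold: float   # minimum clustering score required to select this candidate

def clustering_score(detection_hour: float, distance_m: float,
                     similarity: float, profile_match: float) -> float:
    """Combine parameters (detection time, location, similarity, profile) into one score."""
    time_term = 1.0 - min(abs(detection_hour - 12.0) / 12.0, 1.0)  # closeness to a typical hour
    location_term = 1.0 / (1.0 + distance_m)                        # nearer prior sightings score higher
    return 0.1 * time_term + 0.2 * location_term + 0.4 * similarity + 0.3 * profile_match

def select_identified_object(score: float,
                             candidates: List[CandidateObject]) -> Optional[str]:
    """Among candidates whose threshold the score meets, pick the most specific (highest threshold)."""
    eligible = [c for c in candidates if score >= c.threshold]
    return max(eligible, key=lambda c: c.threshold).name if eligible else None

candidates = [CandidateObject("gold ring", 0.80), CandidateObject("paper clip", 0.55)]
score = clustering_score(detection_hour=14.0, distance_m=0.4, similarity=0.9, profile_match=0.85)
print(select_identified_object(score, candidates))  # -> gold ring
```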
Once the object is identified, actions that should be taken may be predicted by machine learning or the DNN when small objects such as paper clips, staples, jewelry, or coins are detected. The DNN may develop a machine learning model over various iterations of the mobile vacuum cleaner operating in an operating environment such as a house. The mobile vacuum cleaner may detect items such as paper clips and staples and classify these items as non-valuable items that can be vacuumed, based on a user indication. However, when items such as jewelry or coins are detected, the user may have previously provided input to not vacuum these items and to avoid the location where these valuable items are located. The machine learning or DNN may be trained based on previous detection of objects and actions taken with respect to these objects. Thus, various objects may be classified by an action directed by a user, or by a subsequent access to information related to having vacuumed a particular object. For example, a paperclip may have been vacuumed during a vacuuming operation. However, subsequent to the vacuuming operation, the user may not have attempted to access information related to vacuuming the paperclip. Thus, the action of vacuuming a paperclip may lead to the paperclip being classified as a non-valuable object. In future vacuuming operations, if the mobile vacuum cleaner detects a paperclip, the machine learning model may predict that the paperclip is a non-valuable object, and thus the action of the vacuum cleaner would be to proceed with vacuuming the paperclip. In contrast to a paperclip, a jewelry item such as a gold ring may have been vacuumed. However, the user may have subsequently searched the database to identify if a gold ring had been suctioned into the vacuum reservoir. The DNN may learn from these operations that the gold ring is a valuable object. In future vacuuming operations, if the mobile vacuum cleaner detects a gold ring, the machine learning model may predict that the gold ring is a valuable object, and thus the action of the vacuum cleaner would be to not vacuum the gold ring. As the DNN obtains more information about the various objects, the mobile vacuum cleaner can automatically make intelligent decisions without obtaining additional user input. The mobile vacuum cleaner may dynamically update a route map to optimize route planning. Furthermore, the mobile vacuum cleaner may be able to enter the location of the gold ring in a database and adjust maps of the room where the gold ring is located. As such, if the gold ring is not picked up off the floor for several weeks, the vacuum cleaner may continue to avoid vacuuming the gold ring.
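A minimal sketch of this feedback loop is shown below, assuming a simple in-memory record per object; the field names and the rule that a post-vacuum database search marks an object as valued are illustrative assumptions.

```python
# Sketch of the feedback loop described above: objects are (re)labeled as valued or
# not based on explicit user overrides and on whether the user later searched for them.
# The dictionary-based "database" and its field names are purely illustrative.
object_db = {
    "paper clip": {"valued": None, "user_searched_after_vacuum": False, "user_override": None},
    "gold ring":  {"valued": None, "user_searched_after_vacuum": True,  "user_override": None},
}

def update_value_labels(db: dict) -> None:
    for record in db.values():
        if record["user_override"] is not None:
            record["valued"] = record["user_override"]          # explicit user input wins
        else:
            # If the user searched for the object after it was vacuumed, treat it as valued.
            record["valued"] = record["user_searched_after_vacuum"]

update_value_labels(object_db)
for name, record in object_db.items():
    action = "avoid and alert" if record["valued"] else "vacuum"
    print(f"{name}: {action}")
```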
As used herein, the term “mobile robot” may refer to any intelligent device with cleaning capability that is mobile such that it may move to different locations in a house or building either by self-power or by a user moving the mobile robot. Thus, mobile robots may include mobile devices with cleaning, identification, and/or classification capabilities such as robot vacuum cleaners, hand-held vacuum cleaners, self-powered vacuum devices, and/or other mobile devices used for object detection and/or cleaning. In some embodiments, a mobile robot may move to different locations in a house or building and identify objects.
Still referring to
Tracking of any movement of the object, or of prior locations of the object, may be performed by an object tracking circuit 244. The object detection circuit 242 and/or the object tracking circuit 244 may detect and track objects by using RGB images and depth information captured by forward-looking RGB, RGB-D, or stereo cameras and/or LiDAR. Objects may also be detected based on a trigger from the collision sensor and/or issues with the movement of the wheels when the mobile robot device 100 is stuck. For example, the wheel sensor may detect that the wheels are spinning while the location of the mobile robot device 100 is not changing because the mobile robot device 100 is not actually moving. The detected objects or obstacles may be further analyzed based on size, shape, and/or height to determine which small objects on the floor may potentially be vacuumed by the mobile robot device 100.
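One way such a stuck condition could be detected is sketched below, assuming wheel-encoder rotations and pose estimates are available; the slip-ratio threshold and the function signature are illustrative assumptions rather than a prescribed implementation.

```python
# Illustrative check for the "stuck" condition described above: the wheel encoder
# reports rotation while the estimated pose barely changes.  Thresholds are assumed.
import math

def is_stuck(wheel_rotations: float, wheel_radius_m: float,
             pose_before: tuple, pose_after: tuple,
             slip_ratio_threshold: float = 0.2) -> bool:
    """Return True when commanded wheel travel greatly exceeds actual displacement."""
    commanded_travel = 2.0 * math.pi * wheel_radius_m * wheel_rotations
    actual_travel = math.dist(pose_before, pose_after)
    if commanded_travel <= 0.0:
        return False
    return (actual_travel / commanded_travel) < slip_ratio_threshold

# Example: the wheels turned enough to travel ~0.5 m but the robot moved only 2 cm.
print(is_stuck(wheel_rotations=2.5, wheel_radius_m=0.032,
               pose_before=(1.00, 2.00), pose_after=(1.02, 2.00)))  # -> True
```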
A confidence value may be determined that indicates, for example as a value in a range from 0 to 1, the confidence of the identification of the object. If the confidence that a detected object is of value and should not be vacuumed is higher than a predefined threshold, the mobile robot device 100 may re-plan its path and avoid traveling over and/or vacuuming the object. An alert may be sent to the user from an alert generation circuit 234 upon determining that a detected object is of value. When the user receives the alert, the user may have the option to override the decision regarding whether or not to vacuum the object. If the object is determined by the user to be an object that needs to be vacuumed, the robot may come back to clean the area of the object. The example of the object and the corresponding action that was taken may be added into a training database such as the object and image database 220 and/or the robot skill database 260. The robot skill database 260 may store the action that the mobile robot takes, such as types of cleaning, path of travel, etc. Including the detected object and corresponding action may improve the DNN learning regarding what to vacuum and what not to vacuum, such that the machine learning algorithm may provide a more accurate prediction during future operation of the mobile robot device 100. Entries in the object and image database 220 may be categorized such that a user may search for types of objects that the mobile robot device 100 has encountered.
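A minimal sketch of this threshold decision is shown below; the threshold value of 0.7 and the action labels are assumptions for illustration only.

```python
# Minimal sketch of the confidence-threshold decision described above.
VALUE_CONFIDENCE_THRESHOLD = 0.7  # assumed predefined threshold in the range [0, 1]

def decide_action(value_confidence: float) -> str:
    """Map the confidence that the object is valuable to the robot's next action."""
    if value_confidence > VALUE_CONFIDENCE_THRESHOLD:
        # Re-plan the path around the object and alert the user.
        return "avoid_object_and_alert_user"
    # Otherwise proceed with vacuuming and log the object as a vacuumed candidate.
    return "vacuum_and_log_object"

print(decide_action(0.92))  # -> avoid_object_and_alert_user
print(decide_action(0.35))  # -> vacuum_and_log_object
```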
If the confidence that a detected object is of value and should not be vacuumed is lower than the predefined threshold, the robot will continue to vacuum as planned, and the image chip of the object, which is a portion of an image of the object or a portion of an image of an area including the detected object, will be stored in the object and image database 220 as a candidate object vacuumed by the mobile robot device 100 during the specific cleaning process. The time and date of vacuuming of the detected object may be captured as metadata corresponding to the object.
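For illustration, the sketch below stores such an image chip together with its location and a vacuuming timestamp in a small SQLite table standing in for the object and image database 220; the schema and helper function are assumptions, not a defined storage format.

```python
# Illustrative record for a vacuumed object: an image chip plus metadata, stored in a
# SQLite table standing in for the object and image database 220.  Schema is assumed.
import sqlite3
from datetime import datetime

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE vacuumed_objects (
                    id INTEGER PRIMARY KEY,
                    image_chip BLOB,          -- cropped image portion containing the object
                    x REAL, y REAL,           -- map location where the object was detected
                    vacuumed_at TEXT)""")

def log_vacuumed_object(image_chip: bytes, x: float, y: float) -> None:
    conn.execute(
        "INSERT INTO vacuumed_objects (image_chip, x, y, vacuumed_at) VALUES (?, ?, ?, ?)",
        (image_chip, x, y, datetime.now().isoformat()))
    conn.commit()

log_vacuumed_object(b"\x89PNG...", x=3.2, y=1.7)  # placeholder bytes for the image chip
print(conn.execute("SELECT x, y, vacuumed_at FROM vacuumed_objects").fetchall())
```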
In order to ensure that visible objects removed from the floor are captured, particularly objects that have not been recorded as removed by the above-described techniques, the object review and alerting circuit 230 may compare images before vacuuming and after vacuuming to determine the changes to an area resulting from the removal of objects. The before and after images may be captured during normal cleaning operation when the mobile robot device 100 is traveling through the area. For those areas that are only imaged once during cleaning, a separate trip may be planned to capture the after image. Objects removed during cleaning and verified by the change analysis circuit 248 may further be clustered, cataloged, and indexed for easy browsing, searching, and retrieval by the spatial-temporal object indexing circuit 232. The object review and alerting circuit 230 may also include a searching and browsing circuit 236 that allows a user to search by the location and/or time of encounter of objects that the mobile robot device 100 has sensed.
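A simplified sketch of such a before/after comparison is shown below, using a per-pixel difference between aligned images; real systems would also need to register the two images, which is omitted here, and the pixel and area thresholds are assumed values.

```python
# Sketch of the before/after change detection described above, using a simple
# per-pixel difference.  Image alignment/registration is omitted for brevity.
import numpy as np

def area_changed(before: np.ndarray, after: np.ndarray,
                 pixel_threshold: int = 30, changed_fraction: float = 0.01) -> bool:
    """Return True if enough pixels differ between the pre- and post-cleaning images."""
    diff = np.abs(before.astype(np.int16) - after.astype(np.int16))
    changed_pixels = np.count_nonzero(diff.max(axis=-1) > pixel_threshold)
    return changed_pixels / diff[..., 0].size > changed_fraction

# Toy example: a bright "object" present before cleaning and gone afterwards.
before = np.zeros((100, 100, 3), dtype=np.uint8); before[40:60, 40:60] = 200
after = np.zeros((100, 100, 3), dtype=np.uint8)
print(area_changed(before, after))  # -> True
```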
In addition to receiving alerts from the alert generation circuit 234, a user may also browse and search the object and image database 220 to determine what objects were removed by the mobile robot device 100. Therefore, review by a user of objects that were vacuumed may further prevent the loss of valuable objects. Potential obstacle objects may also be cataloged, and a user is able to search, browse, and/or choose preferred actions for these obstacle objects.
Still referring to
The planning circuit 270 may provide action and path information to a control circuit 280. The control circuit 280 may include a drive control circuit 282 and a vacuum control circuit 284 that control the operations of the mechanical elements of the body 290 of the mobile robot device 100. The mechanical elements of the body 290 may include a motor, wheels, a driving mechanism, a vacuum, cleaning brushes, a dust reservoir, a user interface to receive commands, one or more LED lights, etc. The body 290 may include an indicator that is activated upon detection of a valuable object. For example, the indicator may be an LED light or an audio sound from the body 290 of the mobile robot device 100 that alerts a user that a valuable object has been detected. The indicator may be configured to alert a user prior to vacuuming the valuable object and/or after a valuable object has been vacuumed. According to some embodiments, the indicator may provide a signal that is transmitted to a user device to alert a user that a valuable object has been detected and/or vacuumed.
The drive control circuit 282 may control the physical operation of the mobile robot device 100 such as motion, speed, direction, start/stop, etc. The vacuum control circuit 284 may control the vacuum and/or cleaning features of the mobile robot device 100 that include suction, brushes, etc.
The mobile robot device 100 may include a user interface 285 that provides alerts, receives input from a user device 295, and/or provides information regarding objects in the database to the user device 295. According to some embodiments, the user interface 285 may include controls and/or information display elements such as a display screen on the body 290 of the mobile robot device 100.
The mobile robot may decide not to vacuum the valued object. Referring now
According to some embodiments, automatic selection of the object from the database may be performed using a deep neural network. Referring now to
According to some embodiments, a user may select the object out of candidate objects that have been presented to the user. A listing of possible matches of the object on the floor may be presented to the user device such that the user may make a selection. Referring now to
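As a hedged sketch of this interaction, the example below serializes a short list of candidate objects for a user device and resolves the user's reply to the identified object; the JSON message format and field names are assumptions, not a defined protocol of the transceiver or user interface.

```python
# Illustrative sketch of presenting candidate matches to a user device and applying
# the user's selection.  Message format and transport are assumed for illustration.
import json

def build_candidate_message(candidates: list[dict]) -> str:
    """Serialize the top candidate objects for display on the user device."""
    return json.dumps({"type": "candidate_objects", "candidates": candidates})

def apply_user_selection(candidates: list[dict], reply: str) -> dict:
    """Resolve the user's reply (an index into the candidate list) to the identified object."""
    selected_index = json.loads(reply)["selected_index"]
    return candidates[selected_index]

candidates = [{"name": "gold ring", "score": 0.91}, {"name": "metal washer", "score": 0.88}]
outgoing = build_candidate_message(candidates)       # would be transmitted via the transceiver
incoming = json.dumps({"selected_index": 0})         # simulated reply from the user device
print(apply_user_selection(candidates, incoming))    # -> {'name': 'gold ring', 'score': 0.91}
```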
According to some embodiments, information regarding an area may be captured by sensors of the mobile robot both before and after cleaning or vacuuming the area. Information regarding the area may be captured by the sensors 210 of the mobile robot of
According to some embodiments, images of the area may be captured by one or more cameras of the mobile robot both before and after cleaning or vacuuming the area. Referring to
Referring to
The mobile robot may get stuck on an object, obstacle, or wall of a room. Referring to
When the mobile robot moves freely and smoothly and its movement is not hindered, regions inside previously captured images, video frames, depth data, and point cloud data that correspond to where the robot just passed through may be used to build appearance, texture, or depth models of areas and surfaces that do not hinder the robot's movement. These objects and associated data may be put into the object and image database 220 of
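A simplified sketch of building such a model of freely traversed surfaces is shown below, using only mean patch color as the appearance statistic; the class name, the distance threshold, and this simplification of the appearance/texture/depth models are assumptions for illustration.

```python
# Illustrative sketch: build a simple appearance model of surfaces the robot has
# traversed without being hindered, and flag patches that deviate from it.
import numpy as np

class FreeSurfaceModel:
    def __init__(self):
        self.patches = []                 # mean colors of patches from freely traversed areas

    def add_traversed_patch(self, patch: np.ndarray) -> None:
        self.patches.append(patch.reshape(-1, 3).mean(axis=0))

    def looks_traversable(self, patch: np.ndarray, max_distance: float = 40.0) -> bool:
        """Compare a new patch's mean color against previously traversed surfaces."""
        if not self.patches:
            return True                   # nothing learned yet; assume traversable
        mean_color = patch.reshape(-1, 3).mean(axis=0)
        distances = np.linalg.norm(np.array(self.patches) - mean_color, axis=1)
        return bool(distances.min() < max_distance)

model = FreeSurfaceModel()
model.add_traversed_patch(np.full((16, 16, 3), 120, dtype=np.uint8))       # e.g., wooden floor
print(model.looks_traversable(np.full((16, 16, 3), 125, dtype=np.uint8)))  # similar -> True
print(model.looks_traversable(np.full((16, 16, 3), 30, dtype=np.uint8)))   # very different -> False
```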
Referring to
Referring now to
According to some embodiments, the memory 1930 may include a non-transitory computer-readable storage medium storing computer-readable program code therein that is executable by the processor circuit to perform various operations. The processor circuit 1920 may receive information from sensors 1910 and perform operations including determining that an object is in a first path of the mobile robot device 1900, searching a database 1960 for the object in or adjacent to the first path of the mobile robot device 1900, selecting an identified object in the database 1960 corresponding to the object, determining whether the identified object is valued, responsive to selecting the identified object in the database 1960, and storing information associated with the object in the database 1960, responsive to determining that the identified object is valued.
Still referring to
In the above-description of various embodiments of the present disclosure, it is to be understood that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of this specification and the relevant art, and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
When an element is referred to as being “connected”, “coupled”, “responsive”, or variants thereof to another element, it can be directly connected, coupled, or responsive to the other element or intervening elements may be present. In contrast, when an element is referred to as being “directly connected”, “directly coupled”, “directly responsive”, or variants thereof to another element, there are no intervening elements present. Like numbers refer to like elements throughout. Furthermore, “coupled”, “connected”, “responsive”, or variants thereof as used herein may include wirelessly coupled, connected, or responsive. As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. Well-known functions or constructions may not be described in detail for brevity and/or clarity. The term “and/or” includes any and all combinations of one or more of the associated listed items.
It will be understood that, although the terms first, second, third, etc. may be used herein to describe various elements, these elements should not be limited by these terms; rather, these terms are only used to distinguish one element from another element. Thus, a first element discussed could be termed a second element without departing from the scope of the present inventive concepts.
As used herein, the terms “comprise”, “comprising”, “comprises”, “include”, “including”, “includes”, “have”, “has”, “having”, or variants thereof are open-ended, and include one or more stated features, integers, elements, steps, components, or functions but do not preclude the presence or addition of one or more other features, integers, elements, steps, components, functions, or groups thereof.
Example embodiments are described herein with reference to block diagrams and/or flowchart illustrations of computer-implemented methods, apparatus (systems and/or devices) and/or computer program products. It is understood that a block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by computer program instructions that are performed by one or more computer circuits. These computer program instructions may be provided to a processor circuit of a general purpose computer circuit, special purpose computer circuit, and/or other programmable data processing circuit to produce a machine, such that the instructions, which execute via the processor of the computer and/or other programmable data processing apparatus, transform and control transistors, values stored in memory locations, and other hardware components within such circuitry to implement the functions/acts specified in the block diagrams and/or flowchart block or blocks, and thereby create means (functionality) and/or structure for implementing the functions/acts specified in the block diagrams and/or flowchart block(s).
These computer program instructions may also be stored in a tangible computer-readable medium that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable medium produce an article of manufacture including instructions which implement the functions/acts specified in the block diagrams and/or flowchart block or blocks.
A tangible, non-transitory computer-readable medium may include an electronic, magnetic, optical, electromagnetic, or semiconductor data storage system, apparatus, or device. More specific examples of the computer-readable medium would include the following: a portable computer diskette, a random access memory (RAM) circuit, a read-only memory (ROM) circuit, an erasable programmable read-only memory (EPROM or Flash memory) circuit, a portable compact disc read-only memory (CD-ROM), and a portable digital video disc read-only memory (DVD/Blu-ray).
The computer program instructions may also be loaded onto a computer and/or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer and/or other programmable apparatus to produce a computer-implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions/acts specified in the block diagrams and/or flowchart block or blocks. Accordingly, embodiments of the present disclosure may be embodied in hardware and/or in software (including firmware, resident software, micro-code, etc.) that runs on a processor such as a digital signal processor, which may collectively be referred to as “circuitry,” “a module” or variants thereof.
Aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable instruction execution apparatus, create a mechanism for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer readable medium that when executed can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions when stored in the computer readable medium produce an article of manufacture including instructions which when executed, cause a computer to implement the function/act specified in the flowchart and/or block diagram block or blocks. The computer program instructions may also be loaded onto a computer, other programmable instruction execution apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatuses or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various aspects of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
It should also be noted that in some alternate implementations, the functions/acts noted in the blocks may occur out of the order noted in the flowcharts. For example, two blocks shown in succession may in fact be executed substantially concurrently or the blocks may sometimes be executed in the reverse order, depending upon the functionality/acts involved. Moreover, the functionality of a given block of the flowcharts and/or block diagrams may be separated into multiple blocks and/or the functionality of two or more blocks of the flowcharts and/or block diagrams may be at least partially integrated. Finally, other blocks may be added/inserted between the blocks that are illustrated. Moreover, although some of the diagrams include arrows on communication paths to show a primary direction of communication, it is to be understood that communication may occur in the opposite direction to the depicted arrows.
Many different embodiments have been disclosed herein, in connection with the above description and the drawings. It will be understood that it would be unduly repetitious and obfuscating to literally describe and illustrate every combination and subcombination of these embodiments. Accordingly, the present specification, including the drawings, shall be construed to constitute a complete written description of various example combinations and subcombinations of embodiments and of the manner and process of making and using them, and shall support claims to any such combination or subcombination. Many variations and modifications can be made to the embodiments without substantially departing from the principles described herein. All such variations and modifications are intended to be included herein within the scope of the embodiments of the present invention.