This disclosure relates generally to mobile robotics, and more specifically, to the contextualization of map regions by robots.
Blueprint maps are useful for determining a general layout of a building or area, but without further information, may be limited in usefulness to a robot attempting to navigate based on the maps. A robot often needs additional context to demarcate regions within a map through which the robot can travel, and regions the robot should avoid. Further, the location of objects, obstacles, and other entities within the building or area can change over time, and thus a map of the building or area may become outdated over time.
The figures depict various embodiments of the present invention for purposes of illustration only. One skilled in the art will readily recognize from the following discussion that alternative embodiments of the structures and methods illustrated herein may be employed without departing from the principles of the invention described herein.
Embodiments described herein detail functionality associated with a mobile robot. Mobile robots can include a number of sensors, data retrieval systems, communication systems, and other components as described in greater detail below. Accordingly, a mobile robot can access data associated with or describing an environment in which the robot operates, for instance from the on-board sensors, from a data source external to the robot, or from a human operator. In response to or based on the accessed data, the mobile robot can perform a number of functionalities, either autonomously or in response to operator input. It should be noted that while the mobile robot is described herein as operating in an indoor environment and performing various housekeeping functions, in other embodiments the mobile robot can operate in other environments and can perform other functions while adhering to the principles described herein.
The robots 102 can wirelessly communicate with a robot server 130 via a first network 104. Likewise, the robot server 130 and/or the robots 102 (via the robot server 130) can communicate with one or more of a client device 132, a remote terminal 134, a building database 136, and a robot database 138 via the first network 104. The robots 102 can also communicate with building infrastructure, such as the dock station 111 and the building system 112, via a second network 106. Further, the robots 102 can communicate with each other via a third network 108. It should be noted that although three distinct networks are illustrated in this embodiment, in other embodiments, fewer or additional networks can communicatively couple these systems and components.
The networks 104, 106, and 108 can each include one or more networks, including, but not limited to, wireless networks (e.g., wireless communication networks), mobile telephone networks (e.g., cellular telephone networks), closed communication networks, open communication networks, satellite networks, navigation networks, broadband networks, narrowband networks, the Internet, local area networks, and any other networks capable of carrying data and/or communications signals between the robots 102, building infrastructure, the robot server 130, and any other systems or components (including but not limited to the client device 132, the remote terminal 134, the building database 136, and the robot database 138). For instance, the network 104 can include the Internet, and the robots 102 can interface with the Internet via WiFi. Likewise, the network 106 can be a Bluetooth or Bluetooth Low Energy (BLE) wireless network, enabling the robot 102A to communicate with the dock station 111 and the building system 112 when within a communication threshold proximity of either system. Finally, the network 108 can be a long-distance communicative network, enabling the robots to communicate as a point-to-point network, for instance using GSM transceivers.
The robots 102 can navigate within the building 110, for instance by autonomously planning a route through the building, by following a pre-determined route, or by modifying a route based on the detected environment in which the robot is navigating. In some embodiments, the robots 102 can navigate collectively, for instance by individually selecting a route such that a collective goal is achieved. For example, if the robots 102 are tasked with vacuuming the entirety of the building 110, the robots 102 can each select a route that minimizes an amount of time it takes the robots 102 to entirely vacuum the building, or can select routes that maximize an amount of time each robot 102 is vacuuming a portion of the building floor that has yet to be vacuumed. Likewise, if the robots 102 are tasked with mapping an entirety of the floorplan of the building 110, each robot 102 can individually select a route that minimizes an amount of time it takes the robots 102 to view the floorplan. The robots 102 can communicate with each other via the network 108 or via the robot server 130 in order to coordinate route selection.
The robots 102 can also navigate independently, for instance to complete a task assigned to the robot 102 (such as individually vacuuming or mapping a space, identifying anomalies within a room, identifying other tasks that need to be performed, etc.). In some embodiments, a robot 102 can select a route that minimizes a transit time from a first location to a second location, that minimizes travel through unknown locations or that avoids keepout regions altogether, or that guarantees that the robot 102 can navigate to a power dock station 111 before it runs out of power.
In some embodiments, one or more rooms within the building 110 can include a dock station 111. The dock station 111 can electrically couple to a robot 102, for instance when the robot 102 drives up to a dock station base and physically couples to a dock station power outlet or connector. In some embodiments, the robot 102 can select a route to the dock station 111 when the power available to the robot 102 falls below a threshold, or can automatically return to a dock station 111 when a particular task is completed. In some embodiments, the robots 102 can remain at a dock station 111 until a subsequent task is assigned to the robot 102, until a subsequent periodic task is to be performed, and the like. In some embodiments, the dock station 111 can also act as an access point, enabling communications between the robots 102 and one or more systems external to the robot 102. Likewise, the dock station 111 may be able to perform a diagnostic check of one or more robot components or systems, and can communicate this information to an external system (e.g., the robot server 130).
The building system 112 can include any system capable of wirelessly communicating with a robot 102, such as a fire detector system, a wireless modem, a thermostat, a clock, an entertainment system, a smart lighting system, an elevator, a housekeeping or front desk computer, or the like. A robot 102 can communicate with the building system 112, for instance by receiving information or instructions from the building system 112 or by providing information or instructions to the building system 112. For instance, the robot 102 may determine that the beds 120A and 120B in the room 114 need to be made, and can instruct a housekeeping computer (e.g., a client device 132) to schedule the room for housekeeping. Likewise, the robot 102 can determine that the temperature within the room 114 is above a comfort threshold, and can instruct a thermostat to reduce the temperature. It should be noted that in some embodiments, the building system 112 does not wirelessly communicate with the robot 102, and instead the robot 102 can determine information displayed by or otherwise conveyed by the building system 112. For instance, the robot can capture an image of a thermostat display in order to determine a temperature of a room, or can capture an image of a room number to identify a room.
The robot server 130, the client device 132, the remote terminal 134, the building database 136, and the robot database 138 are computing systems configured to communicate with each other and with one or more robots 102 and/or building infrastructure via the network 104 or one or more other networks. In some embodiments, fewer, additional, or different computing systems are included within the environment 100.
The robot server 130 provides a communicative interface between the robots 102 and one or more other systems of the environment 100.
The client device 132 is a computing system, such as a mobile phone, a desktop computer, a tablet, and the like, enabling an operator to interface with the robots 102. For instance, video captured by a robot 102 can be provided to the robot server 130 and routed to a client device 132 for viewing by an operator.
The remote terminal 134 includes an interface displayed to an operator via a device display, such as a mobile device, computer monitor, and the like. As noted above, in some embodiments, the remote terminal is displayed via a client device 132, though in other embodiments, the remote terminal is displayed within dedicated hardware. Information received from a robot 102 (for instance, via the robot server 130) can be displayed within the remote terminal interface, such as video or images captured by the robot 102, diagnostic information associated with a robot 102, a map indicating a route being navigated by the robot 102, a map indicating a location of the robot 102 received from the robot 102, a map allowing an operator to configure a route of the robot 102, a set of displayed instruction elements that (when selected) can configure the behavior of the robot to perform one or more tasks, or any other suitable information associated with the robot 102, the building, or one or more other systems or components of the environment 100.
The building database 136 includes data associated with the building 110, such as a floorplan of the building, a location of objects within the building 110, a location of doors and windows within the building 110, information associated with building infrastructure, a schedule associated with the building 110, building occupancy or guest information, building staff information, and the like. The robot database 138 includes data associated with the robots 102, functions the robots 102 can perform, robot maintenance information, and the like. In some embodiments, the robot database 138 can include routes through the building 110, robot routine information (such as periodic cleaning schedules), diagnostic or communication information associated with the robots, and the like. The building database 136 and the robot database 138 can be accessed by the robot server 130, by the robots 102, or by one or more other components of the environment 100.
The sensors 250 comprise devices that measure or detect conditions or properties of the robot, the surrounding environment, or objects and entities within the surrounding environment. The sensors 250 then generate signals representative of the sensed conditions or properties and provide them to the processor 210, the storage medium 220, or other components of the robot 102. The sensors 250 may include, for example, motion and orientation sensors such as an accelerometer, gyroscope, velocity sensor, or positioning sensor. The sensors 250 may also include object detection sensors, such as a RADAR sensor, a LIDAR sensor, and the like. Furthermore, the sensors 250 may include devices such as a light sensor, a noise sensor, a moisture sensor, a thermometer, a dust sensor, a mold sensor, or other sensors for detecting environmental conditions. In yet other embodiments, the sensors 250 may include capture systems such as a still image camera, a video camera, or a microphone. The sensors 250 may include integrated processing devices for processing sensor data before sending it to other components of the robot 102. For example, the sensors 250 may include integrated filters or data conversion processors that convert the sensor data from its raw captured form to a processed form.
The drive system 252 includes one or more drive components and/or controllers for controlling motion of the robot 102. For example, the drive system 252 may include a set of wheels, one or more drive motors, wheel shafts for providing energy from the drive motors to the wheels, and a drive controller for controlling the drive motors. In an embodiment, the drive system 252 may be configured to enable the robot 102 to navigate in any of multiple different directions regardless of the orientation of the robot. For example, the drive system 252 may include Mecanum wheels or Omni wheels that enable the robot 102 to move in a direction parallel to its axis of orientation, perpendicular to its axis of orientation, or diagonally relative to its axis of orientation. Furthermore, the drive system 252 may enable the robot 102 to rotate in place to a desired orientation.
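For illustration only, Mecanum-drive motion of the kind described above is commonly computed using a standard inverse-kinematics formulation. The following Python sketch shows one such computation; the geometry values are assumptions chosen for the example, not parameters of this disclosure.

```python
def mecanum_wheel_speeds(vx, vy, wz, lx=0.20, ly=0.15, r=0.05):
    """Wheel angular speeds (rad/s) for body velocities vx (forward, m/s),
    vy (leftward, m/s), and wz (rotation, rad/s). lx and ly are half the
    wheelbase and half the track width (m); r is the wheel radius (m).
    All geometry values are assumed for illustration."""
    k = lx + ly
    front_left = (vx - vy - k * wz) / r
    front_right = (vx + vy + k * wz) / r
    rear_left = (vx + vy - k * wz) / r
    rear_right = (vx - vy + k * wz) / r
    return front_left, front_right, rear_left, rear_right

# Pure sideways ("crab") motion: move left at 0.3 m/s with no rotation.
print(mecanum_wheel_speeds(0.0, 0.3, 0.0))
```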
The input/output devices 254 comprise various devices for capturing inputs provided to the robot 102 and generating output information. Input devices may include, for example, buttons, dials, switches, or a touchscreen display for controlling various functions of the robot 102, or may include ports for connecting external input devices, such as a USB port or other serial or parallel input port. The output devices may include, for example, a display, a lighted indicator such as an LED, a haptic device, a speaker, or an output port for connecting to external output devices (e.g., a USB port).
The communications system 256 comprises a wired or wireless communication interface for communicating with external devices (e.g., via the network 104). The communications system 256 may comprise, for example, a Bluetooth interface, a WiFi interface, a cellular communication interface, or other communication interface.
The special purpose equipment 258 comprises components of the robot 102 for performing specific tasks relating to a specialized configuration of the robot 102. For example, in a robot 102 designed for vacuuming, the special purpose equipment 258 may comprise a vacuum motor and a vacuum brush. In a robot 102 designed for mopping, the special purpose equipment 258 may comprise a water tank and a mop head. In other robots 102 (e.g., a robot designed for mapping an area), the special purpose equipment 258 may be omitted.
The power system 260 stores power and supplies power to the robot 102. For example, the power system 260 may comprise a battery or a port for connecting to an external power source (such as a dock station 111). In an embodiment, the power system 260 may include separate power sources for powering high-voltage components such as the drive system 252 and low-voltage components such as integrated electronics.
The storage medium 220 (e.g., a non-transitory computer-readable storage medium) comprises volatile memory, non-volatile storage, or a combination thereof. The storage medium 220 stores instructions that, when executed by the processor 210, cause the processor 210 to carry out the functions attributed to the various modules described herein (e.g., a mapping module 222, a navigation module 224, a function control module 226, and a data collection module 228). Additionally, the storage medium 220 may include a data storage 230 to store various data collected by or utilized by the robot 102.
The mapping module 222 includes instructions for automatically mapping an area (e.g., a room). For example, the mapping module 222 may control the drive system 252 of the robot 102 to navigate an unexplored space in a particular pattern and process data from the sensors 250 to predict locations of walls, furniture, or other objects or obstacles. The mapping data may be stored to the data storage 230 or may be transmitted to the robot server 130 for storage in the building database 136.
The navigation module 224 includes instructions for controlling navigation of the robot 102. The navigation module 224 may control the robot 102 in an automated manner to complete a certain task assigned to the robot 102 (e.g., mapping a space or cleaning a room), or may navigate in response to manual commands from an operator. In an embodiment, the navigation module 224 may utilize mapping data from the local data storage 230 or the building database 136 to determine an efficient navigation route for completing the assigned task. Furthermore, the navigation module 224 may utilize mapping data and real-time sensing data to navigate in a manner that avoids objects and keepout regions.
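One way a navigation module of this kind might compute an efficient route that avoids occupied cells and keepout regions is grid-based A* search. The following is a minimal sketch of that well-known algorithm, not an implementation prescribed by this disclosure; the grid encoding (0 for traversable, 1 for occupied or keepout) is an assumption.

```python
import heapq

def astar(grid, start, goal):
    """Plan a shortest path over a 2D grid where cells valued 1 (occupied
    or keepout) are impassable. Returns a list of (row, col) cells from
    start to goal, or None if no route exists."""
    rows, cols = len(grid), len(grid[0])

    def h(cell):
        # Manhattan-distance heuristic, admissible for 4-connected motion.
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    frontier = [(h(start), 0, start, None)]  # (f, g, cell, parent)
    came_from = {}
    best_cost = {start: 0}
    while frontier:
        _, g, cur, parent = heapq.heappop(frontier)
        if cur in came_from:
            continue  # already expanded via a cheaper route
        came_from[cur] = parent
        if cur == goal:  # reconstruct the path back to start
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        r, c = cur
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < best_cost.get(nxt, float("inf")):
                    best_cost[nxt] = ng
                    heapq.heappush(frontier, (ng + h(nxt), ng, nxt, cur))
    return None  # no route avoids the occupied/keepout cells
```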
The function control module 226 comprises instructions for controlling various functions of the robot 102 to enable the robot 102 to complete an assigned task. In some embodiments, the functions of the robot include operations such as vacuuming, mopping, scrubbing, cleaning, dusting, making a bed, moving a customer's belongings, moving furniture, opening doors, and communicating with a human. For example, for a vacuuming robot 102, the function control module 226 may control the speed of a brush assembly or suction power of a vacuum. For a mopping robot, the function control module 226 may control the spray of water and the spin speed of the mop head. In other embodiments, the function control module 226 may control other special purpose equipment 258 of the robot 102 according to its particular assigned task.
The data collection module 228 comprises instructions for controlling collection and processing of data (e.g., from the sensors 250) obtained by the robot 102. The data collection module 228 may collect and process different types of data depending on a particular data collection task that may be assigned to the robot. For example, the data collection module 228 may collect moisture data to determine areas of a room that are particularly damp. In another example, the robot 102 may collect dust data to determine an overall cleanliness factor for a particular room. The data collection module 228 may store the collected data, may share it in real-time with other modules to enable those modules to execute their respective functions, or may provide the collected data to one or more external entities (such as another robot 102, the robot server 130, the remote terminal 134, the robot database 138, and the like).
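As a concrete illustration of the dust-data example above, a cleanliness factor could be computed by normalizing averaged sensor readings. The full-scale constant and the 0-to-1 score in this sketch are assumptions, not values from this disclosure.

```python
def cleanliness_factor(dust_readings, full_scale=500.0):
    """Map the average dust reading for a room to a score in [0, 1],
    where 1.0 is cleanest; full_scale is an assumed sensor maximum."""
    average = sum(dust_readings) / len(dust_readings)
    return max(0.0, 1.0 - average / full_scale)

print(cleanliness_factor([12.0, 48.5, 30.2]))  # -> roughly 0.94
```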
Robots can be broken down into two categories: mobile and non-mobile robots. Mobile robots may be further broken down into localized and non-localized robots; a localized robot knows its absolute position with respect to some reference. This reference for a localized mobile robot is typically a fixed map representation of an area, such as a two-dimensional occupancy grid: a collection of cells, each of which records whether the cell is occupied by a physical object, unoccupied, or unknown.
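A two-dimensional occupancy grid of this kind can be represented directly in code. The sketch below is a minimal illustration of the occupied/unoccupied/unknown cell states described above; the resolution and method names are assumptions for the example.

```python
from enum import Enum

class Cell(Enum):
    UNKNOWN = 0   # no observation of this cell yet
    FREE = 1      # observed to be unoccupied
    OCCUPIED = 2  # observed to contain a physical object

class OccupancyGrid:
    def __init__(self, width, height, resolution=0.05):
        self.resolution = resolution  # meters per cell (assumed 5 cm)
        self.cells = [[Cell.UNKNOWN] * width for _ in range(height)]

    def world_to_cell(self, x, y):
        # Convert a metric (x, y) position to (row, col) grid indices.
        return int(y / self.resolution), int(x / self.resolution)

    def mark(self, x, y, state):
        r, c = self.world_to_cell(x, y)
        self.cells[r][c] = state

grid = OccupancyGrid(200, 200)        # a 10 m x 10 m area at 5 cm cells
grid.mark(1.25, 3.40, Cell.OCCUPIED)  # e.g., a wall return from a LIDAR scan
```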
Simultaneous Localization and Mapping (SLAM) is a robotics technique to generate a representation of a physical space. Multiple sensors are used to take observations of the immediate area around a robot, which are then combined with estimates of the robot's motion to build a representation of the overall space, or more generally a map. However, this transformation of a physical space to a representation thereof will necessarily lose contextual information. This contextual map information may be vital to a robot's functionality.
In practice, map generation is reliant on sensor data (e.g., from optical systems such as cameras or lasers). Vision-based systems rely on identifiable features in the area to generate a physical representation of distances. Laser-based systems rely on direct distance measurements of the surrounding environment by reflecting laser light at known angles. These techniques do not currently provide a robot with the contextual information, naturally understood by a human, that the robot requires for its functionality.
Many aspects of physical environments might not be understandable or easily representable for robots. There are many situations where contextual knowledge of a space can influence the performance of a task by an actor. For instance, a human told to vacuum a room would understand not to vacuum adjoining spaces in the same room, such as a bathroom or a pool, but a robot without this knowledge would not. Likewise, a robot instructed to mop a tile portion of a floor needs to know which areas of the floor are tile and which are covered by an area rug so that it cleans only the correct regions. Providing a robot with relevant contextual information (e.g., within a contextual map) can enable the robot to avoid keepout areas (areas through or into which a robot is unable to safely navigate) identified by the map. This is a simple contextual cue that limits the space the robot may visit, but it does not enrich the robot's internal representation of the physical space.
Other approaches might include physical barriers or signal-based “fences”, which require a physical device to provide the robot with the context of a keepout area. For example, a doorway may be outfitted with an infrared emitter or magnetic strip that the robot treats as a fence. This approach suffers from a dependency on infrastructure (such as installed physical devices) in every region demarcated as a keepout, which can have substantial repercussions for commercial applications.
In large open spaces, such as warehouses or conference centers, sufficient infrastructure may not exist to support physical devices. Businesses that have a strong emphasis on decor, such as hotels and museums, may not be willing to incorporate the required physical changes in their infrastructure. Some environments may also have misaligned external requirements on their infrastructure, such as cleanliness in hospitals or security in government agencies. An additional concern with fence-based approaches is that they do not contextualize which side of the fence a robot should be on. This becomes especially problematic if a robot finds itself on the wrong side of a fence and is unable to cross back over.
One approach is to have multiple overlapping map representations: localization maps, in which the environment is represented as a two-dimensional occupancy grid that models the physical environment the robot would perceive through its sensors; and additional map overlays used to contextualize regions with information pertaining to the actions a robot may or may not perform in that space (“contextual map layers” as used herein). For instance, a localization map may include boundaries representative of the walls within a room, and an overlaid contextual map layer may identify a floor type (such as tile or carpet), enabling the robot to determine that a vacuuming function should be performed only on the carpeted portion and a mopping function should be performed only on the tiled portion. This provides semantic meaning for the physical environment without the reliance on physical infrastructure.
To create a localization map, a trusted initial set of information is accessed, typically the blueprints/floor plans of the building. The localization map is created from the walls and permanent features of the building. This is the base layer that is used by the robot to understand where it is within the building in the global sense, and to visualize data collected by the robot for humans to consume. The blueprints are combined with alternate sources of data such as images, prior sensor measurements, and human input to create the localization map.
As with the localization map, the contextual map layers can include an occupancy grid, with each cell of the grid representing information about that location in that layer. For instance, a floor type map layer would identify, for each cell of the grid, any relevant floor types within the cell (e.g., carpet, tile, hardwood, etc.). A different layer can identify the room type (such as “hallway”, “bathroom”, “bedroom”, etc.). Example contextual map layer types include the following (a data-structure sketch follows the list):
Obstacle layers (walls, permanent obstacles)
Flooring type (carpet, tile, marble, hardwood)
Keep-out regions
Doors/entryways
Room type (e.g., kitchen, bathroom, bedroom, office, hallway, etc.)
Furniture (or regions that have semi-permanent obstacles)
Historical areas where robots get stuck
Foot traffic density
Historically dirty areas
Cleaning history
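The sketch below illustrates one way such layers might be represented: per-cell grids keyed by layer name, stacked over the localization map. The class and layer names are assumptions for illustration, not structures defined by this disclosure.

```python
class ContextualMap:
    """Contextual map layers: one grid of per-cell values per layer name."""
    def __init__(self, rows, cols):
        self.rows, self.cols = rows, cols
        self.layers = {}  # layer name -> 2D grid of per-cell values

    def add_layer(self, name, default=None):
        self.layers[name] = [[default] * self.cols for _ in range(self.rows)]

    def set_region(self, name, r0, c0, r1, c1, value):
        # Assign a value to every cell of a rectangular region in one layer.
        for r in range(r0, r1):
            for c in range(c0, c1):
                self.layers[name][r][c] = value

    def at(self, name, r, c):
        return self.layers[name][r][c]

ctx = ContextualMap(200, 200)
ctx.add_layer("floor_type")
ctx.add_layer("room_type")
ctx.set_region("floor_type", 0, 0, 200, 100, "carpet")
ctx.set_region("floor_type", 0, 100, 200, 200, "tile")
ctx.set_region("room_type", 0, 0, 200, 200, "bedroom")
```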
The localization map and the one or more contextual map layers can be combined to form a combined map. As a robot traverses an area using the combined map, the robot can update or modify the combined map based on image or sensor data captured by the robot. For instance, the robot can traverse a hotel room using a combined map that includes a contextual map layer identifying the location of furniture within the room. In an example where a hotel guest has moved a chair within the room to a different location, the robot can detect the new position and orientation of the chair, and can modify the combined map to reflect the chair's new position. Alternatively, the robot can move the chair back to the location and orientation initially identified by the combined map, or can flag the chair for manual relocation by a human.
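Continuing the ContextualMap sketch above, the moved-chair example might be handled by clearing the chair's old footprint in a furniture layer and marking its newly observed pose. The function and layer names are assumptions for illustration.

```python
def update_furniture(ctx, old_cells, new_cells, label="chair"):
    """Reflect an object's observed move in the furniture layer."""
    for r, c in old_cells:
        ctx.layers["furniture"][r][c] = None   # chair is no longer here
    for r, c in new_cells:
        ctx.layers["furniture"][r][c] = label  # chair's detected new pose

ctx.add_layer("furniture")
update_furniture(ctx, old_cells=[(10, 10)], new_cells=[(12, 14)])
```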
A robot can use a combined map to navigate or to perform one or more functions within an area. For instance, the combined map can include a contextual map layer that identifies regions of the map within which point-to-point travel by the robot is permissible, and the robot can navigate through the room based on these regions. Likewise, the combined map can include a keepout layer defining areas the robot cannot or should not enter, and the robot can navigate through the room by avoiding these areas. In some embodiments, the combined map can include a floor type layer, and the robot can perform one or more cleaning functions (e.g., vacuuming, mopping, etc.) based on the identified floor types. In other embodiments, the combined map can include layers identifying areas in which spot cleaning or edge cleaning is required, identifying areas in which particular settings (such as a vacuum suction power or brush roller RPM) should be used, and the like. In some embodiments, the combined map can define, for each of one or more portions of the map, one or more functions that a robot can perform within the portion of the map, one or more functions that a robot must perform within the portion of the map, and/or one or more functions that a robot is prohibited from performing within the portion of the map.
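One possible encoding of such per-region function rules is sketched below, again reusing the ContextualMap sketch; the two-set encoding (allowed and prohibited functions per region type) is an assumption for illustration.

```python
FUNCTION_RULES = {
    "carpet":  {"allowed": {"travel", "vacuum"}, "prohibited": {"mop"}},
    "tile":    {"allowed": {"travel", "mop"},    "prohibited": {"vacuum"}},
    "keepout": {"allowed": set(), "prohibited": {"travel", "vacuum", "mop"}},
}

def may_perform(ctx, r, c, function):
    # Unknown cells are treated conservatively, as keepout regions.
    region = ctx.at("floor_type", r, c) or "keepout"
    rules = FUNCTION_RULES.get(region, FUNCTION_RULES["keepout"])
    return function in rules["allowed"] and function not in rules["prohibited"]
```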
For example, a robot can be instructed to vacuum a room. A floor type map layer can be used to identify two regions within the room that are carpeted. The robot identifies a path that enables vacuuming of the regions that are carpeted. The two carpeted regions can be separated by a hardwood floor on which vacuuming is prohibited. Accordingly, as the robot traverses the hardwood floor, the robot can disable the vacuuming functionality. In addition, a keepout map layer can identify a region within the room that includes a table and chairs, and the robot can select a shortest path within the room that enables the robot to vacuum the carpeted regions of the room while avoiding the table and chairs. Finally, a map layer can identify heavily trafficked portions of the carpeted regions, and the robot can plan a route that vacuums each of these heavily trafficked portions more than once.
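The vacuuming portion of this example might reduce to toggling the vacuum based on the floor type layer while traversing the planned path, as in the following sketch; robot.set_vacuum() and robot.move_to() are hypothetical methods, not interfaces defined by this disclosure.

```python
def vacuum_along(ctx, path, robot):
    """Follow a planned path, vacuuming only over carpeted cells."""
    for r, c in path:
        on_carpet = ctx.at("floor_type", r, c) == "carpet"
        robot.set_vacuum(on_carpet)  # disabled while crossing hardwood
        robot.move_to(r, c)
```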
The robot navigates 415 the path, and receives 420 sensor data while navigating the path. For instance, the robot can include a LIDAR sensor and a camera that allow it to identify the location of obstacles, people, furniture, and the like within the path. In the event that the robot identifies something that will prevent it from navigating the path or performing the task (such as a previously undetected piece of furniture at a location on the path), the robot can update 425 the accessed map or the planned path based on the detected obstacle, and can perform the task using the updated map or path. For example, the robot can navigate to a room in which the task is to be performed using an alternative route, or can perform the task at a different starting location within the room based on the updated map. The updated map can then be provided by the robot to a building server, computing system, or another robot for subsequent use.
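A minimal control loop matching steps 415 through 425 is sketched below; the planner argument can be any path planner (such as the A* sketch earlier), and all robot method names are hypothetical, assumed for the example.

```python
def run_task(robot, planner, grid, goal):
    path = planner(grid.cells, robot.cell(), goal)
    while path:
        waypoint = path.pop(0)
        scan = robot.read_sensors()             # step 420: LIDAR/camera data
        obstacle = robot.detect_obstacle(scan)  # e.g., unexpected furniture
        if obstacle is not None:
            grid.mark(*obstacle, Cell.OCCUPIED)              # step 425: map
            path = planner(grid.cells, robot.cell(), goal)   # replan route
            continue
        robot.move_to(*waypoint)                # step 415: navigate the path
        if waypoint == goal:
            return True
    return False  # no viable route remains; report the failure upstream
```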
As noted above, one example contextual map layer is a keepout map. The keepout map defines portions of an area that a robot cannot or should not enter or travel through. Each of these portions is referred to as a “keepout region”. For instance, within the context of a hotel room, a keepout region may be defined for a bed within the hotel room (e.g., the portion of the localization map corresponding to the boundary of the bed can be designated as a keepout region, since the robot is unable to enter or travel through the boundary). Likewise, a keepout region can be defined for highly trafficked areas where a robot may inhibit movement of people within the areas. A keepout region can be manually defined, for instance by an owner of a building, a manager of a company, a cleaning or maintenance staff member, a user of the robot, or any other individual. Keepout regions can also correspond to regions where the presence of a robot may risk damaging the floor or objects within an area. Likewise, keepout regions can be defined for areas inaccessible to humans, areas where the robot may get stuck, areas without wireless reception, walls, building structures, access gates, permanent fixtures, or any other suitable area.
In some embodiments, a contextual map layer can include keepout regions, free space regions (where full speed movement is permitted), and reduced speed regions (where movement is permitted, but only at a reduced speed). In such embodiments, the reduced speed regions can be defined based on highly trafficked areas, areas where full speed movement may risk damaging furniture or other objects, particular floor types, the presence of a person or type of object, or the like. Likewise, in some embodiments, a contextual map layer can include reduced functionality regions (where the robot can only perform certain operations but not others). For example, a contextual map layer can identify a first set of regions where the robot can perform all operations, and a second set of regions where the robot is not allowed to vacuum. It should be noted that a contextual map can identify reduced functionality regions either by identifying allowed operations within the regions or by identifying operations that are not allowed within the regions.
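A speed-limit lookup over such a layer might look like the following sketch, which assumes a "speed_zones" layer in the ContextualMap sketch above; the zone names and limit values are illustrative assumptions only.

```python
SPEED_LIMITS = {"free": 1.0, "reduced": 0.3, "keepout": 0.0}  # meters/second

def speed_limit(ctx, r, c):
    # Cells with no zone assigned are treated as keepout (do not enter).
    zone = ctx.at("speed_zones", r, c) or "keepout"
    return SPEED_LIMITS[zone]
```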
As data is gathered by the robot via its sensors, the contextual map layers can be continuously updated to maintain accuracy. Walls, permanent furniture, or other obstacles that may not be properly positioned on the localization map are updated based on data from high-accuracy sensors such as a LIDAR sensor. The robot can then update the contextual map layers to include the proper position of the detected objects or obstacles, or can provide information describing the location of the objects or obstacles to a remote system (such as a building server) to update the map layers. The updated contextual map layers can be distributed to one or more other robots (such as a set of robots within a threshold distance of the robot, on a same floor or in a same building as the robot, or assigned a particular task or navigating a particular route affected by the updated map), for instance in real-time, in order to provide the other robots with the most up-to-date contextual maps available.
In one embodiment, the robot (or a computing system to which the robot communicates sensor information) can update a contextual map layer in response to detecting an obstacle within a free space region. In such an embodiment, the contextual map layer is updated by adding a keepout region corresponding to the boundary or location of the detected obstacle. Further, the robot can notify maintenance or security personnel of the detected obstacle. Likewise, the robot (or the computing system) can update a contextual map layer by removing a keepout region in response to determining that the obstacle corresponding to the keepout region is no longer there (for instance, in response to the obstacle being moved). It should be noted that although reference is made to “contextual map layers”, in practice, a contextual map layer may itself be a map, or may be combined with a localization layer to form a map.
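The add and remove updates described above might be implemented as in the following sketch, continuing the ContextualMap example with an assumed boolean "keepout" layer; the notify() hook stands in for alerting maintenance or security personnel.

```python
def add_keepout(ctx, cells, notify=print):
    """Mark a detected obstacle's footprint as a keepout region."""
    for r, c in cells:
        ctx.layers["keepout"][r][c] = True
    notify(f"Obstacle detected; keepout added over {len(cells)} cells.")

def remove_keepout(ctx, cells):
    """Clear a keepout region whose obstacle is no longer present."""
    for r, c in cells:
        ctx.layers["keepout"][r][c] = False

ctx.add_layer("keepout", default=False)
add_keepout(ctx, [(50, 60), (50, 61)])   # e.g., a newly detected cart
remove_keepout(ctx, [(50, 60), (50, 61)])  # the cart was moved away
```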
The foregoing description of the embodiments of the invention has been presented for the purpose of illustration; it is not intended to be exhaustive or to limit the invention to the precise forms disclosed. Persons skilled in the relevant art can appreciate that many modifications and variations are possible in light of the above disclosure.
Some portions of this description describe the embodiments of the invention in terms of algorithms and symbolic representations of operations on information. These algorithmic descriptions and representations are commonly used by those skilled in the data processing arts to convey the substance of their work effectively to others skilled in the art. These operations, while described functionally, computationally, or logically, are understood to be implemented by computer programs or equivalent electrical circuits, microcode, or the like, for instance, within a robot or robot system. Furthermore, it has also proven convenient at times to refer to these arrangements of operations as modules or engines, without loss of generality. The described operations and their associated modules may be embodied in software, firmware, hardware, or any combinations thereof.
Any of the steps, operations, or processes described herein may be performed or implemented with one or more hardware or software modules, alone or in combination with other devices. In one embodiment, a software module is implemented with a computer program product comprising a non-transitory computer-readable medium containing computer program code, which can be executed by a computer processor for performing any or all of the steps, operations, or processes described. Further, the functionalities described herein can be performed by a hardware processor or controller located within the robot.
Embodiments of the invention may also relate to an apparatus for performing the operations herein. This apparatus may be specially constructed for the required purposes, and/or it may comprise a general-purpose computing device selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a non-transitory, tangible computer readable storage medium, or any type of media suitable for storing electronic instructions, which may be coupled to a computer system bus. Furthermore, any computing systems referred to in the specification may include a single processor or may be architectures employing multiple processor designs for increased computing capability.
Embodiments of the invention may also relate to a product that is produced by a computing process described herein. Such a product may comprise information resulting from a computing process, where the information is stored on a non-transitory, tangible computer readable storage medium and may include any embodiment of a computer program product or other data combination described herein.
Finally, the language used in the specification has been principally selected for readability and instructional purposes, and it may not have been selected to delineate or circumscribe the inventive subject matter. It is therefore intended that the scope of the invention be limited not by this detailed description, but rather by any claims that issue on an application based hereon. Accordingly, the disclosure of the embodiments of the invention is intended to be illustrative, but not limiting, of the scope of the invention, which is set forth in the following claims.
This application claims the benefit of and priority to U.S. Provisional Application No. 62/663,143, filed Apr. 26, 2018, the contents of which are incorporated by reference herein in their entirety.