This invention relates generally to carwashes. More specifically, at least one embodiment relates to an apparatus, system and method for the self-loading of a car in a carwash.
Tunnel-style car washes move an automobile through a set of operations that clean the exterior of the car. The operations can include exterior washing, undercarriage washing, drying, an application of wax or sealant and wheel cleaning, as some examples. Typically, the car is first placed on a conveyor and then mechanically moved through a fixed path in the tunnel where the equipment that performs the various operations is located. The cleaning operations are performed in a pre-defined sequence as the car is moved adjacent the equipment. The progress of the car through the tunnel is automated and performed independent of the driver, for example, with the car placed in neutral. As a result, a carwash operator is generally not actively involved in the car washing operation once it begins. However, the car must first be properly loaded on the conveyor. Generally, this requires car wash staff to guide the driver to properly align the car and load it on the conveyor. Proper loading facilitates the proper operation of the conveyor.
In a conventional chain and roller type conveyor system, a track is located such that the front and back tires on one side of the car are secured on the track to move the vehicle through the carwash. The conveyor track includes a tapered entry ramp to assist in guiding the front tire onto the track such that the rear tire on the same side of the vehicle follows onto the track once the track engages the front tire. With proper spacing, vehicles are loaded one after another onto the conveyor such that multiple cars can be on the conveyor at different stages of the overall car wash process. Belt driven conveyors allow the same type of multi-vehicle loading. With belt driven conveyors, however, the front tires on the left side and the right side of the car are loaded onto belts that are used to move the vehicle through the cleaning stations in the tunnel.
While the entry ramp provides some margin for error, it often remains a challenge for drivers to properly align the tires with the conveyor. Misalignment can also easily occur with belt driven conveyor systems; for example, a vehicle can be loaded at an angle such that it is partly diagonal as it is moved into the tunnel. Unless the car wash system performs an emergency shutdown, the misalignment can damage the vehicle and/or the car washing system. Where multiple vehicles are on the conveyor when the emergency shutdown occurs, the vehicles are trapped at stationary locations in the tunnel until the system is restarted. Given the preceding problems, otherwise automated tunnel car washing systems continue to require an attendant (i.e., an operator) to assist in guiding drivers when loading their vehicles onto a car wash conveyor.
While cameras have long been used to monitor the operation of tunnel-style car washes, their use has been limited for the loading operation. For example, Chinese patent publication CN 207424636 describes the display of a video to the driver of a vehicle during the loading process. The video shows the position of the vehicle's tire relative to the entrance of the conveyor. However, this approach requires a high degree of focus by the driver as they watch the changes in tire position on the video screen in response to any changes in their steering. It is also known to use a camera for vehicle identification or to determine a vehicle location for washing by a movable robot-style arm.
Because an attendant is generally required to guide the driver loading their vehicle onto the conveyor, car wash systems typically do not display any user (i.e., driver) feedback during operation. The most frequent exception is to provide some form of instruction to place the vehicle in neutral once the vehicle is pulled into the proper position by the driver. Where feedback is provided, it is typically in an analog form or an overly simplistic communication using signal lights or perhaps static left-right directional arrows, see for example, U.S. Pat. No. 8,421,650.
Therefore, there is a need for apparatus, systems and methods that monitor a vehicle's location and display user feedback to allow the driver to effectively align and load their vehicle on a car wash conveyor without the aid of an attendant. According to various embodiments, a fully automated loading process results to allow the driver to self-load their vehicle without aid of car wash personnel. According to some embodiments, dynamic graphical icons are employed to readily convey information concerning an adjustment a driver must make to align their vehicle for loading. In one embodiment, the dynamic graphical icons display non-numeric symbols to convey a magnitude of the adjustment. That is, the dynamic graphical icons can be rendered in a manner that conveys a scale or amount of adjustment needed.
According to one aspect, a method is provided to assist a driver in self-loading a vehicle on a car wash conveyor using feedback concerning a location of the vehicle relative to an ideal loading path. The feedback is displayed to the driver using a graphical display located such that it is visible to the driver while in the vehicle approaching a loading point for the car wash conveyor. According to some embodiments, the method includes mapping an ideal loading path of a vehicle at an entrance to the car wash, the ideal loading path corresponding to a path of a vehicle that is properly aligned on its approach to load the vehicle onto the car wash conveyor, and evaluating images included in at least one video stream of the entrance to determine whether a vehicle is approaching a loading point for the car wash conveyor. According to one embodiment, the vehicle is identified, and images included in the at least one video stream are evaluated to determine an amount of an error between an actual location of the vehicle and the ideal loading path, where the error is determined as a lateral alignment of the vehicle relative to the ideal loading path. Images included in the at least one video stream are evaluated to determine a forward distance the vehicle must travel to reach the loading point, and a plurality of dynamic graphical icons are displayed on the graphical display to illustrate to the driver a first adjustment required to correct for the error and a second adjustment required to drive the vehicle forward to reach the loading point.
According to another aspect, a non-transitory computer-readable medium is provided where the non-transitory computer-readable medium includes computer program instructions executable by at least one computer processor to perform a method to assist a driver in self-loading a vehicle on a car wash conveyor using feedback concerning a location of the vehicle relative to an ideal loading path. The feedback is displayed to the driver using a graphical display located such that it is visible to the driver while in the vehicle approaching a loading point for the car wash conveyor. According to some embodiments, the method includes mapping an ideal loading path of a vehicle at an entrance to the car wash, the ideal loading path corresponding to a path of a vehicle that is properly aligned on its approach to load onto the car wash conveyor, and evaluating images included in at least one video stream of the entrance to determine whether a vehicle is approaching a loading point for the car wash conveyor. According to one embodiment, the vehicle is identified, and images included in the at least one video stream are evaluated to determine an amount of an error between an actual location of the vehicle and the ideal loading path, where the error is determined as a lateral alignment of the vehicle relative to the ideal loading path. Images included in the at least one video stream are evaluated to determine a forward distance the vehicle must travel to reach the loading point, and a plurality of dynamic graphical icons are displayed on the graphical display to illustrate to the driver a first adjustment required to correct for the error and a second adjustment required to drive the vehicle forward to reach the loading point.
As used herein, the term “dynamic graphical icon” refers to a graphics element rendered in a display that can operate to change characteristics in a manner that conveys information to a viewer of the icon. For example, depending on the embodiment, the dynamic graphical icon can vary in size, shape, color, location within a display or any combination of the preceding. These changes to the dynamic graphical icon include a temporal element because the changes occur over time while the dynamic graphical icon is viewed by the viewer. Depending on the changes being represented by the dynamic graphical icon and the circumstances, a change in characteristics can range from those that appear as rapid changes to those that appear as gradual changes. One of ordinary skill in the art will recognize in view of the disclosure provided herein that icons found on the desktop of a PC or the home screen of a mobile phone (typically representative of software applications) are not dynamic graphical icons because they have a static appearance and location in the graphical user interface.
As used herein, the term “loading point” refers to a location on either a chain and roller conveyor or a belt driven conveyor at which the vehicle that is being loaded must be located before the conveyor is started. While referred to as “point” those of ordinary skill in the art will understand based on the disclosure provided herein that the loading point includes a region that allows for a limited amount of variance in an at rest position of the vehicle before the conveyor is started.
The accompanying drawings are not intended to be drawn to scale. In the drawings, each identical or nearly identical component that is illustrated in various figures is represented by a like numeral. For purposes of clarity, not every component may be labeled in every drawing. In the drawings:
This invention is not limited in its application to the details of construction and the arrangement of components set forth in the following description or illustrated in the drawings. The invention is capable of other embodiments and of being practiced or of being carried out in various ways. Also, the phraseology and terminology used herein is for the purpose of description and should not be regarded as limiting. The use of “including,” “comprising,” or “having,” “containing”, “involving”, and variations thereof herein, is meant to encompass the items listed thereafter and equivalents thereof as well as additional items.
Referring to
In general, the control unit 102 allows a car wash operator and/or a system administrator, who can be either local or remote from the site of the car wash, to monitor and control an operation of the self-loading car wash system 100. According to some embodiments, the control unit 102 is a processing device such as a personal computer. Depending on the embodiment, the control unit can include a desktop computer, a laptop computer or a portable electronic device such as a tablet computer, a hand-held computer or a combination of any two or more of the preceding processing devices and/or other processing devices. Further, the control unit 102 can include cloud-based resources hosted on remote servers accessed over the Internet. Each of the preceding embodiments may be employed provided that the image processing and driver feedback occur rapidly enough to allow the driver to make corrections without having to back the vehicle up. In some embodiments, the image processing and associated feedback provided to the driver (for example, via display of dynamic graphical icons) is provided in near real-time, i.e., with a delay of approximately one second. In another embodiment, the image processing and associated feedback is provided to the driver in substantially real time, i.e., with no discernible delay.
The image processing system 112 can include one or more algorithms that evaluate a series of images provided by one or more video streams to determine: a) whether a vehicle is approaching a loading point for the car wash conveyor; b) an amount of error between an actual location of the vehicle and a loading path that properly aligns vehicles for loading; and c) a forward distance the vehicle must travel to reach the loading point from its current location. According to some embodiments, the image processing system 112 employs edge detection to identify vehicles in a video stream and determine a respective centroid for each of the vehicles. The image processing system 112 tracks the loading path of the centroid of each vehicle for comparison with the loading path of a centroid corresponding to a vehicle that is properly aligned with the car wash conveyor system. With this information, the image processing system 112 determines a direction and magnitude of any misalignment between the loading path of a vehicle that is being loaded on the conveyor and the ideal loading path for loading.
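The centroid computation and lateral comparison described above can be illustrated with a minimal sketch. The code below is not from the disclosure; it assumes the vehicle has already been segmented into a set of foreground pixel coordinates (for example, by the edge detection step), and the sign convention for the error is likewise an assumption:

```python
# Hypothetical illustration of the centroid step: compute a vehicle's
# centroid from foreground pixel coordinates produced by segmentation.
def centroid(pixels):
    """pixels: iterable of (x, y) image coordinates belonging to one vehicle."""
    pts = list(pixels)
    n = len(pts)
    cx = sum(x for x, _ in pts) / n
    cy = sum(y for _, y in pts) / n
    return cx, cy

def lateral_error(vehicle_cx, loading_path_x):
    """Signed lateral misalignment in pixels. By the convention assumed
    here, a positive value means the vehicle sits to the right of the
    ideal loading path (so the driver would steer left)."""
    return vehicle_cx - loading_path_x
```

A usage sketch: `lateral_error(centroid(mask)[0], path_x)` would yield the signed pixel deviation that drives the direction and magnitude of the displayed instruction.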
Depending on the embodiment, the image processing system 112 can be implemented in software, hardware or firmware or any combination thereof. Accordingly, any of the embodiments described herein can provide the image processing algorithms included in the system 112 in the form of a non-transitory computer readable medium in which instructions are stored that when executed by a processing system implement the image processing described herein. Depending on the embodiment, the image processing system 112 can be executed by a central processing unit such as the processor 116 and/or a more specialized processor such as a graphics processing unit (GPU). Further, aspects of the image processing system 112 can be implemented with a specially-programmed, special purpose hardware, for example, an application-specific integrated circuit (ASIC).
The display 114 provides an operator of the control unit 102 with the display of information that is employed to review the video streams and results of image processing in real time as described in greater detail with reference to
The processor 116 provides a central processing unit for the control unit 102. The processor can perform operations that include image processing. Further, the processor 116 can include a single processor, multiple processors and specialized processors such as a GPU depending on the embodiment. In general, the processor 116 operates to control the overall operation of the control unit 102, for example, the operation of the hardware elements included in the control unit, reading and writing information to the memory and execution of software instructions by the control unit 102.
The power source 118 can include a converter for converting AC power to DC power for operation of the elements of the control unit 102. In some embodiments (for example, where the control unit 102 includes a laptop computer), the power source 118 includes one or more batteries, for example, lithium or alkaline batteries integral to the control unit 102. In these embodiments, the power source 118 can include a replaceable power source or a rechargeable power source. Where a rechargeable power source is employed, the power source 118 can include recharging circuitry to regulate charging operations. Accordingly, the recharging circuitry can include a wired electrical connection available from an exterior of the control unit 102.
The memory 120 is configured to store software instructions 121 in accordance with various embodiments. The software instructions can include one or more algorithms or other programs, for example, algorithms for determining whether a vehicle is approaching the loading point, determining any error in a vehicle's approach to the loading point, determining a forward progress of a vehicle relative to the loading point and controlling the generation and display of dynamic graphical icons used to provide instructions to the operator of a vehicle approaching the loading zone. In one embodiment, the memory 120 is included in the processor 116. In another embodiment, the memory 120 includes memory internal to the processor 116 and memory external to the processor 116.
Depending on the embodiment, the communication system 122 can include wired or wireless communication. The communication system 122 can communicate via any of local-area networks (LANs), wide area networks (WANs), wireless communication and wired communication, and may include the Internet. According to a further embodiment, the communication system 122 provides access “over-the-cloud” to one or more remote resources such as servers, applications and/or data storage systems. Communication can occur using any of Wi-Fi networks, BLUETOOTH communication, cellular networks and satellite communication. Other communication protocols and topologies can also be implemented in accordance with various embodiments. The communication system 122 can be included as a standalone element or included in the processor 116 depending on the embodiment.
The user interface 124 can include any one of or any combination of a keyboard, a mouse, a joy stick, voice recognition and the display 114 (for example, a graphical user interface rendered in the display). According to some embodiments, the user interface includes speakers to provide audio notifications or alerts. In general, the user interface 124 allows a car wash operator and/or a system administrator (whether local or remote) to monitor and control an operation of the self-loading car wash system 100.
In various embodiments, the control unit 102 can monitor and control an operation of the first camera 104, the second camera 106, the graphical display 108 and the conveyor controller 110. The communication between the control unit 102 and the preceding system elements can include a communication of any of analog signals, digital signals, video signals, audio signals and graphics data.
According to one embodiment, each of the first camera 104 and the second camera 106 is installed in the vicinity of the loading area which can include the loading point (i.e., the point at which the vehicle is located at the correct position to start the conveyor) and the approach to the loading point (i.e., the area immediately adjacent the loading point through which a vehicle travels as it is steered by the driver toward the conveyor). In some embodiments, two cameras are employed to allow a first of the two cameras (for example, the first camera 104) to record video of the approach while a second of the two cameras (for example, the second camera 106) is focused on the loading point and pointed in a direction facing into the car wash tunnel. According to these embodiments, the video signal provided by the first camera 104 is processed by the control unit 102 to determine a lateral alignment of the vehicle on approach to the loading point. In these embodiments, the video signal provided by the second camera 106 is processed by the control unit 102 to determine how much further the vehicle must travel to reach the loading point. According to other embodiments, three or more cameras are employed, for example, to add to any of the functionality, the precision and/or the speed of operation of the system 100. The cameras can provide conventional video or HD video depending on the embodiment. In various embodiments, the cameras are configured to decrease the stream delay and optimize performance. In a preferred embodiment, the cameras 104, 106 are color HD cameras with a frame rate of at least fifteen frames per second. According to one embodiment, an IP camera from Axis Communications is employed.
The graphical display unit 108 is located in the loading area within the line of sight of a driver of a vehicle as the vehicle approaches the loading point. In general, the operation of the graphical display unit 108 provides a set of dynamic graphical icons that convey instructions to the driver of the vehicle to guide them through the self-loading of their vehicle on the car wash conveyor. The graphical display unit 108 can communicate other information to the driver, for example, a graphic including text and/or images that identifies the type of car wash service that the driver has requested, an amount paid for the service and text instructions concerning the self-loading operation. According to some embodiments, the graphical display unit 108 is included in a console or cabinet that includes other hardware, for example, a speaker to provide audio information to the driver, a microphone to allow the driver to communicate with car wash staff, video hardware including a video monitor and/or a camera and control systems employed to control an operation of the conveyor. According to one embodiment, one or more functions and/or elements described with reference to the control unit 102 are included in the console with the graphical display unit 108.
In general, the conveyor controller 110 includes an electromechanical device that operates to start and stop the car wash conveyor. According to one embodiment, the conveyor controller includes a relay that operates to engage rollers to start the car wash conveyor when a vehicle reaches the loading point.
While the self-loading car wash system 100 is described with a control unit 102 that is remote from the graphical display 108, all or a portion of the control unit 102 can be installed at the same location (for example, in a single enclosure). For example, a separate standalone control unit 102 may not be employed. Instead, a system including the image processing system 112, the processor 116, the power source 118, the memory 120 and the communication system 122 can be included with the graphical display 108 as one control and display unit in a single enclosure. According to this embodiment, each of the display 114 and the user interface 124 may optionally be included in the system. Further, these embodiments can allow an operator, developer or system administrator to remotely access the system 100, for example, via a LAN, wireless communication system and/or the Internet.
According to the illustrated embodiment, the vehicle 228 is located at the loading point 236. The first camera 204 located forward of the vehicle captures video of the vehicle's approach to the loading point 236 to determine whether the vehicle 228 is properly aligned with the chain and roller conveyor 230. That is, the video feed from the first camera 204 is processed to determine whether the vehicle 228 should be steered straight ahead and/or left or right to properly align the vehicle 228 with the chain and roller conveyor 230. A second camera 206 located rear of the vehicle 228 captures video of the loading point 236 to determine whether the vehicle 228 is located far enough forward to reach the loading point 236. Generally, the driver places the vehicle 228 in neutral when the loading point 236 is reached. The video stream captured by the second camera 206 can also be processed to determine whether the driver is stepping on the brake. For example, to keep the vehicle 228 from jumping off the conveyor during operation, it is important to monitor the status of the vehicle's brakes.
Referring now to
The relay 237 is also located in the vicinity of the chain and roller conveyor 230. The relay 237 can provide an electronic, electrical or electromechanical operation to signal an operation of the roller and the start of the conveyor 230. For example, where an electromechanical relay is used, an electrical signal energizes a coil included in the relay when the vehicle 228 reaches the loading point 236. The relay 237 operates to change the state of electrical contacts to supply power to an operator for the mechanical rollers. The mechanical roller activates to rise up and engage the back of a rear tire such that the car is moved forward with the operation of the conveyor 230. According to some embodiments, the relay 237 can also operate to stop the conveyor 230, for example, during either normal operation when the vehicle 228 exits the conveyor 230 or on an emergency basis if there is a problem in the operation. In general, the relay 237 can provide logic to communicate a wide variety of commands to the car wash system. For example, the relay 237 can output signals that provide information for the type of car wash selected by the driver of a particular vehicle.
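The relay behavior described above can be summarized in a short software sketch. The patent describes an electromechanical relay, not code, so the class and method names below are purely illustrative assumptions:

```python
# Hedged sketch of the relay 237 logic: energizing the coil closes the
# contacts, powering the roller actuator so it rises and engages the
# rear tire; de-energizing stops the conveyor. Names are assumptions.
class ConveyorRelay:
    def __init__(self):
        self.energized = False   # coil state
        self.running = False     # conveyor motor state

    def vehicle_at_loading_point(self):
        # Vehicle reached the loading point: energize the coil,
        # engage the roller and start the conveyor.
        self.energized = True
        self.running = True

    def stop(self):
        # Normal exit or emergency shutdown: drop the coil, stop the motor.
        self.energized = False
        self.running = False
```

In a real installation this logic would sit in the programmable controller that drives the physical relay coil; the sketch only captures the two states the passage describes.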
Various elements of the system 100 illustrated in
Referring now to
The video frames illustrated and described with reference to
In general,
The overall image processing operation includes an identification of an ideal loading path in the video frame. The ideal loading path is the loading path that properly aligns an approaching vehicle with the loading ramp 446. According to the embodiment illustrated in
The central-location reference is determined by the image processing system as the center of the centroid associated with each vehicle, respectively. A location of the central-location reference is also known to the image processing system at the pixel-coordinate level. The location of the centroid moves forward with the vehicle as the vehicle approaches the loading ramp 446. As a result, the central-location reference also moves forward. As the vehicle moves forward, the image processing system tracks the central-location reference and compares that location against the loading-path reference 448. Where a difference exists, the system operates to display instructions to the vehicle operator to indicate the direction that the vehicle must be steered to properly align the vehicle with the loading ramp.
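The frame-to-frame comparison of the central-location reference against the loading-path reference 448 might be sketched as follows; the moving-average smoothing and the window size are assumptions added for illustration, so that a single noisy frame does not flip the displayed instruction:

```python
# Illustrative sketch: track a vehicle's central-location reference
# across successive frames and compare it against the loading-path
# reference. The smoothing window is an assumption, not disclosed.
from collections import deque

class CentroidTrack:
    def __init__(self, path_x, window=5):
        self.path_x = path_x                 # loading-path reference (pixel x)
        self.history = deque(maxlen=window)  # recent centroid x positions

    def update(self, centroid_x):
        """Record the latest centroid x and return the signed lateral
        difference between the smoothed position and the path."""
        self.history.append(centroid_x)
        avg_x = sum(self.history) / len(self.history)
        return avg_x - self.path_x
```

One track per detected vehicle would be maintained, keyed by the unique vehicle ID assigned during detection.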
In
With the first vehicle 442 properly aligned on the conveyor, the image processing system transitions to process image data concerning a location of the second vehicle 444 relative to the loading-path reference 448. In general, the image processing system operates to track a position of the vehicle closest to the loading point that has not passed the required position checks while also displaying instructions for the driver of that vehicle on the graphical display 108.
Referring now to
In various embodiments, the system 100, 200, 300 also determines the forward progress of the vehicle onto the conveyor as a second type of position check. Referring now to
Referring to
Referring now to
The process 600 begins with an act of locating vehicles on the video stream and defining the edges of the vehicles in the images in the video stream 662. The act 662 and others of the acts included in the process 600 are directed to determining whether the vehicle has reached the loading point. As a result, these acts are rapidly repeated, in one embodiment, at a rate of 25 times per second. According to one embodiment, the act 662 includes the act of grabbing the next frame from the video stream to identify any vehicles in the frame and to define the edges of those vehicles. From the act 662, the process moves to the act of finding a center for any of the vehicles identified in the video stream and assigning a unique ID to each vehicle, respectively 664. With the vehicles identified, an act of comparing pixel coordinates of the center of the vehicle-object to the coordinates of the ideal loading path 665 is performed. The process next performs an act of determining whether the license plate on the vehicle is available for recognition in the video stream 666. If the license plate is unavailable, the process moves to an act of determining whether the center of the vehicle is positioned on the ideal loading path 667. However, if the license plate is available for recognition, the process moves to the act of recognizing the license plate and assigning a wash type to the vehicle 668 before proceeding to the act of determining whether the center of the vehicle is positioned on the ideal loading path 667.
The act of determining whether the center of the vehicle is positioned on the ideal loading path 667 continues for the vehicle closest to the loading ramp until the vehicle successfully reaches the loading point. During this time, if the vehicle center is not positioned on the ideal loading path, the process 600 moves to an act of calculating a deviation between the center of the vehicle and the ideal loading path 669. To provide the driver with assistance, the process 600 determines both the magnitude and direction of any deviation/error. The process moves to an act of determining whether the calculated value is a negative value 670. If the calculated value is positive rather than negative, the process moves to an act of adjusting the graphical display to display a symbol pointing in a left direction with a length that is proportional to the magnitude of the deviation 672. If the calculated value is negative, the process moves to an act of adjusting the graphical display to display a symbol pointing in a right direction with a length that is proportional to the magnitude of the deviation 674. That is, a dynamic graphical icon is displayed in a manner that indicates both the direction and the amount of adjustment that is required. According to some embodiments, the preceding is accomplished without the need to display any text or numerical information. Following each of the acts 672, 674, the process moves to an act of calculating a distance the vehicle is from the loading point 676 and adjusting the graphical display to display a symbol representative of a magnitude of the distance to the loading point 676. The process 600 evaluates the position of the center of the vehicle at an act of determining whether the vehicle has reached the loading point 678. If the vehicle has not reached the loading point, the process returns to the act of locating vehicles on the video stream and defining the edges of the vehicles in the images in the video stream 662.
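The per-frame decision logic of acts 669 through 680 can be condensed into a hypothetical sketch. The Display stub, the simple pixel geometry and the sign convention (a positive deviation yields the left-pointing symbol, mirroring acts 670 through 674) are illustrative assumptions, not code disclosed herein:

```python
# Hypothetical sketch of one pass of process 600 for the vehicle
# closest to the loading point. All names and geometry are assumptions.
class Display:
    def __init__(self):
        self.messages = []
    def show(self, msg):
        self.messages.append(msg)

def guide_vehicle(center, ideal_path_x, loading_point_y, display):
    """center: (x, y) pixel coordinates of the vehicle's centroid.
    Returns True once the vehicle has reached the loading point."""
    cx, cy = center
    deviation = cx - ideal_path_x            # act 669: lateral error
    if deviation > 0:                        # acts 670/672: steer left
        display.show(("left", abs(deviation)))
    elif deviation < 0:                      # act 674: steer right
        display.show(("right", abs(deviation)))
    distance = loading_point_y - cy          # act 676: forward distance
    if distance <= 0:                        # act 678: loading point reached
        display.show("stop")                 # act 680: stop notification
        return True
    display.show(("forward", distance))
    return False
```

The arrow length shown to the driver would be proportional to the magnitude carried in each tuple, keeping the feedback non-numeric.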
If it is determined that the vehicle has reached the loading point at the act 678, the process moves to an act of updating the graphical display to display a stop notification to the driver 680. Following an act of displaying a stop notification 680, the process 600 moves to an act of updating the graphical display to display notifications to place the vehicle in neutral and keep off the brake 681. The process 600 includes an act of monitoring whether the vehicle is in neutral 682. If the vehicle is not in neutral, the process 600 returns to the act of displaying a notification to place the vehicle in neutral and keep off the brake 681. If the vehicle is in neutral, the process 600 moves to an act of activating the conveyor to engage the vehicle with the conveyor and begin the washing operation 683. According to the illustrated embodiment, the act 683 also includes updating the graphical display to display a notification that thanks the driver.
The process 600 monitors whether the vehicle is secure in the conveyor track at an act of determining whether the vehicle has jumped the conveyor track 684. The car washing operation for the vehicle continues and the process moves to an act of entering the unique vehicle ID into a “washed” list 686 if the system determines that the vehicle has not jumped the conveyor track at the act 684. According to the illustrated embodiment, the act 684 also includes ignoring the vehicle where it appears in future video frames because the vehicle is successfully located on the conveyor. The process 600 continues by returning to the act of locating vehicles on the video stream and defining the edges of the vehicles in the images in the video stream 662. The process 600 then proceeds with the preceding steps for the vehicle that is closest to the loading point of the conveyor. However, if the preceding vehicle is identified as having jumped the track at the act of determining whether the vehicle has jumped the conveyor track 684, the process 600 moves to an act of implementing an emergency shutdown of the conveyor. According to the illustrated embodiment, the act 684 also includes providing a notification to an individual responsible for operation of the car wash.
Embodiments of the various self-loading car wash systems 100, 200, 300 described herein employ dynamic graphical icons to communicate information to assist a driver in guiding their vehicle to the loading point on the car wash conveyor system.
In general, the state of the graphics display 788 is presented at a first point in time as illustrated in the corresponding figure.
As previously indicated, the image processing system 112 can operate to identify any deviation between the actual travel path of the vehicle and the ideal travel path. According to the illustrated embodiment, the right-directional graphics element 792 and the left-directional graphics element 793 communicate information concerning a direction and magnitude of any correction required by the driver to steer the vehicle onto the loading conveyor. In various embodiments, the information is communicated in a dynamic manner that does not include any numeric information, or in some embodiments, does not include any alpha-numeric information.
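One way to convey direction and magnitude without displaying numbers is to map the signed deviation from the ideal travel path to a number of lit segments on the appropriate directional element. The offset units, segment count, and sign convention below are assumptions for illustration:

```python
def correction_icons(lateral_offset_ft, max_offset_ft=3.0, segments=5):
    """Map a signed lateral deviation from the ideal travel path to a
    non-numeric icon state: which directional element to light and how
    many of its segments, so the driver sees direction and magnitude
    without any displayed digits.

    Assumed convention: a positive offset means the vehicle is left of
    the ideal path, so the right-directional element 792 is lit; a
    negative offset lights the left-directional element 793.
    """
    if abs(lateral_offset_ft) < 1e-6:
        return {"right": 0, "left": 0}          # on the ideal path
    magnitude = min(abs(lateral_offset_ft) / max_offset_ft, 1.0)
    lit = max(1, round(magnitude * segments))   # at least one segment lit
    if lateral_offset_ft > 0:
        return {"right": lit, "left": 0}
    return {"right": 0, "left": lit}
```

Clamping the magnitude and always lighting at least one segment keeps the display meaningful for both very small and very large deviations.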
In further embodiments, the image processing system 112 is employed to detect various accessories included on the vehicle that might otherwise be damaged during a standard car washing operation. According to these embodiments, the image processing system 112 identifies the existence of objects such as bike racks, roof racks, back racks and the like that expand the overall profile of the vehicle. An identification of any of these items associated with a vehicle entering the loading area allows the system 100 to modify an operation of the car wash to accommodate the vehicle (including accessories) without damage. For example, the car wash can retract specific brushes based on the video image detection where a standard operation of the brush would otherwise damage the accessory and/or vehicle.
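The accessory-to-equipment mapping described above can be sketched as a simple lookup. The accessory class names and brush identifiers below are hypothetical and are not drawn from the disclosure:

```python
# Hypothetical mapping from detected accessory classes to the wash
# equipment that must be retracted to avoid damage.
ACCESSORY_RETRACTIONS = {
    "bike_rack": {"rear_brush"},
    "roof_rack": {"top_brush"},
    "back_rack": {"rear_brush"},
}

def brushes_to_retract(detected_accessories):
    """Return the union of brushes to retract for all accessories the
    image processing system detected on the entering vehicle. Unknown
    accessory classes contribute nothing (fail-safe default would be a
    design decision for a real system)."""
    retract = set()
    for accessory in detected_accessories:
        retract |= ACCESSORY_RETRACTIONS.get(accessory, set())
    return retract
```

Taking the union over all detections means a vehicle with both a roof rack and a bike rack retracts every brush either accessory requires.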
The techniques described above may be implemented, for example, in hardware, one or more computer programs tangibly stored on one or more computer-readable media, firmware, or any combination thereof. The techniques described above may be implemented in one or more computer programs executing on (or executable by) a programmable computer including any combination of any number of the following: a processor, a storage medium readable and/or writable by the processor (including, for example, volatile and non-volatile memory and/or storage elements), an input device, and an output device. Program code may be applied to input entered using the input device to perform the functions described and to generate output using the output device.
Embodiments of the present invention include features which are only possible and/or feasible to implement with the use of one or more computers, computer processors, and/or other elements of a computer system. Such features are either impossible or impractical to implement mentally and/or manually. Mapping pixel coordinates in a video frame and rendering dynamic graphical icons in a display provide two such examples.
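As one concrete illustration of the pixel-coordinate mapping mentioned above, a calibrated planar homography can map a pixel in a video frame to a point on the ground plane of the loading area. The function and the 3x3 matrix representation below are a minimal sketch, assuming a homography has already been obtained by camera calibration:

```python
def apply_homography(H, px, py):
    """Map a pixel coordinate (px, py) in a video frame to ground-plane
    coordinates using a 3x3 homography H given as nested lists.

    Applies H to the homogeneous pixel vector (px, py, 1) and
    dehomogenizes by the resulting w component.
    """
    x = H[0][0] * px + H[0][1] * py + H[0][2]
    y = H[1][0] * px + H[1][1] * py + H[1][2]
    w = H[2][0] * px + H[2][1] * py + H[2][2]
    return x / w, y / w
```

With the identity matrix the mapping is a no-op; a real deployment would estimate H from known reference points on the conveyor and loading area.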
Each computer program within the scope of the claims below may be implemented in a computer program product tangibly embodied in a machine-readable storage device for execution by a computer processor. Method steps of the invention may be performed by one or more computer processors executing a program tangibly embodied on a computer-readable medium to perform functions of the invention by operating on input and generating output. Suitable processors include, by way of example, both general and special purpose microprocessors.
Any step or act disclosed herein as being performed, or capable of being performed, by a computer or other machine, may be performed automatically by a computer or other machine, whether or not explicitly disclosed as such herein. A step or act that is performed automatically is performed solely by a computer or other machine, without human intervention. A step or act that is performed automatically may, for example, operate solely on inputs received from a computer or other machine, and not from a human. A step or act that is performed automatically may, for example, be initiated by a signal received from a computer or other machine, and not from a human. A step or act that is performed automatically may, for example, provide output to a computer or other machine, and not to a human.
Having thus described several aspects of at least one embodiment of this invention, it is to be appreciated that various alterations, modifications, and improvements will readily occur to those skilled in the art. Such alterations, modifications, and improvements are intended to be part of this disclosure, and are intended to be within the spirit and scope of the invention. Accordingly, the foregoing description and drawings are by way of example only.
| Number | Date | Country |
|---|---|---|
| 63080167 | Sep 2020 | US |