The present disclosure relates generally to autonomous vehicles. More particularly, the present disclosure is related to a system for intrinsic calibration of cameras in autonomous vehicles.
One aim of autonomous vehicle technologies is to provide vehicles that can safely navigate towards a destination with limited or no driver assistance. The safe navigation of an autonomous vehicle (AV) from one point to another may include the ability to signal other vehicles, navigate around other vehicles in shoulders or emergency lanes, change lanes, bias appropriately in a lane, and navigate all portions or types of highway lanes. Autonomous vehicle technologies may enable an AV to operate without requiring extensive learning or training by surrounding drivers, by ensuring that the AV can operate safely, in a way that is evident, logical, or familiar to surrounding drivers and pedestrians.
An AV may include multiple cameras mounted on it for several purposes including security purposes, driving aid, or facilitating autonomous driving. Cameras mounted on an AV can obtain images from surrounding areas of the AV. These images can be processed to obtain information about the road or the objects surrounding the AV, which can be further used to safely maneuver the AV through traffic or on a highway.
Images obtained by the cameras mounted on an AV, however, may often be distorted due to inaccuracies in camera intrinsic parameters. Examples of camera intrinsic parameter inaccuracies include focal length inaccuracies, lens center misalignment, imperfect lens shape causing image radial distortion, and/or other camera intrinsic parameter inaccuracies. Thus, an intrinsic calibration is needed to correct the inaccuracies in the intrinsic parameters of the cameras before the cameras are used in AVs. Conventional intrinsic calibration techniques consist of mounting cameras on the AV and placing calibration boards around the AV, each board at a different angle. In conventional calibration systems, the AV is placed on a rotating platform in order to capture images of the calibration boards at different angles. This approach is time-consuming and expensive. Therefore, prior art solutions for intrinsic camera calibration for AVs are not entirely satisfactory.
Images obtained by AV cameras may often be distorted due to inaccuracies in the intrinsic parameters of the cameras. Systems, apparatus, and methods are provided to calibrate intrinsic parameters of multiple cameras in an AV simultaneously before the cameras are mounted on the AV. A calibration target and a camera holding jig are moved in a systematic, predictable, and repeatable manner for intrinsic camera calibration.
In an example embodiment, intrinsic parameter calibration for one or more cameras using one or more calibration targets is performed before the one or more cameras are mounted on an AV. The one or more cameras may be mounted on a camera holding jig mounted on a support frame. The support frame may comprise horizontal and vertical rails used to hold the camera holding jig and facilitate movements of the camera holding jig. In one example, the camera holding jig and the support frame are connected to a processing unit used to control movements of the camera holding jig and the support frame. The support frame may be a linear guide, a linear actuator, a rotary actuator support, and/or any other type of support frame. In some embodiments, the processing unit is configured to send a moving command signal to the camera holding jig such that the camera holding jig moves along a horizontal rail or a vertical rail for a predetermined distance towards a predetermined direction. The camera holding jig may comprise a set of roller bearings configured to reduce the coefficient of friction during its movement.
In another embodiment, a total number of desired calibration target images is predetermined for camera intrinsic parameter calibration. In one example, each of the desired calibration target images corresponds to a desired distance from a camera and a desired angle of the calibration target. The camera holding jig and the one or more calibration targets are then moved for a predetermined number of steps, wherein in each step, the camera captures images of the one or more calibration targets corresponding to the desired distances and the desired angles.
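The capture plan described above, one desired image per combination of distance and angle, can be sketched as follows in Python. The function name and the pose dictionary keys are illustrative assumptions rather than part of the disclosed system.

```python
from itertools import product

def build_capture_schedule(distances_m, angles_deg):
    """Enumerate every desired (distance, angle) pose for a calibration target.

    Each pose corresponds to one desired calibration target image; the total
    number of steps is therefore len(distances_m) * len(angles_deg).
    """
    return [{"distance_m": d, "angle_deg": a}
            for d, a in product(distances_m, angles_deg)]

# Example: 3 jig positions x 4 target angles -> 12 desired images per camera.
schedule = build_capture_schedule([1.0, 2.0, 3.0], [0, 15, 30, 45])
```

In this sketch the predetermined number of steps is simply the length of the schedule, and each entry carries the desired distance and angle for one capture.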
In yet another embodiment, intrinsic camera calibration is performed at the processing unit using the captured calibration target images and a reference calibration target image. In one example, intrinsic camera calibration is performed by first identifying a set of calibration control points on the captured calibration target images. The set of calibration control points may be detected using a corner detection operator, an iterative subpixel localization method with a gradient-based search, or a maximally stable extremal region method combined with an ellipse fitting method and a nearest neighbors method. The detected set of calibration control points may then be used to estimate camera intrinsic parameters using an iterative refinement approach by undistorting the captured calibration target images to a canonical image pattern and re-estimating the camera intrinsic parameters in each iteration until convergence is reached.
For a more complete understanding of this disclosure, reference is now made to the following brief description, taken in connection with the accompanying drawings and detailed description, wherein like reference numerals represent like parts.
The example headings for the various sections below are used to facilitate the understanding of the disclosed subject matter and do not limit the scope of the claimed subject matter in any way. Accordingly, one or more features of one example section can be combined with one or more features of another example section.
Vehicle sensor subsystems 144 can include sensors for general operation of the autonomous vehicle 105, including those which would indicate a malfunction in the AV or another cause for an AV to perform a limited or minimal risk condition (MRC) maneuver. The sensors for general operation of the autonomous vehicle may include cameras, a temperature sensor, an inertial sensor (IMU), a global positioning system, a light sensor, a LIDAR system, a radar system, and wireless communications.
A sound detection array, such as a microphone or array of microphones, may be included in the vehicle sensor subsystem 144. The microphones of the sound detection array are configured to receive audio indications of the presence of, or instructions from, authorities, including sirens and commands such as “Pull over.” These microphones are mounted, or located, on the external portion of the vehicle, specifically on the outside of the tractor portion of an autonomous vehicle 105. Microphones used may be of any suitable type, mounted such that they are effective both when the autonomous vehicle 105 is at rest and when it is moving at normal driving speeds.
Cameras included in the vehicle sensor subsystems 144 may be rear-facing so that flashing lights from emergency vehicles may be observed from all around the autonomous vehicle 105. These cameras may include video cameras, cameras with filters for specific wavelengths, as well as any other cameras suitable to detect emergency vehicle lights based on color, flashing, or both color and flashing.
The vehicle control subsystem 146 may be configured to control operation of the autonomous vehicle 105 and its components. Accordingly, the vehicle control subsystem 146 may include various elements such as an engine power output subsystem, a brake unit, a navigation unit, a steering system, and an autonomous control unit. The engine power output may control the operation of the engine, including the torque produced or horsepower provided, as well as provide control of the gear selection of the transmission. The brake unit can include any combination of mechanisms configured to decelerate the autonomous vehicle 105. The brake unit can use friction to slow the wheels in a standard manner. The brake unit may include an Anti-lock brake system (ABS) that can prevent the brakes from locking up when the brakes are applied. The navigation unit may be any system configured to determine a driving path or route for the autonomous vehicle 105. The navigation unit may additionally be configured to update the driving path dynamically while the autonomous vehicle 105 is in operation. In some embodiments, the navigation unit may be configured to incorporate data from the GPS device and one or more predetermined maps so as to determine the driving path for the autonomous vehicle 105. The steering system may represent any combination of mechanisms that may be operable to adjust the heading of the autonomous vehicle 105 in an autonomous mode or in a driver-controlled mode.
The autonomous control unit may represent a control system configured to identify, evaluate, and avoid or otherwise negotiate potential obstacles in the environment of the autonomous vehicle 105. In general, the autonomous control unit may be configured to control the autonomous vehicle 105 for operation without a driver or to provide driver assistance in controlling the autonomous vehicle 105. In some embodiments, the autonomous control unit may be configured to incorporate data from the GPS device, the RADAR, the LiDAR (i.e. LIDAR), the cameras, and/or other vehicle subsystems to determine the driving path or trajectory for the autonomous vehicle 105. The autonomous control unit may activate systems of the autonomous vehicle 105 that are not present in a conventional vehicle, including those systems which can allow an autonomous vehicle to communicate with surrounding drivers or signal surrounding vehicles or drivers for safe operation of the autonomous vehicle.
An in-vehicle control computer 150, which may be referred to as a VCU, includes a vehicle subsystem interface 160, a driving operation module 168, one or more processors 170, a compliance module 166, a memory 175, and a network communications subsystem 178. This in-vehicle control computer 150 controls many, if not all, of the operations of the autonomous vehicle 105 in response to information from the various vehicle subsystems 140. The one or more processors 170 execute the operations that allow the system to determine the health of the autonomous vehicle, such as whether the autonomous vehicle has a malfunction or has encountered a situation requiring service or a deviation from normal operation, and to give corresponding instructions. Data from the vehicle sensor subsystems 144 is provided to VCU 150 so that the determination of the status of the autonomous vehicle can be made. The compliance module 166 determines what action should be taken by the autonomous vehicle 105 to operate according to the applicable (i.e. local) regulations. Data from other vehicle sensor subsystems 144 may be provided to the compliance module 166 so that the best course of action in light of the AV's status may be appropriately determined and performed. Alternatively, or additionally, the compliance module 166 may determine the course of action in conjunction with another operational or control module, such as the driving operation module 168.
The memory 175 may contain additional instructions as well, including instructions to transmit data to, receive data from, interact with, or control one or more of the vehicle drive subsystem 142, the vehicle sensor subsystem 144, and the vehicle control subsystem 146, including the autonomous control system. The in-vehicle control computer (VCU) 150 may control the function of the autonomous vehicle 105 based on inputs received from various vehicle subsystems (e.g., the vehicle drive subsystem 142, the vehicle sensor subsystem 144, and the vehicle control subsystem 146). Additionally, the VCU 150 may send information to the vehicle control subsystems 146 to direct the trajectory, velocity, signaling behaviors, and the like, of the autonomous vehicle 105. The autonomous control unit of the vehicle control subsystem 146 may receive a course of action to be taken from the compliance module 166 of the VCU 150 and consequently relay instructions to other subsystems to execute the course of action.
As shown in
It should be understood that the specific order or hierarchy of steps in the processes disclosed herein is an illustration of exemplary approaches. Based upon design preferences, it is understood that the specific order or hierarchy of steps in the processes may be rearranged while remaining within the scope of the present disclosure. The accompanying method claims present elements of the various steps in a sample order, and are not meant to be limited to the specific order or hierarchy presented.
Intrinsic Camera Calibration System
The support frame 302 may be referred to as a support device used to carry loads and ensure stable motion. Examples of a support frame 302 include linear guide support, linear actuator support, rotary actuator support, and/or any other types of support frames. In some embodiments, the support frame 302 comprises one or more horizontal rails 302a-n and one or more vertical rails 302a‘-n’ for supporting and guiding movements of the camera holding jig 304. In one example, the processing unit 310 is configured to send a moving command signal to the camera holding jig 304 such that the camera holding jig 304 moves along the one or more horizontal rails 302a-n or the one or more vertical rails 302a‘-n’ for a predetermined distance towards a predetermined direction. In another example, the camera holding jig 304 is configured to slide over the one or more horizontal rails 302a-n or the one or more vertical rails 302a‘-n’ with the aid of a lubricant. In yet another example, the camera holding jig 304 comprises a set of roller bearings configured to reduce the coefficient of friction between the camera holding jig 304 and the one or more horizontal rails 302a-n or the one or more vertical rails 302a‘-n’. In this way, a force required to move the camera holding jig 304 can be reduced. In still another example, the one or more horizontal rails 302a-n and the one or more vertical rails 302a‘-n’ comprise a set of grooves for the set of roller bearings to move along either on the outside or on the inside of the one or more horizontal rails 302a-n and the one or more vertical rails 302a‘-n’.
The camera holding jig 304 may be referred to as a device configured to hold one or more cameras to control the location and/or motion of the cameras. In one example, the camera holding jig 304 comprises a set of camera holders, each of which is configured to hold one camera. In another example, the camera holding jig 304 is configured to receive/obtain a moving command signal from the processing unit 310 such that the camera holding jig 304 moves along the one or more horizontal rails 302a-n or the one or more vertical rails 302a‘-n’ according to a camera calibration scheme.
Each of the one or more calibration targets 308a-n may be referred to as a board comprising one or more predetermined calibration image patterns used to calibrate cameras.
The calibration target 400A is useful for calibration of cameras in an AV, or other sensors that capture visual data. In particular, the one or more cameras 306a-n with a pattern/image/feature recognition system running on the processing unit 310 can identify points representing vertices between the dark (black) and light (white) checkers on the checkerboard pattern 404A. By drawing lines connecting these points, the one or more cameras 306a-n and the processing unit 310 can generate a grid. In some embodiments, the one or more cameras 306a-n have a wide-angle lens, such as a fisheye lens or a barrel lens. With such a lens, the captured grid becomes warped, so that some checkers will appear curved rather than straight, checkers near the edges of the point of view of the one or more cameras 306a-n will appear more squashed, and checkers near the center of the point of view will appear larger and more even. On the other hand, a rectilinear lens provides an opposite effect.
In some embodiments, the processing unit 310 is configured to identify distortions of camera lens in the one or more cameras 306a-n and counteract the distortions based on a comparison between a reference checkerboard pattern and a captured checkerboard pattern by the one or more cameras 306a-n. The one or more cameras 306a-n and the processing unit 310 may be configured to identify other parameters of the one or more cameras 306a-n in similar fashion, such as any lens color to be filtered out, any crack or defect in the lens to be filtered out, or a combination thereof.
The calibration target 400A may also be configured to calibrate other types of sensors used in an AV, such as LIDAR, ultrasonic sensors, or radar sensors, given that the shape of the substrate 402A can be detected by these sensors. For example, flat planar vision targets such as the calibration target 400A can be detected by LIDAR by relying on planar geometry estimates and using the returned intensity. While
By detecting the pattern in the ArUco pattern 404B, the one or more cameras 306a-n and the processing unit 310 may identify a grid, similarly to the checkerboard pattern 404A, though potentially with fewer points, as some areas of the ArUco pattern 404B may include contiguous dark/black squares or contiguous light/white squares. By identifying the grid from the calibration target 400B captured by the one or more cameras 306a-n (e.g. with lens distortion such as parabolic distortion), and comparing it to a known reference image of the ArUco pattern (e.g. without any distortion), any distortions or other differences may be identified, and appropriate corrections may be applied to counteract these distortions or other differences. Please reference
The one or more cameras 306a-n and the processing unit 310 may be configured to identify ring patterns from the calibration target 400C captured by the one or more cameras 306a-n (e.g. with lens distortion such as parabolic distortion), and to compare the identified ring patterns to a known reference image of the ring patterns (e.g. without any distortion). Based on the comparison, any distortions or other differences may be identified, and appropriate corrections may be applied to counteract these distortions or other differences. Please reference
While the only patterns 404A-C discussed with respect to calibration target 400 are checkerboard pattern 404A, ArUco pattern 404B, and ring pattern 404C, other patterns that are not depicted can additionally or alternatively be used. For example, bar codes or quick response (QR) codes may be used as patterns that can be recognized using the one or more cameras 306a-n and the processing unit 310 during camera calibration.
Referring back to
A processing unit 310 may be referred to as electronic circuitry configured to execute computer instructions to perform one or more specific tasks. Examples of a processing unit 310 include a central processing unit, an application-specific integrated circuit, and/or any other type of processing unit. In some embodiments, the processing unit 310 is configured to perform an intrinsic camera calibration process by sending moving command signals to the one or more target support devices 312a-n and the camera holding jig 304, receiving captured images from the one or more cameras 306a-n, and processing calibration algorithms.
In some embodiments, the processing unit 510A sends a moving command signal to the camera holding jig 504A such that the camera holding jig 504A moves along either a horizontal rail or a vertical rail of the support frame 502A towards a predetermined direction for a predetermined distance. In one example, the processing unit 510A sends a moving command signal to the camera holding jig 504A according to a specific camera calibration flow. In another example, the camera holding jig 504A starts to move along a horizontal rail of the support frame 502A upon receiving a moving signal from the processing unit 510A. When the camera holding jig 504A reaches an end of the horizontal rail of the support frame 502A, the camera holding jig 504A may be configured to start moving along a vertical rail of the support frame 502A, wherein the vertical rail is attached to the horizontal rail. In still another example, the camera holding jig 504A starts to move along a vertical rail of the support frame 502A upon receiving a moving signal from the processing unit 510A. When the camera holding jig 504A reaches an end of the vertical rail of the support frame 502A, the camera holding jig 504A may be configured to start moving along a horizontal rail of the support frame 502A, wherein the horizontal rail is attached to the vertical rail.
In some embodiments, the camera holding jig 604A is configured to hold a set of cameras 606A-1 to 606A-n for calibration. The camera holding jig 604A may be configured to receive/obtain a moving command signal from a processing unit (not shown) such that the camera holding jig 604A moves along either the vertical rail 602A-1 or the horizontal rail 602A-2 according to a camera calibration scheme.
At step 702, a set of AV cameras for intrinsic parameter calibration is identified and one or more calibration targets are placed. In some embodiments, the one or more calibration targets comprise at least a checkerboard pattern, at least an ArUco pattern, or at least a ring pattern used for calibration. In one example, the one or more calibration targets are mounted on one or more target support devices for guiding movements of the one or more calibration targets.
At step 704, the set of AV cameras are mounted on a camera holding jig used for guiding movements of the set of AV cameras. In some embodiments, the camera holding jig is mounted on a support frame comprising a set of horizontal rails and a set of vertical rails for guiding movements of the camera holding jig.
At step 706, the camera holding jig and/or the one or more calibration targets are moved towards a predetermined direction for a predetermined distance. In one example, the camera holding jig is moved along a horizontal rail or a vertical rail of the support frame. In another example, the one or more calibration targets are moved according to a desired target image angle and distance. For example, if a desired target image angle is 30° with respect to a reference angle, then the one or more calibration targets may be rotated by 30° by the one or more target support devices. In yet another example, the camera holding jig or the one or more target support devices receive/obtain a moving command signal from a processing unit, wherein the moving command signal comprises information of the predetermined direction and the predetermined distance.
At step 708, a calibration target image is captured and stored in each of the set of AV cameras. In one example, after the calibration target images are captured and stored, the set of AV cameras transmit the captured and stored images to the processing unit for further processing. In another example, the processing unit stores a set of image parameters for each of the captured and stored images, such as an angle or a distance from the calibration target image to each of the set of AV cameras.
At step 710, in each of the set of AV cameras, a total number of the captured and stored images for each of the one or more calibration targets is compared to a predetermined number. If the total number matches the predetermined number, then go to step 712; otherwise go back to step 706. In some embodiments, the predetermined number is determined based on an intrinsic calibration plan for the set of AV cameras.
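Steps 706 through 710 form a capture loop that repeats until the predetermined image count is reached for every camera. A minimal Python sketch is given below; the capture and motion hooks are hypothetical placeholders for the processing unit's camera and motion-control interfaces, not part of the disclosed system.

```python
def run_capture_loop(cameras, poses, capture_fn, move_fn):
    """Drive the capture loop: for each desired pose, move the jig/target
    (step 706) and capture one image per camera (step 708), until the
    predetermined image count (len(poses)) is reached for every camera
    (step 710).

    capture_fn(camera, pose) and move_fn(pose) are hypothetical hooks.
    """
    images = {cam: [] for cam in cameras}
    for pose in poses:
        move_fn(pose)                                 # step 706: move jig/target
        for cam in cameras:
            images[cam].append(capture_fn(cam, pose)) # step 708: capture image
    # step 710: verify the total per-camera count matches the plan
    assert all(len(v) == len(poses) for v in images.values())
    return images

# Example with stub hooks standing in for real camera/motion interfaces.
log = []
imgs = run_capture_loop(
    ["cam0", "cam1"],
    [{"angle_deg": 0}, {"angle_deg": 30}],
    capture_fn=lambda cam, pose: (cam, pose["angle_deg"]),
    move_fn=log.append,
)
```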
At step 712, an intrinsic camera calibration is performed for each of the set of AV cameras. In some embodiments, the intrinsic camera calibration is performed at the processing unit using the captured and stored calibration target images and a reference calibration target image. Further details about intrinsic calibration are described with reference to
At step 714, the set of AV cameras are mounted on an AV after the intrinsic calibration process is completed. In this way, the intrinsic calibration process is completed before the set of AV cameras are placed on the AV, and the calibration process does not need to place the AV on a conventional rotating platform. Thus, space, time and cost requirements associated with conventional techniques for calibration of AV cameras are significantly reduced.
At step 716, a change of at least one camera intrinsic parameter is verified after the set of AV cameras are mounted on the AV. If a change in at least one camera intrinsic parameter is detected, then go back to step 702; otherwise, stay in step 716 to continuously monitor changes of camera intrinsic parameters. One purpose of performing this step is that once calibrated, camera intrinsic parameters are still subject to drift over long periods of operation. For example, cumulative error may result in changes of camera focal length over time, and these changes may further degrade image quality captured by the set of AV cameras, and in turn impact AV operations. Thus, continuously monitoring changes of camera intrinsic parameters and applying re-calibration when necessary ensures image quality of the AV cameras and maintains AV operations in a safe manner.
At step 802, a set of target images captured by an AV camera and a reference target image are received in a processing unit for camera intrinsic parameter calibration. In some embodiments, the set of captured target images are obtained by moving the AV camera mounted on a camera holding jig and a calibration target, as illustrated in the exemplary method in
At step 804, calibration control points are detected on each of the captured target images. Calibration control points may be referred to as specific points on a target image used to locate calibration image patterns. Examples of calibration control points include 4 vertices of a calibration board, corners for square image patterns, centers for circle or ring image patterns, and/or any other types of calibration control points. In one example, the calibration control points are detected using a corner detection operator for square image patterns by taking the differential of a corner score into account with reference to a direction. In another example, the calibration control points are detected using an iterative subpixel localization method with a gradient-based search. In still another example, the calibration control points are detected by first applying a maximally stable extremal regions (MSER) method to extract all regions that can potentially contain calibration patterns for circle or ring image patterns. Then each region containing calibration patterns is fitted using an ellipse fitting method. In some embodiments, the ellipse fitting method comprises fitting an ellipse equation of the form (x−h)²/a²+(y−k)²/b²=1 by finding the values of the parameters h, k, a and b. Next, regions that have the same shape and location may be grouped, and outlier regions that do not have a group may be filtered out. In some embodiments, distances between each region and all other regions are calculated and sorted. An average distance of the 3 nearest neighbors (i.e., the 3 regions with the shortest distances) for a specific region i can be calculated and denoted as dᵢ. In one example, a threshold value t is set such that for a specific region i, if dᵢ≥t, then the region i is considered as an outlier and removed from further analysis. Once regions are fitted, calibration control points may be identified by localizing geometric centers of each fitted region.
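The nearest-neighbor outlier filtering described above can be sketched in Python as follows. The function assumes the ellipse centers (h, k) have already been fitted; the threshold t and the choice of 3 neighbors follow the description above, while the function name is an illustrative assumption.

```python
import numpy as np

def filter_outlier_regions(centers, t):
    """Drop candidate regions whose average distance to their 3 nearest
    neighbors meets or exceeds threshold t (the d_i >= t rule).

    centers: (N, 2) sequence of fitted ellipse centers (h, k).
    Returns the indices of the regions kept as calibration control points.
    """
    centers = np.asarray(centers, dtype=float)
    # Pairwise Euclidean distances between all region centers.
    diff = centers[:, None, :] - centers[None, :, :]
    dist = np.linalg.norm(diff, axis=2)
    np.fill_diagonal(dist, np.inf)          # ignore self-distance
    # d_i: mean of the 3 smallest distances for each region.
    d = np.sort(dist, axis=1)[:, :3].mean(axis=1)
    return np.nonzero(d < t)[0]
```

For a regular calibration grid, every true control point has close neighbors, so a spuriously detected region far from the pattern yields a large dᵢ and is removed.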
At step 806, camera intrinsic parameters are estimated using the detected calibration control points from step 804. In some embodiments, the camera intrinsic parameters comprise focal length, lens optical center, lens radial distortion, lens tangential distortion, and/or any other intrinsic parameters. In one example, the camera intrinsic parameters are estimated by solving a nonlinear minimization problem which minimizes a difference between a projected model of the captured target images and the reference target image. In another example, the Levenberg-Marquardt algorithm is used to solve the nonlinear minimization problem for estimating camera intrinsic parameters.
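As a simplified illustration of the nonlinear estimation step, the Python sketch below refines a single intrinsic parameter (the focal length) of a pinhole model by Gauss-Newton iteration on point correspondences. A full implementation would jointly refine focal length, optical center, and distortion coefficients with the Levenberg-Marquardt algorithm; this sketch only illustrates the residual-minimization principle, and the function name is an assumption.

```python
import numpy as np

def estimate_focal_length(points_3d, points_2d, cx, cy, iters=10):
    """Gauss-Newton refinement of a single intrinsic (focal length f),
    minimizing the reprojection residual of a pinhole model:
        u = f*X/Z + cx,  v = f*Y/Z + cy.
    """
    X, Y, Z = np.asarray(points_3d, dtype=float).T
    u, v = np.asarray(points_2d, dtype=float).T
    f = 1.0                                   # crude initial guess
    for _ in range(iters):
        ru = u - cx - f * X / Z               # residuals in u
        rv = v - cy - f * Y / Z               # residuals in v
        J = np.concatenate([X / Z, Y / Z])    # d(residual)/df with sign folded in
        r = np.concatenate([ru, rv])
        f += (J @ r) / (J @ J)                # Gauss-Newton update
    return f
```

Because the pinhole model is linear in f, this toy problem converges immediately; the distortion terms of the full model are what make the real problem nonlinear and motivate Levenberg-Marquardt.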
At step 808, the estimated camera intrinsic parameters at step 806 are used to undistort the captured target images to a canonical image pattern. A canonical image pattern of the captured target images may be referred to as an undistorted pattern of the captured target images. In some embodiments, the captured target images are undistorted using a matrix transform function with the estimated camera intrinsic parameters.
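One common way to undistort points toward the canonical pattern is fixed-point inversion of a radial distortion model. The Python sketch below assumes a one-parameter radial model, a simplification of the multi-parameter radial and tangential models mentioned above, and the function name is illustrative.

```python
import numpy as np

def undistort_points(points, k1, iters=20):
    """Invert a one-parameter radial distortion model
        x_d = x_u * (1 + k1 * r_u^2),  r_u^2 = x_u^2 + y_u^2
    by fixed-point iteration, mapping distorted normalized coordinates back
    toward the canonical (undistorted) pattern.
    """
    pts_d = np.asarray(points, dtype=float)
    pts_u = pts_d.copy()                       # initial guess: no distortion
    for _ in range(iters):
        r2 = np.sum(pts_u ** 2, axis=1, keepdims=True)
        pts_u = pts_d / (1.0 + k1 * r2)        # refine the undistorted estimate
    return pts_u
```

The iteration converges quickly when the distortion is mild (k1·r² small), which is the typical regime for normalized image coordinates.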
At step 810, calibration control points on the undistorted captured target images obtained in step 808 are localized. In one example, the calibration control points on the undistorted captured target images are localized using a corner detection operator for square image patterns by taking the differential of a corner score into account with reference to a direction. In another example, the calibration control points on the undistorted captured target images are localized using an iterative subpixel localization method with a gradient-based search.
At step 812, the calibration control points localized on the undistorted captured target images are projected back to the captured target images. In some embodiments, the calibration control points on the undistorted captured target images are projected back to the captured target images using a matrix transform function with the camera intrinsic parameters estimated in step 806.
At step 814, the camera intrinsic parameters are refined using the projected calibration control points from step 812. In some embodiments, the camera intrinsic parameters are re-estimated by solving a nonlinear minimization problem which minimizes a difference between the projected captured target images obtained at step 812 and the reference target image. In another example, the Levenberg-Marquardt algorithm is used to solve the nonlinear minimization problem for re-estimating the camera intrinsic parameters.
At step 816, a convergence of the camera intrinsic parameter estimation process is verified. If the convergence is reached, then end the calibration process; if the convergence is not reached, then go back to step 808. In some embodiments, convergence verification is performed using an iterative refinement algorithm. In one example, the convergence is verified by calculating a sample standard deviation of each of the camera intrinsic parameters. If the sample standard deviation of each of the camera intrinsic parameters is less than a predetermined standard deviation threshold value, then the convergence is reached; otherwise the convergence is not reached. In another example, the convergence is verified by calculating a reprojection error defined as a difference between a projected undistorted target image using the current camera intrinsic parameters and the reference target image. If the reprojection error is less than a predetermined reprojection error threshold value, then the convergence is reached; otherwise the convergence is not reached.
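The sample-standard-deviation convergence criterion described above can be sketched as follows in Python; the window of recent iterations considered is an illustrative assumption.

```python
import numpy as np

def has_converged(param_history, std_threshold, window=5):
    """Check the sample-standard-deviation convergence criterion: over the
    last `window` refinement iterations, every intrinsic parameter must have
    a sample standard deviation below std_threshold.

    param_history: list of parameter vectors, one per refinement iteration.
    """
    if len(param_history) < window:
        return False
    recent = np.asarray(param_history[-window:], dtype=float)
    return bool(np.all(recent.std(axis=0, ddof=1) < std_threshold))
```

A reprojection-error check would have the same shape, replacing the per-parameter standard deviations with a single scalar error compared against its threshold.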
The computer system 900 is shown comprising hardware elements that can be electrically coupled via a bus 905, or may otherwise be in communication, as appropriate. The hardware elements may include one or more processors 910, including without limitation one or more general-purpose processors and/or one or more special-purpose processors such as digital signal processing chips, graphics acceleration processors, and/or the like; one or more input devices 915, which can include without limitation a mouse, a keyboard, a camera, and/or the like; and one or more output devices 920, which can include without limitation a display device, a printer, and/or the like.
The computer system 900 may further include and/or be in communication with one or more non-transitory storage devices 925, which can comprise, without limitation, local and/or network accessible storage, and/or can include, without limitation, a disk drive, a drive array, an optical storage device, a solid-state storage device, such as a random access memory (“RAM”), and/or a read-only memory (“ROM”), which can be programmable, flash-updateable, and/or the like. Such storage devices may be configured to implement any appropriate data stores, including without limitation, various file systems, database structures, and/or the like.
The computer system 900 might also include a communications subsystem 930, which can include without limitation a modem, a network card (wireless or wired), an infrared communication device, a wireless communication device, and/or a chipset such as a Bluetooth™ device, an 802.11 device, a WiFi device, a WiMax device, cellular communication facilities, etc., and/or the like. The communications subsystem 930 may include one or more input and/or output communication interfaces to permit data to be exchanged with a network (such as the network described below, to name one example), other computer systems, television, and/or any other devices described herein. Depending on the desired functionality and/or other implementation concerns, a portable electronic device or similar device may communicate image and/or other information via the communications subsystem 930. In other embodiments, a portable electronic device, e.g., the first electronic device, may be incorporated into the computer system 900, e.g., an electronic device as an input device 915. In some embodiments, the computer system 900 will further comprise a working memory 935, which can include a RAM or ROM device, as described above.
The computer system 900 also can include software elements, shown as being currently located within the working memory 935, including an operating system 960, device drivers, executable libraries, and/or other code, which may comprise computer programs provided by various embodiments, and/or may be designed to implement methods, and/or configure systems, provided by other embodiments, as described herein. Merely by way of example, one or more procedures described with respect to the methods discussed above might be implemented as code and/or instructions executable by a computer and/or a processor within a computer.
A set of these instructions and/or code may be stored on a non-transitory computer-readable storage medium, such as the storage device(s) 925 described above. In some cases, the storage medium might be incorporated within a computer system, such as computer system 900. In other embodiments, the storage medium might be separate from a computer system (e.g., a removable medium, such as a compact disc), and/or provided in an installation package, such that the storage medium can be used to program, configure, and/or adapt a general purpose computer with the instructions/code stored thereon. These instructions might take the form of executable code, which is executable by the computer system 900, and/or might take the form of source and/or installable code, which, upon compilation and/or installation on the computer system 900 (e.g., using any of a variety of generally available compilers, installation programs, compression/decompression utilities, etc.), then takes the form of executable code.
It will be apparent to those skilled in the art that substantial variations may be made in accordance with specific requirements. For example, customized hardware might also be used, and/or particular elements might be implemented in hardware, software including portable software, such as applets, etc., or both. Further, connection to other computing devices such as network input/output devices may be employed.
As mentioned above, in one aspect, some embodiments may employ a computer system such as the computer system 900 to perform methods in accordance with various embodiments of the technology. According to a set of embodiments, some or all of the procedures of such methods are performed by the computer system 900 in response to processor 910 executing one or more sequences of one or more instructions, which might be incorporated into the operating system 960 and/or other code contained in the working memory 935. Such instructions may be read into the working memory 935 from another computer-readable medium, such as one or more of the storage device(s) 925. Merely by way of example, execution of the sequences of instructions contained in the working memory 935 might cause the processor(s) 910 to perform one or more procedures of the methods described herein. Additionally or alternatively, portions of the methods described herein may be executed through specialized hardware.
The terms “machine-readable medium” and “computer-readable medium,” as used herein, refer to any medium that participates in providing data that causes a machine to operate in a specific fashion. In an embodiment implemented using the computer system 900, various computer-readable media might be involved in providing instructions/code to processor(s) 910 for execution and/or might be used to store and/or carry such instructions/code. In many implementations, a computer-readable medium is a physical and/or tangible storage medium. Such a medium may take the form of non-volatile media or volatile media. Non-volatile media include, for example, optical and/or magnetic disks, such as the storage device(s) 925. Volatile media include, without limitation, dynamic memory, such as the working memory 935.
Common forms of physical and/or tangible computer-readable media include, for example, a floppy disk, a flexible disk, hard disk, magnetic tape, or any other magnetic medium, a CD-ROM, any other optical medium, punch cards, paper tape, any other physical medium with patterns of holes, a RAM, a PROM, EPROM, a FLASH-EPROM, any other memory chip or cartridge, or any other medium from which a computer can read instructions and/or code.
Various forms of computer-readable media may be involved in carrying one or more sequences of one or more instructions to the processor(s) 910 for execution. Merely by way of example, the instructions may initially be carried on a magnetic disk and/or optical disc of a remote computer. A remote computer might load the instructions into its dynamic memory and send the instructions as signals over a transmission medium to be received and/or executed by the computer system 900.
The communications subsystem 930 and/or components thereof generally will receive signals, and the bus 905 then might carry the signals and/or the data, instructions, etc. carried by the signals to the working memory 935, from which the processor(s) 910 retrieves and executes the instructions. The instructions received by the working memory 935 may optionally be stored on a non-transitory storage device 925 either before or after execution by the processor(s) 910.
While several embodiments have been provided in this disclosure, it should be understood that the disclosed systems and methods might be embodied in many other specific forms without departing from the spirit or scope of this disclosure. The present examples are to be considered as illustrative and not restrictive, and the intention is not to be limited to the details given herein. For example, the various elements or components may be combined or integrated in another system or certain features may be omitted, or not implemented.
In addition, techniques, systems, subsystems, and methods described and illustrated in the various embodiments as discrete or separate may be combined or integrated with other systems, modules, techniques, or methods without departing from the scope of this disclosure. Other items shown or discussed as coupled or directly coupled or communicating with each other may be indirectly coupled or communicating through some interface, device, or intermediate component whether electrically, mechanically, or otherwise. Other examples of changes, substitutions, and alterations are ascertainable by one skilled in the art and could be made without departing from the spirit and scope disclosed herein.
To aid the Patent Office, and any readers of any patent issued on this application in interpreting the claims appended hereto, applicants note that they do not intend any of the appended claims to invoke 35 U.S.C. § 112(f) as it exists on the date of filing hereof unless the words “means for” or “step for” are explicitly used in the particular claim.
The present application claims priority to U.S. Provisional Patent Application No. 63/355,514, filed on Jun. 24, 2022, which is incorporated by reference herein in its entirety.