This application claims the benefit under 35 USC § 119(a) of Korean Patent Application No. 10-2022-0075072, filed on Jun. 20, 2022 in the Korean Intellectual Property Office, the entire disclosure of which is incorporated herein by reference for all purposes.
The following description relates to a device and method with vehicle blind spot visualization.
A surround view monitor (SVM) may be a parking safety system configured to provide an image of the surroundings of a vehicle as viewed from above. For example, the SVM may provide images input from a total of four cameras—one for the front, one for each of the left and right sides, and one for the rear of a vehicle—as an image in a top-down view mode and an image in a multi-view mode, in the form of a bird's eye view of the surroundings of the vehicle, based on viewpoint transformation and image synthesis.
However, in a typical SVM, an image of a part (under the bonnet or body of the vehicle, for example) that is hidden by the body of the vehicle may not be captured by the cameras, and in the presence of an obstacle around this part, a driver of the vehicle may not readily secure their view of the obstacle and may have difficulty in parking or driving.
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
In one general aspect, an electronic device includes: a processor configured to: based on two images captured at two different time points by a camera of a vehicle that is traveling and traveling information of the vehicle, determine a first transformation matrix of a camera coordinate system comprising a rotation matrix and a translation matrix for a movement of the vehicle between the two time points; transform the first transformation matrix into a second transformation matrix of a vehicle coordinate system; update a parameter of the camera to apply the movement of the vehicle to the parameter of the camera, based on either one or both of roll information and pitch information of the vehicle acquired from the rotation matrix; and visualize a blind spot of the camera based on the either one or both of the roll information and the pitch information, and based on the updated parameter and the second transformation matrix.
The two time points may include a previous time point and a current time point, and, for the visualizing of the blind spot, the processor may be configured to: determine a region at the previous time point that corresponds to a region at the current time point corresponding to the blind spot, based on the second transformation matrix; and visualize the region at the previous time point on the blind spot based on the updated parameter.
For the updating of the parameter, the processor may be configured to update the parameter by applying, to the parameter, either one or both of the roll information and the pitch information changed by the movement of the vehicle.
For the determining of the first transformation matrix, the processor may be configured to: determine an essential matrix based on a matching relationship between features extracted from the two images and the parameter; and determine the rotation matrix and the translation matrix by decomposing the essential matrix, and the translation matrix may be scaled by a moving distance that is based on the traveling information.
For the transforming into the second transformation matrix, the processor may be configured to determine the second transformation matrix of the camera coordinate system based on a third transformation matrix that transforms the vehicle coordinate system into the camera coordinate system, the first transformation matrix, and a fourth transformation matrix that transforms the camera coordinate system into the vehicle coordinate system.
For the transforming into the second transformation matrix, the processor may be configured to correct the second transformation matrix of the vehicle coordinate system based on a third transformation matrix of the vehicle coordinate system that is determined from the traveling information of the vehicle.
For the determining of the first transformation matrix, the processor may be configured to correct the first transformation matrix comprising the rotation matrix and the translation matrix, based on a value of a sensor of the vehicle.
For the visualizing of the blind spot, the processor may be configured to, before the vehicle starts traveling again after being parked, visualize the blind spot as a blind spot image determined while the vehicle is traveling before being parked.
The camera coordinate system and the parameter may be based on any one of a plurality of cameras of the vehicle that is determined based on a traveling direction of the vehicle.
The blind spot may include a region under the vehicle that is not captured by a plurality of cameras of the vehicle.
In another general aspect, a processor-implemented method of an electronic device includes: acquiring two images captured at two different time points by a camera of a vehicle that is traveling; acquiring traveling information of the vehicle; determining a first transformation matrix of a camera coordinate system comprising a rotation matrix and a translation matrix between the two time points, based on the two images and the traveling information; transforming the first transformation matrix into a second transformation matrix of a vehicle coordinate system; updating a parameter of the camera to apply a movement of the vehicle to the parameter of the camera, based on either one or both of roll information and pitch information of the vehicle acquired from the rotation matrix; and visualizing a blind spot of the camera based on either one or both of the roll information and the pitch information, and based on the updated parameter and the second transformation matrix.
The updating the parameter may include updating the parameter by applying, to the parameter, either one or both of the roll information and the pitch information changed by the movement of the vehicle.
The two time points may include a previous time point and a current time point, and the visualizing the blind spot may include: determining a region at the previous time point that corresponds to a region at the current time point corresponding to the blind spot, based on the second transformation matrix; and visualizing the region at the previous time point on the blind spot based on the updated parameter.
The determining the first transformation matrix may include: determining an essential matrix based on a matching relationship between features extracted from the two images and the parameter; and determining the rotation matrix and the translation matrix by decomposing the essential matrix, and the translation matrix may be scaled by a moving distance that is based on the traveling information.
The transforming into the second transformation matrix may include determining the second transformation matrix of the camera coordinate system based on a third transformation matrix that transforms the vehicle coordinate system into the camera coordinate system, the first transformation matrix, and a fourth transformation matrix that transforms the camera coordinate system into the vehicle coordinate system.
The transforming into the second transformation matrix may include correcting the second transformation matrix of the vehicle coordinate system based on a third transformation matrix of the vehicle coordinate system that is determined from the traveling information of the vehicle.
The determining the first transformation matrix may include correcting the first transformation matrix comprising the rotation matrix and the translation matrix, based on a value of a sensor of the vehicle.
The visualizing the blind spot may include, before the vehicle starts traveling again after being parked, visualizing the blind spot as a blind spot image determined while the vehicle is traveling before being parked.
The camera coordinate system and the parameter may be based on any one of a plurality of cameras of the vehicle that is determined based on a traveling direction of the vehicle.
In another general aspect, one or more embodiments include a non-transitory computer-readable storage medium storing instructions that, when executed by one or more processors, configure the one or more processors to perform any one, any combination, or all operations and methods described herein.
In another general aspect, a processor-implemented method of an electronic device includes: determining, in a coordinate system of a camera of a vehicle, rotation information and translation information between a previous image captured at a previous time point by the camera and a current image captured at a current time point by the camera, based on traveling information of the vehicle; updating a parameter of the camera based on the rotation information; transforming, into a coordinate system of the vehicle, the rotation information and the translation information; and visualizing a blind spot of the camera in a rendered image generated using the current image, based on the rotation information, the updated parameter, the transformed rotation information, and the transformed translation information.
The determining of the rotation information and the translation information may include determining a first transformation matrix of the coordinate system of the camera comprising a rotation matrix and a translation matrix between the previous time point and the current time point, and the transforming of the rotation information and the translation information may include transforming the first transformation matrix into a second transformation matrix of the coordinate system of the vehicle.
The rendered image may be a top-view image generated based on the current image and one or more other current images captured at the current time point by one or more other cameras of the vehicle.
Other features and aspects will be apparent from the following detailed description, the drawings, and the claims.
Throughout the drawings and the detailed description, unless otherwise described or provided, the same drawing reference numerals will be understood to refer to the same elements, features, and structures. The drawings may not be to scale, and the relative size, proportions, and depiction of elements in the drawings may be exaggerated for clarity, illustration, and convenience.
The following detailed description is provided to assist the reader in gaining a comprehensive understanding of the methods, apparatuses, and/or systems described herein. However, various changes, modifications, and equivalents of the methods, apparatuses, and/or systems described herein will be apparent after an understanding of the disclosure of this application. For example, the sequences of operations described herein are merely examples, and are not limited to those set forth herein, but may be changed as will be apparent after an understanding of the disclosure of this application, with the exception of operations necessarily occurring in a certain order. Also, descriptions of features that are known after understanding of the disclosure of this application may be omitted for increased clarity and conciseness.
The features described herein may be embodied in different forms and are not to be construed as being limited to the examples described herein. Rather, the examples described herein have been provided merely to illustrate some of the many possible ways of implementing the methods, apparatuses, and/or systems described herein that will be apparent after an understanding of the disclosure of this application.
As used herein, the phrases “A or B,” “at least one of A and B,” “at least one of A or B,” “A, B, or C,” “at least one of A, B, and C,” and “at least one of A, B, or C” may each include any one of the items listed together in the corresponding one of the phrases, or all possible combinations thereof. Although terms of “first,” “second,” and “third” may be used to describe various components, members, regions, layers, or sections, these components, members, regions, layers, or sections are not to be limited by these terms (e.g., “first,” “second,” and “third”). Rather, these terms are only used to distinguish one component, member, region, layer, or section from another component, member, region, layer, or section. Thus, for example, a “first” component, member, region, layer, or section referred to in examples described herein may also be referred to as a “second” component, member, region, layer, or section, and a “second” component, member, region, layer, or section referred to in examples described herein may also be referred to as the “first” component without departing from the teachings of the examples.
Throughout the specification, when an element, such as a layer, region, or substrate, is described as being “on,” “connected to,” or “coupled to” another element, it may be directly “on,” “connected to,” or “coupled to” the other element, or there may be one or more other elements intervening therebetween. In contrast, when an element is described as being “directly on,” “directly connected to,” or “directly coupled to” another element, there may be no other elements intervening therebetween. Likewise, similar expressions, for example, “between” and “immediately between,” and “adjacent to” and “immediately adjacent to,” are also to be construed in the same manner.
The terminology used herein is for describing various examples only and is not to be used to limit the disclosure. The articles “a,” “an,” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. The terms “comprises,” “includes,” and “has” specify the presence of stated features, numbers, operations, members, elements, and/or combinations thereof, but do not preclude the presence or addition of one or more other features, numbers, operations, members, elements, and/or combinations thereof. As used herein, the term “and/or” includes any one and any combination of any two or more of the associated listed items. The use of the term “may” herein with respect to an example or embodiment (for example, as to what an example or embodiment may include or implement) means that one or more examples or embodiments exist where such a feature is included or implemented, while all examples are not limited thereto.
Unless otherwise defined, all terms, including technical and scientific terms, used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure pertains and based on an understanding of the disclosure of the present application. Terms, such as those defined in commonly used dictionaries, are to be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and the disclosure of the present application and are not to be interpreted in an idealized or overly formal sense unless expressly so defined herein.
Also, in the description of example embodiments, detailed description of structures or functions that would be known after an understanding of the disclosure of the present application will be omitted when it is deemed that such description would cause ambiguous interpretation of the example embodiments. Hereinafter, examples will be described in detail with reference to the accompanying drawings, and like reference numerals in the drawings refer to like elements throughout.
Referring to
The electronic device 110 may include a memory 111 (e.g., one or more memories), a processor 113 (e.g., one or more processors), and a camera 115 (e.g., one or more cameras).
The memory 111 may include computer-readable instructions. The processor 113 may perform operations to be described hereinafter as the instructions stored in the memory 111 are executed by the processor 113. The memory 111 may be a volatile or nonvolatile memory. The memory 111 may be or include a non-transitory computer-readable storage medium storing instructions that, when executed by the processor 113, configure the processor 113 to perform any one, any combination, or all of the operations and methods described herein with references to
The processor 113, which is a device configured to execute instructions or programs or control the electronic device 110, may be or include, for example, a central processing unit (CPU) and/or a graphics processing unit (GPU), but examples of which are not limited to the foregoing example.
The processor 113 may render a top-view mode image that displays the surroundings of the vehicle 100 with the vehicle 100 being centered by performing image processing on an image captured by the camera 115 fixed to the vehicle 100. As will be described in detail below with reference to
The camera 115 may include one or more cameras fixed to the vehicle 100. For example, the camera 115 may include four cameras respectively arranged on the front, rear, left, and right sides of the vehicle 100. Depending on an arrangement structure of the camera 115, there may be a blind spot under the vehicle 100. An image captured by the camera 115 may be transmitted to the processor 113.
Although the camera 115 is illustrated as being included in the electronic device 110 in the example of
Referring to
An SVM may refer to a system that provides a user with an image in the form of a top-down view of a vehicle viewed from above through image synthesis and camera calibration. The image synthesis may be a technique for removing distortion from a captured image, transforming the captured image into an image of a virtual view, and/or combining four images into one. The camera calibration may be a technique for performing calibration by analyzing optical characteristics of a camera to remove image distortion that may be caused by applying an ultra-wide-angle lens. Using these techniques, an SVM image (e.g., the SVM images 220) to be provided to a user may be determined based on captured images (e.g., the captured images 210).
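As a non-limiting illustration of such image synthesis and camera calibration, a minimal sketch of generating one camera's contribution to a top-view image is shown below; the calibration data (K, D), the ground-plane homography H, and the output size are assumed inputs for illustration rather than values prescribed by this disclosure.

```python
import cv2

def to_top_view(frame, K, D, H, out_size=(800, 800)):
    # Remove the ultra-wide-angle (fisheye) distortion using the calibrated intrinsics.
    undistorted = cv2.fisheye.undistortImage(frame, K, D, Knew=K)
    # Warp the undistorted image onto the ground plane to obtain a virtual top-down view.
    return cv2.warpPerspective(undistorted, H, out_size)

# A full SVM image would blend the four per-camera top views (front, rear,
# left, right) into a single composite centered on the vehicle.
```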
In the example of
Referring to
In addition, a movement of the vehicle 310 may be represented based on the x, y, and z axes. 3D movement information of the vehicle 310 may be represented by a 3×3 rotation matrix representing a 3D rotation of the vehicle 310 and a 3×1 translation matrix representing a 3D movement of the vehicle 310.
As will be described in detail below, by visualizing a blind spot included in an SVM image according to 3D movement information of a vehicle, the electronic device of one or more embodiments may provide a natural image for a 3D movement of the vehicle in various driving situations, thereby improving the user's driving convenience.
Operations 410 through 460 to be described hereinafter may be performed in sequential order, but may not be necessarily performed in sequential order. For example, the operations 410 through 460 may be performed in different orders, and at least two of the operations 410 through 460 may be performed in parallel or simultaneously. Further, one or more of operations 410 through 460 may be omitted, without departing from the spirit and scope of the shown example. The operations 410 through 460 to be described hereinafter with reference to
In operation 410, the electronic device may acquire two images captured at two different time points from a camera fixed to a vehicle that is traveling. For example, the electronic device may acquire an image captured at a previous time point t−1 and an image captured at a current time point t by the camera fixed to the vehicle, but examples are not limited thereto.
In operation 420, the electronic device may acquire traveling information of the vehicle. For example, the traveling information may include wheel speed information included in a controller area network (CAN) signal of the vehicle, but is not limited to the foregoing example.
In operation 430, the electronic device may determine a first transformation matrix of a camera coordinate system including a rotation matrix and a translation matrix between the two time points, using the two images and the traveling information of the vehicle. The camera coordinate system may refer to a coordinate system that is set with respect to a corresponding camera.
In an example, in operation 410, the electronic device may capture images at two different time points while the vehicle is moving. For example, the electronic device may acquire images captured at a previous time point t−1 and a current time point t from a camera that captures a traveling direction of the vehicle (e.g., a camera fixed to the front in the case of the vehicle traveling forward). For example, as illustrated in
In this example, in operation 430, the electronic device may extract a plurality of features from the images A and B and determine a matching relationship among the extracted features. For example, as illustrated in
For example, an ORB (oriented features from accelerated segment test (FAST) and rotated binary robust independent elementary features (BRIEF)) algorithm may be used for extracting the features p and p′, and a nearest neighbor distance ratio (NNDR) algorithm may be used to determine the matching relationship between the features p and p′. However, examples are not limited to the foregoing example, and various other techniques may be applied without limitation.
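As a hedged illustration of this step, a minimal OpenCV sketch of ORB feature extraction followed by a nearest-neighbor distance ratio test is shown below; the feature count and the 0.75 ratio threshold are illustrative assumptions, not values required by this disclosure.

```python
import cv2

def match_features(img_a, img_b, ratio=0.75):
    # Extract ORB features p and p' from the images of the previous and current time points.
    orb = cv2.ORB_create(nfeatures=2000)
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)
    # A Hamming-distance brute-force matcher suits ORB's binary descriptors.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    knn = matcher.knnMatch(des_a, des_b, k=2)
    # Nearest-neighbor distance ratio test: keep a match only if its best
    # candidate is clearly better than the second-best candidate.
    good = [pair[0] for pair in knn
            if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance]
    pts_a = [kp_a[m.queryIdx].pt for m in good]
    pts_b = [kp_b[m.trainIdx].pt for m in good]
    return pts_a, pts_b
```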
In this example, in operation 430, the electronic device may calculate an essential matrix after transforming the matching relationship into a normalized image coordinate system using camera parameters. In this case, distortion correction may be applied to an image, and an eight-point algorithm may be used for calculating the essential matrix. However, examples are not limited thereto. The normalized image coordinate system may be a coordinate system in which units of the coordinate system are removed through normalization, and may be, for example, a coordinate system defining a virtual image plane at a distance of 1 from the camera's focal point. Since the camera parameters change depending on the camera, the normalized image coordinate system, in which an influence of the camera parameters is removed, may be used to interpret information from an image. The essential matrix may be a matrix indicating a relationship that always holds between the homogeneous coordinates, on the normalized image plane, of matching feature points in images captured at two arbitrary points, and may be, for example, a 3×3 matrix. However, examples are not limited thereto. In addition, Equation 1 below, for example, may be used to transform a pixel coordinate system into the normalized image coordinate system using the camera parameters.
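In a standard pinhole camera model, and consistent with the symbol definitions that follow, this pixel-to-normalized-coordinate transform may take the form:

\[
u = \frac{x - c_x}{f_x}, \qquad v = \frac{y - c_y}{f_y}
\]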
In Equation 1 above, (x, y) denotes coordinates in a two-dimensional (2D) image, fx and fy denote focal lengths of the camera, cx and cy denote a principal point of the camera, and (u, v) denotes coordinates in the normalized image coordinate system.
For example, the electronic device may determine a rotation matrix R and a translation matrix t for a movement of the vehicle between the previous time point t−1 and the current time point t by decomposing the essential matrix. The electronic device may determine a first transformation matrix of a camera coordinate system including the rotation matrix R and the translation matrix t. In this example, the rotation matrix R may be a 3×3 matrix, the translation matrix t may be a 3×1 matrix, and a singular value decomposition (SVD) algorithm may be used to determine the rotation matrix R and the translation matrix t from the essential matrix. However, examples are not limited thereto.
When determining the translation matrix t, the translation matrix t may be scaled by a moving distance based on the traveling information. The traveling information may include, for example, wheel speed information included in a CAN signal of the vehicle. The moving distance may be determined based on the wheel speed information and a time difference between the previous time point t−1 and the current time point t.
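A minimal sketch of operations such as these, estimating the essential matrix from the matched points, decomposing it into R and t, and scaling t by the wheel-speed-derived moving distance, is shown below. It uses OpenCV's RANSAC-based estimator rather than a plain eight-point solver, and K, wheel_speed_mps, and dt are assumed inputs; this is an illustration under those assumptions, not the exact method claimed.

```python
import cv2
import numpy as np

def camera_motion(pts_prev, pts_curr, K, wheel_speed_mps, dt):
    pts_prev = np.asarray(pts_prev, dtype=np.float64)
    pts_curr = np.asarray(pts_curr, dtype=np.float64)
    # Essential matrix from the matched pixel coordinates; K handles the
    # transformation into the normalized image coordinate system.
    E, inliers = cv2.findEssentialMat(pts_prev, pts_curr, K,
                                      method=cv2.RANSAC, threshold=1.0)
    # Decompose E into the rotation matrix R and a unit-length translation direction t.
    _, R, t, _ = cv2.recoverPose(E, pts_prev, pts_curr, K, mask=inliers)
    # Scale t by the moving distance derived from the CAN wheel speed.
    t_metric = t * (wheel_speed_mps * dt)
    # First transformation matrix of the camera coordinate system as a 4x4
    # homogeneous matrix [R | t; 0 0 0 1].
    T_cam_t_to_tm1 = np.eye(4)
    T_cam_t_to_tm1[:3, :3] = R
    T_cam_t_to_tm1[:3, 3] = t_metric.ravel()
    return T_cam_t_to_tm1
```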
In operation 440, the electronic device may transform the first transformation matrix into a second transformation matrix of a vehicle coordinate system. The vehicle coordinate system may be a coordinate system that is set with respect to the vehicle.
The electronic device may transform the first transformation matrix of the camera coordinate system into the second transformation matrix of the vehicle coordinate system, using Equation 2 below, for example.
T_{Car (t) to (t−1)} = T_{Camera to Car} · T_{Camera (t) to (t−1)} · T_{Car to Camera}    (Equation 2)
In Equation 2 above, TCamera to Car, which denotes a matrix that transforms the camera coordinate system into the vehicle coordinate system, may be determined in an initial calibration operation performed when an SVM system of the vehicle is initially built. TCamera (t) to (t−1), which is the first transformation matrix determined in operation 430, may denote a transformation matrix from the current time point t to the previous time point t−1 in the camera coordinate system. TCar to Camera, which denotes a matrix that transforms the vehicle coordinate system into the camera coordinate system, may correspond to an inverse transformation matrix of TCamera to Car. TCar (t) to (t−1) denotes the second transformation matrix of the vehicle coordinate system. Here, T may denote a 3×4 matrix including a 3×3 rotation matrix and a 3×1 translation matrix, but is not limited thereto. For example, T may be represented as a 4×4 matrix in Equation 2 to facilitate inter-matrix calculation. For example, the 4×4 matrix may be determined by adding a fourth row of (0, 0, 0, 1) to the 3×4 matrix described above, and the 4×4 matrix implemented as described above may enable multiplication between the T matrices.
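A minimal sketch of Equation 2 using 4×4 homogeneous matrices is shown below; T_cam2car stands in for the extrinsic calibration TCamera to Car, and all names are illustrative.

```python
import numpy as np

def to_vehicle_frame(T_cam_t_to_tm1, T_cam2car):
    # T_Car to Camera is the inverse of the camera-to-car extrinsic calibration.
    T_car2cam = np.linalg.inv(T_cam2car)
    # Equation 2: T_Car (t) to (t-1) = T_Camera to Car * T_Camera (t) to (t-1) * T_Car to Camera
    return T_cam2car @ T_cam_t_to_tm1 @ T_car2cam
```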
As will be described in detail below, when the second transformation matrix TCar (t) to (t−1) is used, the region at the previous time point t−1 that corresponds to a region corresponding to the blind spot at the current time point t may be determined. Position information of the region corresponding to the blind spot at the current time point t may be based on the vehicle coordinate system, and when the position information is multiplied by TCar to Camera, the position information may be transformed into the camera coordinate system. In addition, when the position information of the camera coordinate system is multiplied by TCamera (t) to (t−1), the position information at the current time point t on the camera coordinate system may be transformed into position information of the previous time point t−1. Since a rendered image provided to a user corresponds to the vehicle coordinate system, the electronic device may transform the position information at the previous time point t−1 on the camera coordinate system into the vehicle coordinate system, which may be performed based on TCamera to Car.
In operation 450, the electronic device may update the camera parameters to apply thereto a movement of the vehicle using either one or both of roll information and pitch information of the vehicle acquired from a rotation matrix.
For example, in operation 450, the electronic device may decompose the rotation matrix R determined in operation 430 into roll information, pitch information, and yaw information of the vehicle, using Equation 3 below, for example. In this example, the roll information, the pitch information, and the yaw information may represent a 3D movement of the vehicle occurring between the previous time point t−1 and the current time point t.
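One common way to express such a decomposition, assuming for illustration the ZYX (yaw-pitch-roll) convention R = Rz(γ)Ry(β)Rx(α) and writing Rij for the entry in row i and column j of the rotation matrix, is given below; the exact convention used in Equation 3 may differ.

\[
\alpha = \operatorname{atan2}\left(R_{32},\, R_{33}\right), \qquad
\beta = \operatorname{atan2}\left(-R_{31},\, \sqrt{R_{32}^{2} + R_{33}^{2}}\right), \qquad
\gamma = \operatorname{atan2}\left(R_{21},\, R_{11}\right)
\]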
In Equation 3 above, α denotes the roll information, β denotes the pitch information, and γ denotes the yaw information.
The electronic device may update the camera parameters to apply a movement of the vehicle to the updated camera parameters, using either one or both of the roll information and the pitch information of the vehicle. For example, when pitch information of 3 degrees occurs between the previous time point t−1 and the current time point t while the vehicle is crossing a speed bump, the electronic device may add 3 degrees to pitch information of 2 degrees in a current camera parameter to update the pitch information to 5 degrees.
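A minimal sketch of this decomposition and parameter update, assuming the ZYX Euler convention noted above and an illustrative dictionary layout for the camera's extrinsic parameters, is:

```python
import numpy as np

def rotation_to_euler(R):
    # Roll (alpha), pitch (beta), and yaw (gamma) under the assumed ZYX convention.
    roll = np.arctan2(R[2, 1], R[2, 2])
    pitch = np.arctan2(-R[2, 0], np.hypot(R[2, 1], R[2, 2]))
    yaw = np.arctan2(R[1, 0], R[0, 0])
    return roll, pitch, yaw

def update_camera_extrinsics(extrinsics, R):
    roll, pitch, _ = rotation_to_euler(R)
    updated = dict(extrinsics)
    # e.g., a 3-degree pitch change over a speed bump turns a stored pitch of
    # 2 degrees into 5 degrees.
    updated["pitch_deg"] += np.degrees(pitch)
    updated["roll_deg"] += np.degrees(roll)
    return updated
```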
For example, the camera parameters may include an intrinsic parameter associated with an intrinsic characteristic of a camera (e.g., the camera 115), such as, for example, a focal length, an aspect ratio, and a principal point of the camera, and an extrinsic parameter associated with a geometric relationship between the camera and an external space, such as, for example, an installation height and a direction (e.g., pan, tilt).
By updating the camera parameters using either one or both of the roll information and the pitch information of the vehicle, an image more naturally rendered to a movement of the vehicle may be generated using the updated camera parameters.
In operation 460, the electronic device may visualize a blind spot of the camera based on either one or both of the roll information and the pitch information using the updated parameters and the second transformation matrix.
In the example of
The electronic device of one or more embodiments may visualize the region 610 of the previous time point t−1 in the blind spot 630 based on the updated parameters. As described above, the updated parameters may be camera-related parameters to which a movement of the vehicle between the previous time point t−1 and the current time point t is applied; accordingly, the electronic device of one or more embodiments may visualize the region 610 of the previous time point t−1 on the blind spot 630 based on the updated parameters, and the blind spot 630 may thereby be naturally visualized for the movement of the vehicle. For example, when pitch information is changed between the previous time point t−1 and the current time point t as the vehicle crosses a speed bump, the electronic device of one or more embodiments may visualize the region 610 of the previous time point t−1 in the region 620 of the current time point t based on the changed pitch information.
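The following sketch illustrates, under stated assumptions, how such a blind-spot region might be filled: ground points of the blind spot at the current time point t are mapped to the previous time point t−1 with the second transformation matrix, projected into the previous image with the updated camera parameters (collapsed here into a single projection matrix K_updated for brevity), and the sampled pixels are written into the rendered top view. The top-view scale (px_per_m) and origin are illustrative assumptions.

```python
import numpy as np

def fill_blind_spot(top_view, prev_image, blind_pts_car, T_car_t_to_tm1,
                    T_car2cam, K_updated, px_per_m=50.0, origin=(400, 400)):
    h, w = prev_image.shape[:2]
    for p_car_t in blind_pts_car:              # ground point (x, y, 0) of the blind spot at time t, car frame
        p_h = np.append(p_car_t, 1.0)
        # Where this point was at the previous time point t-1, expressed in the camera frame.
        p_cam_tm1 = T_car2cam @ (T_car_t_to_tm1 @ p_h)
        # Project into the previous image with the updated camera parameters.
        uvw = K_updated @ p_cam_tm1[:3]
        u, v = int(uvw[0] / uvw[2]), int(uvw[1] / uvw[2])
        if 0 <= u < w and 0 <= v < h:
            # Destination pixel of the same point in the vehicle-centered top view.
            x = int(origin[0] + p_car_t[1] * px_per_m)
            y = int(origin[1] - p_car_t[0] * px_per_m)
            top_view[y, x] = prev_image[v, u]
    return top_view
```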
In addition, the electronic device may determine a third transformation matrix of a vehicle coordinate system based on traveling information of the vehicle. For example, the electronic device may rapidly acquire the third transformation matrix of the vehicle coordinate system from a CAN signal. In addition, since the third transformation matrix of the vehicle coordinate system determined from the CAN signal may have higher accuracy than a second transformation matrix of the vehicle coordinate system determined based on an image, when the vehicle is traveling on a straight road on a flat ground, the electronic device may acquire a transformation matrix of the vehicle coordinate system with higher accuracy by correcting the second transformation matrix using the third transformation matrix.
The electronic device of one or more embodiments may correct the first transformation matrix including the rotation matrix and the translation matrix determined in operation 430, using a value of a sensor fixed to the vehicle. For example, the sensor fixed to the vehicle may include a light detection and ranging (lidar) sensor and a radio detection and ranging (radar) sensor to detect a 3D movement of the vehicle. When the sensor is provided in the vehicle, a transformation matrix of the camera coordinate system with higher accuracy may be acquired by correcting the first transformation matrix additionally using the value of the sensor.
In addition, when the vehicle starts traveling again after being parked, the electronic device of one or more embodiments may visualize the blind spot, before the parked vehicle starts traveling again, as a blind spot image determined while the vehicle was traveling before being parked. The visualizing of the blind spot of the vehicle that is traveling has been described above, and a region corresponding to a blind spot at a current time point may be a region that did not belong to the blind spot at a previous time point and thus may correspond to a region captured by the camera. When the previously parked vehicle starts traveling again, however, the vehicle may not yet be moving, and thus the region corresponding to the blind spot at the current time point may still correspond to the blind spot at the previous time point. In this case, by storing, in a memory, a blind spot image determined as described above while the vehicle is traveling to a parking position at which the vehicle is to be parked, the electronic device of one or more embodiments may continuously visualize the blind spot by visualizing the blind spot, before the vehicle starts traveling again, as the blind spot image stored in the memory.
In addition, the camera coordinate system and the camera parameters determined in operations 410 through 460 may be based on any one determined from among a plurality of cameras fixed to the vehicle based on a traveling direction of the vehicle. For example, when the vehicle travels forward, the camera coordinate system and the camera parameters may be based on a camera that captures an image of the front side of the vehicle. When the vehicle travels backward, the camera coordinate system and the camera parameters may be based on a camera that captures an image of the rear side of the vehicle.
Referring to
When a vehicle crosses a speed bump, a change in pitch information may occur. However, by simply using an image of a previous time point for visualization without applying such a change in the pitch information, the typical electronic device degrades the consistency of a blind spot 730. As illustrated in the example 710 of
In contrast, by applying roll information and/or pitch information to visualize a blind spot 740, the electronic device of one or more embodiments may maintain the high consistency of the blind spot 740 as illustrated in the example 720 of
The electronic devices, vehicles, memories, processors, cameras, vehicle 100, electronic device 110, memory 111, processor 113, camera 115, vehicle 310, and other devices, apparatuses, units, modules, and components described herein with respect to
The methods illustrated in
Instructions or software to control computing hardware, for example, one or more processors or computers, to implement the hardware components and perform the methods as described above may be written as computer programs, code segments, instructions or any combination thereof, for individually or collectively instructing or configuring the one or more processors or computers to operate as a machine or special-purpose computer to perform the operations that are performed by the hardware components and the methods as described above. In one example, the instructions or software include machine code that is directly executed by the one or more processors or computers, such as machine code produced by a compiler. In another example, the instructions or software include higher-level code that is executed by the one or more processors or computers using an interpreter. The instructions or software may be written using any programming language based on the block diagrams and the flow charts illustrated in the drawings and the corresponding descriptions in the specification, which disclose algorithms for performing the operations that are performed by the hardware components and the methods as described above.
The instructions or software to control computing hardware, for example, one or more processors or computers, to implement the hardware components and perform the methods as described above, and any associated data, data files, and data structures, may be recorded, stored, or fixed in or on one or more non-transitory computer-readable storage media. Examples of a non-transitory computer-readable storage medium include read-only memory (ROM), random-access programmable read only memory (PROM), electrically erasable programmable read-only memory (EEPROM), random-access memory (RAM), dynamic random access memory (DRAM), static random access memory (SRAM), flash memory, non-volatile memory, CD-ROMs, CD-Rs, CD+Rs, CD-RWs, CD+RWs, DVD-ROMs, DVD-Rs, DVD+Rs, DVD-RWs, DVD+RWs, DVD-RAMs, BD-ROMs, BD-Rs, BD-R LTHs, BD-REs, Blu-ray or optical disk storage, hard disk drive (HDD), solid state drive (SSD), flash memory, a card type memory such as multimedia card micro or a card (for example, secure digital (SD) or extreme digital (XD)), magnetic tapes, floppy disks, magneto-optical data storage devices, optical data storage devices, hard disks, solid-state disks, and any other device that is configured to store the instructions or software and any associated data, data files, and data structures in a non-transitory manner and provide the instructions or software and any associated data, data files, and data structures to one or more processors or computers so that the one or more processors or computers can execute the instructions. In one example, the instructions or software and any associated data, data files, and data structures are distributed over network-coupled computer systems so that the instructions and software and any associated data, data files, and data structures are stored, accessed, and executed in a distributed fashion by the one or more processors or computers.
While this disclosure includes specific examples, it will be apparent after an understanding of the disclosure of this application that various changes in form and details may be made in these examples without departing from the spirit and scope of the claims and their equivalents. The examples described herein are to be considered in a descriptive sense only, and not for purposes of limitation. Descriptions of features or aspects in each example are to be considered as being applicable to similar features or aspects in other examples. Suitable results may be achieved if the described techniques are performed in a different order, and/or if components in a described system, architecture, device, or circuit are combined in a different manner, and/or replaced or supplemented by other components or their equivalents.
Therefore, the scope of the disclosure is defined not by the detailed description, but by the claims and their equivalents, and all variations within the scope of the claims and their equivalents are to be construed as being included in the disclosure.