This application claims the benefit of Korean Patent Application No. 10-2020-0064597, filed on May 28, 2020, in the Korean Intellectual Property Office, the disclosure of which is incorporated herein in its entirety by reference.
The disclosure relates to an electronic device for performing object detection and an operation method thereof, and more particularly, to an electronic device for performing object detection by using image sensors having photographing areas, which overlap each other, and an operation method of the electronic device.
A self-driving system (or an Advanced Driver Assistance System (ADAS)) may obtain information regarding a host vehicle and a surrounding environment from various types of sensors and may safely navigate by controlling the host vehicle based on the obtained information. In detail, the self-driving system may capture images of a surrounding environment of the host vehicle by using image sensors, perform object detection on the captured images, and may control a driving direction, speed, and the like of the host vehicle according to an object detection result.
The self-driving system may include an image sensor that mainly photographs a front view of the host vehicle and may perform object detection on an image of the front view. When an object comes close to a left side or a right side of the host vehicle, the image of the front view may include only part of the object. Accordingly, it is difficult for the self-driving system to accurately detect, from the image of the front view, an object coming close to the left side or the right side of the host vehicle.
According to one or more embodiments, an electronic device detects an image area corresponding to a proximity object based on two images captured in one direction, merges an image, which is captured in another direction and includes the proximity object, with the detected image area, and performs object detection on a merged image.
According to one or more embodiments, an electronic device includes a first image sensor configured to output a first image produced by photographing a first photographing area. A second image sensor outputs a second image produced by photographing a second photographing area that overlaps at least some portions of the first photographing area. A third image sensor outputs a third image produced by photographing a third photographing area. A processor performs object detection on at least one object included in an image. The processor generates disparity information indicating a separation degree of at least one feature point of the first image and the second image, transforms the third image based on the disparity information, and performs the object detection on the transformed third image.
According to one or more embodiments, an electronic device includes a first image sensor configured to output a first color image captured in a first direction. A depth sensor outputs a depth image corresponding to the first color image. A second image sensor outputs a second color image captured in a second direction. A processor performs object detection on at least one object included in an image. The processor transforms the second color image based on the first color image and the depth image and performs the object detection on the second color image that is transformed.
According to one or more embodiments, an operation method of an electronic device includes obtaining a first image produced by photographing a first photographing area; obtaining a second image produced by photographing a second photographing area that overlaps at least some portions of the first photographing area; obtaining a third image produced by photographing a third photographing area; generating disparity information indicating a separation degree of at least one feature point of the first image and the second image; transforming the third image based on the disparity information; and performing object detection on the third image that is transformed.
Embodiments of the disclosure will be more clearly understood from the following detailed description taken in conjunction with the accompanying drawings.
Referring to the accompanying drawings, an electronic device 10 may include a sensor 100, a memory 200, and a processor 300.
The electronic device 10 may be realized as a personal computer (PC), an Internet of Things (IoT) device, or a portable electronic device. The portable electronic device may be any of various devices such as a laptop computer, a mobile phone, a smartphone, a tablet PC, a personal digital assistant (PDA), an enterprise digital assistant (EDA), a digital still camera, a digital video camera, an audio device, a portable multimedia player (PMP), a personal navigation device (PND), an MP3 player, a handheld game console, an e-book reader, and a wearable device.
In an embodiment, the electronic device 10 may be a device that controls a host vehicle. The electronic device 10 may perform object detection based on images capturing a surrounding environment of the host vehicle and control the host vehicle according to an object detection result. Hereinafter, for convenience, it is assumed that the electronic device 10 is a device that controls the host vehicle.
The sensor 100 may include sensors that generate information regarding the surrounding environment. For example, the sensor 100 may include an image sensor such as a Charge Coupled Device (CCD) or a Complementary Metal Oxide Semiconductor (CMOS). In an example, the sensor 100 may include a first image sensor 110, a second image sensor 120, and a third image sensor 130.
The first image sensor 110 may output a first image IMG1 of a first photographing area, the second image sensor 120 may output a second image IMG2 of a second photographing area, and the third image sensor 130 may output a third image IMG3 of a third photographing area. In an embodiment, the first image sensor 110 and the second image sensor 120 may be arranged adjacent to each other and may capture images in the same or similar directions. Accordingly, the first photographing area of the first image sensor 110 may overlap most of the second photographing area of the second image sensor 120. As the gap between the first image sensor 110 and the second image sensor 120 decreases, the region where the first photographing area overlaps the second photographing area may increase. The first image sensor 110 and the second image sensor 120 may together be realized as a stereo camera (not shown), and the first image IMG1 and the second image IMG2 may be referred to as stereo images.
In an embodiment, the third image sensor 130 may capture an image in a direction perpendicular to a photographing direction of the first image sensor 110 or the second image sensor 120. For example, the first image sensor 110 and the second image sensor 120 may photograph a front view of the host vehicle, and the third image sensor 130 may photograph a side view of the host vehicle. As another example, the first image sensor 110 and the second image sensor 120 may photograph a rear view of the host vehicle, and the third image sensor 130 may photograph the side view of the host vehicle.
In the above examples, according to embodiments, the third image sensor 130, which photographs the side view of the host vehicle, may include at least two image sensors photographing a left side view and/or a right side view of the host vehicle. For example, when the third image sensor 130 includes two image sensors that both photograph one of the left side view and the right side view of the host vehicle, the two image sensors may have photographing areas overlapping each other. As another example, when the third image sensor 130 includes two image sensors that respectively photograph the left side view and the right side view of the host vehicle, the two image sensors may have different photographing areas.
For convenience of explanation, hereinafter, it is assumed that the first image sensor 110 and the second image sensor 120 photograph the front view of the host vehicle and that the third image sensor 130 includes one image sensor and photographs the side view of the host vehicle.
The third photographing area of the third image sensor 130 may overlap at least one of the first photographing area and the second photographing area. Because the third image sensor 130 captures an image in a direction perpendicular to the photographing directions of the first image sensor 110 and the second image sensor 120, the overlap between the third photographing area and the first or second photographing area may be relatively smaller than the overlap between the first photographing area and the second photographing area. However, the photographing direction of the third image sensor 130 is not limited thereto and may be any direction in which the third photographing area overlaps the first or second photographing area.
When an object (e.g., a peripheral vehicle) comes close to a left side or a right side of the front of the electronic device 10, only part of the object (e.g., a front portion of the peripheral vehicle) may be included in the first image IMG1 or the second image IMG2. Also, when the object is located in the photographing direction of the third image sensor 130, other portions of the object (e.g., middle and rear portions of the peripheral vehicle) may be included in the third image IMG3. In this case, even when each of the first image IMG1, the second image IMG2, and the third image IMG3 is analyzed, it may be difficult for the processor 300 to detect the proximity object close to the electronic device 10, because no single image includes the entire object.
As a storage in which data is stored, the memory 200 may store data generated by the sensor 100 and various pieces of data generated while the processor 300 performs calculations. For example, the memory 200 may store the first to third images IMG1 to IMG3 that are obtained by the first to third image sensors 110 to 130. As described below with regard to the operation of the processor 300, the memory 200 may store a processing result according to the image processing of the processor 300.
The processor 300 may control all operations of the electronic device 10. The processor 300 may include various operation processors such as a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), an Application Processor (AP), a Digital Signal Processor (DSP), a Field-Programmable Gate Array (FPGA), a Neural Processing Unit (NPU), an Electronic Control Unit (ECU), and an Image Signal Processor (ISP).
The processor 300 according to an embodiment may include an image transformation module 310. The image transformation module 310 may transform the third image IMG3 based on the first and second images IMG1 and IMG2. In an embodiment, the image transformation module 310 may generate disparity information indicating a separation degree of at least one common feature point of the first image IMG1 and the second image IMG2 and may transform the third image IMG3 based on the generated disparity information. An image transformation operation of the image transformation module 310 will be described below in more detail.
Because the first image sensor 110 and the second image sensor 120 are arranged adjacent to each other, at least one object may be commonly included in the first image IMG1 and the second image IMG2, and thus feature points of the commonly included object may appear in both the first image IMG1 and the second image IMG2. Here, the term 'feature point' denotes a pixel (or a group of pixels) that indicates a feature of an object and forms the object. The image transformation module 310 may detect a common feature point by analyzing the first image IMG1 and the second image IMG2 and may generate the disparity information based on a difference between a location of the feature point in the first image IMG1 and a location of the feature point in the second image IMG2.
The disparity information may have different values according to a distance between the electronic device 10 and an object. For example, when a first object is far from the electronic device 10, the difference between the location of its feature point in the first image IMG1 and the location of the corresponding feature point in the second image IMG2 is small, so the disparity value of the feature point of the first object may be relatively small. As another example, when a second object is close to the electronic device 10, the difference between the locations of the corresponding feature points in the first image IMG1 and the second image IMG2 is great, so the disparity value of the feature point of the second object may be relatively great. The image transformation module 310 may transform the third image IMG3 by using this property of the disparity values.
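As an informal illustration of this relationship (not part of the disclosed embodiments), the following Python sketch applies the standard rectified-stereo relation in which depth is inversely proportional to disparity; the focal length and baseline values are hypothetical placeholders.

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_length_px=1200.0, baseline_m=0.3):
    """Standard rectified-stereo relation: depth is inversely proportional to disparity.

    disparity_px   : disparity of a matched feature point, in pixels
    focal_length_px: focal length of the rectified cameras, in pixels (hypothetical value)
    baseline_m     : gap between the first and second image sensors, in meters (hypothetical)
    """
    disparity_px = np.asarray(disparity_px, dtype=np.float64)
    # Guard against zero disparity (points at infinity).
    return np.where(disparity_px > 0, focal_length_px * baseline_m / disparity_px, np.inf)

# A distant object (small disparity) maps to a large depth,
# while a proximity object (large disparity) maps to a small depth.
print(depth_from_disparity([2.0, 120.0]))   # ~[180.0, 3.0] meters
```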
In detail, the image transformation module 310 may detect an area, which corresponds to the proximity object close to the electronic device 10, from the first image IMG1 or the second image IMG2, based on the disparity information. For example, because a disparity value regarding a feature point forming the proximity object is relatively great, the image transformation module 310 may detect an area of the first image IMG1 or the second image IMG2, which has a great disparity value, as an area corresponding to the proximity object.
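A minimal sketch of this area-detection step, assuming a dense disparity map computed with OpenCV's semi-global block matcher and a hypothetical disparity threshold; the function name and parameter values are illustrative only.

```python
import cv2
import numpy as np

def detect_proximity_area(img1_gray, img2_gray, disparity_threshold=48):
    """Return a bounding box of the image area whose disparity is relatively great.

    img1_gray, img2_gray: rectified grayscale first/second images (front view)
    disparity_threshold : hypothetical pixel-disparity cutoff for 'proximity' areas
    """
    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)
    # StereoSGBM returns fixed-point disparities scaled by 16.
    disparity = matcher.compute(img1_gray, img2_gray).astype(np.float32) / 16.0

    mask = (disparity > disparity_threshold).astype(np.uint8)
    if mask.sum() == 0:
        return None, disparity
    ys, xs = np.nonzero(mask)
    # Bounding box of the high-disparity (i.e., nearby) region in the first image.
    return (xs.min(), ys.min(), xs.max(), ys.max()), disparity
```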
The image transformation module 310 may merge an area including the detected area with the third image IMG3. For example, the image transformation module 310 may merge an area of the first image IMG1 or the second image IMG2, which corresponds to a portion (e.g., a front portion of the peripheral vehicle) of the proximity object, with an area of the third image IMG3, which relates to other portions (e.g., middle and rear portions of the peripheral vehicle) of the proximity object, thereby transforming the third image IMG3 to include the entire object. In the above examples, a portion of the proximity object is included in the first image IMG1 or the second image IMG2 and other portions of the proximity object are included in the third image IMG3, but this is merely an example. Portions of the proximity object may be redundantly included in the first to third images IMG1 to IMG3.
The processor 300 may include an object detection module 320. The object detection module 320 may detect at least one object included in an image. In an embodiment, the object detection module 320 may detect an object included in at least one of the first to third images IMG1 to IMG3. Also, the object detection module 320 may detect an object included in the third image IMG3 that is transformed by the image transformation module 310. In detail, the object detection module 320 may detect the proximity object from the third image IMG3 that is transformed to include the entire proximity object close to the electronic device 10.
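For illustration only, a generic off-the-shelf detector can stand in for the object detection module 320; the disclosure does not specify a particular detector, so the torchvision Faster R-CNN below (assuming a recent torchvision release) and the score threshold are assumptions.

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import to_tensor

# In practice a detector trained for road scenes would be loaded; the torchvision
# default weights are used here only to keep the sketch self-contained.
detector = fasterrcnn_resnet50_fpn(weights="DEFAULT")
detector.eval()

def detect_objects(transformed_third_image, score_threshold=0.5):
    """Run object detection on the transformed third image (IMG3_T).

    transformed_third_image: H x W x 3 uint8 array (e.g., the merged side-view image)
    """
    inputs = [to_tensor(transformed_third_image)]   # list of C x H x W tensors in [0, 1]
    with torch.no_grad():
        predictions = detector(inputs)[0]
    keep = predictions["scores"] >= score_threshold
    return predictions["boxes"][keep], predictions["labels"][keep], predictions["scores"][keep]
```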
The image transformation module 310 and the object detection module 320 may each be realized as firmware or software and may be loaded on the memory 200 and executed by the processor 300. However, one or more embodiments are not limited thereto. The image transformation module 310 and the object detection module 320 may each be realized as hardware or a combination of software and hardware.
The electronic device 10 according to an embodiment may detect an image area corresponding to the proximity object by using two images captured in one direction, merge an image, which is captured in another direction and includes the proximity object, with the detected area, and perform the object detection on the merged image, thereby accurately detecting the proximity object.
Hereinafter, an operation in which the electronic device 10 detects a proximity object will be described in detail with reference to an example.
Objects OB1 to OB4 may be in the vicinity of the electronic device 10. Among them, the third object OB3 may be the closest to the host vehicle.
The first image IMG1 and the second image IMG2 may include only a front portion of the third object OB3, which is the closest to the host vehicle. The third image IMG3 captured by the third image sensor 130 may include a middle portion and a rear portion of the third object OB3. Therefore, the electronic device 10 is unlikely to detect the third object OB3 even though the object detection is performed on the first image IMG1, the second image IMG2, or the third image IMG3.
The processor 300 may transform the third image IMG3 to accurately detect the third object OB3. In detail, the processor 300 may generate disparity information of the first image IMG1 and the second image IMG2. Also, the processor 300 may detect an area of the first image IMG1 or the second image IMG2 (e.g., a front portion), which corresponds to the third object OB3, based on the generated disparity information. The processor 300 may transform the third image IMG3 by merging the detected area (e.g., the front portion) with an area associated with the third object OB3 of the third image IMG3. The processor 300 may detect the third object OB3 by performing the object detection on the transformed third image IMG3.
In operation S110, the electronic device 10 may obtain the first image IMG1 of the first photographing area. In detail, the electronic device 10 may obtain the first image IMG1 of the first photographing area that is captured by the first image sensor 110. However, one or more embodiments are not limited thereto, and the electronic device 10 may obtain the first image IMG1 from an external device.
In operation S120, the electronic device 10 may obtain the second image IMG2 of the second photographing area that overlaps at least some portions of the first photographing area. In detail, the electronic device 10 may obtain the second image IMG2 of the second photographing area that is captured by the second image sensor 120. However, one or more embodiments are not limited thereto, and the electronic device 10 may obtain the second image IMG2 from the external device.
In operation S130, the electronic device 10 may obtain the third image IMG3 of the third photographing area. In detail, the electronic device 10 may obtain the third image IMG3 of the third photographing area that is captured by the third image sensor 130. However, one or more embodiments are not limited thereto, and the electronic device 10 may obtain the third image IMG3 from the external device. The third photographing area may overlap the first photographing area or the second photographing area.
In operation S140, the electronic device 10 may generate the disparity information. In detail, the electronic device 10 may generate the disparity information indicating a separation degree of at least one common feature point of the first image IMG1 and the second image IMG2.
In operation S150, the electronic device 10 may transform the third image IMG3 based on the generated disparity information. In detail, the electronic device 10 may detect the area, which indicates the proximity object close to the electronic device 10, from the first image IMG1 or the second image IMG2 based on the disparity information and may transform the third image IMG3 by merging the detected area with the third image IMG3. In operation S160, the electronic device 10 may perform the object detection on the transformed third image IMG3.
The electronic device 10 may detect at least one first feature point from the first image IMG1 and at least one second feature point from the second image IMG2.
In operation S145, the electronic device 10 may perform feature matching between the first feature point and the second feature point. In detail, the electronic device 10 may match the first and second feature points, which correspond to each other, for each of the objects that are commonly included in the first image IMG1 and the second image IMG2.
In operation S147, the electronic device 10 may calculate a separation degree of the matched feature points. In detail, the electronic device 10 may generate the disparity information by calculating a difference between the locations of the first feature point and the second feature point that are matched. A method whereby the electronic device 10 generates the disparity information is not limited thereto, and the disparity information may be generated in various manners.
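One possible realization of the feature detection, matching, and separation-degree calculation described above is sketched below with OpenCV primitives; ORB features and brute-force Hamming matching are illustrative choices rather than requirements of the disclosure, and the function name is hypothetical.

```python
import cv2
import numpy as np

def sparse_disparity(img1_gray, img2_gray, max_matches=200):
    """Match feature points between the first and second images and
    return the horizontal separation (disparity) of each matched pair."""
    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(img1_gray, None)
    kp2, des2 = orb.detectAndCompute(img2_gray, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:max_matches]

    disparities = []
    for m in matches:
        x1, _ = kp1[m.queryIdx].pt
        x2, _ = kp2[m.trainIdx].pt
        # For rectified stereo images the separation degree is the horizontal offset.
        disparities.append(abs(x1 - x2))
    return np.array(disparities), matches
```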
For example, the electronic device 10 may detect first feature points from the first image IMG1 and second feature points from the second image IMG2 for objects, such as a peripheral vehicle, a road, and trees, that are commonly included in the two images.
In operation S145, the electronic device 10 may match the first and second feature points that are detected. For example, for a peripheral vehicle that is commonly included in the first image IMG1 and the second image IMG2, the electronic device 10 may match a first feature point constituting the peripheral vehicle in the first image IMG1 and a second feature point constituting the peripheral vehicle in the second image IMG2. The electronic device 10 may perform feature matching identically on roads, trees, and the like that are commonly included in the first image IMG1 and the second image IMG2.
The electronic device 10 may determine areas of the first image IMG1 and the second image IMG2, which include the matched feature points, and generate the disparity information by using the determined areas. For example, because the first image IMG1 is an image captured by the first image sensor 110 located on a left side of the second image sensor 120, a left edge of the first image IMG1 may correspond to an area excluded from the photographing area of the second image sensor 120. Therefore, the electronic device 10 may not detect the second feature point that is matched with the first feature point that is located on the left edge of the first image IMG1. Accordingly, the electronic device 10 may determine that a region A except for the left edge of the first image IMG1 is used to generate the disparity information.
Also, because the second image IMG2 is an image captured by the second image sensor 120 located on a right side of the first image sensor 110, a right edge of the second image IMG2 may be an area excluded from the photographing area of the first image sensor 110. Therefore, the electronic device 10 may not detect the first feature point matched with the second feature point that is located on the right edge of the second image IMG2. Accordingly, the electronic device 10 may determine that a region B except for the right edge of the second image IMG2 is used to generate the disparity information.
In operation S151, the electronic device 10 may extract a target area, which corresponds to the proximity object close to the electronic device 10, from the first image IMG1 or the second image IMG2 based on the disparity information. In an embodiment, the electronic device 10 may extract an area having relatively great disparity values as the target area.
In another embodiment, the electronic device 10 may extract an area, which overlaps the third photographing area of the third image sensor 130, from the first image IMG1 as the target area. Alternatively, the electronic device 10 may extract an area, which overlaps the third photographing area of the third image sensor 130, from the second image IMG2 as the target area.
In operation S153, the electronic device 10 may transform the third image IMG3 by merging the extracted target area with the third image IMG3. In detail, the electronic device 10 may warp the target area and may merge the warped target area with the third image IMG3.
In an embodiment, the electronic device 10 may include mapping information including a coordinate value of the third image IMG3 that corresponds to each coordinate value of the first image IMG1 or the second image IMG2. The electronic device 10 may identify the corresponding coordinate value of the third image IMG3 for each pixel forming the target area, based on the mapping information. The electronic device 10 may merge a pixel value of each pixel forming the target area with the identified coordinate value of the third image IMG3.
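A hedged sketch of this pixel-wise merge is shown below; the mapping tables map_x and map_y (third-image coordinates for each first-image coordinate) are assumed to come from prior calibration and are hypothetical, as is the function name.

```python
import numpy as np

def merge_target_area(third_img, first_img, target_mask, map_x, map_y):
    """Copy each target-area pixel of the first image into the third image
    at the coordinate given by the precomputed mapping information.

    third_img  : H3 x W3 x 3 side-view image to be transformed (a modified copy is returned)
    first_img  : H1 x W1 x 3 front-view image containing part of the proximity object
    target_mask: H1 x W1 boolean mask of the detected target area
    map_x/map_y: H1 x W1 arrays giving the third-image (x, y) for every first-image pixel
                 (assumed to come from extrinsic calibration; hypothetical here)
    """
    merged = third_img.copy()
    ys, xs = np.nonzero(target_mask)
    tx = np.round(map_x[ys, xs]).astype(int)
    ty = np.round(map_y[ys, xs]).astype(int)
    # Keep only coordinates that land inside the third image canvas.
    inside = (tx >= 0) & (tx < merged.shape[1]) & (ty >= 0) & (ty < merged.shape[0])
    merged[ty[inside], tx[inside]] = first_img[ys[inside], xs[inside]]
    return merged
```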
In another embodiment, for quicker calculation, the electronic device 10 may only detect a coordinate value of the third image IMG3 corresponding to each feature point included in the target area, instead of each pixel forming the target area. The electronic device 10 may merge a preset-sized image (that is, a portion of the target area), which includes each feature point, with the detected coordinate value of the third image IMG3.
When merging the target area with the third image IMG3, the electronic device 10 may merge, with the third image IMG3, the pixel values of the target area as well as the disparity values corresponding to the target area. For example, the electronic device 10 may merge the disparity values corresponding to the target area with the third image IMG3. As another example, the electronic device 10 may generate depth information indicating a depth value based on the disparity values corresponding to the target area and may merge the generated depth information with the third image IMG3. As another example, the electronic device 10 may generate distance information indicating a distance to the host vehicle based on the generated depth information and may merge the generated distance information with the third image IMG3. A method whereby the electronic device 10 merges the disparity values with the third image IMG3 is not limited thereto. The electronic device 10 may perform the object detection based on the disparity values and the pixel values of the third image IMG3 that is transformed.
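One way to read the merging of disparity-derived values is to stack a per-pixel distance channel alongside the merged pixel values before detection; the channel layout and calibration constants below are assumptions, not part of the disclosure.

```python
import numpy as np

def stack_distance_channel(merged_rgb, merged_disparity,
                           focal_length_px=1200.0, baseline_m=0.3):
    """Build a 4-channel detection input: merged RGB plus a per-pixel distance channel.

    merged_rgb      : H x W x 3 transformed third image
    merged_disparity: H x W disparity values merged at the same coordinates (0 where absent)
    The focal length and baseline are hypothetical calibration values.
    """
    with np.errstate(divide="ignore"):
        distance = np.where(merged_disparity > 0,
                            focal_length_px * baseline_m / merged_disparity, 0.0)
    return np.concatenate([merged_rgb.astype(np.float32),
                           distance[..., None].astype(np.float32)], axis=-1)
```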
A method whereby the electronic device 10 merges the target area with the third image IMG3 is not limited thereto and may vary.
The electronic device 10 may generate a transformed third image IMG3_T by merging the target area with the third image IMG3.
The coordinate value of the third image IMG3 corresponding to the feature point may exceed a coordinate value range of pixels forming the existing third image IMG3. Therefore, the transformed third image IMG3_T may have a greater size than the existing third image IMG3.
In operation S155, the electronic device 10 may generate a masked image by masking areas of the first image IMG1 or the second image IMG2 other than the target area.
In operation S157, the electronic device 10 may merge the masked image with the third image IMG3. A method of merging the masked image with the third image IMG3 may be substantially the same as the above-described method of merging the target area with the third image IMG3.
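A minimal sketch of the masking step, assuming that all pixels outside the target area are simply zeroed before the merge; the function name is hypothetical.

```python
import numpy as np

def mask_target_area(first_img, target_mask):
    """Zero out every pixel outside the detected target area (operation S155-style masking).

    first_img  : H x W x 3 first (or second) image
    target_mask: H x W boolean mask of the target area
    """
    masked = np.zeros_like(first_img)
    masked[target_mask] = first_img[target_mask]
    return masked
```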
The image transformation module 310 may include a first AI model 311 and a second AI model 313.
In an embodiment, the first AI model 311 may receive the first image IMG1 and the second image IMG2 and generate the disparity information Info_D based on the received first and second images IMG1 and IMG2. For example, the first AI model 311 may receive the first image IMG1 from the first image sensor 110 and the second image IMG2 from the second image sensor 120 and may output the disparity information Info_D indicating a separation degree of at least one common feature point of the two images.
The second AI model 313 may receive the disparity information Info_D and the first to third images IMG1 to IMG3 and may transform the third image IMG3 based on the received disparity information Info_D and the received first and second images IMG1 and IMG2. For example, the second AI model 313 may receive the first to third images IMG1 to IMG3 from the first to third image sensors 110 to 130, receive the disparity information Info_D from the first AI model 311, and output the transformed third image IMG3_T.
The second AI model 313 may generate the transformed third image IMG3_T by merging, with the third image IMG3, pixel values of some areas of at least one of the first image IMG1 and the second image IMG2. Alternatively, according to an embodiment, the second AI model 313 may generate the transformed third image IMG3_T by merging, with the third image IMG3, the pixel values of areas of at least one of the first image IMG1 and the second image IMG2 and disparity values corresponding to the areas. A method of merging the disparity values with the third image IMG3 may be substantially the same as the method described above in which the disparity values corresponding to the target area are merged with the third image IMG3.
According to an embodiment, the second AI model 313 may receive at least one of the first image IMG1 and the second image IMG2. For example, the second AI model 313 may receive the first image IMG1, the third image IMG3, and the disparity information Info_D, transform the third image IMG3 based on the first image IMG1 and the disparity information Info_D, and output the transformed third image IMG3_T. As another example, the second AI model 313 may receive the second image IMG2, the third image IMG3, and the disparity information Info_D, transform the third image IMG3 based on the second image IMG2 and the disparity information Info_D, and output the transformed third image IMG3_T.
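The following PyTorch-style skeleton sketches only the two-model data flow; the layer choices, channel counts, and concatenation-based fusion are placeholders and do not represent the actual architecture of the first AI model 311 or the second AI model 313.

```python
import torch
import torch.nn as nn

class DisparityModel(nn.Module):          # stands in for the first AI model 311
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(32, 1, 3, padding=1))
    def forward(self, img1, img2):
        # Concatenate the stereo pair along channels and regress a disparity map (Info_D).
        return self.net(torch.cat([img1, img2], dim=1))

class TransformModel(nn.Module):          # stands in for the second AI model 313
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(10, 32, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(32, 3, 3, padding=1))
    def forward(self, img1, img2, img3, disparity):
        # Fuse all inputs and emit the transformed third image (IMG3_T).
        return self.net(torch.cat([img1, img2, img3, disparity], dim=1))

img1 = img2 = img3 = torch.rand(1, 3, 128, 256)           # dummy inputs for the sketch
disparity = DisparityModel()(img1, img2)                   # Info_D
img3_t = TransformModel()(img1, img2, img3, disparity)     # IMG3_T
```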
The first AI model 311 and the second AI model 313 may each perform tasks based on various neural networks. A neural network may be a model based on at least one of various models such as an Artificial Neural Network (ANN) model, a Multi-Layer Perceptron (MLP) model, a Convolutional Neural Network (CNN) model, a Decision Tree model, a Random Forest model, an AdaBoost model, a Multiple Regression Analysis model, a Logistic Regression model, and a RANdom SAmple Consensus (RANSAC) model. However, types of the neural network are not limited thereto. Also, a neural network for performing one task may include sub-neural networks, and the sub-neural networks may be realized as heterogeneous or homogeneous neural network models.
The first AI model 311 or the second AI model 313 may each be realized as software, hardware, or a combination thereof. Each of the first AI model 311 and the second AI model 313 may be trained by a manufacturer in advance and may be included in the electronic device 10 during the manufacture. However, one or more embodiments are not limited thereto, and the processor 300 may train the first AI model 311 and/or the second AI model 313.
An electronic device 10a may include a sensor 100a, a memory 200a, and a processor 300a. The sensor 100a may include a first image sensor 110a configured to output a first color image C_IMG1 captured in a first direction and a depth sensor 120a configured to output a depth image D_IMG2 corresponding to the first color image C_IMG1.
The sensor 100a may further include a second image sensor 130a corresponding to the third image sensor 130 described above. The second image sensor 130a may output a second color image C_IMG3 captured in a second direction.
The processor 300a may transform the second color image C_IMG3 based on the first color image C_IMG1 and the depth image D_IMG2 by using an image transformation module 310a and may perform the object detection on the second color image C_IMG3 that is transformed by using an object detection module 320a.
The image transformation module 310a according to an embodiment may extract a target area from the first color image C_IMG1 based on the depth image D_IMG2, instead of the disparity information described above. For example, the image transformation module 310a may extract an area of the first color image C_IMG1, which corresponds to the proximity object close to the electronic device 10a, as the target area based on depth values of the depth image D_IMG2.
The image transformation module 310a may transform the second color image C_IMG3 by merging the extracted target area with the second color image C_IMG3.
In an embodiment, the electronic device 10a may include mapping information including a coordinate value of the second color image C_IMG3 that corresponds to each coordinate value of the first color image C_IMG1. The image transformation module 310a may detect the corresponding coordinate value of the second color image C_IMG3 with regard to each pixel forming the target area, based on the mapping information. The image transformation module 310a may merge a pixel value of each pixel forming the target area with the detected coordinate value of the second color image C_IMG3.
In another embodiment, for quicker calculation, the image transformation module 310a may only detect a coordinate value of the second color image C_IMG3 that corresponds to each feature point included in the target area, instead of each pixel forming the target area. The image transformation module 310a may merge an image having a preset size and including each feature point (e.g., a portion of the target area) with the detected coordinate value of the second color image C_IMG3. The object detection module 320a may perform object detection based on pixel values of the second color image C_IMG3 that is transformed.
When merging the target area with the second color image C_IMG3, the image transformation module 310a may merge, with the second color image C_IMG3, the pixel values of the target area and depth values of the depth image D_IMG2 that correspond to the target area. For example, the image transformation module 310a may merge the depth values corresponding to the target area with the second color image C_IMG3. As another example, the image transformation module 310a may generate distance information indicating a distance to the host vehicle based on the depth values and may merge the generated distance information with the second color image C_IMG3. A method whereby the image transformation module 310a merges the depth values with the second color image C_IMG3 is not limited thereto. The object detection module 320a may perform the object detection based on the pixel values and the depth values (or the distance information) of the second color image C_IMG3 that is transformed.
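A minimal sketch of the depth-sensor variant, assuming that pixels whose depth falls below a hypothetical threshold are treated as belonging to the proximity object; the extracted target area can then be merged as in the disparity-based sketches above.

```python
import numpy as np

def extract_target_by_depth(first_color, depth_img, depth_threshold_m=5.0):
    """Detect the proximity-object area of the first color image from the depth image.

    first_color      : H x W x 3 color image captured in the first direction
    depth_img        : H x W per-pixel depth in meters, aligned with first_color
    depth_threshold_m: hypothetical cutoff below which pixels are treated as 'proximity'
    """
    target_mask = (depth_img > 0) & (depth_img < depth_threshold_m)
    target_pixels = np.where(target_mask[..., None], first_color, 0)
    return target_mask, target_pixels
```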
The electronic device 10a according to an embodiment may detect an area indicating a proximity object by using a color image captured in one direction and a depth image corresponding to the color image, merge an image, which is captured in another direction and includes other portions of the proximity object, with the detected area, and perform object detection on the merged image, thereby accurately detecting the proximity object.
In operation S210, the electronic device 10a may obtain the first color image C_IMG1 captured in a first direction. In detail, the electronic device 10a may obtain the first color image C_IMG1 captured in the first direction by using the first image sensor 110a. However, one or more embodiments are not limited thereto, and the electronic device 10a may obtain the first color image C_IMG1 from an external device.
In operation S220, the electronic device 10a may obtain the depth image D_IMG2 captured in the first direction. In detail, the electronic device 10a may obtain the depth image D_IMG2 captured in the first direction by using the depth sensor 120a. However, one or more embodiments are not limited thereto, and the electronic device 10a may obtain the depth image D_IMG2 from the external device.
In operation S230, the electronic device 10a may obtain the second color image C_IMG3 captured in a second direction. In detail, the electronic device 10a may obtain the second color image C_IMG3 captured in the second direction by using the second image sensor 130a. However, one or more embodiments are not limited thereto, and the electronic device 10a may obtain the second color image C_IMG3 from the external device.
In operation S240, the electronic device 10a may transform the second color image C_IMG3 based on the first color image C_IMG1 and the depth image D_IMG2. In detail, the electronic device 10a may detect an area of the first color image C_IMG1, which indicates the proximity object close to the electronic device 10a, based on the depth image D_IMG2 and may transform the second color image C_IMG3 by merging the detected area with the second color image C_IMG3. In operation S250, the electronic device 10a may perform the object detection on the transformed second color image C_IMG3.
A host vehicle 400 may include the electronic device 10 and a vehicle controller 410.
The vehicle controller 410 may control overall driving of the host vehicle 400. The vehicle controller 410 may determine situations around the host vehicle 400 and control a navigation direction, speed, or the like of the host vehicle 400 according to a determination result. In an embodiment, the vehicle controller 410 may receive an object detection result from the electronic device 10, determine the situations around the host vehicle 400 according to the received object detection result, and transmit a control signal to a driver (not shown) of the host vehicle 400 according to a determination result, thereby controlling the navigation direction, speed, or the like of the host vehicle 400.
A self-driving device 500 may include a sensor 510, a memory 520, a processor 530, a RAM 540, a main processor 550, a driver 560, and a communication interface 570.
The sensor 510 may include multiple sensors for generating information regarding a surrounding environment of the self-driving device 500. For example, the sensor 510 may include sensors that receive image signals regarding the surrounding environment of the self-driving device 500 and convert the received image signals into images. The sensor 510 may include an image sensor 511 such as a Charge Coupled Device (CCD) or a Complementary Metal Oxide Semiconductor (CMOS) sensor, a depth camera 513, a LiDAR sensor 515, a Radar sensor 517, and the like.
In this case, the sensor 510 may include multiple image sensors 511. The image sensors 511 may correspond to the first image sensor 110, the second image sensor 120, and the third image sensor 130 described above.
The memory 520 may correspond to the memories 200 and 200a according to the one or more embodiments, and the processor 530 may correspond to the processors 300 and 300a according to the one or more embodiments. Also, the main processor 550 may correspond to the vehicle controller 410 of the host vehicle 400 described above.
The main processor 550 may control the operation of the self-driving device 500 overall. For example, the main processor 550 may control a function of the processor 530 by executing programs stored in the RAM 540. The RAM 540 may temporarily store programs, data, applications, or instructions.
The main processor 550 may control the operation of the self-driving device 500 according to an operation result of the processor 530. In an embodiment, the main processor 550 may receive an object detection result from the processor 530 and control operation of the driver 560 based on the received object detection result.
As components for driving the self-driving device 500, the driver 560 may include an engine/motor 561, a steering unit 563, and a brake unit 565. In an embodiment, the driver 560 may adjust acceleration, braking, speed, direction, and the like of the self-driving device 500 by using the engine/motor 561, the steering unit 563, and the brake unit 565 according to the control of the main processor 550.
The communication interface 570 may communicate with an external device in a wired or wireless communication manner. For example, the communication interface 570 may perform communication in a wired communication manner such as Ethernet or in a wireless manner such as Wi-Fi or Bluetooth.
As is traditional in the field, embodiments may be described and illustrated in terms of blocks which carry out a described function or functions. These blocks, which may be referred to herein as units or modules or the like, are physically implemented by analog and/or digital circuits such as logic gates, integrated circuits, microprocessors, microcontrollers, memory circuits, passive electronic components, active electronic components, optical components, hardwired circuits and the like, and may optionally be driven by firmware and/or software. The circuits may, for example, be embodied in one or more semiconductor chips, or on substrate supports such as printed circuit boards and the like. The circuits constituting a block may be implemented by dedicated hardware, or by a processor (e.g., one or more programmed microprocessors and associated circuitry), or by a combination of dedicated hardware to perform some functions of the block and a processor to perform other functions of the block. Each block of the embodiments may be physically separated into two or more interacting and discrete blocks without departing from the scope of the disclosure. Likewise, the blocks of the embodiments may be physically combined into more complex blocks without departing from the scope of the disclosure. An aspect of an embodiment may be achieved through instructions stored within a non-transitory storage medium and executed by a processor.
While the disclosure has been particularly shown and described with reference to embodiments thereof, it will be understood that various changes in form and details may be made therein without departing from the spirit and scope of the following claims.