The present application claims priority to Korean Patent Application No. 10-2022-0061419, filed May 19, 2022, the entire contents of which are incorporated herein for all purposes by this reference.
The present disclosure relates to a device that activates a safety device for protecting occupants in a vehicle and an operating method thereof.
Recently, advanced driver assistance systems (ADAS) have been developed to assist a driver with driving. ADAS encompasses multiple sub-classifications of technologies and provides convenience to the driver. Such a system is also referred to as an autonomous driving or automated driving system (ADS).
While the vehicle is autonomously driven through the ADS, occupants may engage in activities other than driving. Accordingly, a seat in a vehicle supporting autonomous driving may be rotatably provided so that an occupant can easily engage in such activities. For example, a driver's seat of a vehicle supporting autonomous driving may be rotated toward the rear or the side of the vehicle rather than the front.
Meanwhile, the vehicle may be provided with a safety device such as an airbag and/or a pre-safe seat belt (PSB) to protect occupants and may operate the safety device when a collision occurs.
The information included in this Background of the present disclosure is only for enhancement of understanding of the general background of the present disclosure and may not be taken as an acknowledgement or any form of suggestion that this information forms the prior art already known to a person skilled in the art.
As described above, because the driver's seat in the vehicle is freely rotatable, a situation may occur in which the occupant cannot be protected even if an airbag is operated in a collision. For example, even if the vehicle detects the occurrence of a collision and deploys an airbag provided on the driver's seat side, the occupant seated in the driver's seat may not be protected when the driver's seat is rotated toward the rear of the vehicle.
Accordingly, various aspects of the present disclosure are directed to providing a method and device configured for operating a safety device for occupant protection based on state information of the seat and/or an occupant in the vehicle.
Various embodiments of the present disclosure include a method and device configured for determining at least one of an operating method and an operating time point of the safety device by use of at least one sensor in the vehicle based on rotation state information of the seat and/or an occupant.
The technical problem to be overcome in the present specification is not limited to the above-mentioned technical problems. Other technical problems not mentioned may be clearly understood from those described below by a person having ordinary skill in the art.
An exemplary embodiment of the present disclosure is a vehicle for protecting an occupant. The vehicle includes: a plurality of safety devices provided in the vehicle for protecting the occupant; first sensors configured to obtain information on a seat or the occupant within the vehicle; second sensors configured to detect a collision with other objects; and a processor which is operatively connected to the safety devices, the first sensors, and the second sensors. The processor is configured to obtain state information on at least one of the seat or the occupant based on the information obtained from the first sensors, determine at least one safety device to be operated among the plurality of safety devices based on the state information, and operate the determined at least one safety device when at least one of the second sensors detects a collision satisfying a predetermined condition. The state information on the at least one of the seat or the occupant includes at least one of a rotation angle of the seat, a position of the seat, a tilt of the seat, a rotation angle of the occupant, a position of the occupant, and a tilt of the occupant.
The plurality of safety devices includes a plurality of airbags provided at different positions within the vehicle, and a plurality of pre-safe seat belts (PSBs) provided in different seats in the vehicle.
The processor is configured to determine an operation threshold of the at least one safety device to be operated based on the state information on the at least one of the seat or the occupant, compare an impact strength detected by at least one of the second sensors with the operation threshold, and operate the determined at least one safety device when the detected impact strength is greater than the operation threshold.
The first sensors include at least one of a sensor configured to detect the rotation angle of the seat, a sensor configured to detect the position of the seat, or a sensor configured to detect the tilt of the seat.
The first sensors include a camera configured to capture the occupant. The processor extracts three-dimensional (3D) human body keypoints from an image captured by the camera by use of an artificial neural network-based deep learning model, and obtains the state information on the occupant based on the extracted 3D human body keypoints.
The deep learning model is trained based on a new 3D body joint coordinate truth value which is generated by transforming a 3D body joint coordinate truth value.
The processor is configured to estimate a first rotation angle between a predetermined first reference line and a shoulder line in an x-y plane based on the 3D human body keypoints, and to determine the first rotation angle as the rotation angle of the occupant. The predetermined first reference line is set parallel to the shoulder line when a body of the occupant faces a front of the vehicle.
The processor is configured to estimate a second rotation angle from a width and a height of a body in a y-z plane based on the 3D human body keypoints, and to determine the second rotation angle as the rotation angle of the occupant.
The processor is configured to estimate a first rotation angle between a predetermined first reference line and a shoulder line in an x-y plane based on the 3D human body keypoints, to estimate a second rotation angle from a width and a height of a body in a y-z plane based on the 3D human body keypoints, and to determine the rotation angle of the occupant based on the first rotation angle and the second rotation angle.
The processor is configured to measure a distance to a keypoint corresponding to a predetermined body portion among the 3D human body keypoints, and to determine the position of the occupant based on the measured distance.
The processor is configured to estimate an angle between a predetermined second reference line and a line connecting keypoints corresponding to a predetermined body portion among the 3D human body keypoints, and to determine the estimated angle as the tilt of the occupant. The predetermined second reference line is perpendicular to the ground.
Another exemplary embodiment of the present disclosure is an operating method of a vehicle for protecting an occupant. The operating method includes: obtaining state information on at least one of a seat or the occupant within the vehicle based on information obtained from first sensors; determining at least one safety device to be operated among a plurality of safety devices provided in the vehicle based on the state information on the at least one of the seat or the occupant; and operating the determined at least one safety device when at least one of second sensors detects a collision satisfying a predetermined condition. The state information on the at least one of the seat or the occupant includes at least one of a rotation angle of the seat, a position of the seat, a tilt of the seat, a rotation angle of the occupant, a position of the occupant, and a tilt of the occupant.
The plurality of safety devices includes a plurality of airbags provided at different positions within the vehicle, and a plurality of pre-safe seat belts (PSBs) provided in different seats in the vehicle.
The operating the determined at least one safety device includes: comparing an impact strength detected by at least one of the second sensors with an operation threshold of the at least one safety device; and operating the determined at least one safety device when the detected impact strength is greater than the operation threshold. The operation threshold of the at least one safety device is determined based on the state information on the at least one of the seat or the occupant.
The first sensors include at least one of a sensor configured to detect the rotation angle of the seat, a sensor configured to detect the position of the seat, or a sensor configured to detect the tilt of the seat.
The first sensors include a camera configured to capture the occupant. The obtaining the state information on the at least one of the seat or the occupant includes: extracting three-dimensional (3D) human body keypoints from an image captured by the camera by use of an artificial neural network-based deep learning model; and obtaining the state information on the occupant based on the extracted 3D human body keypoints.
The deep learning model is trained based on a new 3D body joint coordinate truth value which is generated by transforming a 3D body joint coordinate truth value.
The obtaining the state information on the occupant based on the extracted 3D human body keypoints includes: estimating a first rotation angle between a predetermined first reference line and a shoulder line in an x-y plane based on the 3D human body keypoints; and determining the first rotation angle as the rotation angle of the occupant. The predetermined first reference line is set parallel to the shoulder line when a body of the occupant faces a front of the vehicle.
The obtaining the state information on the occupant based on the extracted 3D human body keypoints includes: estimating a second rotation angle from a width and a height of a body in a y-z plane based on the 3D human body keypoints; and determining the second rotation angle as the rotation angle of the occupant.
The obtaining the state information on the occupant based on the extracted 3D human body keypoints includes: estimating a first rotation angle between a predetermined first reference line and a shoulder line in an x-y plane based on the 3D human body keypoints; estimating a second rotation angle from a width and a height of a body in a y-z plane based on the 3D human body keypoints; and determining the rotation angle of the occupant based on the first rotation angle and the second rotation angle.
The obtaining the state information on the occupant based on the extracted 3D human body keypoints includes: measuring a distance to a keypoint corresponding to a predetermined body portion among the 3D human body keypoints; and determining the position of the occupant based on the measured distance.
The obtaining the state information on the occupant based on the extracted 3D human body keypoints includes: estimating an angle between a predetermined second reference line and a line connecting keypoints corresponding to a predetermined body portion among the 3D human body keypoints; and determining the estimated angle as the tilt of the occupant. The predetermined second reference line is perpendicular to the ground.
According to various embodiments of the present disclosure, the vehicle operates the safety device for occupant protection based on state information of the seat and/or the occupant, thereby safely protecting the occupant regardless of the state of the seat.
The methods and apparatuses of the present disclosure have other features and advantages which will be apparent from or are set forth in more detail in the accompanying drawings, which are incorporated herein, and the following Detailed Description, which together serve to explain certain principles of the present disclosure.
It may be understood that the appended drawings are not necessarily to scale, presenting a somewhat simplified representation of various features illustrative of the basic principles of the present disclosure. The predetermined design features of the present disclosure as included herein, including, for example, specific dimensions, orientations, locations, and shapes will be determined in part by the particularly intended application and use environment.
In the figures, reference numbers refer to the same or equivalent portions of the present disclosure throughout the several figures of the drawing.
Reference will now be made in detail to various embodiments of the present disclosure(s), examples of which are illustrated in the accompanying drawings and described below. While the present disclosure(s) will be described in conjunction with exemplary embodiments of the present disclosure, it will be understood that the present description is not intended to limit the present disclosure(s) to those exemplary embodiments of the present disclosure. Rather, the present disclosure(s) is/are intended to cover not only the exemplary embodiments of the present disclosure, but also various alternatives, modifications, equivalents and other embodiments, which may be included within the spirit and scope of the present disclosure as defined by the appended claims.
Hereinafter, embodiments included in the present specification will be described in detail with reference to the accompanying drawings. The same or similar elements will be denoted by the same reference numerals irrespective of drawing numbers, and repetitive descriptions thereof will be omitted.
A suffix “module” or “part” for a component, as used in the following description, is given or used interchangeably only for ease of writing the specification, and does not itself have any distinguishing meaning or function. The “module” or “part” may mean software components or hardware components such as a field programmable gate array (FPGA) or an application specific integrated circuit (ASIC), and performs certain functions. However, the “part” or “module” is not limited to software or hardware. The “part” or “module” may be configured to reside in an addressable storage medium or to run on one or more processors. Thus, as one example, the “part” or “module” may include components such as software components, object-oriented software components, class components, and task components, as well as processes, functions, attributes, procedures, subroutines, segments of program code, drivers, firmware, microcode, circuits, data, databases, data structures, tables, arrays, and variables. Components and functions provided in “parts” or “modules” may be combined into a smaller number of components and “parts” or “modules”, or may be further divided into additional components and “parts” or “modules”.
Methods or algorithm steps described in relation to various exemplary embodiments of the present disclosure may be implemented directly in hardware, in a software module executed by a processor, or in a combination of the two. The software module may reside in RAM, flash memory, ROM, EPROM, EEPROM, a register, a hard disk, a removable disk, a CD-ROM, or any other type of recording medium known to those skilled in the art. An exemplary recording medium is coupled to a processor such that the processor can read information from, and write information to, the recording medium. Alternatively, the recording medium may be integral to the processor. The processor and the recording medium may reside within an application specific integrated circuit (ASIC). The ASIC may reside within a user's terminal.
Also, in the following description of the exemplary embodiments included in the present specification, detailed descriptions of known technologies incorporated herein are omitted to avoid obscuring the subject matter of the exemplary embodiments. The accompanying drawings are provided only to facilitate the description of the exemplary embodiments included in the present specification; the technical spirit included in the present specification is not limited by the accompanying drawings and should be understood to cover all modifications, equivalents, and substitutes included in the spirit and scope of the present disclosure.
While terms including ordinal numbers, such as “first” and “second”, may be used to describe various components, the components are not limited by those terms. Such terms are used only to distinguish one component from another.
In the case where a component is referred to as being “connected” or “accessed” to another component, it should be understood that the component may be directly connected or accessed to the other component, but that another component may also exist between them. Meanwhile, in the case where a component is referred to as being “directly connected” or “directly accessed” to another component, it should be understood that no other component exists therebetween.
Hereinafter, in an exemplary embodiment of the present disclosure, a vehicle is provided with an automated driving system (ADS) and thus may be autonomously driven. For example, by use of the ADS, the vehicle may perform at least one of steering, acceleration, deceleration, lane change, and stopping without a driver's manipulation. The ADS may include, for example, at least one of a pedestrian detection and collision mitigation system (PDCMS), a lane change decision aid system (LCAS), a lane departure warning system (LDWS), adaptive cruise control (ACC), a lane keeping assistance system (LKAS), a road boundary departure prevention system (RBDPS), a curve speed warning system (CSWS), a forward vehicle collision warning system (FVCWS), and low speed following (LSF).
A vehicle shown in
Referring to
According to various exemplary embodiments of the present disclosure, the sensor unit 110 may detect the internal and/or external environment of the vehicle 100 by use of a plurality of sensors, and may be configured to generate data related to the internal and/or external environment of the vehicle based on the detection result.
According to the exemplary embodiment of the present disclosure, the sensor unit 110 may include a collision detection sensor 112, a seat state detection sensor 114, and a camera 116.
The collision detection sensor 112 may detect a collision between the vehicle and an object (e.g., another vehicle, a pedestrian, an obstacle, etc.) and may be configured to generate a collision detection signal. For example, the collision detection sensor 112, as shown in
The seat state detection sensor 114 may measure state information on at least one seat in the vehicle and may be configured to generate the state information on the at least one seat. For example, the seat state detection sensor 114 may include, as shown in
The camera 116 may include at least one camera that obtains a vehicle interior image. The vehicle interior image may be an image obtained by capturing an occupant within the vehicle. To capture the occupant, the camera 116 may be, as shown in
The sensor unit 110 may further include at least one other sensor in addition to the above-described sensors. For example, the sensor unit 110 may further include at least one of a camera that captures the environment outside the vehicle, a radio detection and ranging (RADAR) sensor and a light detection and ranging (LIDAR) sensor that detect objects around the vehicle, or a position measuring sensor configured to measure the position of the vehicle. The listed sensors are only examples for understanding, and the sensors of the present disclosure are not limited thereto.
The processor 120 may control the overall operation of the vehicle 100. According to the exemplary embodiment of the present disclosure, the processor 120 may include an electronic control unit (ECU) configured for integrally controlling the components within the vehicle 100. For example, the processor 120 may include a central processing unit (CPU) or a micro controller unit (MCU) configured for performing arithmetic processing. According to the exemplary embodiment of the present disclosure, the processor 120 may include an airbag control unit (ACU) 231 that is configured to control the airbag, which is a safety device.
According to various exemplary embodiments of the present disclosure, based on the state information of the seat and/or the occupant within the vehicle, the processor 120 may be configured to determine at least one safety device 130 to be operated and may control the operation of the determined safety device 130. The state information of the seat and/or the occupant may include at least one of a rotation angle θt, a position dt, and a tilt φt of the seat and/or the occupant. The safety device 130 may include at least one of an airbag and a pre-safe seat belt (PSB). For example, when the state information of the seat and/or the occupant indicates that the occupant is facing the front of the vehicle, the processor 120 may, as shown in
According to the exemplary embodiment of the present disclosure, when a specified event is detected, the processor 120 may be configured to determine a safety device operation threshold. The specified event may include at least one of an event in which the collision detection signal is input from the sensor unit 110, an event in which a collision with a nearby object is predicted based on detection data of the sensor unit 110, and an event in which a predicted collision time point arrives. The listed events are merely examples for understanding, and various embodiments of the present disclosure are not limited thereto. When the specified event is detected, the processor 120 may obtain vehicle state information and may be configured to determine the safety device operation threshold, which controls an operating time point of the safety device 130, based on the obtained vehicle state information. The vehicle state information may include, for example, at least one of a vehicle speed, steering, or a yaw rate. For example, the processor 120 may select a safety device operation threshold corresponding to the current speed, steering, and/or yaw rate of the vehicle from among safety device operation thresholds preset for airbag deployment.
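By way of illustration only, such a threshold selection could be sketched as a simple lookup keyed on coarse vehicle-state bins. This is a minimal sketch; the bin boundaries, threshold values, units, and function name below are hypothetical assumptions, not values from the present disclosure:

```python
# Minimal illustrative sketch: select a base safety device operation threshold
# from vehicle state information. All bin boundaries and threshold values are
# hypothetical calibration placeholders, not values from the disclosure.
def base_operation_threshold(speed_kph: float, yaw_rate_dps: float) -> float:
    """Return a base operation threshold (arbitrary impact-strength units)."""
    if speed_kph < 30:        # low-speed bin
        threshold = 10.0
    elif speed_kph < 80:      # mid-speed bin
        threshold = 8.0
    else:                     # high-speed bin
        threshold = 6.0
    if abs(yaw_rate_dps) > 20:  # e.g., skidding: deploy slightly earlier
        threshold -= 0.5
    return threshold
```

In practice such a function would stand in for the preset per-state threshold table mentioned above; a calibrated table lookup and a computed function are interchangeable at this level of description.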
According to the exemplary embodiment of the present disclosure, the processor 120 may obtain the state information of the seat and/or the occupant from the detection data obtained from the seat state detection sensor 114 or the image data obtained from the camera 116.
According to the exemplary embodiment of the present disclosure, when an image recognition function for the image obtained from the camera 116 does not operate normally, the processor 120 may obtain the state information of the seat from the seat state detection sensor 114. For example, when the camera 116 operates abnormally or a recognition error occurs on the image obtained from the camera 116, the processor 120 may obtain at least one of the rotation angle θt of the seat, the position dt of the seat, and the tilt φt of the seat from the seat state detection sensor 114.
According to the exemplary embodiment of the present disclosure, when the image recognition function for the image obtained from the camera 116 operates normally, the processor 120 may obtain the state information of the occupant by inputting the image data obtained from the camera 116 to a pre-trained deep learning model. The state information of the occupant may include at least one of the rotation angle θt of the occupant, the position dt of the occupant, or the tilt φt of the occupant. The rotation angle of the occupant may indicate, for example, how far the occupant is rotated to the right relative to facing the front of the vehicle. The rotation angle of the occupant may be represented by any one of a plurality of predefined stages. For example, the rotation angle of the occupant may be represented by any one of a first stage (rotation between about −30 degrees and +30 degrees), a second stage (rotation between about +30 degrees and +90 degrees), a third stage (rotation between about +90 degrees and +150 degrees), a fourth stage (rotation between about +150 degrees and +180 degrees or between about −150 degrees and −180 degrees), a fifth stage (rotation between about −90 degrees and −150 degrees), and a sixth stage (rotation between about −30 degrees and −90 degrees). The position of the occupant may indicate, for example, how far the occupant has moved forward or backward from a specified reference position and/or the direction of movement (e.g., front, normal, or backward). For example, the position of the occupant may be represented by distinguishing whether the occupant is in front of the reference position, at the reference position, or behind the reference position. The reference position may be set and/or changed by a designer. The tilt of the occupant may indicate the tilt of the upper body of the occupant, that is, an angle formed by the upper body with respect to the ground, a plane parallel to the ground, or the floor surface of the vehicle. The tilt of the upper body of the occupant may be represented by any one of a plurality of predefined stages. For example, the tilt of the upper body of the occupant may be represented by any one of a first stage (tilt between about 60 degrees and 69 degrees), a second stage (tilt between about 70 degrees and 79 degrees), a third stage (tilt between about 80 degrees and 89 degrees), and a fourth stage (tilt between about 90 degrees and 99 degrees).
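The staged representation above can be illustrated with a short sketch. The stage boundaries follow the ranges listed above (which the disclosure gives only as "about" values); the function names and the handling of exact boundary values are assumptions of this sketch:

```python
def rotation_stage(angle_deg: float) -> int:
    """Map a rotation angle to the six stages listed above. Boundary handling
    at the 'about' stage edges is an assumption of this sketch."""
    a = ((angle_deg + 180.0) % 360.0) - 180.0  # normalize to [-180, 180)
    if -30 <= a <= 30:
        return 1
    if 30 < a <= 90:
        return 2
    if 90 < a <= 150:
        return 3
    if a > 150 or a < -150:
        return 4
    if -150 <= a <= -90:
        return 5
    return 6  # -90 < a < -30

def tilt_stage(tilt_deg: float) -> int:
    """Map an upper-body tilt of about 60-99 degrees to the four stages above."""
    return min(max(int((tilt_deg - 60.0) // 10) + 1, 1), 4)
```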
The processor 120 may obtain, as shown in
According to the exemplary embodiment of the present disclosure, the feature extraction deep learning network model 410 may, as shown in
According to the exemplary embodiment of the present disclosure, the feature extraction deep learning network model 410 may use a secondary trained model 531. The secondary trained model may be, as shown in
A projection unit (Prediction) 535 may obtain 2D body joint coordinates 537 corresponding to a 2D pose by projecting the 3D body joint coordinates 533 predicted by the model 531, and may provide the 2D body joint coordinates 537 to the discriminator 539. The discriminator 539 may compare whether the 2D body joint coordinates 537 predicted by the model 531 are the same as the transformed 2D body joint coordinates 529, may compare the transformed 3D body joint coordinate truth value 525 with the 3D body joint coordinates 533 predicted by the model 531, and may then provide the results to the generator 523.
The generator 523 may be configured to generate a new transformed 3D body joint coordinate truth value based on the results provided from the discriminator 539. According to the exemplary embodiment of the present disclosure, when at least one of the two comparison results received from the discriminator 539 indicates that the compared values are not the same, the generator 523 determines that the training has not been completed, and thus may generate a new transformed 3D body joint coordinate truth value. Conversely, when both comparison results received from the discriminator 539 indicate that the compared values are the same, the generator 523 determines that the training of the model 531 has been completed, and thus may terminate the training without generating an additional transformed 3D body joint coordinate truth value. Here, “the same” means that a difference between the 3D body joint coordinates 533 predicted by the model 531 and the transformed 3D body joint coordinate truth value 525 is within a predetermined value, or that a difference between the 2D body joint coordinates 537 predicted by the model 531 and the transformed 2D body joint coordinates 529 is within a predetermined value.
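For illustration, the “sameness” test described above may be sketched as a simple tolerance check. The tolerance values, array shapes, and function name below are assumptions of this sketch, not part of the disclosure:

```python
import numpy as np

# Illustrative sketch of the discriminator's "sameness" test: predicted and
# truth coordinates count as the same when their difference is within a
# predetermined value. Both tolerances below are hypothetical placeholders.
EPS_3D = 0.02  # assumed tolerance for 3D joint coordinates (e.g., meters)
EPS_2D = 2.0   # assumed tolerance for projected 2D coordinates (e.g., pixels)

def training_complete(pred_3d, truth_3d, pred_2d, truth_2d) -> bool:
    """Training terminates only when both comparisons are within tolerance."""
    same_3d = np.max(np.abs(np.asarray(pred_3d) - np.asarray(truth_3d))) <= EPS_3D
    same_2d = np.max(np.abs(np.asarray(pred_2d) - np.asarray(truth_2d))) <= EPS_2D
    return same_3d and same_2d
```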
Additionally, the generator 523 may limit the number of transformed 3D body joint coordinate truth values 525 which may be generated from one 3D body joint coordinate truth value 521 to a predetermined number. According to the exemplary embodiment of the present disclosure, when the generator 523 intends to generate an additional transformed 3D body joint coordinate truth value 525 based on the comparison results of the discriminator 539, if the generator 523 has already generated the predetermined number of transformed 3D body joint coordinate truth values 525, the generator 523 may transform the original 3D body joint coordinate truth value 521 and then generate an additional transformed 3D body joint coordinate truth value 525 therefrom.
According to various embodiments of the present disclosure, the performance of the feature extraction deep learning network model may be improved by generating a large amount of training data from only limited image data through the generative adversarial augmentation training described above.
According to the exemplary embodiment of the present disclosure, the feature extraction deep learning network model 410 may be trained by use of at least one of the learning methods shown in
According to the exemplary embodiment of the present disclosure, the feature extraction deep learning network model 410 may be pre-trained in an external server or the like before being mounted in the vehicle, by use of at least one of the learning methods shown in
According to the exemplary embodiment of the present disclosure, the processor 120 may update the safety device operation threshold in accordance with the state information of the seat and/or the occupant. The processor 120 may update the safety device operation threshold so that a preset safety device combination corresponding to the combination of the rotation angle, the position, and the tilt of the seat and/or the occupant is operated at a preset operating time. The operating time and the safety device combination corresponding to each combination of the rotation angle, the position, and the tilt of the seat and/or the occupant may be set in advance through a collision analysis or a sled test for each combination. For example, if the rotation angle, the position, and the tilt are in the “first stage (rotation between about −30 degrees and +30 degrees), normal, fourth stage (tilt between about 90 degrees and 99 degrees)”, the safety device combination may be determined as “driver's seat airbag” and the operating time may be determined, through the collision analysis or sled test, as the case where “the impact strength is greater than or equal to a predetermined threshold value + α”. The processor 120 may then update the safety device operation threshold to a value which is greater than the predetermined safety device operation threshold by α. For another example, when the rotation angle, the position, and the tilt are in the “third stage (rotation between about +90 degrees and +150 degrees), front, fourth stage (tilt between about 90 degrees and 99 degrees)”, the safety device combination may be determined as “driver's seat airbag and center airbag” and the operating time may be determined as the case where “the impact strength is greater than or equal to a predetermined threshold value + β”. The processor 120 may then update the safety device operation threshold to a value which is greater than the predetermined safety device operation threshold by β. According to the exemplary embodiment of the present disclosure, when the safety device combination is determined, a position where the collision is detected and whether an occupant is accommodated in each seat may additionally be taken into consideration.
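As a minimal sketch of the table-based update described above (the keys, offset values, and function name are hypothetical placeholders for the calibrated values):

```python
# Illustrative per-state threshold offsets (the "alpha"/"beta" values above),
# keyed on (rotation stage, position, tilt stage). Values are placeholders
# that would in practice come from collision analysis or sled tests.
THRESHOLD_OFFSET = {
    (1, "normal", 4): 0.5,  # e.g., alpha for "driver's seat airbag"
    (3, "front", 4): 1.0,   # e.g., beta for "driver's seat airbag and center airbag"
}

def updated_threshold(base: float, rot_stage: int, position: str, tilt_stage: int) -> float:
    """Add the calibrated offset for the current seat/occupant state, if any."""
    return base + THRESHOLD_OFFSET.get((rot_stage, position, tilt_stage), 0.0)
```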
According to the exemplary embodiment of the present disclosure, when an impact strength greater than the updated safety device operation threshold is detected, the processor 120 may operate at least one safety device in accordance with the safety device combination corresponding to the state information of the seat and/or the occupant. The impact strength may be obtained from the collision detection signal of the collision detection sensor 112.
According to the exemplary embodiment of the present disclosure, the processor 120 may include a controller 122 which is configured to control the operation of at least one component included in the vehicle and/or at least one function of the vehicle. The controller 122 may operate at least one safety device corresponding to the safety device combination determined based on the state information of the seat and/or the occupant among various safety devices included in the vehicle. For example, when the safety device combination determined according to the state information of the seat and/or the occupant is “driver's seat airbag”, the controller 122 may control the driver's seat airbag to be deployed. For another example, when the safety device combination determined according to the state information of the seat and/or the occupant is “driver's seat airbag, center airbag and PSB”, the controller 122 may control the PSB to operate while controlling the driver's seat airbag and center airbag to be deployed.
The safety device 130 may include safety devices for protecting occupants. For example, the safety device 130 may include a plurality of airbags and/or a plurality of PSBs. The plurality of airbags may be provided at different positions within the vehicle respectively. The plurality of PSBs may be provided in different seats in the vehicle respectively.
The storage unit 140 may store various programs and data for the operation of the vehicle and/or the processor 120. According to the exemplary embodiment of the present disclosure, the storage unit 140 may store various programs and data required to operate the safety device according to the state information of the seat and/or the occupant. For example, the storage unit 140 may store information on the safety device combination corresponding to each combination of the rotation angle, the position, and the tilt of the seat and/or the occupant. The storage unit 140 may store information on the safety device operation threshold corresponding to each combination of the rotation angle, the position, and the tilt of the seat and/or the occupant, and/or information on the amount of the update of the safety device operation threshold.
The communication device 150 may communicate with an external device of the vehicle 100. According to various exemplary embodiments of the present disclosure, the communication device 150 may receive data from the outside of the vehicle 100 or transmit data to the outside of the vehicle 100 under the control of the processor 120. For example, the communication device 150 may perform a communication by use of a wireless communication protocol or a wired communication protocol.
The foregoing has described a method of controlling the operation of the safety device 130 by use of the seat state information obtained through the seat state detection sensor 114 or the occupant state information obtained through the camera 116. However, according to various exemplary embodiments of the present disclosure, the operation of the safety device 130 may also be controlled by use of both the seat state information and the occupant state information.
Referring to
When the specified event is detected, the vehicle 100 may be configured to determine a threshold value for operating the safety device in step 603. According to the exemplary embodiment of the present disclosure, when the specified event is detected, the vehicle 100 may be configured to determine the safety device operation threshold for controlling the operation timing of the safety device 130 based on the vehicle state information (e.g., vehicle speed, steering, and/or yaw rate). For example, the vehicle 100 may select and determine the safety device operation threshold corresponding to information on the current vehicle state from among pre-stored safety device operation thresholds for each vehicle state. For another example, the vehicle 100 may be configured to determine the threshold value for operating the safety device based on a function which has at least one of the vehicle speed, steering, and yaw rate as an input variable and outputs the safety device operation threshold through a specified operation.
In step 605, the vehicle 100 may be configured to determine whether the image recognition function normally operates. For example, the vehicle 100 may be configured to determine whether the image recognition function for an indoor captured image obtained through the in-vehicle camera 116 normally operates. When the camera 116 for obtaining the indoor captured image does not operate normally or an image recognition error for the indoor captured image is detected, the vehicle 100 may be configured to determine the image recognition function as not operating normally.
When the image recognition function does not operate normally, the vehicle 100 may obtain the state information of the seat from the sensors provided in the vehicle in step 615. For example, the vehicle 100 may obtain at least one of the rotation angle θt of the seat, the position dt of the seat, and the tilt φt of the seat from the seat state detection sensor 114 provided in the vehicle.
When the image recognition function operates normally, the vehicle 100 may obtain the state information of the occupant in step 607 by use of an image recognition-based deep learning model. For example, the vehicle 100 may obtain the state information of the occupant by inputting the indoor captured image obtained from the camera 116 to a pre-trained deep learning model. The state information of the occupant may include at least one of the rotation angle θt of the occupant, the position dt of the occupant, and the tilt φt of the occupant. The image recognition-based deep learning model may include the feature extraction deep learning network model 410 that extracts keypoints related to the body of the occupant from the image. The feature extraction deep learning network model 410 may be pre-trained as shown in
In step 609, the vehicle 100 may update the safety device operation threshold according to the state information of the seat or the occupant. According to the exemplary embodiment of the present disclosure, the vehicle 100 may update the safety device operation threshold so that at least one safety device is operated at a preset operating time in accordance with the combination of the rotation angle, the position, and the tilt of the seat and/or the occupant. For example, the vehicle 100 may update the safety device operation threshold by adding the amount of the update of the threshold according to the combination of the rotation angle, the position, and the tilt of the seat and/or the occupant to the safety device operation threshold determined in step 603. According to the exemplary embodiment of the present disclosure, the amount of the update of the threshold corresponding to each combination of the rotation angle, the position, and the tilt of the seat and/or the occupant may be stored in the form of a table. According to the exemplary embodiment of the present disclosure, the vehicle 100 may update the safety device operation threshold based on a specified function which has at least one of the rotation angle, the position, and the tilt as input variables and outputs the amount of the update of the threshold through a specified operation.
In step 611, the vehicle 100 may be configured to determine whether the impact strength is greater than the updated safety device operation threshold. For example, the vehicle 100 may check the impact strength based on the collision detection signal obtained from the collision detection sensor 112 and may compare the checked impact strength with the safety device operation threshold.
When the impact strength is greater than the updated safety device operation threshold, the vehicle 100 may determine, in step 613, that the safety device needs to be operated, may determine the combination of the safety devices to be operated according to the state information of the seat or the occupant, and may operate the safety devices corresponding to the determined safety device combination. The safety device combination corresponding to the state information of the seat or the occupant may be set in advance and stored in the storage unit 140 of the vehicle in the form of a table. For example, if the rotation angle, the position, and the tilt of the occupant are in the “first stage (rotation between about −30 degrees and +30 degrees), normal, fourth stage (tilt between about 90 degrees and 99 degrees)”, the vehicle 100 may determine the combination of the safety devices to be operated as “driver's seat airbag” based on the table stored in the storage unit 140, and may deploy the driver's seat airbag. For another example, if the rotation angle, the position, and the tilt of the occupant are in the “third stage (rotation between about +90 degrees and +150 degrees), front, fourth stage (tilt between about 90 degrees and 99 degrees)”, the vehicle 100 may determine the combination of the safety devices to be operated as “driver's seat airbag and center airbag” based on the table stored in the storage unit 140, and may deploy the driver's seat airbag and the center airbag.
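A minimal sketch of steps 611 and 613, reusing the same hypothetical state keys as the threshold-offset sketch above and a hypothetical deploy callback (the table entries and device names are illustrative placeholders):

```python
# Illustrative pre-stored combination table keyed on (rotation stage, position,
# tilt stage); entries and device names are placeholders for the calibrated table.
DEVICE_COMBINATION = {
    (1, "normal", 4): ("driver_airbag",),
    (3, "front", 4): ("driver_airbag", "center_airbag"),
}

def operate_safety_devices(state, impact_strength, threshold, deploy) -> None:
    """Deploy the combination for the current state when the detected impact
    strength exceeds the updated operation threshold (steps 611 and 613)."""
    if impact_strength > threshold:
        for device in DEVICE_COMBINATION.get(state, ()):
            deploy(device)  # e.g., command the ACU to fire the corresponding device
```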
Referring to
In step 703, the vehicle 100 may estimate a first rotation angle between a shoulder line and a first reference line in a first plane. For example, the vehicle 100 may estimate the shoulder line of the occupant in the first plane (x-y plane) based on the 3D human body keypoints and may estimate the rotation angle between the estimated shoulder line and the first reference line. The first reference line may be set as a line parallel to the shoulder line when the body of the occupant faces the front of the vehicle, that is, when the body of the occupant is not rotated. According to the exemplary embodiment of the present disclosure, the shoulder line of the occupant may be obtained based on two keypoints corresponding to both shoulder joints among the human body keypoints estimated in step 701. For example, as shown in
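A minimal sketch of this estimate, assuming a vehicle-fixed frame with the x-axis toward the front of the vehicle and the y-axis lateral, so that the first reference line lies along the y-axis; the keypoint format, function name, and sign convention are assumptions of the sketch:

```python
import math

def first_rotation_angle(left_shoulder, right_shoulder) -> float:
    """First rotation angle (degrees) between the shoulder line and a reference
    line parallel to the shoulders of a front-facing occupant, in the x-y plane.
    Keypoints are assumed to be (x, y, z) tuples; 0 means facing the front."""
    dx = left_shoulder[0] - right_shoulder[0]  # forward offset between shoulders
    dy = left_shoulder[1] - right_shoulder[1]  # lateral span of the shoulders
    return math.degrees(math.atan2(dx, dy))   # assumed positive for rightward rotation
```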
In step 705, the vehicle 100 may estimate a second rotation angle based on the width and height of the body in a second plane. For example, the vehicle 100 may estimate the body of the occupant in the second plane (y-z plane) based on the 3D human body keypoints and may estimate a width and a height of a bounding box with respect to the estimated body. The vehicle 100 may estimate the rotation angle of the occupant based on the width and height of the bounding box with respect to the body. For example, when the occupant rotates from the front to the left or from the front to the right, the heights of the bounding boxes for successively input images all remain the same while the widths gradually decrease. Conversely, when the occupant returns to the front from a state of having rotated to the right or to the left, the heights of the bounding boxes for the successively input images all remain the same while the widths gradually increase. Accordingly, the vehicle 100 may be configured to determine the rotation angle of the occupant based on a ratio of the height to the width of the bounding box. According to the exemplary embodiment of the present disclosure, as shown in
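A minimal sketch of this estimate under the additional assumption that the visible width shrinks roughly as the cosine of the rotation angle; the disclosure only states that the height-to-width ratio is used, so this cosine model, the keypoint format, and the function name are assumptions of the sketch:

```python
import math

def second_rotation_angle(keypoints_yz, frontal_width: float) -> float:
    """Second rotation angle (degrees) from the bounding-box width in the y-z
    plane. Assumes width ~ frontal_width * cos(angle) while height is constant."""
    ys = [p[0] for p in keypoints_yz]           # lateral coordinates of keypoints
    width = max(ys) - min(ys)                   # bounding-box width in the y-z plane
    ratio = min(max(width / frontal_width, 0.0), 1.0)  # clamp measurement noise
    return math.degrees(math.acos(ratio))       # 0 when frontal, 90 when side-on
```

Because the width alone cannot distinguish left from right rotation, a sketch like this would rely on the first rotation angle for the sign, consistent with combining the two angles in step 707.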
In step 707, the vehicle 100 may be configured to determine the rotation angle of the occupant based on the first rotation angle and the second rotation angle. For example, the vehicle 100 may determine an average of the first rotation angle estimated in step 703 and the second rotation angle estimated in step 705 as the rotation angle of the occupant; that is, the vehicle 100 may add the first rotation angle and the second rotation angle and determine the result obtained by dividing the sum by 2 as the rotation angle of the occupant.
The average of the first rotation angle and the second rotation angle is determined as the rotation angle of the occupant in
Referring to
In step 803, the vehicle 100 may measure a distance to the keypoint corresponding to a specified body portion. For example, the vehicle 100 may check the coordinates of a specified body portion (e.g., hip) in the 3D body joint coordinates and may measure the distance along the x-axis to the coordinates of the body portion. One or two keypoints of the hip may be extracted from the 3D body joint coordinates. As shown in
In step 805, the vehicle 100 may be configured to determine the position of the occupant based on the measured distance. According to the exemplary embodiment of the present disclosure, the vehicle 100 may compare the measured distance and a specified distance and may be configured to determine whether the occupant is located in front of or behind the specified reference position. The specified distance may be set as the x-axis distance to the specified reference position in the coordinate axis. For example, when the measured distance is greater than the specified distance, the vehicle 100 may be configured to determine the occupant as being located behind the reference position. For another example, when the measured distance is smaller than the specified distance, the vehicle 100 may be configured to determine the occupant as being located in front of the reference position. When the measured distance and the specified distance are the same, the vehicle 100 may be configured to determine the occupant as being located at the specified reference position.
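A minimal sketch of steps 803 and 805, assuming the camera is mounted ahead of the occupant so that a larger x-axis distance means the occupant is farther back; the dead band, labels, and function name are assumptions of the sketch:

```python
def occupant_position(measured_x: float, reference_x: float, tol: float = 0.02) -> str:
    """Classify the occupant as in front of, at, or behind the specified
    reference position from the measured x-axis distance to the hip keypoint.
    `tol` is an assumed dead band (e.g., meters) around the reference position."""
    if measured_x > reference_x + tol:
        return "backward"  # farther from the camera than the reference position
    if measured_x < reference_x - tol:
        return "front"     # closer to the camera than the reference position
    return "normal"        # at the specified reference position
```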
Referring to
In step 903, the vehicle 100 may estimate an angle between a neck-hip line and a second reference line in a third plane. For example, the vehicle 100 may estimate a line connecting the neck and hip of the occupant in the third plane (x-z plane) based on the 3D human body keypoints and may estimate an angle between the estimated neck-hip line and the second reference line. The second reference line may be perpendicular to the ground. According to the exemplary embodiment of the present disclosure, the neck-hip line of the occupant may be obtained based on a keypoint corresponding to the neck and one or more keypoints corresponding to the hip among the human body keypoints estimated in step 901. For example, as shown in
In step 905, the vehicle 100 may be configured to determine the estimated angle as the tilt of the upper body of the occupant. For example, as shown in
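A minimal sketch of steps 903 and 905, assuming (x, y, z) keypoints with the x-axis toward the vehicle front and the z-axis up, and expressing the result against the ground plane so that it matches the staged tilt ranges described earlier (about 90 degrees meaning upright); the keypoint format and function name are assumptions:

```python
import math

def upper_body_tilt(neck, hip) -> float:
    """Upper-body tilt (degrees) from the neck-hip line in the x-z plane,
    expressed against the ground plane (about 90 = upright). Computed as 90
    minus the angle between the neck-hip line and a vertical reference line."""
    dx = neck[0] - hip[0]  # forward-lean component
    dz = neck[2] - hip[2]  # vertical component
    angle_from_vertical = math.degrees(math.atan2(dx, dz))  # 0 when upright
    return 90.0 - angle_from_vertical  # <90 leaning forward, >90 leaning back
```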
Furthermore, terms related to a control device, such as “controller”, “control apparatus”, “control unit”, “control device”, “control module”, or “server”, refer to a hardware device including a memory and a processor configured to execute one or more steps interpreted as an algorithm structure. The memory stores algorithm steps, and the processor executes the algorithm steps to perform one or more processes of a method in accordance with various exemplary embodiments of the present disclosure. The control device according to exemplary embodiments of the present disclosure may be implemented through a nonvolatile memory configured to store algorithms for controlling the operation of various components of a vehicle or data about software commands for executing the algorithms, and a processor configured to perform the operations described above using the data stored in the memory. The memory and the processor may be individual chips, or may be integrated in a single chip. The processor may be implemented as one or more processors. The processor may include various logic circuits and operation circuits, may process data according to a program provided from the memory, and may be configured to generate a control signal according to the processing result.
The control device may be at least one microprocessor operated by a predetermined program which may include a series of commands for carrying out the method included in the aforementioned various exemplary embodiments of the present disclosure.
The aforementioned invention can also be embodied as computer readable code on a computer readable recording medium. The computer readable recording medium is any data storage device that can store data, including program instructions, which may thereafter be read by a computer system. Examples of the computer readable recording medium include a hard disk drive (HDD), a solid state disk (SSD), a silicon disk drive (SDD), read-only memory (ROM), random-access memory (RAM), CD-ROMs, magnetic tapes, floppy discs, and optical data storage devices, as well as implementation in the form of carrier waves (e.g., transmission over the Internet). Examples of the program instructions include machine language code such as that produced by a compiler, as well as high-level language code which may be executed by a computer using an interpreter or the like.
In various exemplary embodiments of the present disclosure, each operation described above may be performed by a control device, and the control device may be configured by a plurality of control devices, or an integrated single control device.
In various exemplary embodiments of the present disclosure, the scope of the present disclosure includes software or machine-executable commands (e.g., an operating system, an application, firmware, a program, etc.) for facilitating operations according to the methods of various embodiments to be executed on an apparatus or a computer, a non-transitory computer-readable medium including such software or commands stored thereon and executable on the apparatus or the computer.
In various exemplary embodiments of the present disclosure, the control device may be implemented in a form of hardware or software, or may be implemented in a combination of hardware and software.
Furthermore, the terms such as “unit”, “module”, etc. included in the specification mean units for processing at least one function or operation, which may be implemented by hardware, software, or a combination thereof.
For convenience in explanation and accurate definition in the appended claims, the terms “upper”, “lower”, “inner”, “outer”, “up”, “down”, “upwards”, “downwards”, “front”, “rear”, “back”, “inside”, “outside”, “inwardly”, “outwardly”, “interior”, “exterior”, “internal”, “external”, “forwards”, and “backwards” are used to describe features of the exemplary embodiments with reference to the positions of such features as displayed in the figures. It will be further understood that the term “connect” or its derivatives refer both to direct and indirect connection.
The foregoing descriptions of specific exemplary embodiments of the present disclosure have been presented for purposes of illustration and description. They are not intended to be exhaustive or to limit the present disclosure to the precise forms disclosed, and obviously many modifications and variations are possible in light of the above teachings. The exemplary embodiments were chosen and described in order to explain certain principles of the invention and their practical application, to enable others skilled in the art to make and utilize various exemplary embodiments of the present disclosure, as well as various alternatives and modifications thereof. It is intended that the scope of the present disclosure be defined by the Claims appended hereto and their equivalents.