This application claims priority to EP App. No. 22 199 831 filed Oct. 5, 2022, the entire disclosure of which is incorporated by reference.
The present disclosure generally relates to safety improvements for vehicles and, in particular, to methods and systems of pose detection of a person on a vehicle seat.
Smart vehicles, such as smart cars, smart busses, and the like, significantly improve the safety of passengers. One task in such smart vehicles is seat occupancy detection, which aims at detecting persons, objects, child seats or the like placed on a seat. Other tasks involve control functionalities such as seat belt control, airbag control, air condition control, and so forth.
Early seat occupancy detection systems were built on weight sensors for detecting weights on seats. More recent seat occupancy detection systems alternatively or additionally process images taken by cameras in the vehicle. With the development of 2D object detection, it has become more and more popular to use object detection to support seat occupancy detection in a vehicle's cabin. The detected objects are then assigned to seats. For this purpose, the position of the detected object in relation to a fixed area, which is assigned to a seat, is considered.
The background description provided here is for the purpose of generally presenting the context of the disclosure. Work of the presently named inventors, to the extent it is described in this background section, as well as aspects of the description that may not otherwise qualify as prior art at the time of filing, are neither expressly nor impliedly admitted as prior art against the present disclosure.
Wrong seat assignments may happen if the object or person is in an unclear position or moves. For instance, imagine one person sitting on the seat behind the driver seat and leaning towards the middle seat. In this scenario, the front seat will cover some part of the person. Hence, only the visible part of the person may be taken into account, and the seat occupancy system may decide to assign the person to the rear middle seat.
Since some safety means have to be controlled differently if a seat is occupied or not, there is a need for reliably detecting a seat occupancy state in the vehicle. In this context, methods, systems and computer program products are presented as defined by the independent claims.
More specifically, a computerized method of pose detection of a person in a vehicle is presented. The method includes receiving, from an on-board camera, an image of an interior of the vehicle showing a seat of the vehicle occupied by the person, obtaining at least one first characteristic of a first face bounding area and a first body bounding area associated with the occupied seat of the vehicle, determining a second body bounding area and an associated second face bounding area of the person from the image, determining at least one second characteristic of the second face bounding area and the second body bounding area, and determining a pose of the person based on the at least one second characteristic and on the at least one first characteristic.
In various implementations, obtaining the at least one first characteristic includes retrieving the first face bounding area and the first body bounding area from a first memory based on a facial identification of the person. In some implementations, obtaining the at least one first characteristic includes retrieving the first face bounding area and the first body bounding area from a second memory, wherein the first face bounding area and the first body bounding area are determined based on a mean position and/or mean area of face bounding areas and body bounding areas historically captured for persons on the seat. In yet other implementations, obtaining the at least one first characteristic includes determining the first face bounding area and the first body bounding area based on a mean position and/or mean area of face bounding areas and body bounding areas captured on a plurality of images of the interior of the vehicle for the person on the seat in a certain initialization period.
In various implementations, the at least one characteristic includes a position of a face bounding area and a position of a body bounding area. Then, determining the pose of the person includes calculating a left-right-leaning value of the person based on the position of the first face bounding area and the first body bounding area and the position of the second face bounding area and the second body bounding area and determining that the person is leaning to the right or to the left based on the left-right-leaning value.
In various implementations, the at least one characteristic includes an area covered by a face bounding area and an area covered by a body bounding area. Then, determining the pose of the person includes calculating a forward-backward-leaning value of the person based on at least two of the area of the first face bounding area, the area of the first body bounding area, the area of the second face bounding area, and the area of the second body bounding area and determining that the person is leaning forward or backward based on the forward-backward-leaning value.
In various implementations, the pose detection is used by a seat occupancy system to control safety functions in the vehicle. In some implementations, a graphical representation on a display within the vehicle is based on an output of the seat occupancy system, wherein the output of the seat occupancy system is based on at least one of the pose detection, the second body bounding area, and the second face bounding area. In yet other implementations, the method is performed at a periodicity and/or in response to a person-to-seat assignment change of any of the seats in the vehicle.
Another aspect concerns a seat occupancy system in a vehicle including a seat assignment logic, the seat occupancy system being adapted to perform the method described herein. In various implementations, the seat assignment logic of the seat occupancy system is updated based on the determined pose, in response to determining the pose of the person. In further implementations, updating the seat assignment logic includes adapting at least one position of a corner and/or an edge of a seat bounding area according to at least the second body bounding box. In yet further implementations, updating the seat assignment logic includes adapting parameters associated with corners and/or edges of the seat bounding area according to at least the second body bounding box.
Yet another aspect concerns a vehicle that includes a camera for taking images of an interior of the vehicle and the seat occupancy system as described herein.
Finally, a computer program is presented that includes instructions which, when the program is executed by a computer, cause the computer to carry out the methods described herein.
Further refinements are set forth by the dependent claims.
These and other objects, implementations and advantages will become readily apparent to those skilled in the art from the following detailed description of the implementations having reference to the attached figures, the disclosure not being limited to any particular implementation.
Further areas of applicability of the present disclosure will become apparent from the detailed description, the claims, and the drawings. The detailed description and specific examples are intended for purposes of illustration only and are not intended to limit the scope of the disclosure.
The present disclosure will become more fully understood from the detailed description and the accompanying drawings.
In the drawings, reference numbers may be reused to identify similar and/or identical elements.
The present disclosure relates to methods and systems of leaning detection in collaboration with a seat occupancy classification system that improves the safety of smart vehicles, such as cars, trains, busses, ships, and the like.
Smart vehicles already improve the safety of passengers as they detect an occupancy state of the vehicle's seats. However, people do not sit motionless in cars, but typically move. Such movements may lead to false seat assignments and, thus, to false seat occupancy states. Since seat occupancy states are used, e.g., to control safety functions in the vehicle, such as airbag regulation, door locking, and/or seatbelt tensioning control, it is important to have a reliable seat assignment system in a seat occupancy system.
The method starts with receiving an image 11 of an interior of the vehicle showing a seat of the vehicle occupied by a person. The image 11 is also referred to as a current image because it is the latest image of the vehicle's interior, e.g., captured in the last time frame, and may show more than one person on more than one seat. This is not excluded in various implementations focusing on the leaning detection for the (one) person on the (one) seat. Hence, if there are more persons on the image, the method may be performed for each person on each seat or for a particular person of the multiple persons on a particular seat.
The image 11 is taken by an on-board camera of the vehicle. Such a camera is usually located in the middle of the front window above or below the rear-view mirror but may also be located at a different position. The camera may be an RGB camera, i.e., an imager that collects visible light (400-700 nm) and converts it to an electrical signal that is organized to render images and video streams, an infrared camera, or a mixture of both (RGB-IR). The image 11 may also be processed by other modules of the vehicle, in particular, by modules of a seat occupancy system.
The method further includes in box 12 obtaining at least one first characteristic of a first face bounding area and a first body bounding area associated with the occupied seat of the vehicle. The first characteristic and first bounding areas are also referred to as reference characteristic and reference bounding area. In other words, the method includes obtaining at least one reference characteristic of a reference face bounding area and a reference body bounding area associated with the seat of the vehicle. Obtaining is to be understood broadly. Hence, obtaining may comprise retrieving the reference face bounding area and reference body bounding area from a storage and calculating or determining the at least one reference characteristic after retrieving the reference bounding areas. Obtaining may also comprise retrieving the at least one reference characteristic from a storage, wherein the at least one reference characteristic was determined before from the reference bounding areas.
The at least one reference characteristic relates to a position and/or an area of the reference face bounding area. Additionally or alternatively, the at least one reference characteristic relates to a position and/or an area of the reference body bounding area. For example, the reference characteristics may be the x and y values of the center position of the reference bounding areas. In this example, the reference characteristics comprise four reference characteristics, namely, the x value and the y value of the center position of the reference face bounding area and the x value and the y value of the center position of the reference body bounding area. In another example, the reference characteristics may consist of six reference characteristics, namely, the x and y values of the center position of the reference bounding areas as in the example before and, additionally, the areas covered by the reference bounding areas. Any other combinations or other values relating to the position (e.g., corner coordinates) may be obtained, too.
A bounding area is a 2D area around an object. In the present disclosure, bounding areas are determined around bodies and faces detected on images of the interior of the vehicle. A bounding area may be a rectangle, i.e., a bounding box, any kind of polygon, or even a circle. Bounding boxes may be defined by at least a width, a height, a center x position, and a center y position relative to an image taken by the on-board camera. The bounding area may be a minimum bounding area, i.e., the smallest area with the defined shape (and, in some implementations, also orientation) around the body or face of the person or, e.g., around a seat. For determining bounding areas, e.g., bounding boxes around the body or face of a person, a YOLO (you only look once) algorithm may be used. Other machine learning algorithms or conventional image processing and recognition algorithms may be used, too.
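As an illustration of the bounding-box definition above, the following minimal Python sketch models a bounding box by its width, height, and center position and derives the two characteristics used later, i.e., the center position and the covered area. The class and attribute names are assumptions for illustration, not part of the disclosure:

```python
from dataclasses import dataclass

@dataclass
class BoundingBox:
    """A 2D bounding box defined by its center position, width, and
    height relative to the camera image, as described above."""
    center_x: float
    center_y: float
    width: float
    height: float

    @property
    def area(self) -> float:
        # Area covered by the bounding box (one possible characteristic).
        return self.width * self.height

    @property
    def center(self) -> tuple:
        # Center position (another possible characteristic).
        return (self.center_x, self.center_y)

# Example: characteristics of a face bounding box
face = BoundingBox(center_x=120.0, center_y=80.0, width=40.0, height=50.0)
print(face.area)    # 2000.0
print(face.center)  # (120.0, 80.0)
```

A minimum bounding area as mentioned above would simply be the instance of this structure with the smallest area that still encloses the detected body or face.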
In box 13, the method determines a second body bounding area of the person from the image. The second body bounding area is herein also referred to as a current body bounding area, which is determined around a body of the person based on the current image 11. To this end, as explained above, algorithms for object detection and bounding area determination can be applied to determine the current body bounding area. In some implementations, the current body bounding area may be determined by modules of the seat occupancy system.
In box 14, the method determines an associated second face bounding area of the person from the image. The second face bounding area is also referred to as a current face bounding area determined around a face of the person based on the current image 11. To this end, as explained above, algorithms for object detection and bounding area determination can be used to determine the current face bounding area. In some implementations, the current face bounding area may be determined by modules of the seat occupancy system.
Based on the determined current bounding areas around the face and the body, the method determines in box 15 at least one second characteristic, i.e., current characteristic, of the second face bounding area and the second body bounding area. The at least one current characteristic relates to the position and/or area of the current face bounding area and/or current body bounding area. Hence, in some implementations, the determination of the at least one current characteristic may be analogous to the determination of the at least one reference characteristic of box 12.
Finally, the method includes in box 16 determining a pose of the person based on the at least one second characteristic and on the at least one first characteristic. A pose may comprise a leaning in a direction. The direction may comprise left, right, forward, or backward. Additionally or alternatively, the direction may comprise diagonally left forward, diagonally right forward, and the like. Other poses may comprise kneeling, lying, and the like.
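Combining the individual leaning directions into the poses listed above, including the diagonal directions, might be sketched as follows. The string labels and the `combine` helper are illustrative assumptions, not part of the disclosure:

```python
def combine(lr: str, fb: str) -> str:
    """Combine a left/right determination ('left', 'right', 'upright')
    with a forward/backward determination ('forward', 'backward',
    'upright') into a single pose label, including diagonal leans."""
    if lr == "upright" and fb == "upright":
        return "upright"
    if lr == "upright":
        return f"leaning {fb}"
    if fb == "upright":
        return f"leaning {lr}"
    # Both directions indicate a lean: a diagonal pose.
    return f"leaning diagonally {lr} {fb}"

print(combine("left", "forward"))  # leaning diagonally left forward
print(combine("upright", "backward"))  # leaning backward
```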
As will be apparent to the skilled person, the order of the processes depicted in boxes 11 to 15 can be different compared to the order shown in
The method of
Alternatively or additionally, the method of
The output of the seat occupancy system can also comprise knowledge derived from the leaning detection as described herein, e.g., in order to present a seat occupancy status of a seat to the passengers and/or driver of the vehicle on a display. The output of the seat occupancy system may in such an example be based on at least one of the leaning detection, the current body bounding area, and the current face bounding area. In some implementations, a smart display, an infotainment system, or another media system in the vehicle may then present the passengers with a seat occupancy status of the seats and, e.g., may indicate whether the person-to-seat assignment was based on a leaning detection. If the leaning detection and/or the person-to-seat assignment is uncertain (e.g., a confidence value is below a threshold or the like), the passengers could interact with the media system in order to change or verify the assignment manually.
According to the method described with respect to
In the example of
As depicted in
One further example of obtaining the reference characteristic(s) is determining the reference characteristic(s) based on a mean position and/or mean area of bounding areas determined on a plurality of images of the interior of the vehicle for the person on the seat in an initialization period. In the example of
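The mean-based determination of reference characteristics described above might be sketched as follows, assuming each bounding box observed during the initialization period is given as a (center_x, center_y, width, height) tuple. The function name and dictionary keys are illustrative assumptions:

```python
from statistics import mean

def mean_reference(boxes):
    """Compute reference characteristics as the mean center position and
    mean covered area of bounding boxes observed for a person on a seat
    during an initialization period.

    Each box is a (center_x, center_y, width, height) tuple."""
    return {
        "center_x": mean(b[0] for b in boxes),
        "center_y": mean(b[1] for b in boxes),
        "area": mean(b[2] * b[3] for b in boxes),
    }

# Example: three observations of a face bounding box during initialization
observations = [(100, 80, 40, 50), (104, 82, 42, 48), (102, 81, 41, 52)]
ref = mean_reference(observations)
# ref["center_x"] is the mean center x-position, here 102
```

The same helper could be applied both to body and face bounding boxes, and either to boxes captured in a dedicated initialization period or to historically stored boxes, matching the two variants described above.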
However, although not shown in
The method of
Based on at least one current characteristic of at least one current bounding area 41A, 41B, 42A, 42B, 43A, 43B, 44A, 44B, 45A, and 45B associated with a seat and on at least one reference characteristic of at least one reference bounding area 21A, 21B, 22A, 22B, 23A, 23B, 24A, 24B, 25A, and 25B, it can now be determined whether the person is leaning.
For example, consider the person on the middle rear seat. The areas covered by the reference body bounding box 24A and the reference face bounding box 24B are smaller than the areas covered by the current body bounding box 44A and the current face bounding box 44B. Hence, based on this, it can be determined that the person is—at least slightly—leaning to the front. Or consider the driver. It can be determined that she is leaning to the left based on the change of the current body bounding box 41A and the current face bounding box 41B with respect to the center position of the reference body bounding box 21A and the reference face bounding box 21B.
Determining whether the person is leaning may comprise two determination steps, one for determining whether the person is leaning to the right or to the left, which is shown in box 51, and one for determining whether the person is leaning to the front or to the back, which is shown in box 52. Although both determination processes are depicted in
The right/left determination as shown in box 51 includes calculating a left-right-leaning value based on the position of the first bounding area(s) and on the position of the second bounding area(s), e.g., calculating a left-right-leaning value of the person based on at least one of the center position of the reference face bounding area and the reference body bounding area, and on at least one of the center position of the current face bounding area and the current body bounding area. Based on the left-right-leaning value, it is then determined whether the person is leaning to the right or to the left. Different calculations may be considered.
For example, the reference characteristic may be determined as the center x-position of the reference face bounding box relative to the reference body bounding box. Such a reference characteristic x_rel^reference may then be compared with the analogously determined current characteristic x_rel^current of the current bounding boxes. The left-right-leaning value P_lean may be calculated as:

P_lean = x_rel^current − x_rel^reference  (1)
In this example, the sign of the left-right-leaning value P_lean indicates whether the person is leaning to the left or to the right.
In some implementations, a threshold is set in order to determine that a person is leaning, i.e., to prevent small deviations from already being determined as leaning, such as the small deviation of the current bounding boxes 55A and 55B with respect to the reference bounding boxes 53A and 53B. For example, the person may only be determined to be leaning if the magnitude of the left-right-leaning value P_lean exceeds the threshold.
In a more specific example, a weighting parameter may also be applied to the above-mentioned formula (1) in order to also take the size or position of the bounding boxes into account. This may, e.g., be the case of bounding boxes 55A and 55B in
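Formula (1) together with the threshold discussed above might be illustrated as follows. The normalization of the face center by the body-box width, the sign convention (a positive value meaning a lean to the right), and the threshold value are assumptions for illustration, not taken from the disclosure:

```python
def relative_face_x(face_cx, body_cx, body_width):
    """Center x-position of the face bounding box relative to the body
    bounding box, normalized by the body-box width (assumed scheme)."""
    return (face_cx - body_cx) / body_width

def left_right_leaning(ref_face_cx, ref_body_cx, ref_body_w,
                       cur_face_cx, cur_body_cx, cur_body_w,
                       threshold=0.15):
    """Left-right determination per formula (1):
    P_lean = x_rel(current) - x_rel(reference).
    Returns 'left', 'right', or 'upright' (sign convention assumed)."""
    x_ref = relative_face_x(ref_face_cx, ref_body_cx, ref_body_w)
    x_cur = relative_face_x(cur_face_cx, cur_body_cx, cur_body_w)
    p_lean = x_cur - x_ref
    if abs(p_lean) < threshold:
        # Small deviations are not determined as leaning.
        return "upright"
    return "right" if p_lean > 0 else "left"

# Example: the face center has shifted towards the right edge of the body box
print(left_right_leaning(100, 100, 60, 130, 105, 60))  # right
```

A weighting parameter as mentioned above could be folded into `threshold` or multiplied onto `p_lean` to account for box size or position.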
The forward/backward determination as shown in box 52 includes calculating a forward-backward-leaning value based on the area of the first bounding area(s) and on the area of the second bounding area(s), e.g., calculating a forward-backward-leaning value of the person based on at least one of the area of the reference face bounding area and the area of the reference body bounding area, and on at least one of the area of the current face bounding area and the area of the current body bounding area. Based on this leaning value, it is then determined whether the person is leaning to the front or to the back. Different calculations may be considered.
For example, the reference characteristic may be determined as the area covered by the reference face bounding box A_f, and the current characteristic as the area covered by the current face bounding box.
In this example, the forward-backward-leaning value P_lean indicates whether the person is leaning forward or backward, e.g., depending on whether the current area is larger or smaller than the reference area.
In some implementations, a threshold is set in order to determine that a person is leaning, i.e., to prevent small deviations from already being determined as leaning. For example, the person may only be determined to be leaning if the forward-backward-leaning value P_lean exceeds the threshold.
In some implementations, determining whether the person is leaning to the front or to the back may additionally take the current body bounding box side ratio into account. For example, if the sides are almost equal, i.e., a ratio of approximately 1:1, and the forward-backward-leaning value indicates a forward leaning, the person may be more reliably determined as leaning to the front, which is the case for the current body bounding box 55A of
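The forward-backward determination described above might be sketched as follows. Since the exact formula is not reproduced here, the ratio-based comparison of face-box areas and the numeric thresholds are assumptions for illustration:

```python
def forward_backward_leaning(ref_face_area, cur_face_area,
                             area_threshold=1.2):
    """Determine forward/backward leaning from the growth or shrinkage
    of the current face-box area relative to the reference area.
    A larger face box suggests the face moved closer to the camera,
    i.e., a forward lean (thresholds assumed)."""
    ratio = cur_face_area / ref_face_area
    if ratio >= area_threshold:
        return "forward"
    if ratio <= 1.0 / area_threshold:
        return "backward"
    return "upright"

def side_ratio_supports_forward(body_width, body_height, tol=0.25):
    """Per the description above, a body box with sides near a 1:1
    ratio makes a forward-leaning determination more reliable
    (tolerance assumed)."""
    return abs(body_width / body_height - 1.0) <= tol

print(forward_backward_leaning(2000, 2600))  # forward
print(side_ratio_supports_forward(95, 100))  # True
```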
In contrast to the case of
In contrast to the case of
As explained above, the seat occupancy system applies a seat bounding area for each seat. This seat bounding area lies within the images taken by the on-board camera. The leaning detection method of
After determining that the person is leaning, e.g., after determining the pose in box 16 (as also depicted in
Adapting corners and/or edges of the seat bounding area, i.e., the process of box 81, is further illustrated in
Adapting parameters associated with corners and/or edges of the seat bounding area, i.e., the process of box 82, may leave the corners and/or edges at the same positions as before, e.g., as shown in
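The adaptation of the seat bounding area described in boxes 81 and 82 might be sketched as follows, with boxes given as (x_min, y_min, x_max, y_max) corner coordinates. Moving each corner a fixed fraction of the way towards the current body bounding box is an assumed adaptation rule, not the rule of the disclosure:

```python
def adapt_seat_box(seat_box, body_box, alpha=0.3):
    """Move the corners of a seat bounding box towards the current body
    bounding box, where both boxes are (x_min, y_min, x_max, y_max)
    tuples and the blending factor alpha is an assumption."""
    return tuple(
        (1 - alpha) * s + alpha * b
        for s, b in zip(seat_box, body_box)
    )

# Example: the body box extends left of the seat box, so the adapted
# seat box shifts left towards the leaning person.
seat = (100.0, 50.0, 200.0, 220.0)
body = (80.0, 60.0, 190.0, 210.0)
print(adapt_seat_box(seat, body))
```

Adapting parameters associated with corners or edges instead (box 82) could keep `seat_box` unchanged and store a per-corner weight, which is why both variants are presented as alternatives above.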
Furthermore, the computing system 100 may also comprise a specified camera interface 104 to communicate with an on-board camera of the vehicle. Alternatively, the computing system 100 may communicate with the camera via the network interface 103. The camera is used for taking the current image 11. The computing system 100 may also be connected to database systems (not shown) via the network interface, wherein the database systems store at least part of the images needed for providing the functionalities described herein.
The main memory 106, which may correspond to the memory 36 depicted in
According to an aspect, a vehicle is provided. The herein described seat state assignment method may be stored as program code 109 and may be at least in part comprised by the vehicle. The seat occupancy system may be stored as program code 108 and may also at least in part be comprised by the vehicle. Parts of the program code 108 may also be stored and executed on a cloud server to reduce the computational effort on the vehicle's computing system 100. The vehicle may also comprise a camera, e.g., connected via the camera interface 104, for capturing the current image 11.
According to an aspect, a computer program including instructions is provided. These instructions, when the program is executed by a computer, cause the computer to carry out the methods described herein. The program code embodied in any of the systems described herein is capable of being individually or collectively distributed as a program product in a variety of different forms. In particular, the program code may be distributed using a computer readable storage medium having computer readable program instructions thereon for causing a processor to carry out aspects of the implementations described herein.
Computer readable storage media, which are inherently non-transitory, may include volatile and non-volatile, and removable and non-removable tangible media implemented in any method or technology for storage of information, such as computer-readable instructions, data structures, program modules, or other data. Computer readable storage media may further include random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other solid state memory technology, portable compact disc read-only memory (CD-ROM), or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store the desired information and which can be read by a computer.
A computer readable storage medium should not be construed as transitory signals per se (e.g., radio waves or other propagating electromagnetic waves, electromagnetic waves propagating through a transmission media such as a waveguide, or electrical signals transmitted through a wire). Computer readable program instructions may be downloaded to a computer, another type of programmable data processing apparatus, or another device from a computer readable storage medium or to an external computer or external storage device via a network.
It should be appreciated that while particular implementations and variations have been described herein, further modifications and alternatives will be apparent to persons skilled in the relevant arts. In particular, the examples are offered by way of illustrating the principles, and to provide a number of specific methods and arrangements for putting those principles into effect.
In certain implementations, the functions and/or acts specified in the flowcharts, sequence diagrams, and/or block diagrams may be re-ordered, processed serially, and/or processed concurrently without departing from the scope of the invention. Moreover, any of the flowcharts, sequence diagrams, and/or block diagrams may include more or fewer blocks than those illustrated consistent with implementations of the invention.
The terminology used herein is for the purpose of describing particular implementations only and is not intended to be limiting of the disclosure. It will be further understood that the terms “comprise” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. Furthermore, to the extent that the terms “include”, “having”, “has”, “with”, “comprised of”, or variants thereof are used in either the detailed description or the claims, such terms are intended to be inclusive in a manner similar to the term “comprising”.
While the description of implementations has illustrated all of the inventions and while these implementations have been described in considerable detail, it is not the intention of the applicants to restrict or in any way limit the scope of the appended claims to such detail. Additional advantages and modifications will readily appear to those skilled in the art. The invention in its broader aspects is therefore not limited to the specific details, representative apparatus and method, and illustrative examples shown and described. Accordingly, the described implementations should be understood as being provided by way of example, for the purpose of teaching the general features and principles, but should not be understood as limiting the scope, which is as defined in the appended claims.
The term non-transitory computer-readable medium does not encompass transitory electrical or electromagnetic signals propagating through a medium (such as on a carrier wave). Non-limiting examples of a non-transitory computer-readable medium are nonvolatile memory circuits (such as a flash memory circuit, an erasable programmable read-only memory circuit, or a mask read-only memory circuit), volatile memory circuits (such as a static random access memory circuit or a dynamic random access memory circuit), magnetic storage media (such as an analog or digital magnetic tape or a hard disk drive), and optical storage media (such as a CD, a DVD, or a Blu-ray Disc).
The phrase “at least one of A, B, and C” should be construed to mean a logical (A OR B OR C), using a non-exclusive logical OR, and should not be construed to mean “at least one of A, at least one of B, and at least one of C.” The phrase “at least one of A, B, or C” should be construed to mean a logical (A OR B OR C), using a non-exclusive logical OR.