The present disclosure relates to a seating position determination apparatus, a system, a method, and a computer-readable medium.
As a related art, Patent Literature 1 discloses an airbag apparatus and a deployment control method thereof. In Patent Literature 1, a camera captures an image of a passenger seated in a seat in which an airbag is installed. A passenger recognition unit subjects the image captured by the camera to image processing, and acquires a position of the passenger present in the image. A distance recognition unit acquires a distance from a reference position to the passenger. A position detection unit detects an actual position of the passenger seated in the seat, based on a position of the passenger in the image and a distance from the reference position to the passenger, which is acquired by the distance recognition unit. A deployment control unit determines whether a head of the passenger is positioned within an airbag operation range. When the head of the passenger is positioned within the airbag operation range, the deployment control unit deploys the airbag.
In Patent Literature 1, the position detection unit detects an actual position of the passenger seated in the seat. However, in Patent Literature 1, the position of the head of the passenger is detected for comparison with the airbag operation range, and which seat the passenger is seated in is not specified.
In view of the above-mentioned circumstances, an object of the present disclosure is to provide a seating position determination apparatus, a system, a method, and a computer-readable medium that are able to specify which seat a passenger is seated in.
In order to achieve the above-mentioned object, in a first aspect, the present disclosure provides a seating position determination apparatus. The seating position determination apparatus includes a face detection means for detecting a face region of a passenger from an image captured by a camera installed inside a vehicle in which a plurality of seat rows are arranged, a seating row specification means for specifying a seat row in which the passenger with the face region being detected is seated, and a seating position specification means for specifying a seat position of the passenger in the vehicle, based on a range of a seat position in the image and the seat row being specified, the range of the seat position being prepared for each seat row.
In a second aspect, the present disclosure provides a seating position determination system. The seating position determination system includes a camera being installed in a vehicle in which a plurality of seat rows are arranged and being configured to capture an image of an inside of the vehicle, and a seating position determination apparatus configured to acquire an image captured by the camera and to specify a seat position of a passenger in the vehicle by using the acquired image. The seating position determination apparatus includes a face detection means for detecting a face region of the passenger from the image, a seating row specification means for specifying a seat row in which the passenger with the face region being detected is seated, and a seating position specification means for specifying a seat position of the passenger in the vehicle, based on a range of a seat position in the image and the seat row being specified, the range of the seat position being prepared for each seat row.
In a third aspect, the present disclosure provides a seating position determination method. The seating position determination method includes detecting a face region of a passenger from an image captured by a camera installed inside a vehicle in which a plurality of seat rows are arranged, specifying a seat row in which the passenger with the face region being detected is seated, and specifying a seat position of the passenger in the vehicle, based on a range of a seat position in the image and the seat row being specified, the range of the seat position being prepared for each seat row.
In a fourth aspect, the present disclosure provides a computer-readable medium. The computer-readable medium is configured to store a program for causing a processor to execute processing of detecting a face region of a passenger from an image captured by a camera installed inside a vehicle in which a plurality of seat rows are arranged, specifying a seat row in which the passenger with the face region being detected is seated, and specifying a seat position of the passenger in the vehicle, based on a range of a seat position in the image and the seat row being specified, the range of the seat position being prepared for each seat row.
The seating position determination apparatus, the system, the method, and the computer-readable medium according to the present disclosure are able to specify which seat a passenger is seated in, based on an image captured by a camera.
Example embodiments of the present disclosure are described below in detail with reference to the drawings. Note that the following description and the drawings are appropriately omitted and simplified for clarity of description. Further, in the following drawings, the same elements and similar elements are denoted by the same reference symbols, and repetitive description thereof is omitted as required.
Information relating to seat arrangement inside a cabin is registered in the seating position determination apparatus 110. For example, information indicating the number of seat rows provided to the vehicle 200 and information indicating the number of seats in each seat row are registered in the seating position determination apparatus 110. Further, information indicating a distance from the camera 130 to each seat row and width of each seat is registered in the seating position determination apparatus 110. The seating position determination apparatus 110 detects a passenger from the video of the camera 130, and specifies which seat the passenger being detected is seated in. For example, the seating position determination apparatus 110 is mounted to the vehicle 200. Alternatively, the seating position determination apparatus 110 may be an apparatus installed outside the vehicle. In such a case, the seating position determination apparatus 110 may acquire the video being captured by the camera 130, via a wireless communication network.
The seating position determination apparatus 110 includes a face detection unit 111, a seating row specification unit 112, and a seating position specification unit 113. For example, the seating position determination apparatus 110 is configured as an apparatus including one or more processors and one or more memories. At least some of the functions of the units in the seating position determination apparatus 110 may be realized by executing processing according to a program being read from a memory by a processor.
The face detection unit (face detection means) 111 detects a face region of a subject (passenger) from the video (image) being captured by the camera 130. When the image includes a plurality of subjects, the face detection unit 111 detects a plurality of face regions. For example, when a face region is newly detected, the face detection unit 111 provides a tracking identifier (ID) to the face region being detected. In the video, the face detection unit 111 tracks a position of the face region being detected, for each tracking ID in a time direction. A detection method of a face region and a tracking method thereof in the face detection unit 111 are not limited to specific methods. For example, the face detection unit 111 is capable of detecting a face region by using a face recognition technique. Further, the face detection unit 111 may track the same face region by using position information relating to a face region.
The seating row specification unit (seating row specification means) 112 specifies a seat row in which a passenger with the face region being detected is seated. For example, the seating row specification unit 112 estimates a distance from a position in the vehicle 200 at which a face region is present to the camera 130 for each tracking ID. In other words, the seating row specification unit 112 estimates a distance from a position of a face of a passenger to the camera 130 for each passenger. For example, the seating row specification unit 112 estimates a distance from a position of a face of a passenger to the camera 130, based on a distance between feature points in a face region. The seating row specification unit 112 estimates a seat row (seating row) a passenger is seated in for each passenger, based on the distance being estimated.
For example, the seating row specification unit 112 extracts eyes of a passenger as feature points, and acquires a distance between both the eyes in the image. For example, the seating row specification unit 112 assumes that the distance between both the eyes is a predetermined value, for example, 6 cm (=0.06 m), and estimates a distance (position) of the passenger in a depth direction in an actual space, based on the distance between both the eyes in the image, the angle θ of view of the camera 130, and the number D of pixels in a horizontal direction. Specifically, it is assumed that the distance E between both the eyes is 6 cm (=0.06 m), the distance between both the eyes in the image is De, and the distance H from the eyes to the back of the head of a person is a predetermined value of 30 cm (=0.3 m). In a case of a central projection method, when a pinhole camera model is used, a distance Ye of the passenger in the depth direction in the actual space can be calculated by using the following expression.
In a case of an equidistant projection method, the distance Ye of the passenger in the depth direction in the actual space can be calculated by using the following expression.
The distance Ye given above is a simplified expression without consideration of lens distortion or the like. The seating row specification unit 112 may calculate the distance Ye by using an expression considering distortion aberration in a lens used in the camera 130 or the like.
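Although the expressions themselves are not reproduced here, the two cases described above can be sketched as follows. The formulas are the standard pinhole and equidistant projection relations written from the stated assumptions (E = 0.06 m, H = 0.3 m, angle θ of view, D pixels in the horizontal direction), not text taken from the disclosure, and lens distortion is ignored as in the simplified expressions.

```python
import math

def depth_central(de_px, fov_rad, width_px, eye_dist_m=0.06, head_depth_m=0.3):
    """Estimate the depth Ye (camera to back of head) under a central
    projection / pinhole model.  de_px is the inter-eye distance De in the image."""
    # Focal length in pixels for a horizontal field of view fov_rad.
    f_px = width_px / (2.0 * math.tan(fov_rad / 2.0))
    # Pinhole model: de_px = f_px * E / Y, hence Y = f_px * E / de_px.
    y_eyes = f_px * eye_dist_m / de_px
    return y_eyes + head_depth_m  # add the eyes-to-back-of-head offset H

def depth_equidistant(de_px, fov_rad, width_px, eye_dist_m=0.06, head_depth_m=0.3):
    """Same estimate under an equidistant (fisheye) projection: r = f * phi."""
    f_px = width_px / fov_rad          # pixels per radian
    phi = de_px / f_px                 # angle subtended by the eyes
    y_eyes = eye_dist_m / (2.0 * math.tan(phi / 2.0))
    return y_eyes + head_depth_m
```

For instance, with a 90-degree field of view and a 1280-pixel-wide image, an inter-eye distance of 38.4 pixels corresponds to eyes about 1 m from the camera, giving Ye of about 1.3 m once H is added.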
The seating row specification unit 112 specifies which seat row the passenger is seated in, based on the distance Ye being acquired as described above and a distance from each seat row to the camera 130. For example, the seating row specification unit 112 calculates a difference between the distance Ye and the distance from each seat row to the camera 130. The seating row specification unit 112 specifies a seat row the passenger is seated in, based on the difference being calculated. For example, the seating row specification unit 112 specifies a seat row with the smallest difference as the seat row the passenger is seated in. When a seat is configured in such a way that a reclining angle and a front-and-rear position are adjustable, the seating row specification unit 112 may acquire control information relating to the seat from the vehicle, and may change the distance from each seat row to the camera 130 according to the control information being acquired. For example, the seating row specification unit 112 may acquire a reclining angle of the seat from the vehicle, and may calculate the distance from each seat row to the camera 130, based on the reclining angle being acquired.
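The smallest-difference selection described above can be sketched as follows; the row distances in the usage example are illustrative values, not values from the disclosure.

```python
def specify_seat_row(depth_m, row_distances_m):
    """Pick the seat row whose registered camera distance is closest to the
    estimated depth.  row_distances_m maps row index (front to rear) to the
    distance from the camera 130 to that row."""
    diffs = [abs(depth_m - d) for d in row_distances_m]
    return diffs.index(min(diffs))  # row index with the smallest difference

# Illustrative two-row vehicle: front row 0.8 m, rear row 1.7 m from the camera.
row = specify_seat_row(1.3, [0.8, 1.7])
```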
Note that the method of estimating a distance of a passenger in the depth direction by the seating row specification unit 112 is not particularly limited to the above-mentioned method. For example, the seating row specification unit 112 may estimate a distance of the passenger in the depth direction by using feature points other than the eyes. Alternatively, the seating row specification unit 112 may estimate a distance of the passenger in the depth direction from an image of the camera 130 by depth estimation using deep learning. When the camera 130 is a stereo camera, the seating row specification unit 112 may estimate a distance of the passenger in the depth direction by using a parallax image. Moreover, the seating row specification unit 112 may acquire distance information from a time-of-flight (ToF) camera or a distance measurement apparatus.
In the seating position specification unit (seating position specification means) 113, information indicating a position (range) of each seat in a camera image is registered for each seat row. A position of each seat (a region of a seat position) in a camera image can be acquired by, for example, calculation based on a projection model of the camera 130, as described below.
In a case of a central projection method, when a pinhole camera model is used, the coordinates Ds of each seat position in the vehicle width direction can be calculated by using the expression given below.
Further, in a case of an equidistant projection method, the coordinates Ds of each seat position in the vehicle width direction (X direction) can be calculated by using the expression given below.
In the expression given above, θs indicates an angle formed between a straight line connecting the position of the camera 130 (the origin) and a head rest of each seat to each other and a longitudinal direction (Y axis) of the vehicle. The expression for the coordinates Ds given above is a simplified expression without consideration of lens distortion or the like. The seating position specification unit 113 may calculate the coordinates Ds by using an expression considering distortion aberration in a lens used in the camera 130 or the like. The method of specifying a position of each seat in a camera image is not particularly limited to the above-mentioned method.
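Although the expressions for Ds are not reproduced here, the two projection cases described above can be sketched as follows. The formulas are the standard central and equidistant projection relations for a ray at angle θs from the optical axis, written under the same simplifying assumptions (no lens distortion), and the coordinate is measured in pixels from the image center.

```python
import math

def seat_x_central(theta_s_rad, fov_rad, width_px):
    """Horizontal image coordinate (pixels from the image center) of a seat
    head rest at angle theta_s from the vehicle's longitudinal direction,
    under a central projection / pinhole model."""
    f_px = width_px / (2.0 * math.tan(fov_rad / 2.0))
    return f_px * math.tan(theta_s_rad)

def seat_x_equidistant(theta_s_rad, fov_rad, width_px):
    """Same coordinate under an equidistant projection: r = f * theta."""
    f_px = width_px / fov_rad
    return f_px * theta_s_rad
```

With a 90-degree field of view and a 1280-pixel-wide image, a seat at 45 degrees projects to the edge of the image (640 pixels from the center) under central projection.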
For example, when the number of seat rows is two, the seating position specification unit 113 stores a seat range in an image for each seat in the front row and each seat in the rear row. For example, when it is specified that a passenger is seated in the rear row, the seating position specification unit 113 compares a range of a face region with a range of each of the seat 0, the seat 1, and the seat 2 (see
The seating position specification unit 113 calculates the superposition rate of the face region and the seat range, in other words, a degree at which the range of the face region and the range of each seat overlap with each other. In the example of
When it is determined that a passenger is seated in the rear row, the seating position specification unit 113 calculates the superposition rate of the face region and the range of the seat 0, the superposition rate of the face region and the range of the seat 1, and the superposition rate of the face region and the range of the seat 2. The seating position specification unit 113 specifies a seat with the highest superposition rate among the seat 0, the seat 1, and the seat 2 as the seat the passenger is seated in.
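One plausible reading of the superposition rate above, namely the fraction of the face region overlapped by each seat range, can be sketched as follows; the box format and the coordinates in the usage example are illustrative assumptions, not values from the disclosure.

```python
def overlap_rate(face, seat):
    """Fraction of the face region covered by the seat range.
    Boxes are (x1, y1, x2, y2) in image coordinates."""
    ix = max(0.0, min(face[2], seat[2]) - max(face[0], seat[0]))
    iy = max(0.0, min(face[3], seat[3]) - max(face[1], seat[1]))
    face_area = (face[2] - face[0]) * (face[3] - face[1])
    return (ix * iy) / face_area if face_area else 0.0

def specify_seat(face, seats):
    """Return the index of the seat range with the highest superposition rate."""
    rates = [overlap_rate(face, s) for s in seats]
    return rates.index(max(rates))

# Illustrative rear-row seat ranges for the seat 0, the seat 1, and the seat 2.
rear_seats = [(0, 100, 200, 300), (220, 100, 420, 300), (440, 100, 640, 300)]
face_region = (250, 150, 350, 250)   # face region lying inside the seat 1
```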
Next, an operation procedure is described.
The seating position specification unit 113 specifies a seat position of the passenger, based on the range of each seat in the seating row being specified in step A3 in the image and the range of the face region (step A4). For example, in step A4, the seating position specification unit 113 calculates a superposition rate of the range of each seat in the seating row being specified in the image and the range of the face region. The seating position specification unit 113 specifies a seat with the highest superposition rate as the seating position of the passenger. The seating position specification unit 113 is capable of outputting the specified seat position of the passenger to an external apparatus, which is not illustrated.
In the present example embodiment, the seating row specification unit 112 specifies the seat row in which the passenger with the face region being detected by the face detection unit 111 is seated. The seating position specification unit 113 specifies the seat position of the passenger in the vehicle, based on the range of the seat position in the image and the seat row being specified by the seating row specification unit 112, the range of the seat position being prepared for each seat row. With this, the seating position determination apparatus 110 is capable of specifying not only the position of the passenger in the vehicle but also the seat the passenger is seated in, based on the camera image. In the present example embodiment, for example, the seating position determination apparatus 110 is capable of specifying the seating position of the passenger, by using a video of one camera 130 installed between the driver's seat and the passenger's seat. In this case, only one camera 130 is required, and the seating position of the passenger can be specified at low cost.
Subsequently, a second example embodiment of the present disclosure is described.
The attribute acquisition unit (attribute acquisition means) 115 acquires attribute information relating to the person being identified by the face recognition unit 114. For example, the attribute acquisition unit 115 acquires the attribute information relating to the person being identified by the face recognition unit 114 from a database in which attribute information relating to a plurality of persons is stored. For example, the attribute information includes information indicating age, gender, an occupation, a hobby, and the like. In the present example embodiment, in addition to the specified seat position of the passenger, the seating position specification unit 113 is capable of outputting information for identifying the passenger or attribute information relating to the passenger to an external apparatus, which is not illustrated.
In the present example embodiment, the face recognition unit 114 subjects the face region being detected to face recognition, and identifies an individual. Further, the attribute acquisition unit 115 acquires the attribute information relating to the passenger. In the present example embodiment, a seating position determination apparatus 110a is capable of specifying not only a seat position of a passenger but also who is seated in which seat. Alternatively, the seating position determination apparatus 110a is capable of specifying which person with specific attribute information is seated in which seat.
Note that description is made above on an example in which the attribute acquisition unit 115 acquires a recognition result from the face recognition unit 114 and acquires attribute information relating to a person identified by the face recognition unit 114. However, the present example embodiment is not limited thereto. For example, the attribute acquisition unit 115 may acquire attribute information such as an age group and gender from the face region being detected by the face detection unit 111.
For example, the seating position determination apparatus 110a according to the present example embodiment may be used in a content distribution system.
The content distribution apparatus 310 acquires information indicating the seat the passenger is seated in from the seating position determination apparatus 110a. Alternatively, the content distribution apparatus 310 acquires information indicating whether a passenger is seated in each seat from the seating position determination apparatus 110a. Further, the content distribution apparatus 310 acquires information for identifying a person seated in each seat or attribute information relating to a person seated in each seat from the seating position determination apparatus 110a. The content distribution apparatus 310 outputs contents to the monitors 320 to 340. For example, the contents being output to the monitors 320 to 340 include an advertisement content and a video content.
For example, the content distribution apparatus 310 outputs a content to a monitor associated with a seat a passenger is seated in. The content distribution apparatus 310 may not output a content to a monitor associated with a seat no one is seated in. When the content distribution apparatus 310 acquires information for identifying a person seated in a seat, the content distribution apparatus 310 may output a content, which is customized according to the person being identified, to the monitor. When attribute information relating to a person seated in a seat is acquired, the content distribution apparatus 310 may output a content according to the attribute information being acquired to the monitor. The content distribution apparatus 310 may distribute a general content to a monitor associated with a seat for which a person is not identified or attribute information is not acquired.
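The distribution rules above can be sketched as a simple dispatch function; the shape of the per-seat record and the returned labels are illustrative assumptions, not an interface defined in the disclosure.

```python
def select_content(seat_state):
    """Decide what the content distribution apparatus sends to the monitor of
    one seat.  seat_state is a dict with optional 'occupied', 'person', and
    'attributes' keys (an assumed record format for illustration)."""
    if not seat_state.get("occupied"):
        return None                          # no output for an empty seat
    if "person" in seat_state:
        return f"customized:{seat_state['person']}"   # person identified
    if "attributes" in seat_state:
        return f"targeted:{seat_state['attributes']}" # attributes only
    return "general"                         # neither identified nor profiled
```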
The information for identifying a person seated in each seat or the attribute information relating to a person seated in each seat, which is acquired by the seating position determination apparatus 110a, may be used for control of the vehicle 200. For example, control of the vehicle 200 may include adjustment of a reclining angle or a front-and-rear position of a seat, a temperature setting or an airflow setting of an air-conditioning apparatus, and the like. For example, the vehicle 200 may adjust a reclining angle or a front-and-rear position of a seat according to the information for identifying a person seated in a seat in the front row or the attribute information relating to a person seated in a seat in the front row. Alternatively, the vehicle 200 may change a setting of the air-conditioning apparatus according to the information for identifying a person seated in each seat or the attribute information relating to a person seated in each seat.
Subsequently, a third example embodiment of the present disclosure is described. The configuration of the seating position determination apparatus according to the present example embodiment is similar to the configuration of the seating position determination apparatus 110 according to the first example embodiment illustrated in
In the first example embodiment, in a case in which the seating row specification unit 112 estimates a distance in the depth direction by using the fact that the distance between the feature points is a constant value, when a passenger turns to a side, the distance between the feature points is reduced, and the distance being estimated in the depth direction is longer than the actual distance in some cases. For example, when a passenger seated in a seat in the frontmost row closest to the camera turns to a side, the distance between both the eyes in the image is equal to or less than a half of the actual distance in some cases. In such a case, when the distance being estimated in the depth direction is longer than the actual distance, the seating row specification unit 112 erroneously specifies a seating row of the passenger seated in the front row as the rear row in some cases. The present example embodiment is an example embodiment that solves at least a part of such a problem.
In the present example embodiment, it is assumed that, when a face region is detected, the face detection unit 111 provides the face region with a tracking ID and tracks the face region for each tracking ID. When a passenger seated in the front row moves to the rear row, it is assumed that the passenger faces backward while moving. In this case, detection of the face region is temporarily interrupted, and thus the face region is not expected to move from the front row to the rear row while maintaining the same tracking ID. In the present example embodiment, the seating row specification unit 112 counts the number of times the seating row of each passenger is specified as the front row. When the number of times the seating row of a passenger is specified as the front row (front row count) is equal to or greater than a certain value, the seating row specification unit 112 specifies that the passenger is seated in the front row as long as the face region is successively detected.
Note that it is assumed that a distance between feature points of a person seated in the rear row does not change by a factor of two or more according to orientation of the face. Thus, it is assumed that the rear row the passenger is seated in is not erroneously recognized as the front row. Further, the description given above relates to a case in which the front row the passenger is seated in is erroneously recognized as the rear row when the distance in the depth direction is estimated based on the distance between the feature points in the face region. However, in the present example embodiment, distance estimation in the depth direction is not particularly limited to estimation based on the distance between the feature points in the face region. The present example embodiment is also applicable to a case other than the case in which a distance in the depth direction is estimated based on the distance between the feature points in the face region.
The seating row specification unit 112 determines whether the face region being detected in step B2 is a new face region (step B3). In other words, the seating row specification unit 112 determines whether the face region being detected in step B2 is the face region being continuously detected from before or a face region being newly detected. For example, the seating row specification unit 112 determines whether the face region being detected is a new face region, based on the tracking ID being provided to the face region by the face detection unit 111.
When it is determined that the face region being detected is a new face region in step B3, the seating row specification unit 112 specifies a seating row for the face region being detected (step B4). Step B4 may be similar to step A3 illustrated in
When it is determined in step B3 that the face region being detected is not a new face region, in other words, that the face region is the face region being previously detected, the seating row specification unit 112 determines whether the front row count is equal to or greater than a predetermined value (step B5). When it is determined that the front row count is equal to or greater than the predetermined value in step B5, the seating row specification unit 112 specifies that the passenger is seated in the front row (step B6). When it is determined that the front row count is not equal to or greater than the predetermined value in step B5, the seating row specification unit 112 proceeds to step B4, and specifies a seating row for the face region being detected.
The seating row specification unit 112 determines whether the row the passenger is seated in is specified as the front row (step B7). When it is determined that the row the passenger is seated in is specified as the front row in step B7, the seating row specification unit 112 adds one to the front row count (step B8). When it is determined that the row the passenger is seated in is not specified as the front row in step B7, the seating row specification unit 112 resets the front row count (step B9).
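Taken together, steps B3 to B9 can be sketched as follows; the threshold value and the per-tracking-ID dictionary are illustrative assumptions, and the per-frame row estimation of step B4 is passed in as a callable.

```python
FRONT_ROW_THRESHOLD = 3  # illustrative choice for the "predetermined value"

def update_row(state, track_id, estimate_row):
    """One iteration of the front-row counting flow (steps B3 to B9).
    state maps tracking ID -> front row count; estimate_row() performs the
    per-frame seating row estimation of step B4 (0 denotes the front row)."""
    if track_id not in state:                      # B3: new face region
        row = estimate_row()                       # B4
    elif state[track_id] >= FRONT_ROW_THRESHOLD:   # B5: count reached
        row = 0                                    # B6: keep the front row
    else:
        row = estimate_row()                       # B5 -> B4
    if row == 0:                                   # B7 -> B8: front-row hit
        state[track_id] = state.get(track_id, 0) + 1
    else:                                          # B7 -> B9: reset the count
        state[track_id] = 0
    return row
```

Once the count reaches the threshold, a single misestimated rear-row result no longer changes the specified row while the same tracking ID persists.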
The seating position specification unit 113 specifies a seating position of the passenger, based on the range of each seat in the seating row being specified in step B4 or B6 in the image and the range of the face region (step B10). Step B10 may be similar to step A4 illustrated in
In the present example embodiment, the seating row specification unit 112 increments the front row count whenever the seating row of the passenger whose face region is successively detected is specified as the front row. When the front row count is equal to or greater than the predetermined value, the seating row specification unit 112 specifies the seating row of the passenger as the front row. In this case, when the seating row of the passenger is successively specified as the front row for the predetermined number of times or more, the seating row is specified as the front row as long as the face detection of the passenger successively succeeds. Thus, in the present example embodiment, the front row the passenger is seated in can be prevented from being erroneously specified as the rear row. Other effects are similar to the effects in the first example embodiment or the second example embodiment.
A fourth example embodiment of the present disclosure is described. The configuration of the seating position determination apparatus according to the present example embodiment is similar to the configuration of the seating position determination apparatus 110 according to the first example embodiment illustrated in
In the third example embodiment, it is assumed that the rear row the passenger is seated in is not erroneously specified as the front row. However, when the passenger seated in the rear row moves laterally while being seated in the rear row, the posture leans forward, and the distance being estimated in the depth direction is reduced in some cases. For example, when the seating row specification unit 112 estimates a distance in the depth direction by using the fact that the distance between the feature points is a constant value, the distance being estimated for the passenger seated in the rear row is reduced in some cases. Specifically, when the passenger leans forward, the distance between both the eyes in the image becomes twice or more the usual distance in some cases. In such a case, when the distance being estimated in the depth direction is shorter than the actual distance, the seating row specification unit 112 erroneously specifies the rear row the passenger is seated in as the front row in some cases. In this case, the premise assumed in the third example embodiment, that the rear row the passenger is seated in is not erroneously specified as the front row, does not hold. The present example embodiment is an example embodiment that solves at least a part of the above-mentioned problem.
In the present example embodiment, even in a case in which the seating row is specified as the front row, the seating row specification unit 112 does not add one to the front row count while the passenger is moving. For example, the seating row specification unit 112 determines whether the passenger is moving, based on whether a position of the face region is changed. More specifically, for example, the seating row specification unit 112 determines whether the passenger is moving, based on a distance between a center of the current face detection frame, in other words, a frame indicating the face region, and a center of the face detection frame a predetermined number of frames earlier. For example, the seating row specification unit 112 may determine whether the passenger is moving, based on whether the distance between the centers of the face detection frames is equal to or less than a predetermined ratio (L %, where L is a positive integer) of the width of the face detection frame. For example, when the distance between the centers of the face detection frames exceeds L % of the width of the face detection frame, the seating row specification unit 112 determines that the passenger is moving. When the passenger is moving, the seating row specification unit 112 resets the front row count without incrementing it. With this, the front row count can be prevented from being incremented to the predetermined value or greater while the passenger moves, and the rear row the passenger is seated in can be prevented from being erroneously specified as the front row.
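The movement judgment above can be sketched as follows; the value used for L and the (x1, y1, x2, y2) box format are illustrative assumptions.

```python
def is_moving(cur_box, prev_box, l_percent=30):
    """Judge passenger movement from the shift of the face detection frame
    center.  prev_box is the face detection frame a predetermined number of
    frames earlier; l_percent plays the role of L (30 is an illustrative choice)."""
    cx, cy = (cur_box[0] + cur_box[2]) / 2, (cur_box[1] + cur_box[3]) / 2
    px, py = (prev_box[0] + prev_box[2]) / 2, (prev_box[1] + prev_box[3]) / 2
    shift = ((cx - px) ** 2 + (cy - py) ** 2) ** 0.5
    width = cur_box[2] - cur_box[0]
    # Moving if the center shift exceeds L % of the face detection frame width.
    return shift > width * l_percent / 100.0
```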
When it is determined that the face region being detected is a new face region in step C3, the seating row specification unit 112 specifies a seating row for the face region being detected (step C4). When it is determined that the face region being detected is not a new face region in step C3, the seating row specification unit 112 determines whether the front row count is equal to or greater than the predetermined value (step C5). When it is determined that the front row count is equal to or greater than the predetermined value in step C5, the seating row specification unit 112 specifies that the row the passenger is seated in is the front row (step C6). When it is determined that the front row count is not equal to or greater than the predetermined value in step C5, the seating row specification unit 112 proceeds to step C4, and specifies a seating row for the face region being detected.
The seating row specification unit 112 determines whether the row the passenger is seated in is specified as the front row (step C7). Steps C1 to C7 may be similar to steps B1 to B7 illustrated in
The seating position specification unit 113 specifies a seating position of the passenger, based on the range of each seat in the seating row being specified in step C4 or C6 in the image and the range of the face region (step C11). Further, steps C8, C10, and C11 may be similar to steps B9, B8, and B10 illustrated in
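The overlap-based specification of step C11 might look like the following sketch. For illustration it assumes that the seat ranges and the face region are compared along the horizontal image axis only; all names are hypothetical.

```python
def specify_seat(face_box, seat_ranges):
    """Pick the seat whose range in the image overlaps the face region most.

    face_box: (left, right) of the face detection frame in the image.
    seat_ranges: {seat_name: (left, right)} for each seat in the specified row.
    Returns the seat with the highest overlap ratio, or None if no overlap.
    """
    best_seat, best_ratio = None, 0.0
    face_left, face_right = face_box
    face_width = face_right - face_left
    for seat, (s_left, s_right) in seat_ranges.items():
        overlap = min(face_right, s_right) - max(face_left, s_left)
        ratio = max(overlap, 0.0) / face_width
        if ratio > best_ratio:
            best_seat, best_ratio = seat, ratio
    return best_seat
```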
In the present example embodiment, the seating row specification unit 112 determines whether the position of the face region has changed for the passenger (face region) whose seating row is specified as the front row. When it is determined that the position of the face region has changed, the seating row specification unit 112 does not increment the front row count. With this, even when a passenger actually seated in the rear row is momentarily specified as being in the front row, the front row count can be prevented from being incremented to the predetermined value or greater, and successive specification of the seating row as the front row can be prevented. Other effects are similar to the effects in the third example embodiment.
Subsequently, a hardware configuration of the seating position determination apparatus 110 is described.
The ROM 502 is a non-volatile storage device. For example, a semiconductor storage device such as a flash memory with relatively small capacity is used as the ROM 502. The ROM 502 stores a program executed by the processor 501.
The above-mentioned program includes a command group (or a software code) for causing a computer to execute one or more of the functions described in the example embodiments when the program is read by the computer. The program may be stored in a non-transitory computer-readable medium or a tangible storage medium. Examples of the computer-readable medium or the tangible storage medium include, but are not limited to, a RAM, a ROM, a flash memory, a solid-state drive (SSD) or other memory technologies, a compact disc (CD), a digital versatile disc (DVD), a Blu-ray (registered trademark) disc or other optical disc storages, and a magnetic cassette, a magnetic tape, a magnetic disc storage, or other magnetic storage devices. The program may be transmitted via a transitory computer-readable medium or a communication medium. Examples of the transitory computer-readable medium or the communication medium include, but are not limited to, a propagation signal in an electric, optical, acoustic, or other form.
The RAM 503 is a volatile storage device. Various types of semiconductor memory devices such as a dynamic random access memory (DRAM) and a static random access memory (SRAM) are used as the RAM 503. The RAM 503 may be used as an internal buffer for temporarily storing data and the like.
The processor 501 loads the program being stored in the ROM 502 into the RAM 503, and executes the program. When the processor 501 executes the program, the functions of the units in the seating position determination apparatus 110 may be realized.
While the example embodiments of the present disclosure are described above in detail, the present disclosure is not limited to the above-mentioned example embodiments, and modifications or amendments that are made to the above-mentioned example embodiments within the scope of the present disclosure are also considered part of the present disclosure.
For example, the whole or a part of the example embodiments described above can be described as, but not limited to, the following supplementary notes.
(Supplementary Note 1)
A seating position determination apparatus including:
a face detection means for detecting a face region of a passenger from an image captured by a camera installed inside a vehicle in which a plurality of seat rows are arranged;
a seating row specification means for specifying a seat row in which the passenger with the face region being detected is seated; and
a seating position specification means for specifying a seat position of the passenger in the vehicle, based on a range of a seat position in the image and the seat row being specified, the range of the seat position being prepared for each seat row.
(Supplementary Note 2)
The seating position determination apparatus according to Supplementary Note 1, wherein the seating position specification means specifies a seat position of the passenger, based on a position of the face region and a position range of each of a plurality of seats in the seat row being specified.
(Supplementary Note 3)
The seating position determination apparatus according to Supplementary Note 1 or 2, wherein the seating position specification means specifies a seat position of the passenger, based on a ratio of the face region overlapping with a position range of each of a plurality of seats in the seat row being specified.
(Supplementary Note 4)
The seating position determination apparatus according to Supplementary Note 3, wherein the seating position specification means specifies a seat having the highest ratio in the seat row being specified as a seat position of the passenger.
(Supplementary Note 5)
The seating position determination apparatus according to any one of Supplementary Notes 1 to 4, wherein the seat rows include a first row on a side closer to the camera in a longitudinal direction of the vehicle and a second row behind the first row, the face detection means tracks the detected face region in a time direction, and the seating row specification means counts, as a front row count, the number of times the first row is specified as a row in which the passenger is seated, and specifies the first row as a row in which the passenger is seated when the front row count for the passenger with the face region being successively detected by the face detection means is equal to or greater than a predetermined value.
(Supplementary Note 6)
The seating position determination apparatus according to Supplementary Note 5, wherein, when the second row is specified as a row in which the passenger is seated, the seating row specification means resets the front row count.
(Supplementary Note 7)
The seating position determination apparatus according to Supplementary Note 5 or 6, wherein the seating row specification means determines whether a position of the face region changes when the first row is specified as a row in which the passenger is seated, and resets the front row count when it is determined that a position of the face region changes.
(Supplementary Note 8)
The seating position determination apparatus according to any one of Supplementary Notes 1 to 7, wherein the seating row specification means extracts a plurality of feature points from an image of the face region, estimates a position in the longitudinal direction of the vehicle, based on a distance between a plurality of the extracted feature points, and specifies the seat row, based on the position being estimated in the longitudinal direction.
(Supplementary Note 9)
The seating position determination apparatus according to Supplementary Note 8, wherein, while assuming that a distance between a plurality of feature points in an actual space is a constant value, the seating row specification means estimates a position in the longitudinal direction of the vehicle, based on a distance between a plurality of the extracted feature points and a distance between a plurality of the feature points in the actual space.
(Supplementary Note 10)
The seating position determination apparatus according to any one of Supplementary Notes 1 to 9, further including a face recognition means for subjecting an image of the detected face region to face recognition and specifying the passenger with the face region being detected.
(Supplementary Note 11)
The seating position determination apparatus according to any one of Supplementary Notes 1 to 10, further including an attribute acquisition means for acquiring attribute information relating to the passenger with the face region being detected.
(Supplementary Note 12)
The seating position determination apparatus according to any one of Supplementary Notes 1 to 11, wherein the seating row specification means acquires control information relating to the seat and specifies a seat row in which the passenger is seated by using the acquired control information relating to the seat.
(Supplementary Note 13)
A seating position determination system including:
a camera being installed in a vehicle in which a plurality of seat rows are arranged and being configured to capture an image of an inside of the vehicle; and
a seat position determination apparatus configured to acquire an image captured by the camera and to specify a seat position of a passenger in the vehicle by using the acquired image, wherein the seat position determination apparatus includes:
a face detection means for detecting a face region of the passenger from the acquired image;
a seating row specification means for specifying a seat row in which the passenger with the face region being detected is seated; and
a seating position specification means for specifying a seat position of the passenger in the vehicle, based on a range of a seat position in the image and the seat row being specified, the range of the seat position being prepared for each seat row.
(Supplementary Note 14)
The seating position determination system according to Supplementary Note 13, wherein the seating position specification means specifies a seat position of the passenger, based on a position of the face region and a position range of each of a plurality of seats in the seat row being specified.
(Supplementary Note 15)
The seating position determination system according to Supplementary Note 13 or 14, wherein the seating position specification means specifies a seat position of the passenger, based on a ratio of the face region overlapping with a position range of each of a plurality of seats in the seat row being specified.
(Supplementary Note 16)
A seating position determination method including:
detecting a face region of a passenger from an image captured by a camera installed inside a vehicle in which a plurality of seat rows are arranged;
specifying a seat row in which the passenger with the face region being detected is seated; and
specifying a seat position of the passenger in the vehicle, based on a range of a seat position in the image and the seat row being specified, the range of the seat position being prepared for each seat row.
(Supplementary Note 17)
A non-transitory computer-readable medium configured to store a program for causing a processor to execute processing of:
detecting a face region of a passenger from an image captured by a camera installed inside a vehicle in which a plurality of seat rows are arranged;
specifying a seat row in which the passenger with the face region being detected is seated; and
specifying a seat position of the passenger in the vehicle, based on a range of a seat position in the image and the seat row being specified, the range of the seat position being prepared for each seat row.
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/JP2021/043434 | 11/26/2021 | WO |