This application claims priority to Japanese Patent Application No. 2021-193954 filed on Nov. 30, 2021, incorporated herein by reference in its entirety.
The present disclosure relates to an image processing system.
A user who likes to drive may wish to photograph the appearance of his or her vehicle while it is traveling. The user can post (upload) a photographed image to, for example, a social networking service (hereinafter referred to as "SNS") so that many people can see it. However, it is difficult for the user to photograph the appearance of the traveling vehicle while the user drives. Therefore, a service that photographs the appearance of the traveling vehicle has been proposed. For example, Japanese Unexamined Patent Application Publication No. 2019-121319 (JP 2019-121319 A) discloses a vehicle photographing support device.
The vehicle photographing support device described in JP 2019-121319 A includes a specification unit that specifies an external camera configured to photograph a vehicle in a traveling state from the outside, an instruction unit that instructs the external camera specified by the specification unit to photograph the vehicle in a traveling state, and an acquisition unit that acquires a photographed image obtained by the external camera in response to an instruction by the instruction unit.
The vehicle photographing support device further includes a reception unit that receives photographing request information input by the user, and the photographing request information includes, for example, information about an editing pattern specified by the user.
However, the user cannot know in advance what kind of image will be obtained from the image photographed by the external camera, so the editing pattern specified by the user may not result in an attractive image.
The present disclosure provides an image processing system capable of acquiring an attractive image of a traveling vehicle.
An image processing system according to the present disclosure includes a memory configured to store moving image data photographed by a camera, and a processor configured to perform image processing on the moving image data stored in the memory, extract a plurality of frames in which a target vehicle registered in advance is imaged from a moving image photographed by the camera, and select a frame in which the target vehicle is positioned at a specific position among the frames.
Features, advantages, and technical and industrial significance of exemplary embodiments of the disclosure will be described below with reference to the accompanying drawings, in which like signs denote like elements.
Hereinafter, embodiments of the present disclosure will be described in detail with reference to the drawings. The same or corresponding parts in the drawings are designated by the same reference numerals, and the description thereof will not be repeated.
System Configuration
The photographing system 1 is installed, for example, near a road, and photographs a vehicle 9 traveling on the road.
The server 2 is, for example, an in-house server of a business operator that provides a vehicle photographing service. The server 2 may be a cloud server provided by a cloud server management company. The server 2 generates an image for a user to view (hereinafter, also referred to as “viewing image”) from the moving image received from the photographing system 1, and provides the generated viewing image to the user. The viewing image is generally a still image, but may be a short moving image. The user is often the driver of the vehicle 9, but is not particularly limited.
The processor 11 controls the overall operation of the photographing system 1. The memory 12 stores a program (an operating system and an application program) executed by the processor 11 and data (a map, a table, a mathematical formula, a parameter, and the like) used in the program. Further, the memory 12 temporarily stores the moving image photographed by the photographing system 1.
The recognition camera 13 photographs a moving image (hereinafter, also referred to as "identification moving image") for the processor 11 to recognize the number of the license plate provided on the vehicle 9. The viewing camera 14 photographs a moving image (hereinafter, also referred to as "viewing moving image") used for generating the viewing image. Each of the recognition camera 13 and the viewing camera 14 is desirably a high-sensitivity camera with a polarizing lens.
The communication IF 15 is an interface for communicating with the server 2. The communication IF 15 is, for example, a communication module compliant with the fourth-generation (4G) or fifth-generation (5G) mobile communication standard.
In the example described below, the vehicle 9 includes a target vehicle TV, which is a vehicle registered in the vehicle photographing service, and an other vehicle OV, which is not registered.
The target vehicle TV and the other vehicle OV are not limited to four-wheeled vehicles.
The processor 21 executes various arithmetic processing on the server 2. The memory 22 stores a program executed by the processor 21 and data used in the program. Further, the memory 22 stores data used for image processing by the server 2 and stores image-processed data by the server 2. The input device 23 receives the input of the administrator of the server 2. The input device 23 is typically a keyboard and a mouse. The display 24 displays various information. The communication IF 25 is an interface to communicate with the photographing system 1.
Functional Configuration of Image Processing System
The identification moving image photographing unit 31 photographs an identification moving image for the number recognition unit 342 to recognize the number of the license plate. The identification moving image photographing unit 31 outputs the identification moving image to the vehicle extraction unit 341. The identification moving image photographing unit 31 corresponds to the recognition camera 13 described above.
The viewing moving image photographing unit 32 photographs the viewing moving image of the vehicle 9 for the user to view. The viewing moving image photographing unit 32 outputs the viewing moving image to the moving image buffer 346. The viewing moving image photographing unit 32 corresponds to the viewing camera 14 described above.
The communication unit 33 performs bidirectional communication with a communication unit 42 (described later) of the server 2 via the network NW. The communication unit 33 receives the number of the target vehicle from the server 2. Further, the communication unit 33 transmits a viewing moving image (more specifically, a moving image cut out from the viewing moving image so as to include the target vehicle) to the server 2. The communication unit 33 corresponds to the communication IF 15 described above.
The vehicle extraction unit 341 extracts vehicles (all vehicles, not only the target vehicle) from the identification moving image. This processing is also referred to as "vehicle extraction processing". For the vehicle extraction processing, for example, a trained model generated by a machine learning technique such as deep learning can be used. In this example, the vehicle extraction unit 341 is realized by a "vehicle extraction model". The vehicle extraction model will be described later.
The number recognition unit 342 recognizes the number of the license plate from the moving image in which the vehicle has been extracted by the vehicle extraction unit 341. This processing is also referred to as "number recognition processing". A trained model generated by a machine learning technique such as deep learning can also be used for the number recognition processing. In this example, the number recognition unit 342 is realized by a "number recognition model". The number recognition model will be described later.
The matching processing unit 343 associates the vehicle extracted by the vehicle extraction unit 341 with the number recognized by the number recognition unit 342. This processing is also referred to as "matching processing". Specifically, for each frame, the matching processing unit 343 associates the recognized number with the vehicle whose extracted range contains the coordinates of the recognized license plate.
The target vehicle selection unit 344 selects, from the vehicles with which numbers have been associated by the matching processing, the vehicle whose number matches the number of the target vehicle received from the server 2 as the target vehicle. The target vehicle selection unit 344 outputs the vehicle selected as the target vehicle to the feature amount extraction unit 345.
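By way of non-limiting illustration, the matching processing and the target vehicle selection described above might be sketched in Python as follows. The data layout (per-frame bounding boxes and license plate coordinates) and all function names are assumptions for illustration, not the actual implementation.

```python
# Illustrative sketch only: associates recognized plate numbers with extracted
# vehicles (matching processing) and selects the target vehicle by number.

def match_numbers_to_vehicles(vehicle_boxes, plate_results):
    """Associate each recognized number with the vehicle whose bounding box
    contains the coordinates of the recognized license plate.

    vehicle_boxes: list of (x1, y1, x2, y2) boxes from the vehicle extraction model
    plate_results: list of ((px, py), number) pairs from the number recognition model
    """
    matches = {}
    for (px, py), number in plate_results:
        for (x1, y1, x2, y2) in vehicle_boxes:
            if x1 <= px <= x2 and y1 <= py <= y2:
                matches[number] = (x1, y1, x2, y2)  # this vehicle carries this plate
                break
    return matches

def select_target_vehicle(matches, target_number):
    """Return the box of the vehicle whose number matches the number of the
    target vehicle received from the server 2, or None if it is absent."""
    return matches.get(target_number)
```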
The feature amount extraction unit 345 extracts the feature amount of the target vehicle by analyzing the moving image including the target vehicle. More specifically, the feature amount extraction unit 345 calculates the traveling speed of the target vehicle based on the temporal change of the target vehicle between the frames including the target vehicle (for example, the movement amount of the target vehicle between the frames, and the change amount of the size of the target vehicle between the frames). The feature amount extraction unit 345 may calculate, for example, the acceleration (deceleration) of the target vehicle in addition to the traveling speed of the target vehicle. Further, the feature amount extraction unit 345 extracts information on the appearance (body shape, body color, and the like) of the target vehicle by using a known image recognition technique. The feature amount extraction unit 345 outputs the feature amount (traveling state and appearance) of the target vehicle to the moving image cutting unit 347. Further, the feature amount extraction unit 345 outputs the feature amount of the target vehicle to the communication unit 33. As a result, the feature amount of the target vehicle is transmitted to the server 2.
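As a non-limiting sketch, the traveling speed might be estimated from the movement of the bounding box of the target vehicle between consecutive frames as follows. The image scale (PIXELS_PER_METER) and the frame rate are illustrative assumptions; an actual system would require camera calibration.

```python
# Illustrative sketch: estimating the traveling speed and acceleration from
# the temporal change of the target vehicle between frames.

PIXELS_PER_METER = 40.0  # assumed image scale at the vehicle's distance
FPS = 30.0               # assumed frame rate of the recognition camera 13

def box_center(box):
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

def estimate_speed(box_prev, box_curr):
    """Speed (m/s) from the movement of the bounding-box center between
    two consecutive frames."""
    (cx0, cy0), (cx1, cy1) = box_center(box_prev), box_center(box_curr)
    pixels = ((cx1 - cx0) ** 2 + (cy1 - cy0) ** 2) ** 0.5
    return (pixels / PIXELS_PER_METER) * FPS

def estimate_acceleration(speed_prev, speed_curr):
    """Acceleration (m/s^2) from two consecutive speed estimates."""
    return (speed_curr - speed_prev) * FPS
```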
The moving image buffer 346 temporarily stores the viewing moving image. The moving image buffer 346 is typically a ring buffer (circular buffer), and has an annular storage area in which the beginning and the end of a one-dimensional array are logically connected. The newly photographed viewing moving image is retained in the moving image buffer 346 for a predetermined time determined by the capacity of the storage area, and a viewing moving image older than the predetermined time is automatically deleted from the moving image buffer 346.
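A minimal sketch of such a buffer follows; a deque with a maximum length reproduces the described behavior, in which new frames automatically push out frames that exceed the retention capacity. The retention time and frame rate are assumed values.

```python
import collections

FPS = 30
BUFFER_SECONDS = 60  # assumed retention time; the actual value is a design choice

class MovingImageBuffer:
    """Ring-buffer-like store for the most recent viewing moving image."""

    def __init__(self, fps=FPS, seconds=BUFFER_SECONDS):
        # Oldest frames are dropped automatically once maxlen is exceeded.
        self._frames = collections.deque(maxlen=fps * seconds)

    def push(self, timestamp, frame):
        """Store a newly photographed frame."""
        self._frames.append((timestamp, frame))

    def window(self, t_start, t_end):
        """Return the frames whose timestamps fall within [t_start, t_end]."""
        return [f for (t, f) in self._frames if t_start <= t <= t_end]
```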
The moving image cutting unit 347 cuts out, from the viewing moving image stored in the moving image buffer 346, the portion in which the target vehicle is likely to be photographed, based on the feature amount (the traveling speed, acceleration, body shape, body color, and the like of the target vehicle) extracted by the feature amount extraction unit 345. More specifically, the distance between the point photographed by the identification moving image photographing unit 31 (the recognition camera 13) and the point photographed by the viewing moving image photographing unit 32 (the viewing camera 14) is known. Therefore, when the traveling speed (and acceleration) of the target vehicle is known, the moving image cutting unit 347 can calculate the time difference between the timing at which the target vehicle is photographed by the identification moving image photographing unit 31 and the timing at which the target vehicle is photographed by the viewing moving image photographing unit 32. The moving image cutting unit 347 calculates the latter timing from the former timing and the time difference. Then, the moving image cutting unit 347 cuts out a moving image having a predetermined time width (for example, several seconds to several tens of seconds) including the timing at which the target vehicle is photographed from the viewing moving image stored in the moving image buffer 346. The moving image cutting unit 347 outputs the cut-out viewing moving image to the communication unit 33. As a result, the viewing moving image including the target vehicle is transmitted to the server 2.
The moving image cutting unit 347 may cut out the viewing moving image at a predetermined timing regardless of the feature amount extracted by the feature amount extraction unit 345. That is, the moving image cutting unit 347 may cut out the viewing moving image photographed by the viewing moving image photographing unit 32 after a predetermined time difference from the timing when the target vehicle is photographed by the identification moving image photographing unit 31.
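The timing calculation described above might be sketched as follows; the distance between the two photographing points and the width of the cut-out window are illustrative assumptions.

```python
# Illustrative sketch of the cut-out timing: the known distance between the
# recognition camera point and the viewing camera point, divided by the
# estimated speed, gives the time difference between the two photographing
# timings. The constants are assumptions for illustration.

CAMERA_DISTANCE_M = 100.0  # assumed distance between the two photographed points

def cutout_window(t_recognition, speed_mps, half_width_s=5.0):
    """Return the (start, end) interval of the viewing moving image to cut
    out, centered on the estimated passage time at the viewing camera."""
    time_difference = CAMERA_DISTANCE_M / max(speed_mps, 0.1)  # guard against zero speed
    t_viewing = t_recognition + time_difference
    return (t_viewing - half_width_s, t_viewing + half_width_s)

# Usage with the buffer sketched above:
#   start, end = cutout_window(t_recognition=12.4, speed_mps=16.7)
#   clip = buffer.window(start, end)
```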
The server 2 includes a storage unit 41, the communication unit 42, and an arithmetic processing unit 43. The storage unit 41 includes an image storage unit 411 and a registration information storage unit 412. The arithmetic processing unit 43 includes a vehicle extraction unit 431, a target vehicle specification unit 432, a frame extraction unit 433A, an image processing unit 433B, an album creation unit 434, a web service management unit 435, and a photographing system management unit 436.
The image storage unit 411 stores the final image obtained as a result of the arithmetic processing by the server 2. More specifically, the image storage unit 411 stores the images before and after processing by the frame extraction unit 433A and the image processing unit 433B, and stores the album created by the album creation unit 434.
The registration information storage unit 412 stores registration information related to the vehicle photographing service. The registration information includes the personal information of the user who applied for the provision of the vehicle photographing service and the vehicle information of the user. The personal information of the user includes, for example, information related to the identification number (ID), the name, the date of birth, the address, the telephone number, and the e-mail address of the user. The vehicle information of the user includes information related to the number of the license plate of the vehicle. The vehicle information may further include, for example, information related to a vehicle type, a model year, a body shape (sedan, wagon, one-box, and the like), and a body color.
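For illustration only, the registration information might be organized as in the following sketch; the field names are assumptions derived from the items listed above.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class UserInfo:
    user_id: str
    name: str
    date_of_birth: str
    address: str
    telephone_number: str
    email_address: str

@dataclass
class VehicleInfo:
    license_plate_number: str          # used to identify the target vehicle
    vehicle_type: Optional[str] = None
    model_year: Optional[int] = None
    body_shape: Optional[str] = None   # e.g., "sedan", "wagon", "one-box"
    body_color: Optional[str] = None
```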
The communication unit 42 performs bidirectional communication with the communication unit 33 of the photographing system 1 via the network NW. The communication unit 42 transmits the number of the target vehicle to the photographing system 1. Further, the communication unit 42 receives the viewing moving image including the target vehicle and the feature amount (a traveling state and appearance) of the target vehicle from the photographing system 1. The communication unit 42 corresponds to the communication IF 25 described above.
The vehicle extraction unit 431 extracts vehicles (all vehicles, not only the target vehicle) from the viewing moving image. For this processing, a vehicle extraction model can be used in the same manner as in the vehicle extraction processing by the vehicle extraction unit 341 of the photographing system 1. The vehicle extraction unit 431 outputs a moving image (frames including the vehicles) in which the vehicles are extracted from the viewing moving image to the target vehicle specification unit 432.
The target vehicle specification unit 432 specifies the target vehicle, from among the vehicles extracted by the vehicle extraction unit 431, based on the feature amount of the target vehicle (that is, a traveling state such as a traveling speed and an acceleration, and an appearance such as a body shape and a body color). This processing is also referred to as "target vehicle specification processing". A trained model generated by a machine learning technique such as deep learning can also be used for the target vehicle specification processing. In this example, the target vehicle specification unit 432 is realized by a "target vehicle specification model". The target vehicle specification model will be described later.
The frame extraction unit 433A extracts an image (frame) in which the target vehicle is positioned at a predetermined specific position from the viewing moving image output from the target vehicle specification unit 432. This processing is also referred to as "frame extraction processing". The specific position is not limited to one point, and a plurality of points may be set.
An imaging range R0 of the viewing moving image photographing unit 32 is fixed, and the specific positions P1, P2, P3 are predetermined positions within the imaging range R0.
The viewing moving image output by the target vehicle specification unit 432 to the frame extraction unit 433A includes the image data of each frame, together with position information of the target vehicle and information indicating the range occupied by the target vehicle in each frame. Based on the position information of the target vehicle and the range occupied by the target vehicle in each frame, the frame extraction unit 433A determines whether the target vehicle is positioned at any of the specific positions P1, P2, P3 in each frame.
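By way of non-limiting illustration, this determination might be implemented as follows; the pixel coordinates of the specific positions and the point-in-box test are assumptions for illustration.

```python
# Illustrative sketch: checking whether the range occupied by the target
# vehicle covers one of the specific positions P1, P2, P3. The coordinates
# are assumed values inside the fixed imaging range R0.

SPECIFIC_POSITIONS = {
    "P1": (320, 400),
    "P2": (960, 420),
    "P3": (1600, 400),
}

def positions_hit(vehicle_box):
    """Return the names of the specific positions covered by the target
    vehicle's bounding box in this frame."""
    x1, y1, x2, y2 = vehicle_box
    return [name for name, (px, py) in SPECIFIC_POSITIONS.items()
            if x1 <= px <= x2 and y1 <= py <= y2]

def extract_frames(frames_with_boxes):
    """Keep only the frames in which the target vehicle sits at a specific
    position. Each item is assumed to be a (frame, vehicle_box) pair."""
    return [frame for (frame, box) in frames_with_boxes if positions_hit(box)]
```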
The specific positions P1, P2, P3 can be determined by verifying in advance which positions make the traveling posture of the target vehicle look aesthetically pleasing. In particular, when the imaging range R0 of the viewing moving image photographing unit 32 is fixed, the traveling posture of the target vehicle at each candidate position can be grasped in advance, and thus such verification is possible. The specific position may also be a single point.
A good traveling posture includes a posture of the vehicle with a dynamic feeling of cornering and a posture of the vehicle with a feeling of speed during straight-line travel. The posture with a dynamic feeling of cornering includes the posture when entering a corner, the posture during cornering, the posture when exiting the corner, and the like.
For the frame extraction processing, for example, a trained model (frame extraction model) generated by a machine learning technique such as deep learning can be used. The "frame extraction model" will be described later.
The extraction images IM1, IM2, IM3 are frames in which the target vehicle TV is positioned at the specific positions P1, P2, P3, respectively.
The frame extraction unit 433A outputs at least one extracted frame (extraction image) to the image processing unit 433B. In the present embodiment, the extraction images IM1, IM2, IM3 are output to the image processing unit 433B.
The image processing unit 433B crops the extraction image such that the cropped image includes the target vehicle TV. This processing is called "vehicle cropping processing".
In the vehicle cropping processing, the range occupied by the target vehicle TV and the range occupied by the surroundings other than the target vehicle are determined in the extraction image.
Then, based on, for example, the rule of thirds, the target vehicle TV and part of the surrounding image are cropped from the extraction image. By performing the vehicle cropping processing in this way, a final image that follows the rule of thirds is acquired. The composition rule is not limited to the rule of thirds, and various composition rules such as the rule of fourths, the triangle composition, and the center composition may be adopted.
When the vehicle cropping processing is performed, the background and the like included in the surrounding image may be taken into consideration. For example, when the image processing unit 433B determines that the surrounding image includes an image of, for example, the other vehicle OV, the image processing unit 433B performs the vehicle cropping processing such that the cropped image does not include an exclusion target such as the other vehicle.
In this case, the image processing unit 433B detects objects around the target vehicle TV by using an object detection model such as You Only Look Once (YOLO). Then, the image processing unit 433B specifies an exclusion target such as the other vehicle OV, and performs the vehicle cropping processing on the extraction image such that the exclusion target is not included in the cropped image.
When the vehicle cropping processing is performed, the cropped image may be made to include a specific background in the surrounding image, for example, the sea or a mountain.
When the imaging range R0 of the viewing moving image photographing unit 32 is fixed, the position of the target to be included (inclusion target) as the background is fixed.
In this case, the image processing unit 433B acquires information indicating the position and the range of the inclusion target in advance. Then, in the vehicle cropping processing, the extraction image is cropped such that the cropped image includes the target vehicle TV and the inclusion target, does not include the exclusion target, and positions the target vehicle TV in accordance with the rule of thirds or the like. An object recognized as an inclusion target by the object detection model may also be included in the cropped image as the inclusion target.
Since the target vehicle TV faces diagonally forward when the target vehicle TV is positioned at the specific position P1, the image processing unit 433B adopts the rule of thirds map CM1.
The image processing unit 433B superimposes the rule of thirds map CM1 on the extraction image IM1. The rule of thirds map CM1 has a rectangular shape, and the rule of thirds map CM1 includes an outline of a rectangular shape, vertical division lines L1, L2, and horizontal division lines L3, L4.
The image processing unit 433B adjusts the size of the rule of thirds map CM1 such that the target range R1 and the inclusion range R3 are included in the rule of thirds map CM1 and the exclusion range R2 is not included. Further, the image processing unit 433B disposes the rule of thirds map CM1 such that an intersection P10 of the vertical division line L1 and the horizontal division line L3 is positioned within the target range R1. Then, the image processing unit 433B crops the extraction image IM1 along the outer shape of the rule of thirds map CM1. In this way, the image processing unit 433B can create the final image FIM1 by performing the vehicle cropping processing on the extraction image IM1.
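The size adjustment and placement described above might be sketched as follows. The fixed aspect ratio, the placement of the intersection P10 at one third of the crop width and height, and the shrink-and-retry search are illustrative simplifications; handling of the inclusion range R3 is omitted for brevity.

```python
# Illustrative sketch of the vehicle cropping processing with the rule of
# thirds: the crop is placed so that the center of the target range R1 falls
# on the upper-left thirds intersection (P10), and the crop is shrunk until
# no exclusion range overlaps it. All constants are assumptions.

def rule_of_thirds_crop(image_w, image_h, target_box, crop_w, crop_h):
    """Place a crop_w x crop_h rectangle so that its upper-left thirds
    intersection coincides with the center of the target box, clamped to the frame."""
    tx = (target_box[0] + target_box[2]) / 2.0
    ty = (target_box[1] + target_box[3]) / 2.0
    x = min(max(tx - crop_w / 3.0, 0.0), image_w - crop_w)
    y = min(max(ty - crop_h / 3.0, 0.0), image_h - crop_h)
    return (x, y, crop_w, crop_h)

def boxes_overlap(a, b):
    return not (a[2] <= b[0] or b[2] <= a[0] or a[3] <= b[1] or b[3] <= a[1])

def crop_avoiding_exclusion(image_w, image_h, target_box, exclusion_boxes, aspect=1.5):
    """Shrink the rule-of-thirds crop until no exclusion range is included."""
    w = min(float(image_w), image_h * aspect)  # largest crop that fits the frame
    h = w / aspect
    while w > (target_box[2] - target_box[0]):
        x, y, cw, ch = rule_of_thirds_crop(image_w, image_h, target_box, w, h)
        crop_box = (x, y, x + cw, y + ch)
        if not any(boxes_overlap(crop_box, e) for e in exclusion_boxes):
            return (int(x), int(y), int(cw), int(ch))
        w *= 0.9  # reduce the map size and retry
        h *= 0.9
    return None  # no valid crop found
```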
Although the description has been given based on the extraction image IM1, the image processing unit 433B also performs the vehicle cropping processing on the extraction image IM2 and the extraction image IM3.
The image processing unit 433B disposes the center composition map CM2 such that the center of the target range R1 is positioned within the virtual circle CL.
In the state in which the target vehicle TV is positioned at the specific position P2, the target vehicle TV faces directly to the side. Therefore, the image processing unit 433B selects the center composition map CM2 as the composition map. In this way, when the specific position is specified in advance, the composition map to be adopted is changed according to the specific position.
Also in the extraction image IM2, in which the vehicle is positioned at the specific position P2, the image of the target vehicle TV may be cropped by using the rule of thirds map CM1 or the like.
The image processing unit 433B selects the rule of thirds map CM3 because the target vehicle TV positioned at the specific position P3 faces diagonally.
Then, the image processing unit 433B adjusts the size of the rule of thirds map CM3 such that the target range R1 and the inclusion range R3 are included in the rule of thirds map CM3 and the exclusion range R2 is not included.
Then, the image processing unit 433B can create the final image FIM3 by cropping the extraction image IM3 along the outer shape of the rule of thirds map CM3.
For the vehicle cropping processing, for example, a trained model (vehicle cropping model) generated by a machine learning technique such as deep learning can be used. The "vehicle cropping model" will be described later.
The album creation unit 434 creates an album by using the final images obtained by the image processing unit 433B, and stores the created album in the image storage unit 411.
The web service management unit 435 provides a web service (for example, an application program that can be linked to the SNS) by using an album created by the album creation unit 434. The web service management unit 435 may be implemented on a server different from the server 2.
The photographing system management unit 436 manages (monitors and diagnoses) the photographing system 1. When some abnormality (a camera failure, a communication failure, and the like) occurs in the photographing system 1 under management, the photographing system management unit 436 notifies the administrator of the server 2 of the abnormality. As a result, the administrator can take measures such as inspection and repair of the photographing system 1. Like the web service management unit 435, the photographing system management unit 436 may be implemented on a server different from the server 2.
Trained Model
A large amount of training data is prepared in advance by a developer. The training data includes example data and correct answer data. The example data is image data including the vehicle that is an extraction target. The correct answer data includes the extraction result corresponding to the example data. Specifically, the correct answer data is image data in which the vehicle included in the example data is extracted.
A learning system 61 trains the estimation model 51 by using the example data and the correct answer data. The learning system 61 includes an input unit 611, an extraction unit 612, and a learning unit 613.
The input unit 611 receives the large amount of example data (image data) prepared by the developer and outputs the example data to the extraction unit 612.
By inputting the example data from the input unit 611 into the estimation model 51, the extraction unit 612 extracts the vehicle included in the example data for each example data. The extraction unit 612 outputs the extraction result (the output from the estimation model 51) to the learning unit 613.
The learning unit 613 trains the estimation model 51 based on the extraction result of the vehicle from the example data received from the extraction unit 612 and the correct answer data corresponding to the example data. Specifically, the learning unit 613 adjusts the parameter 512 (for example, a weighting coefficient) such that the extraction result of the vehicle obtained by the extraction unit 612 approaches the correct answer data.
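By way of non-limiting illustration, the parameter adjustment performed by the learning unit 613 might look like the following PyTorch-style sketch. The model architecture, loss function, and optimizer are assumptions; the disclosure specifies only that a parameter 512 (for example, a weighting coefficient) is adjusted so that the extraction result approaches the correct answer data.

```python
import torch
import torch.nn as nn

def train(estimation_model: nn.Module, dataset, epochs=10, lr=1e-4):
    """Adjust the model parameters so that the extraction result approaches
    the correct answer data (the role of the learning unit 613)."""
    optimizer = torch.optim.Adam(estimation_model.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()  # assumed loss for a per-pixel vehicle mask

    for _ in range(epochs):
        for example, correct_answer in dataset:     # example data / correct answer data
            prediction = estimation_model(example)  # extraction result (extraction unit 612)
            loss = loss_fn(prediction, correct_answer)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()  # adjust the parameter 512 (weighting coefficients)
```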
The estimation model 51 is trained as described above, and the estimation model 51 for which the learning is completed is stored in the vehicle extraction unit 341 (and the vehicle extraction unit 431) as the vehicle extraction model 71. The vehicle extraction model 71 receives the identification moving image as an input and outputs the identification moving image in which the vehicle is extracted. For each frame of the identification moving image, the vehicle extraction model 71 outputs the extracted vehicle to the matching processing unit 343 in association with the identifier of the frame. The frame identifier is, for example, a timestamp (the time information of the frame).
The estimation model 52 for which learning is completed is stored in the number recognition unit 342 as a number recognition model 72. The number recognition model 72 receives the identification moving image in which the vehicle is extracted by the vehicle extraction unit 341 as an input, and outputs the coordinates and the number of the license plate. For each frame of the identification moving image, the number recognition model 72 outputs the coordinates and the number of the recognized license plate to the matching processing unit 343 in association with the identifier of the frame.
The estimation model 53 for which learning is completed is stored in the target vehicle specification unit 432 as the target vehicle specification model 73. The target vehicle specification model 73 receives the viewing moving image in which the vehicle is extracted by the vehicle extraction unit 431 and the feature amount (a traveling state and appearance) of the target vehicle as inputs, and outputs the viewing moving image in which the target vehicle is specified. For each frame of the viewing moving image, the target vehicle specification model 73 outputs the specified viewing moving image to the frame extraction unit 433A in association with the identifier of the frame.
The vehicle extraction processing is not limited to the processing using machine learning. A known image recognition technique (an image recognition model and an algorithm) that does not use machine learning can be applied to the vehicle extraction processing. The same also applies to the number recognition processing and the target vehicle specification processing.
The example data is a plurality of image frames including the vehicle to be recognized. The correct answer data includes the extraction result corresponding to the example data. Specifically, the correct answer data is an image frame, selected from the image frames of the example data, in which the vehicle has a good traveling posture.
Although the example data and the correct answer data are different, the learning method of the estimation model 54 by the learning system 64 is the same as the learning method by the learning system 61 and the like, and thus the detailed description is not repeated.
The estimation model 54 for which learning is completed is stored in the frame extraction unit 433A as the frame extraction model 74. The frame extraction model 74 receives the viewing moving image in which the target vehicle is specified as an input, and outputs the frame in which the target vehicle having a good traveling posture is imaged as an extraction image to the image processing unit 433B.
The frame extraction processing is not limited to the processing using machine learning. A known image recognition technique (an image recognition model and an algorithm) that does not use machine learning can be applied to the frame extraction processing.
The example data is a plurality of image frames including the vehicle to be recognized. The image frame of the example data desirably includes at least one of the exclusion target and the inclusion target. The correct answer data is a cropped image cropped from the example data. Specifically, the correct answer data is a cropped image obtained by cropping the example data such that the cropped image includes the vehicle to be recognized and the inclusion target and excludes the exclusion target.
When the inclusion target cannot be included in the cropping range without also including the exclusion target, the correct answer data includes an image cropped to include the vehicle to be recognized while including neither the inclusion target nor the exclusion target. Further, the correct answer data includes an image in which the cropping range is set such that the number of pixels in the cropping range is equal to or greater than a predetermined number. The correct answer data also includes an image set such that the range occupied by the vehicle to be recognized within the cropping range is equal to or greater than a predetermined range. The cropped image of the correct answer data is a cropped image obtained by cropping the example data so as to apply various composition rules such as the rule of thirds, the rule of fourths, the triangle composition, and the center composition.
Although the example data and the correct answer data are different, the learning method of the estimation model 55 by the learning system 65 is the same as the learning method by the learning system 61 and the like, and thus the detailed description is not repeated.
The estimation model 55 for which learning is completed is stored in the image processing unit 433B as the vehicle cropping model 75.
The vehicle cropping model 75 receives an extraction image (frame) in which a target vehicle having a good traveling posture is imaged as an input, and outputs a cropped image to which the composition rule is applied as a final image. The vehicle cropping processing is not limited to the processing using machine learning. A known image recognition technique (an image recognition model and an algorithm) that does not use machine learning can be applied to the vehicle cropping processing.
Processing Flow
In step S11, the photographing system 1 extracts a vehicle by executing the vehicle extraction processing. Then, the photographing system 1 recognizes the number of the license plate of the extracted vehicle by executing the number recognition processing, and transmits the recognized number to the server 2 (step S12).
When the server 2 receives the number from the photographing system 1, the server 2 refers to registration information to determine whether the received number is a registered number (that is, whether the vehicle photographed by the photographing system 1 is the vehicle (target vehicle) of the user who applied for the provision of the vehicle photographing service). When the received number is a registered number (the number of the target vehicle), the server 2 transmits the number of the target vehicle and requests the photographing system 1 to transmit the viewing moving image including the target vehicle (step S21).
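This server-side check might be sketched compactly as follows; the registered numbers and the send_request function are hypothetical placeholders.

```python
# Illustrative sketch of step S21: look up the received number in the
# registration information and, when it is registered, request the viewing
# moving image. The transport function is a hypothetical placeholder.

REGISTERED_NUMBERS = {"12-34", "56-78"}  # loaded from the registration information storage unit 412

def on_number_received(number, send_request):
    if number in REGISTERED_NUMBERS:
        # The photographed vehicle is the target vehicle of a service user.
        send_request({"target_number": number,
                      "action": "send_viewing_moving_image"})
```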
In step S13, the photographing system 1 executes the matching processing between each vehicle and each number in the identification moving image. Then, the photographing system 1 selects, from the vehicles with which the numbers are associated, the vehicle with which the same number as the number of the target vehicle is associated, as the target vehicle (step S14). Further, the photographing system 1 extracts the feature amount (a traveling state and appearance) of the target vehicle, and transmits the extracted feature amount to the server 2 (step S15).
In step S16, the photographing system 1 cuts out a portion including the target vehicle from the viewing moving image temporarily stored in the memory 12 (the moving image buffer 346). In the cutting out, the traveling state (a traveling speed, an acceleration, and the like) and the appearance (a body shape, a body color, and the like) of the target vehicle can be used as described above. The photographing system 1 transmits the cut-out viewing moving image to the server 2.
In step S22, the server 2 extracts vehicles from the received viewing moving image by executing the vehicle extraction processing.
In step S23, the server 2 specifies the target vehicle from the vehicles extracted in step S22 based on the feature amount (a traveling state and appearance) of the target vehicle (the target vehicle specification processing described above).
Note that it is not essential to use both the traveling state and the appearance of the target vehicle; merely one of them may be used. Information related to the traveling state and/or the appearance of the target vehicle corresponds to the "target vehicle information" according to the present disclosure. Further, the information related to the appearance of the target vehicle may be the vehicle information stored in advance in the registration information storage unit 412, not only the vehicle information obtained by the analysis by the photographing system 1 (the feature amount extraction unit 345).
In step S24, the server 2 extracts at least one extraction image (extracted frame) in which the target vehicle TV having a good traveling posture is imaged from the viewing moving image (a plurality of viewing images) including the target vehicle.
In step S25, the server 2 crops the extraction image such that the target vehicle is included and a predetermined composition is obtained, thereby extracting the final image including the target vehicle. In step S25, cropping is performed such that the exclusion target is not included. Cropping is also performed such that the inclusion target is included; however, when the inclusion target cannot be included without also including the exclusion target, the server 2 performs cropping such that neither the inclusion target nor the exclusion target is included. The server 2 performs cropping such that the number of pixels in the cropping range is equal to or greater than a predetermined number, and such that the range occupied by the target vehicle within the cropping range is equal to or greater than a predetermined range.
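The pixel-count and occupancy constraints in step S25 might be checked as in the following sketch; the threshold values are assumptions for illustration.

```python
MIN_PIXELS = 640 * 480  # assumed minimum number of pixels in the cropping range
MIN_OCCUPANCY = 0.10    # assumed minimum share of the crop occupied by the target vehicle

def crop_is_valid(crop_w, crop_h, vehicle_box):
    """Check the constraints on the cropping range used in step S25."""
    crop_pixels = crop_w * crop_h
    vehicle_pixels = ((vehicle_box[2] - vehicle_box[0])
                      * (vehicle_box[3] - vehicle_box[1]))
    return crop_pixels >= MIN_PIXELS and vehicle_pixels / crop_pixels >= MIN_OCCUPANCY
```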
Then, the server 2 creates an album using the final image (step S26). The user can view the created album and post the desired image in the album to the SNS.
In the present embodiment, an example in which the photographing system 1 and the server 2 share and execute the image processing is described. Therefore, both the processor 11 of the photographing system 1 and the processor 21 of the server 2 correspond to the "processor" according to the present disclosure. However, the photographing system 1 may execute all of the image processing and transmit the image-processed data (viewing image) to the server 2; the server 2 is thus not an obligatory component for the image processing according to the present disclosure. In this case, the processor 11 of the photographing system 1 corresponds to the "processor" according to the present disclosure. Conversely, the photographing system 1 may transmit all the photographed moving images to the server 2, and the server 2 may execute all of the image processing. In this case, the processor 21 of the server 2 corresponds to the "processor" according to the present disclosure.
An image processing system according to the present disclosure includes a memory configured to store moving image data photographed by a camera, and a processor configured to perform image processing on the moving image data stored in the memory, extract a plurality of frames in which a target vehicle registered in advance is imaged from a moving image photographed by the camera, and select a frame in which the target vehicle is positioned at a specific position among the frames.
In the image processing system, the imaging range of the camera may be fixed, and the specific position may be a predetermined position in the imaging range.
The image processing system may include a first model storage memory storing a frame extraction model. The frame extraction model may be a trained model that receives a plurality of frames in which a vehicle is imaged as an input and outputs a frame in which a traveling posture of the vehicle is good, and the processor may be configured to select a frame in which the traveling posture of the target vehicle is good from the frames as a frame in which the target vehicle is positioned at the specific position by using the frame extraction model.
In the image processing system, the processor may be configured to crop the frame in which the target vehicle is positioned at the specific position such that a composition of the target vehicle and a background image becomes a predetermined composition.
The image processing system may further include a second model storage memory storing a cropping model. The cropping model may be a trained model that receives the frame in which the target vehicle is imaged as an input and outputs a cropped image obtained by cropping the received frame into an image that includes the target vehicle and to which a composition rule is applied.
In the image processing system according to the present disclosure, an attractive image of a traveling vehicle can be acquired.
The embodiments disclosed in the present disclosure should be considered to be exemplary and not restrictive in any respects. The scope of the present disclosure is set forth by the claims rather than the description of the embodiments, and is intended to include all modifications within the meaning and scope of the claims.