This patent application is based on and claims priority pursuant to 35 U.S.C. § 119(a) to Japanese Patent Application No. 2017-052342 filed on Mar. 17, 2017, in the Japan Patent Office, the entire disclosure of which is hereby incorporated by reference herein.
The present invention relates to an electronic information board system, an image processing device, and an image processing method.
An electronic information board system to which a user inputs information such as a character by performing an interactive input operation on a display board has been used in companies, educational institutions, and administrative agencies, for example. The electronic information board system is also referred to as an interactive whiteboard (IWB) or an electronic whiteboard, for example.
Recent years have seen a spread of a technology of capturing an image with a camera installed to, for example, an upper part of the display board of the electronic information board system, and transmitting and receiving the image between a plurality of electronic information board systems to enable a videoconference between remote sites.
The existing technique, however, has difficulty in communicating the situation of participants of the videoconference to other participants at another site when, for example, the participants are spread over a relatively wide viewing angle as viewed from the electronic information board system.
In one embodiment of this invention, there is provided an improved image processing device that includes, for example, circuitry to acquire a first image and a second image captured from different viewpoints, detect areas of faces of a plurality of persons in the first image and the second image, set a position of a boundary between the first image and the second image in one of intervals between the detected areas of the faces of the plurality of persons, and combine the first image and the second image at the position of the boundary.
In one embodiment of this invention, there is provided an improved electronic information board system that includes, for example, a board, a first camera, a second camera, and at least one processor. The first camera captures a first image of a space in front of the board from a first viewpoint. The second camera captures a second image of the space in front of the board from a second viewpoint different from the first viewpoint. The at least one processor acquires the first image and the second image, detects areas of faces of a plurality of persons in the first image and the second image, sets a position of a boundary between the first image and the second image in one of intervals between the detected areas of the faces of the plurality of persons, and combines the first image and the second image at the position of the boundary.
In one embodiment of this invention, there is provided an image processing method including acquiring a first image and a second image captured from different viewpoints, detecting areas of faces of a plurality of persons in the first image and the second image, setting a position of a boundary between the first image and the second image in one of intervals between the detected areas of the faces of the plurality of persons, and combining the first image and the second image at the position of the boundary.
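The acquiring, detecting, boundary-setting, and combining operations recited above can be illustrated with the following sketch. The sketch is offered only as an illustration: the function names, the row-list image representation, and the choice of the midpoint of the first gap are assumptions of the sketch, not features of the claimed embodiments.

```python
# Illustrative sketch only: face areas are (x_left, x_right) horizontal
# extents; the boundary is set in a gap between adjacent face areas,
# and the two images are joined there.

def set_boundary(face_areas):
    """Pick a boundary x-position inside one of the intervals between
    horizontally adjacent face areas (hypothetical helper)."""
    areas = sorted(face_areas)
    for (l0, r0), (l1, r1) in zip(areas, areas[1:]):
        if l1 > r0:  # a genuine gap exists between the two faces
            return (r0 + l1) // 2  # one possible choice: gap midpoint
    return None  # no gap: the face areas touch or overlap

def combine(img_a, img_b, boundary_x):
    """Join the left part of img_a with the right part of img_b at
    boundary_x; images are modeled as lists of pixel rows."""
    return [row_a[:boundary_x] + row_b[boundary_x:]
            for row_a, row_b in zip(img_a, img_b)]
```

For instance, with face areas at (10, 20) and (40, 50), `set_boundary` places the boundary at x = 30, and `combine` then joins the two images at that column.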
A more complete appreciation of the disclosure and many of the attendant advantages and features thereof can be readily obtained and understood from the following detailed description with reference to the accompanying drawings, wherein:
The accompanying drawings are intended to depict embodiments of the present invention and should not be interpreted to limit the scope thereof. The accompanying drawings are not to be considered as drawn to scale unless explicitly noted.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the present invention. As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise.
In describing embodiments illustrated in the drawings, specific terminology is employed for the sake of clarity. However, the disclosure of this specification is not intended to be limited to the specific terminology so selected and it is to be understood that each specific element includes all technical equivalents that have a similar function, operate in a similar manner, and achieve a similar result.
Referring now to the accompanying drawings, wherein like reference numerals designate identical or corresponding parts throughout the several views, embodiments of the present invention will be described in detail.
With reference to
As illustrated in
Each of the IWBs 10 includes cameras 101A and 101B, a panel unit 20, a stand 30, and an image processing device 40.
The cameras 101A and 101B are installed at a given height on the right side and the left side of the panel unit 20, respectively. Further, the cameras 101A and 101B are installed in a direction in which the cameras 101A and 101B are able to capture the image of a person seated at a table placed in front of the IWB 10 at a position farthest from the IWB 10. The cameras 101A and 101B may be installed in a direction in which only the image of the person at the farthest position is captured by both the cameras 101A and 101B in an overlapping manner.
The panel unit 20 is a flat panel display employing a system such as a liquid crystal system, an organic light-emitting diode (OLED) system, or a plasma system. A touch panel 102 is installed on the front surface of a housing of the panel unit 20 to display an image.
The stand 30 supports the panel unit 20 and the image processing device 40. The stand 30 may be omitted from the configuration of the IWB 10.
The image processing device 40 displays on the panel unit 20 information such as a character or a figure written at a coordinate position detected by the panel unit 20. The image processing device 40 further synthesizes the image captured by the camera 101A and the image captured by the camera 101B, and transmits a resultant synthesized image to the other IWBs 10. Further, the image processing device 40 displays on the panel unit 20 images received from the other IWBs 10.
The IWB 10-1 transmits and receives information such as still or video images of the cameras 101A and 101B, sounds, and renderings on the panel unit 20 to and from the other IWBs 10 including the IWB 10-2 to have a videoconference with the other IWBs 10.
As compared with an existing projector serving as an image display system, the IWB maintains image quality and visibility even in a bright room, readily provides interactive functions such as pen input, and, unlike the projector, does not cast the shadow of a person standing in front of the display screen.
A hardware configuration of the IWB 10 according to the first embodiment will be described with reference to
Each of the cameras 101A and 101B captures a still or video image, and transmits the captured image to the CPU 105. For example, the cameras 101A and 101B are installed on the right side and the left side of the touch panel 102, respectively, and are positioned to have different optical axes, i.e., different viewpoints.
The touch panel 102 is, for example, a capacitance touch panel integrated with a display and having a hovering detecting function. The touch panel 102 transmits to the CPU 105 the coordinates of a point in the touch panel 102 touched by a pen or a finger of a user. The touch panel 102 further displays still or video image data of the videoconference at another site, which is received from the CPU 105.
The microphone 103 acquires sounds of participants of the videoconference, and transmits the acquired sounds to the CPU 105. The speaker 104 outputs audio data of the videoconference at another site, which is received from the CPU 105.
The CPU 105 controls all devices of the IWB 10, and performs control related to the videoconference. Specifically, the CPU 105 encodes still or video image data, audio data, and rendering data synthesized from still or video images acquired from the cameras 101A and 101B, the microphone 103, and the touch panel 102, and transmits the encoded data to the other IWBs 10 via the external I/F unit 108.
The CPU 105 further decodes still or video image data, audio data, and rendering data received via the external I/F unit 108, displays the decoded still or video image data and rendering data on the touch panel 102, and outputs the decoded audio data to the speaker 104. The CPU 105 performs the above-described encoding and decoding in conformity with a standard such as H.264/Advanced Video Coding (AVC), H.264/Scalable Video Coding (SVC), or H.265. The encoding and decoding are executed with the CPU 105, the storage device 106, and the memory 107. Alternatively, for faster execution, the encoding and decoding may be performed through software processing with a graphics processing unit (GPU) or a digital signal processor (DSP), or through hardware processing with an application specific integrated circuit (ASIC) or a field programmable gate array (FPGA).
The storage device 106, which is a non-volatile storage medium such as a flash memory or a hard disk drive (HDD), for example, stores programs.
The memory 107, which is a volatile memory such as a double-data rate (DDR) memory, is used to deploy programs used by the CPU 105 and temporarily store arithmetic data.
The external I/F unit 108 is connected to the other IWBs 10 via the network N such as the Internet to transmit and receive image data and other data to and from the other IWBs 10. For example, the external I/F unit 108 performs communication with a wired LAN conforming to a standard such as 10BASE-T, 100BASE-TX, or 1000BASE-T, or with a wireless LAN conforming to a standard such as IEEE 802.11a/b/g/n.
The external I/F unit 108 is also an interface with an external device such as a recording medium 108a. The IWB 10 writes and reads data to and from the recording medium 108a via the external I/F unit 108. The recording medium 108a may be a flexible disk, a compact disc (CD), a digital versatile disc (DVD), a secure digital (SD) memory card, or a universal serial bus (USB) memory, for example.
The input device 109, which includes a keyboard and buttons, receives an operation performed by the user to control a device of the IWB 10.
A functional configuration of the image processing device 40 of the IWB 10 according to the first embodiment will now be described with reference to
The image processing device 40 of the IWB 10 includes an acquiring unit 41, a detecting unit 42, a synthesizing unit 43, a display control unit 44, a communication unit 45, and a control unit 46. These units are implemented by processes that the CPU 105 of the image processing device 40 in the IWB 10 executes in accordance with at least one program installed in the image processing device 40.
The acquiring unit 41 acquires still or video images continuously captured by the cameras 101A and 101B from different viewpoints. The detecting unit 42 detects areas of the faces of persons in the images acquired by the acquiring unit 41.
The synthesizing unit 43 sets the position of a boundary in one of intervals between the areas of the faces of the persons in the image of the camera 101A detected by the detecting unit 42. The synthesizing unit 43 then combines a part of the image of the camera 101A and at least a part of the image of the camera 101B at the position of the boundary, to thereby synthesize an image which includes the areas of the faces of the persons in the image of the camera 101A and the areas of the faces of the persons in the image of the camera 101B without overlapping of the areas of the faces of the persons between the two images.
The control unit 46 encodes and decodes data such as image data, audio data, and rendering data, and controls the session of the videoconference with the other IWBs 10, for example.
The display control unit 44 displays data such as image data, audio data, and rendering data on the touch panel 102 of the IWB 10 in accordance with an instruction from the control unit 46.
The communication unit 45 communicates with the other IWBs 10. For example, the communication unit 45 transmits to the other IWBs 10 data such as image data synthesized by the synthesizing unit 43 and encoded by the control unit 46.
The processing of the communication system 1 according to the first embodiment will now be described with reference to
In each of the IWBs 10 including the IWBs 10-1 and 10-2, the control unit 46 establishes a session with the other IWBs 10 in accordance with an operation performed by the user, for example (step S1). Thereby, the IWBs 10 start communication therebetween to transmit and receive therebetween still or video images, sounds, and renderings, for example.
Then, the synthesizing unit 43 of the IWB 10-1 synthesizes the image captured by the camera 101A and the image captured by the camera 101B (step S2).
The cameras 101A and 101B are installed on the right side and the left side of the panel unit 20 of the IWB 10, respectively, such that straight lines 502A and 502B cross each other at a predetermined position in front of the IWB 10. Herein, the straight line 502A is perpendicular to a lens surface of the camera 101A, and the straight line 502B is perpendicular to a lens surface of the camera 101B.
As illustrated in
Further, as illustrated in
With the process of step S2, the image captured by the camera 101A and the image captured by the camera 101B are synthesized to generate an image in which the faces of the persons A, B, C, D, E, F, and X do not overlap as viewed from a substantially opposite side thereto, as illustrated in
Then, in the IWB 10-1, the control unit 46 encodes the synthesized image, sound, and rendering (step S3), and the communication unit 45 transmits the encoded image data, audio data, and rendering data to the other IWBs 10 including the IWB 10-2 (step S4).
In the other IWBs 10 including the IWB 10-2, the control unit 46 decodes the image data, audio data, and rendering data received from the IWB 10-1 (step S5), and outputs the decoded image data, audio data, and rendering data (step S6).
The processes of steps S2 to S5 take place interactively between the IWBs 10 including the IWBs 10-1 and 10-2.
The process at step S2 of synthesizing the image captured by the camera 101A and the image captured by the camera 101B will now be described in more detail.
Then, the synthesizing unit 43 performs projective transformation on the acquired images to make the images horizontal (step S102). Herein, the synthesizing unit 43 detects straight lines in the images with Hough transformation, for example, and performs the projective transformation on the images to make the straight lines substantially horizontal. Alternatively, the synthesizing unit 43 may estimate the distance to a person based on the size of the face of the person detected at a later-described process of step S103, and may perform the projective transformation on the images with an angle according to the estimated distance.
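The leveling of step S102 can be visualized with the following sketch, in which the angle of a detected straight line is assumed to be already estimated (e.g., by a Hough transform over edge pixels) and the projective transformation is simplified to a pure rotation homography. The function names and the pure-rotation simplification are assumptions of the sketch, not the disclosed implementation.

```python
import numpy as np

def leveling_homography(line_angle_rad):
    """Build a 3x3 homography (here a pure rotation) that maps a
    detected straight line at line_angle_rad to the horizontal."""
    c, s = np.cos(-line_angle_rad), np.sin(-line_angle_rad)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def apply_homography(H, points):
    """Apply homography H to an (N, 2) array of pixel coordinates,
    dividing by the homogeneous coordinate."""
    pts = np.hstack([points, np.ones((len(points), 1))])
    out = pts @ H.T
    return out[:, :2] / out[:, 2:3]
```

A point lying on a 45-degree line, such as (1, 1), is mapped to a point with zero vertical component, i.e., the line becomes horizontal.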
Then, the detecting unit 42 detects the faces of the persons in the images (step S103). The process of detecting the faces of the persons may be performed with an existing technique, such as a technique using Haar-like features, for example.
The detecting unit 42 then recognizes the faces of the persons detected in the images (step S104). The process of recognizing the faces of the persons may be performed with an existing technique. For example, the detecting unit 42 may detect relative positions and sizes of parts of the faces of the persons and the shapes of eyes, noses, cheek bones, and jaws of the persons as features to identify the persons.
Then, based on the positions and features of the faces of the persons detected by the detecting unit 42, the synthesizing unit 43 determines whether the images include the face of the same person (step S105). For example, the synthesizing unit 43 may compare the features of the faces of the persons detected in the image captured by the camera 101A with the features of the faces of the persons detected in the image captured by the camera 101B. Then, if the degree of similarity of the features reaches or exceeds a predetermined threshold for any of the faces of the persons, the synthesizing unit 43 may determine that the images include the face of the same person.
For example, in this case, the synthesizing unit 43 may first determine the degree of similarity of the features between the smallest faces in the images. Then, if the degree of similarity falls below the predetermined threshold, the synthesizing unit 43 may determine the degree of similarity of the features between the next smallest faces in the images. This configuration increases the speed of determining that the images include the face of the same person, if any.
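The smallest-face-first comparison described above may be sketched as follows. The cosine-similarity measure, the 0.9 threshold, and the (area, feature-vector) face representation are illustrative assumptions of the sketch, not limitations of the embodiments.

```python
import numpy as np

def cosine_similarity(f1, f2):
    """Similarity of two face feature vectors in [-1, 1]."""
    return float(np.dot(f1, f2) /
                 (np.linalg.norm(f1) * np.linalg.norm(f2)))

def shared_person(faces_a, faces_b, threshold=0.9):
    """Each face is (area_px, feature_vector). Faces are compared in
    order of increasing size, since the person farthest from the board
    appears smallest and is the most likely to be captured by both
    cameras, so a match tends to be found early."""
    for _, feat_a in sorted(faces_a, key=lambda f: f[0]):
        for _, feat_b in sorted(faces_b, key=lambda f: f[0]):
            if cosine_similarity(feat_a, feat_b) >= threshold:
                return True
    return False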
If the images do not include the face of the same person (NO at step S105), the synthesizing unit 43 synthesizes the images as laterally aligned and not overlapping each other (step S106), and completes the process. For example, if the cameras 101A and 101B have a relatively narrow viewing angle, and if neither the image of the camera 101A nor the image of the camera 101B includes the detectable or recognizable face of the person X in
If the images include the face of the same person (YES at step S105), the synthesizing unit 43 determines a seam of the images based on the positions and features of the faces of the persons detected by the detecting unit 42 (step S107). Herein, the seam is an example of the position of the boundary between the images. In this process, the synthesizing unit 43 determines, as the seam of the images, a position at which the faces of the same person do not overlap in the image synthesized from the laterally aligned images.
The area 601 includes a wall but is erroneously detected as including a face. The area 605 includes an arm but is likewise erroneously detected. The synthesizing unit 43 averages face detection results over a plurality of frames (e.g., five frames) to reduce the influence of such erroneous detection, i.e., to increase the signal-to-noise (S/N) ratio. For example, if an area is detected as a face area fewer than a predetermined number of times in a predetermined number of frames, the synthesizing unit 43 determines the detection of the area to be erroneous (i.e., noise), and does not use the result of this detection in the process at step S107 of determining the seam of the images.
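The multi-frame rejection of one-off false detections may be sketched as follows. Representing each face area by a hashable identifier (e.g., a grid cell the detection falls in) is an assumption of the sketch.

```python
from collections import Counter

def stable_face_areas(per_frame_detections, min_hits):
    """per_frame_detections: one list of face-area identifiers per
    frame. An area is kept only if it is detected in at least min_hits
    frames, rejecting one-off false positives such as a wall pattern
    or an arm momentarily mistaken for a face."""
    counts = Counter(area
                     for frame in per_frame_detections
                     for area in set(frame))
    return {area for area, n in counts.items() if n >= min_hits}
```

With five frames in which a real face appears every frame and a spurious detection appears once, only the real face survives a three-hit requirement.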
The synthesizing unit 43 determines the seam from the seam candidates based on the positions and features of the faces of the persons.
In the example of
The right end of the image in
As compared with a case in which the image of the camera 101A and the image of the camera 101B are synthesized at a seam set at a predetermined position without detection of the faces of the persons, the present configuration prevents the images of the face of a person captured from different viewpoints from being synthesized. Accordingly, a more natural, less artificial image is generated.
The synthesizing unit 43 then adjusts the respective heights of the images based on the detected position of the face of the person (step S108). In this step, the synthesizing unit 43 adjusts the respective heights of the images such that the respective smallest face areas detected in the images captured by the cameras 101A and 101B and determined to include the face of the same person have substantially the same height. In the example of
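The height adjustment of step S108 may be sketched as follows, modeling images as lists of rows. The helper names and the blank-row padding are assumptions of the sketch.

```python
def vertical_offset(face_a_top, face_b_top):
    """Rows to shift image B so the matched (smallest) face has
    substantially the same height in both images."""
    return face_a_top - face_b_top

def shift_rows(img, offset, blank_row):
    """Shift a list-of-rows image down (positive offset) or up
    (negative offset), padding with blank rows."""
    h = len(img)
    if offset >= 0:
        return [blank_row] * offset + img[:h - offset]
    return img[-offset:] + [blank_row] * (-offset)
```

If the shared face's top edge sits at row 120 in image A and row 100 in image B, image B is shifted down by 20 rows before the images are combined.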
The synthesizing unit 43 then combines the images as laterally aligned at the determined position of the seam of the images (step S109).
The synthesizing unit 43 further cuts off upper and lower portions of the images so as not to display blank areas produced in the height direction owing to the projective transformation performed at step S102.
The synthesizing unit 43 further cuts off a portion of each of the images on the opposite side of the seam and not including a detected face area. In
If the above-described processes at steps S103 to S107 in
On the other hand, the processes of steps S101, S102, S108, and S109 in
If the image captured by the camera 101A and the image captured by the camera 101B are different in brightness owing to a factor such as lighting in the room or outside light, optical correction such as brightness correction may be performed to reduce the difference in brightness between the images.
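One simple form of such brightness correction is to scale one image so that its mean luminance matches the other's, as in the following sketch; the gain-matching approach and value range are assumptions of the sketch.

```python
import numpy as np

def match_brightness(img_a, img_b):
    """Scale img_b so its mean luminance matches img_a's, reducing a
    visible brightness step at the seam. Inputs are float arrays with
    values in [0, 1]."""
    gain = img_a.mean() / max(img_b.mean(), 1e-6)  # avoid divide-by-zero
    return np.clip(img_b * gain, 0.0, 1.0)
```
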
Further, if the seam position is changed, the seam may be moved from the previous seam position to the present seam position continuously (i.e., smoothly) not discretely.
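The continuous seam movement may be sketched as a per-frame step limit; the step size of a few pixels per frame is an assumption of the sketch.

```python
def smooth_seam(prev_x, target_x, max_step=4):
    """Move the seam toward its new position at most max_step pixels
    per frame instead of jumping, so the viewer sees a continuous
    transition rather than a discrete change."""
    delta = target_x - prev_x
    step = max(-max_step, min(max_step, delta))
    return prev_x + step
```
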
Modified examples of the present embodiment will now be described.
A modified example of the process of determining the same person will first be described.
At step S107, the synthesizing unit 43 may determine, without the facial recognition by the detecting unit 42, that the respective smallest areas in the images detected as face areas include the face of the same person. For example, among the areas 602 to 604 correctly detected as face areas in the example of
Among the areas 606 to 609 detected as face areas in the example of
In this case, the synthesizing unit 43 determines that the smallest one of the areas detected as face areas in the image captured by the camera 101A and the smallest one of the areas detected as face areas in the image captured by the camera 101B include the face of the same person, and determines the seam of the images to be laterally aligned at a position not included in the area of the face of the person to prevent overlapping of the images of the person.
In the example of
Further, the synthesizing unit 43 may determine the seam based on the distances of the intervals between the seam candidates instead of the results of the facial recognition or the sizes of the faces. That is, the synthesizing unit 43 may set the seam to the seam candidate corresponding to the shortest one of the intervals between the seam candidates. For instance, in the example of
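The shortest-interval rule of this modified example may be sketched as follows; the (x_left, x_right) span representation and the gap-midpoint placement are assumptions of the sketch.

```python
def narrowest_gap_seam(face_spans):
    """face_spans: sorted list of (x_left, x_right) face extents.
    Return the midpoint of the narrowest interval between adjacent
    faces (the seam-candidate rule of this modified example), or None
    if no gap exists."""
    best = None
    for (l0, r0), (l1, r1) in zip(face_spans, face_spans[1:]):
        gap = l1 - r0
        if gap > 0 and (best is None or gap < best[0]):
            best = (gap, (r0 + l1) // 2)
    return None if best is None else best[1]
```

With faces spanning (0, 10), (30, 40), and (45, 60), the gaps are 20 and 5 pixels wide, so the seam is placed at x = 42, inside the narrower gap.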
As another modified example of the present embodiment, if the detecting unit 42 detects the faces of persons only in one of the image captured by the camera 101A and the image captured by the camera 101B, the synthesizing unit 43 may transmit only the image including the detected faces of the persons to the other IWBs 10 for the participants of the videoconference at the other sites, without synthesizing the images. Then, if the faces of persons are detected in the other one of the images, the synthesizing unit 43 may synthesize the images through the above-described process of
As another modified example of the present embodiment, the synthesizing unit 43 may synthesize an image from laterally aligned images captured by three or more cameras, instead of the laterally aligned images captured by the two cameras 101A and 101B. In this case, each of seams for combining the images may be set at a position in one of the intervals between the faces of the persons similarly as in the above-described example.
A second embodiment of the present invention will now be described.
In the above-described example of the first embodiment, the rectangular table 501 having short sides parallel to the IWB 10 is placed in front of the IWB 10. In the second embodiment, a description will be given of an example in which a substantially circular table is placed in front of the IWB 10. According to the second embodiment, the images are synthesized similarly as in the first embodiment when the participants of the videoconference are seated around the substantially circular table. The second embodiment is similar to the first embodiment except for the differences described below, and thus redundant description will be omitted as appropriate.
As illustrated in
In this case, unlike in the first embodiment illustrated in
In the previous execution of the process of step S107 in
Then, in the present execution of the process of step S107 in
If the degree of similarity of the features between the faces closest to the stored positions falls below the predetermined threshold, the synthesizing unit 43 determines, for each of the remaining faces of the persons selected in a given order, whether the degree of similarity of the features between the face in one of the images and the face in the other image equals or exceeds the predetermined threshold. This configuration increases the speed of determining that the images include the face of the same person, if any.
A third embodiment of the present invention will now be described.
In the above-described example of the first embodiment, the rectangular table 501 is placed in front of the IWB 10 with the short sides of the rectangular table 501 parallel to the IWB 10. In the third embodiment, a description will be given of an example in which a rectangular table is placed in front of the IWB 10 with long sides of the table parallel to the IWB 10. According to the third embodiment, the images are synthesized similarly as in the first embodiment when the participants of the videoconference are seated at the rectangular table to directly face the IWB 10. The third embodiment is similar to the first or second embodiment except for the differences described below, and thus redundant description will be omitted as appropriate.
As illustrated in
In the example of
When the same plurality of persons are included in the images, the synthesizing unit 43 of the third embodiment determines the seam at a position between a person positioned at or near the center of the same plurality of persons and a person adjacent to the person positioned at or near the center.
In the example of
Further, the synthesizing unit 43 sets the seam of the images to one of the seam candidates 574 and 575, out of the seam candidates 574, 575, and 576 in
In this case, the synthesizing unit 43 may determine the seam such that the synthesized image includes the larger one of the area in one of the images determined to include the face of the person positioned at or near the center of the same plurality of persons and the corresponding area in the other image. This configuration increases the size of the face of the person displayed on the other IWBs 10 for the participants of the videoconference at the other sites.
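The side selection for the center person's face may be sketched as a simple comparison of the detected face areas in the two images; the function name and tie-breaking toward image A are assumptions of the sketch.

```python
def choose_seam_side(center_face_area_a, center_face_area_b):
    """For the person at or near the center of the group who is seen
    by both cameras, keep the copy of the face with the larger
    detected area (in pixels) in the synthesized image, so that face
    is displayed larger at the remote sites. Returns which image
    contributes the center face."""
    return "A" if center_face_area_a >= center_face_area_b else "B"
```
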
A fourth embodiment of the present invention will now be described.
In the fourth embodiment, a description will be given of an example having a function of detecting a speaker with a plurality of microphones and displaying a zoomed-in image of the face of the speaker, in addition to the functions of the first to third embodiments. The fourth embodiment is similar to the first to third embodiments except for the differences described below, and thus redundant description will be omitted as appropriate.
A hardware configuration of an IWB 10B according to the fourth embodiment will be described.
A functional configuration of an image processing device 40B of the IWB 10B according to the fourth embodiment will be described.
The acquiring unit 41 according to the fourth embodiment further acquires sounds collected by the microphones 103A and 103B.
The synthesizing unit 43 according to the fourth embodiment further enlarges an area according to the direction of the speaker estimated by the estimating unit 47, and generates a synthesized image by superimposing the enlarged area on a lower-central part of the synthesized image.
A process of displaying the zoomed-in image of the speaker according to the fourth embodiment will be described.
Then, the synthesizing unit 43 selects the face of the person in the estimated direction from the faces detected by the cameras 101A and 101B (step S203). In this step, the synthesizing unit 43 compares the direction of the speaker with the directions of the faces detected by the cameras 101A and 101B, to thereby identify the area of the face of the speaker. The direction of each of the faces may be calculated based on the size of the area of the detected face and the coordinates of the area of the face in the image, for example. The synthesizing unit 43 then displays a zoomed-in image of the selected face of the person (step S204).
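The matching of the microphone-estimated speaker direction to a detected face may be sketched as follows. The linear mapping from image position to horizontal direction (a pinhole-like approximation) and the field-of-view parameter are assumptions of the sketch.

```python
def face_direction(face_center_x, image_width, horizontal_fov_deg):
    """Approximate horizontal direction of a face from its position in
    the image, assuming a linear mapping across the field of view
    (0 degrees = optical axis)."""
    return (face_center_x / image_width - 0.5) * horizontal_fov_deg

def select_speaker_face(faces, speaker_dir_deg, image_width, fov_deg):
    """faces: list of (face_id, center_x). Pick the face whose
    estimated direction is closest to the microphone-estimated
    speaker direction."""
    return min(faces, key=lambda f: abs(
        face_direction(f[1], image_width, fov_deg) - speaker_dir_deg))[0]
```
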
If it is difficult to identify the speaker when the participants of the videoconference are close to each other, for example, the synthesizing unit 43 may display a zoomed-in image of an area including the faces of a few people in the direction of the sound source detected by the microphones 103A and 103B.
A fifth embodiment of the present invention will be described.
In the above-described example of the first embodiment, the images of the two cameras 101A and 101B installed on the right and left sides of the IWB 10 are aligned and synthesized. In the fifth embodiment, a description will be given of an example in which, in addition to the functions of the first to third embodiments, another camera is provided on an upper part of the IWB 10 to switch between the image of that camera and the image synthesized from the aligned images of the two cameras 101A and 101B installed on the right and left sides of the IWB 10.
The fifth embodiment is similar to the first to third embodiments except for the differences described below, and thus redundant description will be omitted as appropriate.
A hardware configuration of an IWB 10C according to the fifth embodiment will be described.
If the visual field of the camera 101C is not blocked (NO at step S301), the control unit 46 encodes the image of the camera 101C, and transmits the encoded image to the other IWBs 10C (step S302). Thereby, the process is completed.
If the visual field of the camera 101C is blocked (YES at step S301), the synthesizing unit 43 synthesizes the images of the cameras 101A and 101B (step S303). The image synthesizing process of step S303 is similar to the image synthesizing process of the first to third embodiments illustrated in
If the visual field of the camera 101C is blocked, as illustrated in
As a modified example of the fifth embodiment, the synthesizing unit 43 may synthesize the images of the cameras 101A, 101B, and 101C if none of the visual fields of the cameras 101A, 101B, and 101C is blocked.
The camera 101C may be a multifunction camera, such as Kinect (registered trademark), for example, which acquires depth information indicating the distance to a person by using a device such as an infrared sensor and detects a sound direction indicating the direction of the speaker. In this case, the synthesizing unit 43 may use the sound direction acquired from the camera 101C (i.e., the multifunction camera) to display the zoomed-in image of the speaker similarly as in the fourth embodiment. Further, in this case, the synthesizing unit 43 may use the depth information acquired from the camera 101C to adjust the heights of the images at step S108. Thereby, the heights of the images are more accurately adjusted.
According to at least one of the first to fifth embodiments described above, the situation of the participants of the videoconference is well communicated.
As a modified example of the first to fifth embodiments, the synthesizing unit 43 may synthesize predetermined images, for example, into the detected face areas.
According to the first to fifth embodiments described above, the faces of persons are detected in a plurality of images captured from different viewpoints, and the images are laterally aligned and synthesized with a seam thereof set in one of intervals between the faces of the persons detected in at least one of the images.
With this configuration, even if the participants of a videoconference spread over a relatively wide viewing angle as viewed from an electronic information board system (i.e., IWB 10, 10B, or 10C), for example, a natural image of the videoconference is communicated to another electronic information board system like an image of the videoconference captured by a single camera.
Further, for example, the images of the participants of the videoconference are captured from different viewpoints (i.e., different positions and angles) by a plurality of cameras. Therefore, the images of the participants are captured from the opposite side of the participants, as compared with a case in which the images of the participants are captured by a single camera. Further, the visual fields of the cameras are less likely to be completely blocked by something, such as the body of a person performing rendering on the board of the electronic information board system, than in a case in which the images of the participants of the videoconference are captured by a single camera installed on an upper-central part of the board.
In the IWB 10, 10B, or 10C, the functional units of the image processing device 40 or 40B, such as the detecting unit 42 and the synthesizing unit 43, for example, may be implemented by cloud computing using at least one computer.
The above-described embodiments are illustrative and do not limit the present invention. Thus, numerous additional modifications and variations are possible in light of the above teachings. For example, elements and/or features of different illustrative embodiments may be combined with each other and/or substituted for each other within the scope of the present invention. Further, the above-described steps are not limited to the order disclosed herein.