The present application claims priority to and incorporates by reference the entire contents of Japanese Patent Application No. 2013-032365 filed in Japan on Feb. 21, 2013.
1. Field of the Invention
The present invention relates to an image processing apparatus, an image processing system, and a non-transitory computer-readable medium.
2. Description of the Related Art
Conventionally, when a plurality of seated people are photographed by a camera or the like, the resulting image may show the people at varying distances from the camera, or show them at greatly different sizes depending on their locations in the camera's field of view. In such a case, the people are photographed at unequal sizes: for example, the face of one person is clear while the face of another is indistinguishable. To alleviate this problem, the published Japanese translation of PCT application No. 2008-536238 discloses a technique of enlarging faces of people by continuously enlarging/reducing an image.
However, the conventional technique of enlarging/reducing an image is disadvantageous in that it cannot adapt to different arrangements of people, because it enlarges/reduces images using a fixed method. A technique of enlarging/reducing an image according to the locations where people are seated could be employed instead; however, with simple enlargement/reduction, the image portions of the people and the background portion are deformed discontinuously, resulting in a noticeably unnatural image.
Therefore, it is desirable to provide an image processing apparatus capable of appropriately enlarging an image of a plurality of people.
It is an object of the present invention to at least partially solve the problems in the conventional technology.
According to an aspect of the present invention, there is provided an image processing apparatus including: a people detecting unit that detects image portions of faces of a plurality of people in an image; and an image converting unit that enlarges a portion below a horizontal line segment of the image in a manner that locates the horizontal line segment at a top end of the enlarged image based on upper end positions of the detected faces, the horizontal line segment being on or above the detected faces and having a lateral width equal to or larger than a lateral width of the faces.
According to another aspect of the present invention, there is provided an image processing system including: a people detecting unit that detects image portions of faces of a plurality of people in an image; and an image converting unit that enlarges a portion below a horizontal line segment of the image in a manner that locates the horizontal line segment at a top end of the enlarged image based on upper end positions of the faces, the horizontal line segment being on or above the detected faces and having a lateral width equal to or larger than a lateral width of the faces.
According to still another aspect of the present invention, there is provided a non-transitory computer-readable medium storing program codes that cause, when executed by a computer, the computer to perform a method including: detecting image portions of faces of a plurality of people in an image; and enlarging a portion below a horizontal line segment of the image in a manner that locates the horizontal line segment at a top end of the enlarged image based on upper end positions of the detected faces, the horizontal line segment being on or above the detected faces and having a lateral width equal to or larger than a lateral width of the faces.
The above and other objects, features, advantages and technical and industrial significance of this invention will be better understood by reading the following detailed description of presently preferred embodiments of the invention, when considered in connection with the accompanying drawings.
An exemplary embodiment of the present invention is described in detail below with reference to the accompanying drawings.
The top-boundary defining unit 120 determines top coordinates based on the upper end positions of the pixels that represent the extracted faces, generates lateral (i.e., horizontal) line segments extending from the top coordinates, and stores the coordinates of the pixels of the lateral line segments. The length of each horizontal line segment is set to a preset multiple (e.g., between 1.1 and 1.3) of the lateral width of the face detected by the people detecting unit 110. The multiplier is chosen so that the resulting length does not exceed the overall lateral width of the body.
Let the top pixels of the pixels that represent the face of one person be denoted by P(xn, yn), P(xn+1, yn+1), . . . , P(xn+m, yn+m). The coordinates of the uppermost pixel of the face may be used as the top position; alternatively, the mean coordinates of the pixels P(xn, yn), P(xn+1, yn+1), . . . , P(xn+m, yn+m) may be used; further alternatively, the coordinates of a position a predetermined buffer distance above the uppermost coordinates may be used. A horizontal line segment extending from the top coordinates obtained in this way and having the lateral width described above is generated for each face. The horizontal line segments are not actually drawn in the image data but are generated in memory as data that is not to be displayed. In the example described below, it is assumed that the top coordinates are the coordinates of the position the predetermined buffer distance above the uppermost coordinates.
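The alternatives above can be sketched as follows. The helper names, the pixel-list representation of a face, and the default buffer value are hypothetical illustrations, not the apparatus's actual implementation; a top-left origin is assumed, so the y axis points down and the uppermost pixel has the smallest y value.

```python
def top_coordinate(face_pixels, buffer=5, mode="buffered"):
    """Return an (x, y) top coordinate for a face given as (x, y) pixels.

    mode "uppermost": coordinates of the uppermost face pixel.
    mode "mean":      mean coordinates of the uppermost row of pixels.
    mode "buffered":  uppermost pixel shifted up by a fixed buffer distance.
    """
    y_min = min(y for _, y in face_pixels)
    top_row = [(x, y) for x, y in face_pixels if y == y_min]
    if mode == "uppermost":
        return top_row[0]
    if mode == "mean":
        xs = [x for x, _ in top_row]
        return (sum(xs) / len(xs), y_min)
    # "buffered": a preset distance above the uppermost pixel
    return (top_row[0][0], y_min - buffer)


def horizontal_segment(face_left, face_right, top_y, multiplier=1.2):
    """Segment centered on the face at height top_y, whose length is the
    preset multiple (e.g., 1.1 to 1.3) of the face's lateral width."""
    width = face_right - face_left
    extra = (multiplier - 1.0) * width / 2.0
    return (face_left - extra, face_right + extra, top_y)
```

For instance, a face spanning x = 10 to 20 with a multiplier of 1.2 yields a segment from x = 9.0 to 21.0, slightly wider than the face itself.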
Subsequently, as illustrated in
As illustrated in
Image portions below the horizontal line segments 20a to 20f retain the same visual impression before and after the conversion because the same enlargement ratio is applied to every pixel within them. By contrast, in the image portions below the curves 30a to 30c, the distance between the top boundary line 40 and the top end of the portion can vary with the x coordinate. In that case, the enlargement ratios also vary: a large ratio is applied to one pixel while a small ratio is applied to another. Such variation causes the converted image to differ geometrically from the original. However, the image portions below the curves 30a to 30c do not contain a face, where a visual change would be easily noticeable. Accordingly, a user is less likely to find these portions unnatural. For example, a portion near a shoulder of a person second from the right in
The conversion is concretely described below with reference to
Referring to FIGS. 4A(a) to 4B(b), y coordinates in the enlarged image are calculated from Equation (1) below.
In Equation (1), a y coordinate in the enlarged image is calculated from a ratio between h, the height (vertical resolution) of the image, and f(X), the Y coordinate of the top boundary line determined by the X coordinate.
A method for calculating x coordinates in the enlarged image is described below with reference to FIGS. 4B(a) and 4B(b). The image enlargement ratios obtained during the calculation of the y coordinates are reused in the calculation of the x coordinates. More specifically, an x coordinate is calculated based on a ratio between h, the height of the image, and f(Y). The x coordinates calculated in this way are expressed by Equation (2) below.
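Since Equations (1) and (2) themselves are not reproduced in this text, the following is only one plausible reading of the mapping, offered as a sketch: the portion below the top boundary line y = f(x) is stretched so that the boundary lands on the top edge of the enlarged image, and the local vertical enlargement ratio is reused for the x direction. A top-left origin (y growing downward) and an image height h are assumed.

```python
def enlarge_coords(x, y, f, h):
    """Map a pixel at (x, y), with y >= f(x), into the enlarged image.

    Assumed form: the band between the boundary y = f(x) and the bottom
    edge y = h is stretched to the full height h, and the same local
    ratio is applied horizontally (an assumption, not the patent's
    stated equation).
    """
    ratio = h / (h - f(x))       # local enlargement ratio at column x
    y_new = (y - f(x)) * ratio   # a boundary pixel maps to y_new = 0 (top edge)
    x_new = x * ratio            # same ratio reused in the x direction
    return x_new, y_new
```

With a flat boundary f(x) = 10 and h = 110, the ratio is 1.1 everywhere, so the conversion reduces to a uniform enlargement of the lower band.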
A list of coordinates assigned to pixels (hereinafter, “pre-enlargement pixels”) in the not-yet-enlarged image and pixels (hereinafter, “post-enlargement pixels”) in the enlarged image is obtained in this way. The pre-enlargement coordinates and the post-enlargement coordinates related to each other are stored in a table form as illustrated in
The function T2(x, y) allows y coordinates to be obtained by reverse calculation using ratios between the function f(X), which describes the top boundary line 40, and the height h. The values of X can be calculated from the function T1(x, y) described above.
The output control unit 140 performs output control for causing the display apparatus 300 to display the image having been enlarged as described above. Examples of the output control include resizing the image according to a resolution of the display apparatus 300 and adjusting a position where the image is to be displayed on a screen. For instance, when an image size of the enlarged image is 250 pixels in width and 100 pixels in height and the resolution of the display apparatus 300 is 960×540 pixels, the image is enlarged with reference to the width of the image. As a result, empty space is created above and below the image.
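The resize step described above can be sketched as follows (a hypothetical helper, not the output control unit's actual code): the image is scaled with reference to the display width, and the leftover vertical space becomes the empty bands above and below the image.

```python
def fit_to_display(img_w, img_h, disp_w, disp_h):
    """Scale an image to the display width and report the letterboxing.

    Returns (output width, output height, empty margin above and below).
    """
    scale = disp_w / img_w
    out_w, out_h = disp_w, round(img_h * scale)
    margin = (disp_h - out_h) // 2  # empty space above and below the image
    return out_w, out_h, margin
```

With the numbers from the text, a 250x100 image on a 960x540 display is scaled by 960/250 = 3.84 to 960x384 pixels, leaving 78-pixel empty bands above and below.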
Steps to be carried out by the image processing apparatus 100 described above are described below with reference to
The image processing apparatus 100 described above applies the same enlargement ratio to every pixel in the image portions corresponding to faces. Accordingly, the faces are not deformed even when the image is enlarged, and a plurality of people can be enlarged appropriately without the enlarged image giving an unnatural impression to a user. Furthermore, the horizontal line segments 20a to 20d are connected by the smooth curves 30a to 30c in image portions where no face is present. Accordingly, sharp differences in enlargement ratio between neighboring pixels are smoothed out, and the unnatural impression given by image deformation can be minimized.
Each of the horizontal line segments 20a to 20d is slightly longer than the actual lateral width of the corresponding face. Accordingly, the same enlargement ratio is applied to the image portion of the body adjacent to the face and to the image portion of the face itself. As a result, the unnatural impression that would otherwise be caused by deformation of the body portion adjacent to the face can be prevented.
Modifications
Modifications of the embodiment are described below. In a first modification, a top boundary line is not generated each time an image is obtained. Rather, a top boundary line is generated based on the positions of faces of people detected during a predetermined period, and images are converted based on the thus-generated top boundary line for a preset period of time. The first modification is described below with reference to
The positions of the faces are generated in this manner. Subsequently, the position of the top boundary line is determined based on the face positions, and enlargement is performed. In the first modification, the face positions at a certain point in time are determined from the positions of the faces in the n frames preceding the frame at that point in time. Therefore, even when a face makes a large motion, the motion is averaged. Accordingly, problems such as an abrupt change in the enlargement ratio applied to an image portion of a face and an undesirable positional shift of a face can be lessened.
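The n-frame averaging described above can be sketched as follows (a hypothetical helper; the class name and interface are assumptions): the face position used at a given time is the mean of the positions detected in the preceding n frames, so a sudden head movement changes the top boundary line only gradually.

```python
from collections import deque


class SmoothedFacePosition:
    """Average a face's detected position over the last n frames."""

    def __init__(self, n):
        self.history = deque(maxlen=n)  # positions from the last n frames

    def update(self, x, y):
        """Record the position detected in the current frame and return
        the averaged position to use for the top boundary line."""
        self.history.append((x, y))
        xs = [p[0] for p in self.history]
        ys = [p[1] for p in self.history]
        return sum(xs) / len(xs), sum(ys) / len(ys)
```

Because the deque holds at most n entries, an abrupt jump in a detected face position shifts the averaged position by only 1/n of the jump per frame until the old positions age out.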
A second modification is described with reference to
Using the top boundary line calculated as described above allows, even when a face makes a large motion, lessening an abrupt change in the enlargement ratio.
As a third modification, enlargement may be performed after trimming, from the image, a right and/or left area containing no person. The third modification is effective in situations where lateral enlargement is limited by, for instance, the resolution of a display. When the enlargement ratio calculated from the top boundary line is larger than the lateral enlargement ratio permitted by the resolution, the image may be enlarged insufficiently, because the actual enlargement is performed at the permitted lateral ratio. Trimming areas where no person is photographed, however, allows enlargement at a larger ratio. The third modification is described below with reference to
As illustrated in
In the embodiment described above, the image processing apparatus 100 processes an image captured by the image capturing unit 200 connected to it. Alternatively, a configuration may be employed in which image processing apparatuses are connected to each other over a network, and one of the image processing apparatuses processes an image transmitted from the other. In this configuration, the image processing apparatuses may be connected directly to each other or via a server. When they are connected via a server, the processing may be performed by the server, which has the same function, in lieu of the image processing apparatus.
The image processing apparatus according to the embodiment has a hardware configuration implemented by utilizing a typical computer and includes a control device such as a central processing unit (CPU), a storage device such as a read only memory (ROM) and a random access memory (RAM), an external storage device such as a hard disk drive (HDD) and/or a compact disc (CD) drive, a display device, and an input device such as a keyboard and/or a mouse.
Program instructions to be executed by the image processing apparatus according to the embodiment are provided as being recorded in a non-transitory tangible computer-readable storage medium as a file in an installable format or an executable format. The non-transitory tangible computer-readable storage medium can be a compact disk read only memory (CD-ROM), a flexible disk (FD), a compact disk recordable (CD-R), a digital versatile disk (DVD), or the like.
The program instructions to be executed by the image processing apparatus according to the embodiment may be configured to be stored in a computer connected to a network such as the Internet and provided by downloading over the network. The program instructions to be executed by the image processing apparatus according to the embodiment may be configured to be provided or distributed over a network such as the Internet. The program instructions to be executed by the image processing apparatus according to the embodiment may be configured to be provided as being stored in a ROM or the like in advance.
The program instructions to be executed by the image processing apparatus according to the embodiment have a module structure including the units described above. From the viewpoint of actual hardware, the CPU (processor) reads out the program instructions from the storage medium and executes the program instructions to load the units into the main storage device, thereby generating the units on the main storage device.
An aspect of the present invention is advantageous in that it enables enlarging a plurality of people appropriately.
Although the invention has been described with respect to specific embodiments for a complete and clear disclosure, the appended claims are not to be thus limited but are to be construed as embodying all modifications and alternative constructions that may occur to one skilled in the art that fairly fall within the basic teaching herein set forth.
| Number | Date | Country | Kind |
| --- | --- | --- | --- |
| 2013-032365 | Feb 2013 | JP | national |
| Number | Date | Country |
| --- | --- | --- |
| 2005-267611 | Sep 2005 | JP |
| 2006-318151 | Nov 2006 | JP |
| 2008-536238 | Sep 2008 | JP |
| 2012-54907 | Mar 2012 | JP |
| 2012-80518 | Apr 2012 | JP |
| Number | Date | Country |
| --- | --- | --- |
| 20140233851 A1 | Aug 2014 | US |