This application is based on and claims priority under 35 USC 119 from Japanese Patent Application No. 2017-131914 filed Jul. 5, 2017.
The present invention relates to an image composing device.
According to an aspect of the invention, there is provided an image composing device including first, second, and third obtaining units, a generator, and a composing unit. The first obtaining unit obtains label data indicating an image of a label to be attached to a product item. The second obtaining unit obtains configuration data indicating three-dimensional configuration of the product item. The third obtaining unit obtains background data indicating an image to be used as a background. The generator generates projection information for projecting the three-dimensional configuration on the background. The composing unit combines the image of the label with the image of the background by using the projection information.
An exemplary embodiment of the present invention will be described in detail based on the following figures, wherein:
To promote the sales of a certain product item, printed materials, such as posters, stickers, and pamphlets, are created by using photos of this item and illustrations simulating this item. In addition to printed materials, display materials, such as online advertisements and television advertisements, may be produced by using images representing this item. These printed materials and display materials are called promotional materials. The image composing device 1 is a device for producing content used for promotional materials by combining images. Hereinafter, such content will be called image content.
The controller 11 includes a central processing unit (CPU), a read only memory (ROM), and a random access memory (RAM). As a result of reading and executing computer programs (hereinafter simply called programs) stored in the ROM and the storage unit 12, the CPU controls the individual elements of the image composing device 1.
The communication unit 13 is a communication circuit which connects to a communication line, such as the Internet, via a wireless medium or a wired medium. By using the communication unit 13, the image composing device 1 sends and receives information to and from various devices connected to the communication line. The provision of the communication unit 13 in the image composing device 1 may be omitted.
The operation unit 15 includes operators, such as operation buttons, for providing various instructions. The operation unit 15 receives an instruction from a user and supplies a signal to the controller 11 in accordance with the content of the instruction. The operation unit 15 may include a touchscreen which detects the presence of an operator, such as a user's finger or a stylus. If the controller 11 receives various user instructions from external terminals via the communication unit 13, the provision of the operation unit 15 in the image composing device 1 may be omitted.
The display unit 14 includes a display screen, such as a liquid crystal display, and displays images under the control of the controller 11. A transparent touchscreen of the operation unit 15 may be superposed on the display screen of the display unit 14.
The storage unit 12 is a large-capacity storage, such as a solid state drive or a hard disk drive, and stores various programs read by the CPU of the controller 11.
The storage unit 12 stores a label database (DB) 121, a configuration DB 122, and a background DB 123. The label DB 121 is a database that stores label data indicating images of labels to be attached to product items. “Attaching a label to a product item” refers not only to a case where a sheet of paper, for example, on which an image of a label is formed is bonded to a product item by using an adhesive, but also to a case where, if a product item is a sheet-like item, an image of a label is formed directly on the surface of the item. A label may not necessarily be directly attached to a product item. Instead, a label may be tied to a product item with a string or a ribbon or be attached to an accessory of a product item. A label may also be projected on a product item as an image, as in projection mapping. One piece of label data stored in the label DB 121 is not restricted to an image of a label indicated in one continuous region, but may be an image of a set of labels separately indicated in plural regions.
The configuration DB 122 is a database that stores configuration data indicating the three-dimensional configurations of product items. The background DB 123 is a database that stores background data indicating images to be used as backgrounds.
In the example in
The product item G shown in
The configuration data indicates the three-dimensional configuration of the product item G. Any data structure that can reproduce the three-dimensional configuration of the external surface of the product item G may be used for the configuration data. For example, the constructive solid geometry (CSG) technique or the boundary representation (B-rep) technique may be used.
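A boundary-representation structure of the kind mentioned above can be illustrated with a minimal sketch. The class and field names below (`Solid`, `vertices`, `faces`) are illustrative assumptions, not part of the embodiment; a real B-rep would also carry edges and topology.

```python
# Minimal boundary-representation (B-rep) sketch of a product item's
# external surface: vertices plus triangular faces indexing into them.

class Solid:
    """Stores the external surface as vertices and triangular faces."""
    def __init__(self):
        self.vertices = []   # list of (x, y, z) tuples
        self.faces = []      # list of (i, j, k) vertex-index triples

    def add_vertex(self, x, y, z):
        self.vertices.append((x, y, z))
        return len(self.vertices) - 1

    def add_face(self, i, j, k):
        self.faces.append((i, j, k))

# A single triangular facet as the simplest possible surface patch.
solid = Solid()
a = solid.add_vertex(0.0, 0.0, 0.0)
b = solid.add_vertex(1.0, 0.0, 0.0)
c = solid.add_vertex(0.0, 1.0, 0.0)
solid.add_face(a, b, c)
print(len(solid.vertices))  # 3
```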
The configuration data also indicates a region R2 where a label will be attached, on the surface of the product item G. The region R2 shown in
The three-dimensional configuration of the product item is not restricted to a bottle shape. Examples of the other three-dimensional configurations are rectangular-parallelepiped configurations representing a refrigerator, a makeup box, etc., bar-like configurations representing a fountain pen, a ballpoint pen, etc., and flat-shaped configurations representing a smartphone, a tablet terminal, etc.
The product item G is not restricted to an item which occupies a single continuous space, but may be a set of plural items that are combined so that the relative positions of the plural items can be changed, such as a watch, a bicycle, or a tea set. In this case, the three-dimensional configuration of a set of items G is the configurations of the individual items or the configuration of the combined items. The product item G may be an item including a sheet-like portion, such as a string, a ribbon, or a strap.
As the background shown in
As the background shown in
The first obtaining unit 111 obtains label data indicating the image of a label to be attached to the product item G. The first obtaining unit 111 shown in
The second obtaining unit 112 obtains configuration data indicating the three-dimensional configuration of the product item G. The second obtaining unit 112 shown in
The third obtaining unit 113 obtains background data indicating an image to be used as a background. The third obtaining unit 113 shown in
The generator 114 generates projection information for projecting a three-dimensional configuration on a background. For example, if the second obtaining unit 112 obtains configuration data indicating the three-dimensional configuration shown in
Various approaches to extracting the associated region R11 from the background data are possible. For example, if the color of the product item G is described in the configuration data, the generator 114 may extract, as the associated region R11, a region represented by the same color as that of the product item G from the background indicated by the background data.
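The color-matching approach described above can be sketched as follows. This is a hedged illustration under assumptions of my own: the background is a nested list of RGB tuples, and the tolerance `tol` and the function name are hypothetical rather than taken from the embodiment.

```python
# Illustrative sketch: extract the associated region from a background image
# by matching the product item's color within a per-channel tolerance.

def extract_associated_region(image, item_color, tol=16):
    """Return the set of (row, col) pixels whose color matches item_color."""
    region = set()
    for r, row in enumerate(image):
        for c, pixel in enumerate(row):
            if all(abs(p - q) <= tol for p, q in zip(pixel, item_color)):
                region.add((r, c))
    return region

background = [
    [(250, 250, 250), (120, 80, 40), (120, 82, 44)],
    [(250, 250, 250), (118, 79, 41), (250, 250, 250)],
]
region = extract_associated_region(background, (120, 80, 40))
print(sorted(region))  # [(0, 1), (0, 2), (1, 1)]
```

A real implementation would likely match in a perceptual color space and fill holes in the resulting mask, but the principle is the same.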
The associated region R11 may be described in the background data in advance. In this case, the generator 114 extracts the associated region R11 from the background data.
After extracting the associated region R11, the generator 114 changes the orientation of the product item G indicated by the configuration data with respect to a plane so that the configuration of the product item G projected on a plane will match the configuration of the boundary of the associated region R11. In this manner, the generator 114 generates projection information for projecting the product item G on the background. The projection information indicates the orientation and the viewpoint, for example, of the product item G with respect to a plane corresponding to the background.
As models used for projecting the configuration of the product item G on the background, perspective projection models are used. Alternatively, weak perspective projection models, pseudo-perspective projection models, or parallel perspective projection models may be used, instead. The generator 114 sets a condition for each projection model, and when the configuration data or the associated region satisfies the condition applied to a certain projection model, the generator 114 may use this projection model.
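The difference between the projection models named above can be shown with a small sketch. The focal length and coordinates are illustrative assumptions: a perspective model divides by each point's own depth, while a weak-perspective model divides by a single average depth for the whole item.

```python
# Pinhole (perspective) camera: (x, y, z) maps to (f*x/z, f*y/z).
# Weak perspective: every point uses one shared average depth z_avg.

def perspective(point, f=1.0):
    x, y, z = point
    return (f * x / z, f * y / z)

def weak_perspective(point, z_avg, f=1.0):
    x, y, _ = point
    return (f * x / z_avg, f * y / z_avg)

p = (2.0, 1.0, 4.0)
print(perspective(p))            # (0.5, 0.25)
print(weak_perspective(p, 5.0))  # (0.4, 0.2)
```

Weak perspective is a reasonable simplification when the product item's depth extent is small relative to its distance from the viewpoint, which is one natural condition the generator 114 could test before selecting a model.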
The generator 114 may use the orientation and the viewpoint of the product item G with respect to the background, which are specified by a user using the operation unit 15. In this case, the generator 114 generates projection information in accordance with the orientation and the viewpoint specified by the user. The user does not necessarily have to specify precise values for the orientation and the viewpoint; the generator 114 may estimate the orientation and viewpoint of the product item G by using the numeric values input by the user as initial values.
The specifying unit 116 specifies a certain region within a label as the above-described focusing region. The specifying unit 116 shown in
The first determining unit 117 determines a position of a portion of the product item G to which a label is attached, based on the focusing region and the projection information. For example, the region R2 of the product item G shown in
The composing unit 115 combines the image of a label with the image of a background by using the projection information generated by the generator 114. More specifically, the composing unit 115 shown in
In step S103, the controller 11 obtains configuration data from the configuration DB 122. In step S104, by referring to the associated region extracted in step S102, the controller 11 generates projection information for projecting the three-dimensional configuration indicated by the configuration data on the background.
In step S105, the controller 11 obtains label data from the label DB 121. Then, in step S106, the controller 11 specifies a focusing region within the label indicated by the label data. In step S107, based on the projection information, the controller 11 specifies a position of a portion of the product item G to which the label will be attached in accordance with a predetermined rule.
In step S108, the controller 11 modifies the image of the label attached at the specified position so that it has the configuration that would be viewed when the three-dimensional configuration of the product item G is projected on the background, and then combines the modified image of the label with the background.
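The final compositing in step S108 can be sketched as a masked paste: pixels of the already-warped label overwrite the background only where the label's mask is set. The function name, the single-channel pixels, and the `offset` parameter are assumptions made for brevity, not details of the embodiment.

```python
# Masked compositing sketch: paste label pixels into the background
# at a given (top, left) offset wherever the mask is truthy.

def compose(background, label, mask, offset):
    """Return a copy of background with label pasted through mask."""
    out = [row[:] for row in background]  # copy; keep the original intact
    top, left = offset
    for r, row in enumerate(label):
        for c, pixel in enumerate(row):
            if mask[r][c]:
                out[top + r][left + c] = pixel
    return out

bg = [[0, 0, 0], [0, 0, 0], [0, 0, 0]]
label = [[9, 9], [9, 9]]
mask = [[1, 0], [1, 1]]
result = compose(bg, label, mask, (1, 1))
print(result)  # [[0, 0, 0], [0, 9, 0], [0, 9, 9]]
```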
As a result of executing the above-described operation, image content is produced in which the label to be attached to the product item G is combined with backgrounds showing the product item G projected in different compositions.
The image content shown in
The image content shown in
The image content shown in
Hitherto, in the production of image content representing a commercial product item for a variety of media, if the taste, style, and composition of the image content are changed according to the type of media, different photos of the product item have to be prepared in accordance with the type of media.
In contrast, in the above-described image composing device 1, by using label data indicating one label or one set of labels and configuration data indicating the three-dimensional configuration of one product item or one set of product items, the label or the set of labels is combined with backgrounds representing the product item or the set of product items in different compositions, tastes, and styles. The production process is thus simplified without the need to prepare various photos of a product item taken in different compositions, tastes, and styles according to the type of media.
The above-described exemplary embodiment is only an example and may be modified in the following manner. The following modified examples may also be combined as necessary.
In the above-described exemplary embodiment, the image composing device 1 includes the display unit 14. However, the provision of the display unit 14 may be omitted. The controller 11 may store composed images in the storage unit 12 or may send composed images to an external device by using the communication unit 13.
The image composing device 1 may include an image forming unit which forms an image composed by the controller 11 on a medium, such as a sheet. In this case, the image forming unit may be an electrophotographic image forming unit.
In the above-described exemplary embodiment, the specifying unit 116 specifies a certain region within a label as the above-described focusing region, and the first determining unit 117 determines a position of a portion of the product item G to which the label will be attached, based on the focusing region and the projection information. However, the position of a portion of the product item G to which a label will be attached may be determined in advance. In this case, the controller 11 may not necessarily function as the specifying unit 116 and the first determining unit 117.
In the above-described exemplary embodiment, the composing unit 115 combines the image of a label with the image of a background by using the projection information generated by the generator 114. The composing unit 115 may also provide shading to the image of a label in accordance with the orientation of the product item G in the background. Shading refers to the shades and shadows created on the product item G when it is illuminated. Shading may be provided to the image of a label by adjusting the brightness tone. Providing shading to a label on the surface of the product item G gives a stronger depth perception to a viewer (customer) of the image content.
In the above-described third modified example, various approaches to providing shading to the image of a label are possible. For example, the composing unit 115 may provide shading to the image of a label by using a tone level of a portion of the image of a background on which the image of a label will be projected. That is, the composing unit 115 may combine the image of a label with the image of a background by adding the tone value of the associated region of the image of the background and the tone value of the image of the label which has been modified to be projected on the background.
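The tone-addition idea described above can be sketched as a per-pixel sum with clamping. The 0-255 tone range, the clamping behavior, and the use of negative values to darken are assumptions made for illustration.

```python
# Shading sketch: add the background's tone in the associated region to the
# warped label's tone, clamping the result to the valid [0, 255] range.

def shade(label_tones, background_tones):
    """Combine per-pixel tone values, clamping to [0, 255]."""
    return [
        [min(255, max(0, l + b)) for l, b in zip(lrow, brow)]
        for lrow, brow in zip(label_tones, background_tones)
    ]

label = [[200, 100], [50, 240]]
shadow = [[-60, -60], [0, 30]]   # negative values darken, positive lighten
print(shade(label, shadow))  # [[140, 40], [50, 255]]
```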
The composing unit 115 may determine the position of a light source which applies light to the product item G in a three-dimensional space, and then calculate the level of shading to be provided to the image of a label by using the determined position of the light source. In this case, as shown in
According to a predetermined rule, the second determining unit 118 may determine the position of the light source, based on the position of the focusing region within a label, the position of the associated region within image content, and the shape of the shading of a product item G provided on the image of a background. The second determining unit 118 may use the position of a light source specified by a user using the operation unit 15.
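One conventional way to turn a determined light-source position into a shading level, offered here only as a hedged sketch, is Lambert's cosine law: brightness scales with the cosine of the angle between the surface normal and the direction toward the light. The embodiment does not specify this model; the function name and vectors below are hypothetical.

```python
# Lambertian shading sketch: brightness factor in [0, 1] for a surface patch,
# given its normal and the direction toward the determined light source.
import math

def shading_level(normal, to_light):
    """Return max(0, cos(angle between normal and to_light))."""
    dot = sum(n * l for n, l in zip(normal, to_light))
    norm = (math.sqrt(sum(n * n for n in normal))
            * math.sqrt(sum(l * l for l in to_light)))
    return max(0.0, dot / norm)

# Light directly along the normal of a horizontal patch: full brightness.
print(shading_level((0, 0, 1), (0, 0, 1)))  # 1.0
# Light at 60 degrees from the normal: half brightness.
print(round(shading_level((0, 0, 1), (math.sqrt(3) / 2, 0, 0.5)), 3))  # 0.5
```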
The program executed by the controller 11 of the image composing device 1 may be provided as a result of being recorded on a computer readable recording medium, such as a magnetic recording medium (a magnetic tape or a magnetic disk, for example), an optical recording medium (an optical disc, for example), a magneto-optical recording medium, or a semiconductor memory. This program may also be downloaded via a communication line, such as the Internet. Instead of a CPU, various other devices may be used for the controller 11. For example, a dedicated processor may be used.
In the above-described exemplary embodiment, a photo of the product item G is used as a background. Instead of photos, various other media, such as illustrations, pictures, and ink wash paintings simulating the product item G, may be used. In this case, too, the generator 114 extracts an associated region to which a label will be attached from the background data.
The foregoing description of the exemplary embodiment of the present invention has been provided for the purposes of illustration and description. It is not intended to be exhaustive or to limit the invention to the precise forms disclosed. Obviously, many modifications and variations will be apparent to practitioners skilled in the art. The embodiment was chosen and described in order to best explain the principles of the invention and its practical applications, thereby enabling others skilled in the art to understand the invention for various embodiments and with the various modifications as are suited to the particular use contemplated. It is intended that the scope of the invention be defined by the following claims and their equivalents.
Number | Date | Country | Kind |
---|---|---|---|
2017-131914 | Jul 2017 | JP | national |