The present invention relates to the display of product images on an electronic display and, more particularly, to the display of images of products having areas of differing textures that produce visually distinguishable light reflections.
Printing services Web sites allowing a user to access the site from the user's home or work and design a personalized product are well known and widely used by many consumers, professionals, and businesses. For example, Vistaprint markets a variety of printed products, such as business cards, postcards, brochures, holiday cards, announcements, and invitations, online through the site www.vistaprint.com. Printing services web sites often allow the user to review thumbnail images of a number of customizable design templates prepared by the site operator having a variety of different styles, formats, backgrounds, color schemes, fonts and graphics from which the user may choose. When the user has selected a specific product design template to customize, the sites typically provide online tools allowing the user to incorporate the user's personal information and content into the selected template to create a custom design. When the design is completed to the user's satisfaction, the user can place an order through the web site for production and delivery of a desired quantity of a product incorporating the corresponding customized design.
Printing services sites strive to have the image of the product that is displayed to the customer on the customer's computer or other electronic display be as accurate a representation as possible of the physical product that the user will later receive. However, simulating on the user's electronic display the visual effect of areas of different or non-standard texture, which are especially distinguishable from the main printed surface under different angles of lighting, has historically posed a problem.
Textured premium finishes that elicit such differing lighting effects, including foil, gloss, raised print, embossment, vinyl, leather, cloth, and other textured finishes applied in the creation of a finished product (such as a printed document), change in appearance depending on how light reflects off the finish surface. The appearance changes as the product itself, the illuminating light source, or both move.
The purpose of displaying a preview image of the product is to show the customer what the finished product will look like when manufactured. However, it has proven difficult to achieve a natural and accurate simulation of light over the surface of a premium finish to depict how the final product will appear when manufactured. In particular, premium finishes are very difficult to visualize in a static context because their effect is dependent on how light bounces off the finish surface. If the delivered final product does not appear as the user imagined it would, customer dissatisfaction can result.
U.S. Pat. No. 7,644,355, owned by the same assignee as the present application and incorporated by reference herein in its entirety for all that it teaches, is directed to simulating the visual effect of light on shiny or reflective portions of a product surface, such as areas covered by foil. In the simulation image, foiled areas in a printed product are represented to a user viewing a product image by a looped animation comprising a sequence of images generated by applying a gradient function to an image of the areas corresponding to the reflective portions of the product. To generate the individual images for use in the animation, the gradient function is applied at different offset positions relative to the product image. U.S. Pat. No. 7,644,355 is useful in providing clear visual cues to assist the customer in recognizing the foil areas in a displayed product image and distinguishing those areas from the non-foil areas. Nonetheless, natural effects such as light scattering are not simulated, and the areas representing the foil do not appear exactly as they would when implemented as a physical product.
U.S. patent application Ser. No. 12/911,521, filed Oct. 25, 2010, and published as US20120101790 on Apr. 26, 2012, hereby incorporated by reference in its entirety, discloses using photographic images to simulate the appearance of embroidery stitches in a rendered depiction of an embroidered design. U.S. patent application Ser. No. 12/911,521 does not, however, address application to a premium finish of a printed product, nor simulation of the movement of light across the simulated image.
To minimize the risk of customer confusion and disappointment, it is highly desirable that the customer be shown an image of the product that is as accurate and natural a depiction of the physical product as possible. There is, therefore, a need for systems and methods for preparing product images for displaying on a user's computer display in a manner that indicates the location or locations in the product design of textured surfaces by simulating the effects of light on those materials and clearly distinguishes those regions from other regions of the product.
Customer previews are inserted into scene rendered animations to depict the customer's product at different lighting angles. In an embodiment, the scene rendered animations use real images (i.e., photographs) of real premium finishes (such as foil, spot gloss, vinyl, etc.) that will be used in the premium finished areas of the product, which serves to facilitate an accurate rendering and depict a natural appearance of the product as light moves across the product in an animated sequence. Because the appearance of the premium finish is both natural and accurate, the simulated depiction of the product gives the customer a more realistic expectation of how the final product will look, thereby improving customer satisfaction by matching customer expectations with the physical realities of the delivered product.
In an embodiment, a method for simulating the movement of light on a design to be applied to a product includes receiving a design image containing one or more primary regions, which are to be finished using one or more primary finishes characterized by first light reflection characteristics, and one or more secondary regions where a secondary finish is to be applied. A mask image indicating one or more regions of the product to be finished with the secondary finish is received, and a scene containing an image placeholder is identified. The identified scene comprises a scene description identifying at least one scene image and a position of an image placeholder for placement of an injectable image, the scene description comprising instructions for generating a composite scene image having the injectable image embedded in the scene image. A first solid fill secondary finish photographic image taken at a first lighting angle is selected, and the selected solid fill secondary finish photographic image is composited, based on the received mask image, with the received design image to generate a composite image. The composite image is injected into the selected scene image according to the instructions of the scene description to generate an individual animation frame. If additional individual animation frames are required, a next secondary finish photographic image taken from a next lighting angle is selected, and additional individual frames are generated by repeating the compositing step and injecting step until a sufficient number of animation frames has been created. The individual animation frames are sent to a computer system, preferably in aggregated format, for sequential display on an electronic display.
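By way of non-limiting illustration only, the selecting, compositing, and injecting loop summarized above might be sketched in TypeScript as follows, where every function, type, and parameter name is a hypothetical placeholder rather than part of the claimed method:

    // Hypothetical sketch of the frame-generation loop described above; all
    // names and types are illustrative assumptions, not part of the method.
    interface Image { width: number; height: number; pixels: Uint8ClampedArray; }

    function generateAnimationFrames(
      design: Image,                        // received design image (primary regions)
      mask: Image,                          // mask marking the secondary-finish regions
      finishPhotos: Image[],                // solid fill finish photos, one per lighting angle
      composite: (d: Image, photo: Image, m: Image) => Image, // mask-based compositing step
      injectIntoScene: (img: Image) => Image,                 // placeholder injection step
      framesNeeded: number
    ): Image[] {
      const frames: Image[] = [];
      for (let i = 0; i < framesNeeded; i++) {
        const photo = finishPhotos[i % finishPhotos.length];  // next lighting angle
        frames.push(injectIntoScene(composite(design, photo, mask)));
      }
      return frames; // sent, preferably in aggregated format, for sequential display
    }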
It will be understood that, while the discussion herein describes an embodiment of the invention in the field of preparation of customized printed materials having premium finish regions such as foil, gloss, raised print, etc., the invention is not so limited and could be readily employed in any embodiment involving the presentation of an electronic image of any type of product wherein it is desired to indicate a texture that reflects light in a manner visually distinguishable from the base product texture.
Premium finishes are very difficult to visualize in a static context because their effect is dependent on how light bounces off the surface. In the present invention, scene rendered animations are implemented to depict the most accurate and natural-looking preview image of a user's design that contains one or more premium finish regions. One technique involves compositing real imagery of foil taken from different angles to capture a range of light reflections. The use of real photographic images of actual foil (or other premium finish of interest) allows the capture of the subtle grain characteristics of the foil.
As is well known and understood in the art, color images displayed on computer monitors are composed of many individual pixels, with the displayed color of each individual pixel being the result of the combination of the three colors red, green and blue (RGB). Transparency is achieved through a separate channel (called the “alpha channel”) which holds a value ranging from 0 to 100%, with 0 defining the pixel to be fully transparent to layers below it, and 100% defining the pixel to be fully opaque (so that pixels on layers below are not visible). In a typical display system providing twenty-four bits of color information for each pixel (eight bits per color component), red, green and blue are each assigned an intensity value in the range from 0, representing no color, to 255, representing full intensity of that color. By varying these three intensity values, a large number of different colors can be represented. The transparency channel associated with each pixel is likewise provided with eight bits, where alpha channel values ranging from 0 (0%) to 255 (100%) result in a proportional blending of the pixel of the image with the visible pixels in the layers below it.
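By way of illustration, the proportional blending described above corresponds to the conventional “over” compositing operation; a minimal TypeScript sketch, with illustrative names, follows:

    // Conventional "over" blending of one RGBA pixel onto another, as a
    // minimal illustrative sketch; channel values are 0-255, and alpha 0
    // means fully transparent while 255 means fully opaque.
    type RGBA = { r: number; g: number; b: number; a: number };

    function blendOver(fg: RGBA, bg: RGBA): RGBA {
      const alpha = fg.a / 255; // normalize the foreground alpha to 0..1
      return {
        r: Math.round(fg.r * alpha + bg.r * (1 - alpha)),
        g: Math.round(fg.g * alpha + bg.g * (1 - alpha)),
        b: Math.round(fg.b * alpha + bg.b * (1 - alpha)),
        a: Math.max(fg.a, bg.a), // simplified; a full "over" also blends alpha
      };
    }

    // Example: a 50%-transparent red pixel over an opaque blue pixel yields
    // roughly half red, half blue:
    //   blendOver({ r: 255, g: 0, b: 0, a: 128 }, { r: 0, g: 0, b: 255, a: 255 })
    //   => { r: 128, g: 0, b: 127, a: 255 }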
Turning now to
Returning to
Returning again to
In step 104, a scene containing an image placeholder is identified. In an embodiment, the identified scene comprises a description identifying a .jpg scene image and a description of an image placeholder (i.e., the identification of the position of a to-be-inserted image) which is to be placed on a layer over the identified .jpg scene image.
In a preferred embodiment of a system implemented in accordance with the invention, the system includes a repository (e.g., a non-transitory computer readable memory) which contains a pool of different scene images into which the design can be inserted. In an embodiment, the pool of scene images can contain a number of images of an identical scene taken with different illumination source positions. Alternatively, the pool of scene images can contain a number of images of the same scene positioned at different angles.
Returning again to
In general, for each secondary finish offered, and for each set of allowed design dimensions offered (e.g., dimensions of business cards, greeting cards, brochures, etc.), a design of the corresponding dimensions having a solid fill of the respective secondary finish is created. The design specifying the solid fill secondary finish is physically produced, and photographs of the solid fill secondary finish design on the product, taken with the source light illuminating it from different angles, as indicated in step 113, are cataloged by source lighting angle and design dimensions, and stored. In general, different illumination angles can be generated either by moving the physical product on which the design is implemented relative to the source lighting, or by fixing the product in place and physically moving the source lighting. In an embodiment, the images of the solid fill design of the secondary finish are stored in a computer-accessible database. Preferably, a photograph is taken of the solid fill design for every 1° of relative movement between the physical design and the source light over a span of at least 35°.
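By way of illustration only, such a catalog of photographs might be keyed by finish type, design dimensions, and source lighting angle; the following TypeScript sketch uses hypothetical names and a simple in-memory map in place of any particular database:

    // Hypothetical catalog of solid-fill finish photographs, keyed by finish
    // type, design dimensions, and source lighting angle, as described above.
    type FinishType = "foil" | "gloss" | "vinyl";

    interface FinishPhotoKey {
      finish: FinishType;
      widthMm: number;   // allowed design dimensions, e.g. a business card size
      heightMm: number;
      angleDeg: number;  // source lighting angle; one photo per 1 degree
    }

    class FinishPhotoCatalog {
      private photos = new Map<string, string>(); // key -> stored image path

      private keyOf(k: FinishPhotoKey): string {
        return `${k.finish}:${k.widthMm}x${k.heightMm}:${k.angleDeg}`;
      }

      add(key: FinishPhotoKey, imagePath: string): void {
        this.photos.set(this.keyOf(key), imagePath);
      }

      // Retrieve the series of images for an animation, ordered by angle.
      series(finish: FinishType, widthMm: number, heightMm: number,
             startDeg: number, endDeg: number): string[] {
        const out: string[] = [];
        for (let a = startDeg; a <= endDeg; a++) {
          const p = this.photos.get(this.keyOf({ finish, widthMm, heightMm, angleDeg: a }));
          if (p) out.push(p);
        }
        return out;
      }
    }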
In step 106, the method selects a first solid fill secondary finish photographic image taken at a first lighting angle. Then, using the converted mask image generated in step 103, the method composites the selected solid fill secondary finish photographic image with the design image received in step 101. In step 107, for each non-transparent pixel in the converted mask image, the corresponding pixel of the selected solid fill secondary finish photographic image either replaces or is blended with the pixel in the received design image. The compositing can directly modify the received design image or can be saved into a newly created composite image. After all non-transparent pixels in the converted mask image have been processed, the result is a composite image that contains corresponding pixels of the selected solid fill secondary finish photographic image replacing, or blended with, the corresponding pixels of the design image where specified by the mask.
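By way of illustration, the per-pixel compositing of step 107 might be sketched in TypeScript as follows, assuming same-sized 8-bit RGBA buffers and blending weighted by the mask's alpha channel; the function name and buffer layout are illustrative assumptions:

    // Sketch of step 107: wherever the mask pixel is non-transparent, blend
    // the finish-photo pixel into the design image, weighted by the mask's
    // alpha. All three images are assumed to be same-sized 8-bit RGBA
    // buffers (4 bytes per pixel).
    function compositeWithMask(
      design: Uint8ClampedArray,
      finishPhoto: Uint8ClampedArray,
      mask: Uint8ClampedArray
    ): Uint8ClampedArray {
      const out = new Uint8ClampedArray(design); // copy; leave the original intact
      for (let i = 0; i < mask.length; i += 4) {
        const maskAlpha = mask[i + 3] / 255;     // 0 = keep design, 1 = use photo
        if (maskAlpha === 0) continue;           // transparent mask pixel: skip
        for (let c = 0; c < 3; c++) {            // blend the R, G, B channels
          out[i + c] = finishPhoto[i + c] * maskAlpha + design[i + c] * (1 - maskAlpha);
        }
      }
      return out;
    }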
In step 108, the composite image generated in step 107 is injected into the selected scene image by mapping the composite image into the image placeholder in the scene image identified in step 104 to generate an individual animation frame.
For the animation, a check is made in step 109 as to whether a sufficient number of frames has been generated. If not, then in step 110 a next secondary finish photographic image taken from a next lighting angle is selected from the determined series of photographic images, and steps 107 through 109 are repeated until a sufficient number of animation frames has been created. If a sufficient number of animation frames has been generated, the frames are aggregated into an animation sequence in step 111. In step 112, the animation sequence is then played at the client device to display the design image while simulating the effect of the movement of light on the product at different lighting angles. In this regard, the sequence of individual frames is downloaded to the client device and repeatedly displayed in sequence at a rate preferably faster than the sampling rate of the human eye. In an embodiment, in step 111 after all of the individual frames have been created, they are composited into a single image called a Sprite sheet that is sent to the client device. Once the client device receives this image (sprite sheet), a JavaScript animation script animates the frames in the client web browser.
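By way of illustration, such a client-side animation script might step through the frames of the sprite sheet as follows; the element, frame dimensions, and frame rate shown are hypothetical:

    // Hypothetical client-side animation of a sprite sheet laid out as a
    // horizontal strip of equally sized frames; the element is assumed to
    // have the sprite sheet set as its CSS background-image.
    function animateSpriteSheet(
      el: HTMLElement,     // element displaying one frame of the sprite sheet
      frameWidth: number,  // width of a single frame, in pixels
      frameCount: number,  // number of frames packed into the sheet
      fps: number = 24     // playback rate; illustrative choice
    ): void {
      let frame = 0;
      setInterval(() => {
        // Shift the background left to expose the next frame in the strip.
        el.style.backgroundPosition = `-${frame * frameWidth}px 0px`;
        frame = (frame + 1) % frameCount; // loop the animation indefinitely
      }, 1000 / fps);
    }

    // Example usage, assuming a 500-pixel-wide frame and 36 lighting angles:
    //   animateSpriteSheet(document.getElementById("preview")!, 500, 36);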
Although
When client computer system 700 is operating, an instance of the client computer system operating system 703, for example a version of the Microsoft Windows operating system, Apple iOS, etc., will be running, represented in
While Server 710 is shown in
In interacting with server 710 to create a custom product design, the user is typically presented with one or more screen displays (not shown) allowing the user to select a type of product for customization and then review thumbnail images of various product design templates prepared by the site operator and made available for customization by the user with the user's personal text or other content. To provide the customer with a wide range of styles and design choices, each product design template typically comprises a combination of graphics, images, fonts, color schemes, and/or other design elements. When a specific product template design is selected by the user for customization, the markup language elements and layout instructions needed for browser 704 to properly render the template at the user's computer are downloaded from server 710 to client computer system 700.
After (or even during) user customization, the server 710 may receive an electronic document describing a personalized product design of a customer. In an embodiment, the server 710 includes a Browser-Renderable Preview Generator 711 which includes a scene generating engine 712 and which generates a preview image of the personalized product design of the customer embedded within a larger scene image to give the customer an accurate representation of what the physical product will look like.
The scene generating engine 712 includes an image warping and compositing engine 710, a scene framework engine 720, and a rendering engine 730. The scene framework 720 receives or obtains a scene description (i.e., scene rendering code) 722, one or more scene image(s) 724, and one or more image(s)/text/document(s) (hereinafter called “injectable(s)”) 726 to place within a generated scene. The scene framework 720 generates a composite scene image 728 containing the injectable(s) 726 composited into the received scene(s) 724 according to the scene description 722. The scene description 722 is implemented using an intuitive language (for example, in an XML format), and specifies the warping and compositing functionality to be performed on the injectable(s) 726 and/or the scene(s) 724 when generating the composite image 728. A rendering engine 730 receives the composite image 728 and renders it in a user's browser.
The scene framework 720 is a graphical composition framework that allows injection of documents, images, text, logos, uploads, etc., into a scene (which may be generated by layering one or more images). All layers of the composite image may be independently warped, and additional layering, coloring, transparency, and other inter-layer functions are provided. The scene framework 720 includes an engine which executes, interprets, consumes, or otherwise processes the scene rendering code 722 using the specified scene(s) 724 and injectable(s) 726.
At a high level, the Framework 720 is a scene rendering technology for showing customized products in context. A generated preview of a customer's customized product may be transformed in various ways, and placed inside a larger scene. An example of such a preview image implemented in a contextual scene is illustrated in
Upon receipt of an electronic document 200 implementing a personalized product design of a customer, the server 710 retrieves, generates, or selects a Scene Image and corresponding Scene Rendering Code. In the system of
Given one or more selected Scene image(s) 724 and corresponding Scene Rendering Code 722, the Preview Generator 711 determines whether the customer's personalized design includes any premium finishes such as foil, gloss, vinyl, or other shiny textured finish and if so, triggers a frame generation engine 715 to generate a plurality of individual frames containing a preview image of the customer's design injected into a scene. Each frame contains a preview image of the customer's design with the secondary regions illuminated from different lighting angles. In an embodiment, the Frame generation engine 715 retrieves the mask image corresponding to the customer's design image, and further retrieves a plurality of solid fill secondary finish photographic images taken at different lighting angles. The Frame generation engine 715 composites each of the retrieved solid fill secondary finish photographic images with a rendered image of the customer's design based on the mask image (in accordance with the method discussed in connection with
The animation generator 716 receives all of the individual frames and sequences them into an animated sequence. The animation generator 716 may further package the sequenced frames into an animation image, for example a Sprite Sheet, which is sent to the customer's computer system 700, where it is unpackaged and displayed in sequence to display the animated sequence of the customer's design with simulated light movement.
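By way of illustration, packing the individual frames side by side into a single sprite sheet buffer might be sketched as follows; the single-horizontal-strip layout is an illustrative assumption:

    // Hypothetical server-side packing of equal-sized RGBA frames into one
    // sprite sheet buffer, laid out as a single horizontal strip.
    function packSpriteSheet(
      frames: Uint8ClampedArray[], // RGBA buffers, each width * height * 4 bytes
      width: number,               // frame width in pixels
      height: number               // frame height in pixels
    ): Uint8ClampedArray {
      const sheet = new Uint8ClampedArray(frames.length * width * height * 4);
      frames.forEach((frame, f) => {
        for (let y = 0; y < height; y++) {
          const srcStart = y * width * 4;                              // row y of frame f
          const dstStart = (y * frames.length * width + f * width) * 4; // same row in sheet
          sheet.set(frame.subarray(srcStart, srcStart + width * 4), dstStart);
        }
      });
      return sheet; // encoded (e.g., as PNG) and sent to the client device
    }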
Example scene rendering code implementing a scene description for the first frame 600a of the animation sequence shown in
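(The XML listing referenced above is not reproduced in this text; the following is a hypothetical reconstruction consistent with the explanation in the paragraphs that follow. Only the 500 by 400 canvas, the “cardWarp” name, and the image paths are taken from that explanation; all element and attribute names, and the target pixel coordinates, are illustrative assumptions.)

    <!-- Hypothetical reconstruction of the scene rendering code; element and
         attribute names, and target coordinates, are illustrative only. -->
    <scene width="500" height="400">
      <!-- Perspective warp mapping normalized source corners (0..1) of the
           injected image to pixel coordinates on the 500 x 400 canvas -->
      <perspectiveWarp name="cardWarp">
        <point source="0,0" target="120,80"/>
        <point source="1,0" target="430,95"/>
        <point source="1,1" target="415,320"/>
        <point source="0,1" target="105,290"/>
      </perspectiveWarp>
      <!-- Step 1: draw the photograph of the hand holding the blank card -->
      <image src="../../../images/blanks/eu/0.png"/>
      <!-- Step 2: nested composite holding the document (business card) and
           its foil mask; the foil image is blended into the document
           according to the white areas of the foil mask -->
      <composite id="foiledDocument">
        <document/>
        <image src="../../../images/foil/0.png" mask="foilMask"/>
      </composite>
      <!-- Step 3: map the blended document back into the blank card image
           using the "cardWarp" transformation in multiply mode -->
      <draw composite="foiledDocument" warp="cardWarp" mode="multiply"/>
      <!-- Step 4: remove areas where the blended document overlaps the
           fingers in the hand image -->
      <mask src="../../../images/masks/eu/0.png"/>
    </scene>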
The above scene rendering code is implemented using XML (eXtensible Markup Language). The scene rendering code defines a perspective warp transformation called “cardWarp” which takes as input the corner coordinates of a source image (normalized to range from 0 to 1, where coordinates (0, 0) correspond to the upper left corner of a rectangular image, and (1, 1) corresponds to the lower right corner). The perspective warp transformation maps the input source points to target points defined in terms of actual pixel locations on the canvas (which in this example is 500 by 400 pixels).
The first step in creating the final composite is to draw the image of the hand holding the blank card (located at src="../../../images/blanks/eu/0.png") to the canvas. Next, a nested composite is created to hold the document (business card) and its foil mask. The foil image (located at src="../../../images/foil/0.png") is blended into the document according to the white areas in the foil mask. The result of this blending is then mapped back into the blank card image using the “cardWarp” transformation in a multiply mode. Finally, another mask image (located at "../../../images/masks/eu/0.png") is used to remove areas where the blended document overlaps the fingers in the hand image.
This application is a continuation-in-part of, and claims priority to, U.S. application Ser. No. 13/084,550, filed Apr. 11, 2011 and U.S. application Ser. No. 13/205,604 filed Aug. 8, 2011, each of which is hereby incorporated by reference in its entirety.
Relation | Number | Date | Country
---|---|---|---
Parent | 13/084,550 | Apr 2011 | US
Child | 13/973,396 | | US
Parent | 13/205,604 | Aug 2011 | US
Child | 13/084,550 | | US