Digital cameras allow users to easily capture and store many digital images. Unfortunately, users may have access to only limited tools for organizing and presenting the digital images. Even with the tools that are available, a user may find the organization and presentation of images tedious or difficult to understand. It would be desirable for a user to be able to generate an organized and meaningful presentation of digital images.
A method performed by a processing system is disclosed. The method includes receiving a layout of a first slide of a slideshow with the processing system. The first slide includes first and second digital images selected based on an image content analysis of a set of digital images that includes the first and the second digital images. The method also includes generating an in-slide transition between the first and the second digital images of the first slide with the processing system using the layout such that the in-slide transition emphasizes a first relationship between the first and the second digital images determined from the image content analysis.
In the following detailed description, reference is made to the accompanying drawings, which form a part hereof, and in which is shown by way of illustration specific embodiments in which the disclosed subject matter may be practiced. It is to be understood that other embodiments may be utilized and structural or logical changes may be made without departing from the scope of the present disclosure. The following detailed description, therefore, is not to be taken in a limiting sense, and the scope of the present disclosure is defined by the appended claims.
As described herein, a content aware slideshow generator is provided that generates a content aware slideshow from a set of digital images using image metadata produced by image content analysis of the images. The slideshow generator prunes, clusters, and arranges the set of images into slides based on content similarities identified from the metadata. The slideshow generator arranges the slides to include transitions that consider the relationship between individual images in the slides. The transitions include in-slide transitions that seamlessly integrate multiple images on a slide and between-slide transitions that produce meaningful animations between sequences of slides. The slideshow generator outputs a slideshow with a sequence of seamlessly tiled slides with animations between selected slides.
The content aware slideshow generator may be used to automatically generate slideshows for photo collections stored as digital images. The slideshow generator selectively and intelligently arranges a flow of photos to emphasize the identified relationships between the photos and create a slideshow that is fluid, meaningful, and dynamic. By doing so, the slideshow generator may enhance the browsing experiences of viewers of the photos and facilitate sharing of the photos by the viewer.
Set 12 of images 14 includes any number of images 14. Each image 14 includes information that represents a digital image stored in any suitable storage medium or media (e.g., memory system 104 shown in
Slideshow generator 20 is configured to receive or access the set 12 of images 14 from a storage medium and generate a content aware slideshow 30 with slides 32. Slideshow generator 20 generates, accesses, or otherwise receives image metadata 22 that allows content similarities between subsets of images 14 in the set 12 to be identified by slideshow generator 20. Slideshow generator 20 prunes, clusters, and arranges the set 12 of images 14 into slides 32 based on content similarities identified from the metadata. Slideshow generator 20 interprets the content similarities to extrapolate relationships between images 14 and allow related images to be arranged in slides 32 that result in a semantically meaningful slideshow 30. Slideshow generator 20 also generates transitions in slideshow 30 that consider the relationship between individual images 14 in slides 32. The transitions include in-slide transitions that seamlessly integrate multiple images 14 on a slide 32 and between-slide transitions that produce meaningful animations between selected slides 32. Slideshow generator 20 outputs slideshow 30 such that slideshow 30 includes a sequence of seamlessly tiled slides 32 with animations between selected slides 32.
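By way of a hedged illustration, the pruning and clustering steps might look like the following sketch. The timestamp gap, histogram-similarity threshold, and four-images-per-slide cap are assumptions for illustration only, and the OpenCV color histogram stands in for whatever content analysis actually produces image metadata 22:

```python
# Illustrative sketch only: group images into candidate slides when they are
# close in capture time and similar in color. All thresholds are assumptions.
import cv2

def color_histogram(path, bins=16):
    """Normalized HSV color histogram used as a crude content signature."""
    img = cv2.imread(path)
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1], None, [bins, bins], [0, 180, 0, 256])
    return cv2.normalize(hist, hist).flatten()

def cluster_images(images, max_gap_s=300, min_sim=0.5, max_per_slide=4):
    """images: non-empty list of (path, unix_timestamp), sorted by time."""
    clusters, current = [], [images[0]]
    prev_hist = color_histogram(images[0][0])
    for path, ts in images[1:]:
        hist = color_histogram(path)
        close_in_time = ts - current[-1][1] <= max_gap_s
        similar = cv2.compareHist(prev_hist, hist, cv2.HISTCMP_CORREL) >= min_sim
        if close_in_time and similar and len(current) < max_per_slide:
            current.append((path, ts))
        else:
            clusters.append(current)      # start a new slide cluster
            current = [(path, ts)]
        prev_hist = hist
    clusters.append(current)
    return clusters
```

Each resulting cluster would correspond to a candidate slide 32; the generator described here also weighs faces, textures, and other information in image metadata 22 beyond this simple color signature.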
Slideshow generator 20 includes a selection unit 24, a layout unit 26, and a transition unit 28. The general operation of slideshow generator 20 will be described with reference to
As shown in
To select images 14, selection unit 24 applies one or more image analysis algorithms to images 14 to identify content similarities in images 14 in set 12 and stores the data generated by the algorithms as image metadata 22. Examples of content similarities include, but are not limited to, similar colors, textures, patterns, and/or objects such as faces or other distinctive features in images 14. For example, the image analysis algorithms may include a blurry filter that detects the strength and frequency of image edges and a boring filter that weighs color variation across the image and identifies content similarities based on the image edge and color variation information. The image analysis algorithms may also identify and match objects such as faces or other distinctive features of images 14. In addition, the image analysis algorithms may identify and eliminate duplicate and near duplicate images 14 based on content similarities. With each image analysis algorithm, selection unit 24 may consider time stamp information associated with images 14 because images 14 captured more closely in time may have a higher likelihood of having content similarities than images 14 captured further apart in time. Further, selection unit 24 may receive user inputs 16 regarding one or more images 14 and include images 14 in or exclude images 14 from a subset based on the user inputs 16.
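A minimal sketch of how the blurry and boring filters described above might be implemented is shown below; the Laplacian-variance blur score and hue-variance boredom score, along with their thresholds, are illustrative assumptions rather than the filters actually used:

```python
# Illustrative sketch only: blur scored from edge strength (variance of the
# Laplacian), boredom from color variation. Thresholds are assumptions.
import cv2
import numpy as np

def is_blurry(path, threshold=100.0):
    """Low Laplacian variance implies few/weak edges, i.e., likely blur."""
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    return cv2.Laplacian(gray, cv2.CV_64F).var() < threshold

def is_boring(path, threshold=0.02):
    """Little color variation across the image implies a likely boring image."""
    img = cv2.imread(path)
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    hue = hsv[:, :, 0].astype(np.float64)
    return (hue.var() / 180.0 ** 2) < threshold   # normalized hue variance

# Example pruning pass over a list of file paths:
# keepers = [p for p in paths if not is_blurry(p) and not is_boring(p)]
```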
Referring to
Referring back to
Referring back to
Layout unit 26 applies a superellipse function to the pixel values of each image 14 in an overlap area and a border area to produce seamless in-slide transitions between each image 14 on the slide 32. The in-slide transitions create a blended appearance of images 14(1)-14(4) on slide 32 that enhances the content similarities identified from image metadata 22. In other examples, layout unit 26 may apply other suitable mathematical functions to the pixel values of the overlap and border areas of images 14 to blend the images 14 and create the in-slide transitions. In an example where a slide 32 includes a single image 14, layout unit 26 may blur the edges of slide 32 only.
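The disclosure names the superellipse function but not its parameters, so the following sketch assumes a particular exponent and feather width; it builds a superellipse-shaped alpha mask and uses it to composite an image onto a slide canvas:

```python
# Illustrative sketch only: superellipse alpha mask for seamless in-slide
# blending. The exponent n and feather width are assumed values.
import numpy as np

def superellipse_alpha(h, w, n=4.0, feather=0.25):
    """Alpha near 1 at the image center, rolling off smoothly at the border."""
    y, x = np.mgrid[0:h, 0:w]
    u = 2.0 * x / (w - 1) - 1.0          # map columns to [-1, 1]
    v = 2.0 * y / (h - 1) - 1.0          # map rows to [-1, 1]
    r = np.abs(u) ** n + np.abs(v) ** n  # superellipse "radius"
    inner = (1.0 - feather) ** n         # fully opaque inside this radius
    return np.clip((1.0 - r) / (1.0 - inner), 0.0, 1.0)

def blend_onto_slide(slide, image, x0, y0):
    """Composite image onto slide at (x0, y0) using the superellipse mask."""
    h, w = image.shape[:2]
    a = superellipse_alpha(h, w)[..., None]
    region = slide[y0:y0 + h, x0:x0 + w].astype(np.float64)
    slide[y0:y0 + h, x0:x0 + w] = (a * image + (1.0 - a) * region).astype(slide.dtype)
```

Because the mask falls to zero at each image's border, overlapping images cross-fade into one another and into the slide background rather than meeting at hard edges.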
In the example shown in
Additional details on methods for generating layouts of images 14 may be found in U.S. patent application Ser. No. 11/536,556, entitled GRAPHIC ASSEMBLY LAYOUT WITH MAXIMUM PAGE COVERAGE AND CONTENT REMOVAL, and filed Sep. 28, 2006; U.S. patent application Ser. No. 11/769,671, entitled ARRANGING GRAPHIC OBJECTS ON A PAGE WITH RELATIVE AREA BASED CONTROL, and filed Jun. 27, 2007; and U.S. patent application Ser. No. 11/865,112, entitled ARRANGING GRAPHIC OBJECTS ON A PAGE WITH RELATIVE POSITION BASED CONTROL, and filed Oct. 1, 2007, which are incorporated by reference herein.
Referring back to
Transition unit 28 selects a region of interest in each slide 32 using image metadata 22 as indicated in a block 82. Transition unit 28 identifies corresponding regions of interest in two slides 32 generated by layout unit 26. Each slide 32 may include one or more images 14 and the region of interest may be located in any suitable image 14 in each slide 32. In
The regions of interest may encompass any generally corresponding size and shape of areas of two sequential slides 32 with content similarities identified using image metadata 22. For example, the regions of interest may include one or more similar faces, objects, recognizable patterns, colors, and/or textures. The regions of interest may appear in the foreground or background of images 14 in slides 32.
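One hypothetical way to locate corresponding regions of interest, sketched below, is local feature matching; ORB features here stand in for whatever face, object, or texture matching produces image metadata 22:

```python
# Illustrative sketch only: find matching regions of interest in two slides
# by ORB feature matching and take the centroid of the strongest matches.
import cv2
import numpy as np

def corresponding_rois(slide_a, slide_b, min_matches=10):
    """Return ((xa, ya), (xb, yb)) ROI centers, or None if no good match.
    Inputs are assumed to be BGR images as loaded by cv2.imread."""
    gray_a = cv2.cvtColor(slide_a, cv2.COLOR_BGR2GRAY)
    gray_b = cv2.cvtColor(slide_b, cv2.COLOR_BGR2GRAY)
    orb = cv2.ORB_create(nfeatures=1000)
    kp_a, des_a = orb.detectAndCompute(gray_a, None)
    kp_b, des_b = orb.detectAndCompute(gray_b, None)
    if des_a is None or des_b is None:
        return None
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)
    if len(matches) < min_matches:
        return None
    best = matches[:min_matches]
    center_a = np.mean([kp_a[m.queryIdx].pt for m in best], axis=0)
    center_b = np.mean([kp_b[m.trainIdx].pt for m in best], axis=0)
    return tuple(center_a), tuple(center_b)
```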
Referring to
As shown in the example of
Transition unit 28 generates the regions of interest 94(1)-94(Q) with less distortion than the remainder of the corresponding slides 32(2)(1)-32(2)(Q). In doing so, the focal point of the transition becomes regions of interest 90, 92, and 94(1)-94(Q), which have content similarities determined by transition unit 28 as described above. As a result, the between-slide transition provides a meaningful transition between slides 32(2) and 32(3).
Additional details of generating a between-slide transition will now be described. Given that the texture coordinate of a pixel in a slide 32 is (x, y), where 0 <= x, y <= 1, the amount of distortion at time t may depend on the distance d between the pixel and the region of interest and may be expressed as Equation II, where f is a vector function.
In two-dimensional space, the distortion is a 2×1 vector. In Equation II, the distance d may be computed in any suitable way, such as the Euclidean distance or a color distance between the pixel and the region of interest. For example, if the region of interest is in the shape of a circle and the center of the circle is (xc, yc), the function may be expressed as Equation III.
Here, the vector given by Equation III is the distortion speed for that pixel (x, y), and the distance d is the Euclidean distance between the pixel and the center of the region of interest.
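Equations II and III appear only as figures in the source text and are not reproduced here. A plausible reconstruction consistent with the surrounding prose, offered only as a sketch, is that the distortion is a displacement that grows with time at a per-pixel speed directed radially from the circle center, with a magnitude s(d) kept small inside the region of interest:

```latex
% Plausible reconstruction, not the patent's exact equations.
% Requires \usepackage{amsmath}.
\[
\mathbf{u}(x, y, t) = f(d, t) \tag{II}
\]
\[
d = \sqrt{(x - x_c)^2 + (y - y_c)^2}, \qquad
f(d, t) = t \, \mathbf{v}(x, y), \qquad
\mathbf{v}(x, y) = s(d) \, \frac{(x - x_c,\; y - y_c)}{d} \tag{III}
\]
```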
In one example, transition unit 28 applies the function of Equation III to slides 32(2) and 32(3) and blends slides 32(2) and 32(3) linearly based on transition time to create a waterdrop effect, where r0 is the radius of the regions of interest 90 and 92 in slides 32(2) and 32(3), respectively.
Using Equation III, the between-slide transition causes the display of slides 32(2) and 32(3) to appear to a viewer as if a water drop hits region of interest 90 and slide 32(2) is transformed into slide 32(3).
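Since Equation III itself is not reproduced in this text, the following sketch substitutes a damped radial sinusoid, an assumption chosen only to match the description: negligible distortion inside radius r0, a ripple displacement outside it, and a linear cross-fade over the transition time:

```python
# Illustrative sketch only: waterdrop-style between-slide transition.
import cv2
import numpy as np

def waterdrop_frame(slide_a, slide_b, center, r0, t,
                    wavelength=30.0, amplitude=8.0, speed=120.0):
    """Render the transition frame at time t in [0, 1]."""
    h, w = slide_a.shape[:2]
    y, x = np.mgrid[0:h, 0:w].astype(np.float32)
    dx, dy = x - center[0], y - center[1]
    d = np.sqrt(dx * dx + dy * dy) + 1e-6

    # Radial ripple displacement, damped to zero inside the region of interest
    # so the region of interest remains the undistorted focal point.
    ripple = amplitude * np.sin(2.0 * np.pi * (d - speed * t) / wavelength)
    ripple *= np.clip((d - r0) / r0, 0.0, 1.0)
    map_x = x + ripple * dx / d
    map_y = y + ripple * dy / d

    warped_a = cv2.remap(slide_a, map_x, map_y, cv2.INTER_LINEAR)
    warped_b = cv2.remap(slide_b, map_x, map_y, cv2.INTER_LINEAR)
    # Linear blend over transition time, as described above.
    return cv2.addWeighted(warped_a, 1.0 - t, warped_b, t, 0.0)

# Rendering frames for t = 0.0, 0.05, ..., 1.0 yields the full animation.
```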
In other examples, other suitable distortion functions may be used to create other suitable effects such as a waterflow effect, a spotlight effect, or a color flow effect. With each distortion function, the function may be chosen to keep the magnitude of the function (i.e., the amount of distortion) small in the regions of interest to emphasize the display of the regions of interest.
The above examples may enhance automatically generated slideshows by highlighting and emphasizing identified relationships in the images 14 used to create the slideshow. These relationships may be incorporated into in-slide and between-slide transitions to provide a more fluid, meaningful, and dynamic slideshow. As a result, slideshows may be generated with more visually pleasing transition effects, increasing viewer satisfaction with the created slideshow.
Image processing system 100 includes one or more processors 102, a memory system 104, zero or more input/output devices 106, zero or more display devices 108, zero or more ports 110, and zero or more network devices 112. Processors 102, memory system 104, input/output devices 106, display devices 108, ports 110, and network devices 112 communicate using a set of interconnections 114 that includes any suitable type, number, and/or configuration of controllers, buses, interfaces, and/or other wired or wireless connections. Image processing system 100 may execute a basic input output system (BIOS), firmware, and/or an operating system that includes instructions executable by processors 102 to manage the components of image processing system 100 and provide a set of functions that allow slideshow generator 20 to access and use the components.
Each processor 102 is configured to access and execute instructions stored in memory system 104. The instructions may include a basic input output system (BIOS) or firmware (not shown), an operating system (not shown), slideshow generator 20, and other applications (not shown). Each processor 102 may execute the instructions in conjunction with or in response to information received from input/output devices 106, display devices 108, ports 110, and/or network devices 112. Each processor 102 is also configured to access and store data, such as the set 12 of images 14, image metadata 22, and the slideshow 30 with slides 32, in memory system 104.
Memory system 104 includes any suitable type, number, and configuration of volatile or non-volatile storage devices configured to store instructions and data. The storage devices of memory system 104 represent computer readable storage media that store computer-executable instructions including, in one example, slideshow generator 20. Memory system 104 also stores the set 12 of images 14, image metadata 22, and the slideshow 30 with slides 32. Memory system 104 stores instructions and data received from processors 102, input/output devices 106, display devices 108, ports 110, and network devices 112. Memory system 104 provides stored instructions and data to processors 102, input/output devices 106, display devices 108, ports 110, and network devices 112. The instructions are executable by image processing system 100 to perform the functions and methods of slideshow generator 20 described herein. Examples of storage devices in memory system 104 include hard disk drives, random access memory (RAM), read only memory (ROM), flash memory drives and cards, and magnetic and optical disks.
Input/output devices 106 include any suitable type, number, and configuration of input/output devices configured to input instructions and/or data from a user to image processing system 100 and output instructions and/or data from image processing system 100 to the user. Examples of input/output devices 106 include buttons, dials, knobs, switches, a keyboard, a mouse, a touchpad, and a touchscreen.
Display devices 108 include any suitable type, number, and configuration of display devices configured to output image, textual, and/or graphical information to a user of image processing system 100. Examples of display devices 108 include a display screen, a monitor, and a projector. Display devices 108 may be configured to display all or selected images 14 from the set 12 and all or selected slides 32 from slideshow 30.
Ports 110 include any suitable type, number, and configuration of ports configured to input instructions and/or data from another device (not shown) to image processing system 100 and output instructions and/or data from image processing system 100 to another device.
Network devices 112 include any suitable type, number, and/or configuration of network devices configured to allow image processing system 100 to communicate across one or more wired or wireless networks (not shown). Network devices 112 may operate according to any suitable networking protocol and/or configuration to allow information to be transmitted by image processing system 100 to a network or received by image processing system 100 from a network.
In one example, image processing system 100 is included in an image capture device 200 that captures, stores, and processes the set 12 of images 14 as shown in the example of
Although specific embodiments have been illustrated and described herein for purposes of description of the embodiments, it will be appreciated by those of ordinary skill in the art that a wide variety of alternate and/or equivalent implementations may be substituted for the specific embodiments shown and described without departing from the scope of the present disclosure. Those with skill in the art will readily appreciate that the present disclosure may be implemented in a very wide variety of embodiments. This application is intended to cover any adaptations or variations of the disclosed embodiments discussed herein. Therefore, it is manifestly intended that the scope of the present disclosure be limited only by the claims and the equivalents thereof.