This application is related to a presentation application and to masking a portion of an object displayed on a slide of a presentation using a presentation application.
Electronic devices, including for example computers, may be used to implement presentation applications. Using presentation applications, users may create one or more slides that include some form of visual information. For example, presentation applications can allow a user to define a background color or pattern for each of the slides, and may provide the user with an interface to add images to the slides. In many scenarios, the images added to a slide may have backgrounds that do not match the background of the slide.
Accordingly, systems and methods are provided for masking a portion of an object, such as the background of an image or graphic, that is displayed using a presentation application.
The user of an electronic device may access a presentation application to create slides for a presentation. In some embodiments, the user may add an object to a slide (e.g., text, video, or an image), and direct the presentation application to remove or mask the background from the object. For example, the user may only want the foreground of an image displayed in a slide, and may direct the presentation application to remove portions of the image that are part of the background.
The presentation application may provide a background removal tool that has a convenient and user-friendly interface to aid a user in defining a portion of an object that the user wants to remove, and for masking the portion selected by the user. The presentation application may receive a user selection of an initial point in the background, which the presentation application may use to determine an initial background color. Based on a distance away from the initial point that the user moves a cursor (e.g., using a computer mouse), the presentation application can compute a color tolerance of the background color. The presentation application may use a seed-fill algorithm to identify the background of the object. Using the seed-fill algorithm, the presentation application can identify a contiguous portion of the object that includes the initial point, where each pixel in the portion is within the color tolerance of the background color.
The presentation application may allow a user to finalize the portion selected using the seed-fill algorithm, and may display the object without that portion when a user indication to finalize the portion is received. When a user indication is not received, the presentation application may determine a new location of the user-controlled cursor, and may compute a new color tolerance and a new portion based on the distance between the new location and the initial point. Thus, the presentation application can automatically adjust the portion in real-time as the user moves the cursor.
The presentation application may smoothen the edges of the portion that the user selects for removal from the object. When a user indication to finalize the removal of the portion is received, the presentation application can convert the selected portion to a vector graphic. For example, the presentation application may create a black and white bitmap of the finalized portion, where the finalized portion is black, and may perform a tracing algorithm (e.g., the Potrace algorithm) to create the vector graphic. Because a tracing algorithm is used, the edges of the vector graphic may be substantially smoother than the original edges of the selected portion.
The above and other aspects and advantages of the invention will be apparent upon consideration of the following detailed description, taken in conjunction with the accompanying drawings, in which like reference characters refer to like parts throughout, and in which:
Memory 110 may include one or more different types of memory that can be used to perform device functions. For example, memory 110 may include cache, Flash, ROM, and/or RAM. Memory 110 may be specifically dedicated to storing firmware. For example, memory 110 may be provided for storing firmware for device applications (e.g., operating system, user interface functions, and processor functions).
Storage device 112 can include one or more suitable storage mediums, such as a hard-drive, permanent memory such as ROM, semi-permanent memory such as RAM, or cache. Storage device 112 can be used for storing media (e.g., audio and video files), text, pictures, graphics, or any suitable user-specific or global information that may be used by electronic device 100. Storage device 112 may store programs or applications that can be run on processor 106, may maintain files formatted to be read and edited by one or more of the applications, and may store any additional files that may aid the operation of one or more applications (e.g., files with metadata). It should be understood that any of the information stored in storage device 112 may instead be stored in memory 110, and in some cases, may depend on the operating conditions of electronic device 100.
With continuing reference to
Electronic device 100 can include any device or group of devices capable of providing an application for editing media or creating presentations. In one embodiment, electronic device 100 may include a user computer system, such as a desktop computer (e.g., an iMac) or a laptop computer (e.g., a PowerBook or MacBook). In this embodiment, electronic device 100 may run a suitable computer operating system, such as a Mac OS, and can include a set of applications stored in, for example, storage device 112 that is compatible with the particular operating system.
The applications available to a user of electronic device 100 can be grouped into application suites. The suites may include applications that provide similar or related functionalities. For example, the applications in one suite may include word processing and publishing applications (e.g., Keynote and Pages within the iWork suite), and another suite may include media editing tools (e.g., iWeb within the iLife suite). The applications within a given suite may have similar properties and other features that associate each application in a suite with the other applications in that suite. For example, the applications may have a similar look and feel, may have a similar user interface, may include related features or functions, may allow a user to easily switch between the applications in the suite, or may be a combination of these.
Therefore, although embodiments of the present invention will generally be described in terms of a single application, it should be understood that any of the features or functionalities of an application may be general to one or more of the applications in a suite. Alternatively, they may be general to one or more applications across a group of suites. Also, some embodiments of the present invention are described as being executed by a presentation application. As referred to herein, a presentation application is an interactive application running on any user equipment that presents media (e.g., visual media such as video, images, pictures, graphics, etc., or audio media such as sound clips) to a user and allows the user to control some of the features or aspects of the media. For example, a presentation application may be an application that can be used to create and present slides of a slide show presentation.
A presentation application can provide multiple modes of operation. For example, the presentation application can include three modes of operation referred to sometimes as an edit mode, a presentation mode, and a presenter's mode. In edit mode, the presentation application may provide a convenient and user-friendly interface for a user to add, edit, remove, or otherwise modify the slides of a slide show. To display a created slide show presentation in a format suitable for presenting to an audience, the presentation application can switch into presentation mode. In some embodiments, the presentation application may provide a full-screen presentation of the slides in the presentation mode, and can include any animations, transitions, or other properties defined in the edit mode. In presenter's mode, the presentation application can display the slide show presentation, like in the presentation mode, but with additional information (e.g., a timer for displaying the length of time since the presentation started, options to control the flow of the presentation). The presentation application may provide a presenter's mode display to a presenter of a slide show presentation, and may provide a presentation mode display to the audience of that slide show presentation.
Slide organizer 202 may display a representation of each slide in a presentation. Slide organizer 202 may include representations 204 of the slides of a presentation. Representation 204 may take on a variety of forms, such as an outline of the text in the slide or a thumbnail of the slide. Slide organizer 202 may allow the user to organize the slides prepared using the application. For example, the presentation application may allow the user to manipulate the order of the slides by dragging a representation of a slide in slide organizer 202 from one relative position to another within slide organizer 202. As illustrated in
In response to receiving a user selection of a slide representation 204 in slide organizer 202, the presentation application may display the associated slide in slide canvas 210. For example, highlight region 206 may be displayed in response to a user selection of that slide representation, and the slide identified by highlight region 206 may be displayed in slide canvas 210 as slide 212. To change the slide displayed in slide canvas 210, the presentation application may provide the user with the ability to move highlight region 206 up or down one slide at a time, or may allow the user to directly select a different slide in slide organizer 202 (e.g., using input interface 102,
The current slide 212 displayed in slide canvas 210 may include any suitable object 214, including for example text, images, graphics, video, or any other suitable object. While only one object 214 is shown in
In some embodiments, the presentation application may allow the user to edit the different elements of slide 212 from slide canvas 210. For example, the user may edit the settings associated with slide 212 (e.g., the background) or may edit the location, properties, or animation of objects (e.g., object 214) in the slide. For example, the user may customize object 214 of slide 212, or the properties of slide 212 using various tools provided by the presentation application. The user may activate one or more of these various tools from toolbar 220 provided by the presentation application.
Toolbar 220 may include several selectable icons 222 that are each operative to activate a different tool or function. For example, toolbar 220 may include an icon operative to activate a smart build tool. The smart build tool may allow a user to group multiple objects (e.g., images, tables, videos, etc.) and create animations (e.g., turntable animation, thumb-through animation, etc.) for the group. These animations may be made to appear cinematic by utilizing acceleration and other capabilities of 3D graphics circuitry (e.g., graphics circuitry 108 of
The presentation application may provide options window 230 as an alternate or additional interface for a user to customize slide 212 or its contents (e.g., object 214). Options window 230 may be an interactive window that can be moved and minimized/maximized by the user independently of panes 202, 210, and 220. Options window 230 can be, for example, an overlay over any of these panes (as is shown in
The options available from options window 230 may vary depending on the tool selected in toolbar 220 or by the type of object 214 selected from slide 212, or both. For example, the presentation application may provide different options in window 230 based on whether a table, graphic, or text is selected. This allows options window 230 to provide suitable options for the particular type of object selected (e.g., font related options for text-based objects and pixel related options for graphics-based objects). It will be understood that although only one options window 230 is shown in
Referring now to
When “graphic” icon 302 is selected or highlighted, window 300 may provide options related to editing and enhancing graphics. For example, the presentation application may display an option in window 300 to add a frame to a graphic. The frame can, in some embodiments, resemble a physical picture frame. A frame can be added to any suitable object, such as a video, graphic animation, or image. Although the following discussion may at times describe the use of frames with an image, it should be understood that frames can be used for any suitable object displayed on a slide, including, for example, videos. Thus, the particular examples presented in this disclosure are not intended to be limiting.
In response to a user selecting frame icon 308, the presentation application may display pull-down menu 306 with available frames that can be added to the image. In the example of
The result of selecting frame 312 in window 300 is illustrated in
Some of the properties of images 400 and 500 can be edited directly from a slide canvas (e.g., slide canvas 210 of
In some embodiments, the frame for the image may be edited independently of the image. For example, the properties (e.g., thickness, type) of frame 502 may be changed by a user or automatically without changing the properties of the image that frame 502 frames. Referring back to
In some embodiments, properties of the currently selected frame (e.g., the frame shown in icon 308) may be changed using input mechanisms or options in window 300 other than selecting a frame from drop-down menu 306. The other options provided in window 300 may allow a user to change one or more properties of the frame without changing the remaining properties. For example, the presentation application may provide slider 310 and text box 304 in window 300 for changing just the size (e.g., thickness) of the frame. The presentation application may change the size of the frame in response to a user either moving the cursor in slider 310 or editing the frame ratio in text box 304. As one of these input mechanisms is edited, the other mechanism may be automatically adjusted to reflect the edit. In some embodiments, the value of slider 310 and text box 304 can indicate the size of the frame relative to the maximum possible size of the frame. The user may change any other suitable aspect of the frame, including, for example, the color of a frame, the texture pattern of a frame, the degree of texture of a frame, etc.
In
In some embodiments, some of the options in drop-down menu 306 (
The presentation application can adjust the placement of an adornment as the size or other properties of the image changes. For example, the adornment may not change size when the image is scaled, and can remain in the same relative horizontal position of the image (e.g., in the center of the image). Vertically, the adornment may be located at a fixed position on the image (e.g., a predetermined number of pixels from the top of the image). In some embodiments, the size or other properties of the adornment can be edited in a separate window (e.g., using window 300), and can be edited independently of the frame (if applicable) or along with the frame. In other embodiments, the presentation application may not allow a user to edit the adornment.
Referring now to
As described in detail above, the presentation application can automatically lengthen or shorten a frame as necessary when the size of the image it frames is adjusted. When the size of the image is adjusted, the sizes of top graphic 902, right graphic 904, bottom graphic 906, and left graphic 908 may be stretched or narrowed accordingly. For example, if the length of the image is doubled, top and bottom graphics 902 and 906 may each be stretched to approximately twice its previous length to accommodate the longer image. Corner graphics 910, 912, 914, and 916 may remain the same or may be stretched/narrowed as well.
In some embodiments, instead of stretching the various graphics of frame 900, the length or width of the graphics may be fixed (e.g., to the true length/width of the graphics), and a graphic can be repeated multiple times and truncated (if necessary) when the size of the image it frames changes. For example, if the length of the image is doubled, the number of graphic copies used to frame a side of the image may be doubled. This feature may be illustrated in the schematic display of frame 1000 in
Referring briefly back to pull-down menu 306 of
List 1200 can include any suitable number of keys of various types. List 1200 can include graphics keys 1202 that associate particular image files with the graphics of the frame (e.g., top graphic 902 of
The presentation application can provide a user interface for adjusting the values of metadata list 1200.
In some embodiments, input pane 1304 may allow a frame designer or a user of the presentation application to edit the metadata values shown in
The presentation application may allow the designer or user to save the values selected in user interface 1300. When a user selects to save the values, the values entered into input pane 1304 may be reflected in the values of list 1200. For example, if a designer or user changes the value of pull-down menu 1306 to either enable or disable clipping of frame graphics, the value of Stretch Tiles key 1204 may be updated to reflect this change. Thus, when a user selects the frame in the presentation application, the updated values may be provided as the default properties for the frame.
Referring now to
If, at step 1408, the presentation application instead determines that an input selecting a frame object has been received, process 1400 may move to step 1412. At step 1412, the presentation application may identify the graphics used to form the frame selected by the input received at step 1408. For example, the presentation application may identify the eight graphics associated with the selected frame (e.g., one for each side and one for each corner). In some embodiments, the presentation application may identify the graphics using metadata associated with the selected frame. At step 1414, the graphics associated with the frame may be scaled to a size in response to a user request. For example, in
At step 1416, the presentation application may frame the selected object by displaying the scaled graphics around at least a portion of the selected object. That is, the presentation application may frame the object that was selected at step 1404. At step 1420, the presentation application may determine whether a request to change the size of the frame has been received. For example, the presentation application may determine whether a user has moved slider 310 (
If, at step 1420, the presentation application instead determines that a request to change the frame size has not been received, process 1400 may move to step 1422. At step 1422, the presentation application may determine whether the size of the selected object has changed. For example, the presentation application can determine whether the user has increased or decreased the size of the object by dragging an edge indicator of the object (e.g., edge indicator 504 of
At step 1424, the presentation application may determine whether a different frame has been requested. For example, the presentation application may determine whether the user has accessed a new frame from drop down menu 306 (
It should be understood that
Referring now to
Process 1500 may begin at step 1502. At step 1504, the presentation application may determine whether the graphic associated with the edge of the object is too small. For example, the presentation application may determine whether the length of the graphic is less than the length of the object the frame is intended to border.
If, at step 1504, the presentation application determines that the graphic for the edge is not too small, process 1500 may move to step 1506. At step 1506, the presentation application may determine whether the graphic is too large. The presentation application may determine that the graphic is too large if, for example, the length of the graphic is greater than the length of the object that the frame is intended to border. If, at step 1506, the presentation application determines that the graphic is not too large, process 1500 may move to step 1508. In this scenario, the graphic may be substantially the same length as the length of the edge of the object. Thus, at step 1508, the presentation application may draw (e.g., generate and display) the graphic for the edge. Process 1500 may then end at step 1526.
Returning to step 1506, if the presentation application instead determines that the graphic is too large, process 1500 may move to step 1510. At step 1510, the presentation application may clip the graphic based on the size (e.g., length) of the object. For example, the presentation application may shorten the graphic such that the length of the shortened graphic is substantially the same as the length of the edge (e.g., side, top, or bottom) of the object. At step 1512, the clipped graphic may be drawn to border that edge of the object. Process 1500 may then end at step 1526.
Returning to step 1504, if the presentation application instead determines that the graphic is too small (e.g., the length of the graphic is smaller than the length of the object edge), process 1500 may move to step 1514. At step 1514, the presentation application may determine whether to stretch the graphic. This determination can be based on metadata associated with the frame, such as the metadata of
If, at step 1514, the presentation application instead determines that the graphic should not be stretched, process 1500 may move to step 1520. At step 1520, the presentation application may draw the graphic without stretching it. The length of the graphic may be too small to frame the entire edge of the object. Thus, at step 1522, the presentation application may determine whether to draw the graphic again. In some embodiments, this determination may involve determining whether the graphics drawn up to that point span the edge of the object. If the presentation application determines that the graphic should not be drawn again, process 1500 may move to step 1526 and end.
Returning to step 1522, if the presentation application instead determines that the graphic should be drawn again, process 1500 may move to step 1524. At step 1524, the presentation application may determine whether the graphic is too large. The graphic may be too large if, for example, drawing the complete graphic again would result in a frame edge that is too long. If the presentation application determines that the graphic is too large, process 1500 may move to step 1510, which is described above.
Returning to step 1524, if the presentation application instead determines that the graphic is not too large, process 1500 may return to step 1520 and may draw the graphic again.
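Taken together, these branches reduce to a small layout routine. The sketch below is illustrative only: lengths are in pixels, the `stretch_tiles` flag stands in for the Stretch Tiles metadata key of list 1200, and the names are not the application's actual code.

```python
def layout_edge(edge_length, graphic_length, stretch_tiles):
    """Return (offset, width, mode) drawing instructions for one frame edge,
    following the branches of process 1500. Assumes graphic_length > 0."""
    if graphic_length == edge_length:
        return [(0, edge_length, "as-is")]        # step 1508
    if graphic_length > edge_length:
        return [(0, edge_length, "clipped")]      # steps 1510-1512
    if stretch_tiles:
        return [(0, edge_length, "stretched")]    # stretch branch of step 1514
    # Tile the unstretched graphic, clipping the final copy so the tiles
    # exactly span the edge (steps 1520-1524).
    segments, offset = [], 0
    while offset < edge_length:
        width = min(graphic_length, edge_length - offset)
        mode = "as-is" if width == graphic_length else "clipped"
        segments.append((offset, width, mode))
        offset += graphic_length
    return segments

# A 30-pixel graphic on a 100-pixel edge: three full copies, then one
# copy clipped to 10 pixels.
print(layout_edge(100, 30, stretch_tiles=False))
```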
One skilled in the art would understand that
Process 1600 may begin at step 1602. At step 1604, the presentation application may select a frame for framing a displayed object. In some embodiments, the frame can have any of the features of the frames shown in pull-down menu 306 of
At step 1608, the presentation application may mask portions of the object outside of the contour determined at step 1606. Masking the object may involve hiding the portions of the object without deleting information about the object. Instead, the presentation application may draw the information “underneath” the object, such as the background of a slide in a slide show presentation. At step 1610, the presentation application may display the frame for the object. Displaying the frame at step 1610 may effectively cover up any portions of the displayed object outside of the inner boundary of the frame that were not masked at step 1608. Because of the masking at step 1608 and the displaying at step 1610, no portions of the image may be displayed outside of the frame. Process 1600 may then end at step 1612.
One skilled in the art would understand that
In some embodiments of the present invention, the presentation application (e.g., the presentation application that can provide display screen 200 of
To remove the background of an image, the presentation application may provide a background removal tool. Although the tool may be referred to as a background removal tool, removing backgrounds is merely one use for the tool. It should be understood that any other portion of the image, such as portions in the foreground of the object, can be removed using this tool. Also, while the tool may have the effect of removing the background, the removal may be accomplished by masking the background from view, rather than by changing the image itself (e.g., the .PNG file). Although the following discussion may at times describe the background removal tool for removing portions of an image, it should be understood that the removal feature can be used for any suitable object displayed on a slide, including, for example, videos. Thus, the particular examples presented in this disclosure are not intended to be limiting.
With continuing reference to
The presentation application may display region 1806, which may be a region with the same or a similar color as that of the initial point. Region 1806 may include the initial point, and may be a contiguous region. Thus, region 1806 may not include areas of the object that are the same color as the background color of the image but would not be considered part of the background. That is, because region 1806 may be contiguous, region 1806 may not include areas of object 1800 that are similar to the background color, but are separated from the initial point by an area of dissimilar color. Region 1806 may be displayed as a single color (e.g., purple), or can be multi-colored. In some embodiments, region 1806 may be semi-transparent such that the user can still view the part of object 1800 behind region 1806.
The presentation application may update the size and shape of region 1806 using the location of a pointer relative to the location of the initial point. In
Returning to
Region 1806 may define the region that the presentation application removes if a removal confirmation is received from the user. Thus, a user can direct pointer 1804 away from or toward the initial point until the resulting displayed region corresponds to a region the user wants to remove. For example, region 1806 may not fully remove the background of object 1800, so the user may want to move pointer 1804 further from the initial point.
The user can finalize a portion for removal at any time. The portion currently displayed to the user may then be masked, revealing the background or other images, etc. underneath the object. For example, referring now to
Using the background removal tool, the presentation application may provide clean, smooth edges between the portion of an object that is removed and the portion that remains visible. For example, referring again to
If the user directs the presentation application to keep the background removal tool active after finalizing a portion of the object to be removed, or if the background tool is reactivated, a display similar to display 2300 of
The presentation application may enable a user to perform background removal multiple times to the same object. For example, if the background includes multiple colors, using the background removal tool once may not be sufficient to remove the entire background. Thus, after a first removal, the user may select another initial point, possibly having a different color than the first initial point, on the image to continue removing portions of the background. The background tool may also be combined with the masking feature of the application, which may allow the user to manually define portions of the object to mask. Thus, multiple presentation application tools may be at the user's disposal to define the particular portion of the object that the user wants to remove.
When one or more masks are applied to the object using the masking tool or background removal tool, the original object may remain unchanged. Instead, the masked object may be defined by the original object and a series of masks that were used to obtain the masked object. For example, the masked object may be defined by the original object and a sequence of one or more vector graphics selected by the user for background removal. In this way, in response to requests by the user, the presentation application can revert back to displaying previous views of the object by removing each individual mask in the series of masks. The presentation application may save the masked object by storing the original object and a series of masks. Thus, the next time the user directs the presentation application to open the slide show presentation, the presentation application can still display the masked object, as saved, and can still allow the user to remove each individual mask from the masked object.
Referring now to
At step 2408, the presentation application may determine the current location of a user pointer or cursor, and at step 2410, the presentation application may compute the distance between the current location and the initial point. The pointer may be the pointer that was used to select the initial point, and the computed distance may correspond to the distance that the user has moved the cursor or pointer since selecting the initial point. At step 2412, the presentation application may compute or update a color tolerance level based on the computed distance. The tolerance level can be obtained using any suitable method, such as using a table lookup approach or by calculating a function that is proportional to the distance.
At step 2414, the presentation application may select a portion of the object for removal that includes the initial point. The presentation application may select the portion based on the color tolerance level, and using any suitable algorithm. For example, the presentation application may use a seed-fill algorithm starting from the initial point to obtain pixels in the object that are similar in color to the initial point. In some embodiments, this portion can be computed using the color distribution of the entire object. That is, if, at step 2414, the presentation application selects a portion from an object that has already been masked (e.g., after previously using the background tool, using a masking tool, etc.), the portion may be selected from the original, unmasked object. Selecting from the unmasked object may be possible because the presentation application may mask the image without changing the actual object (e.g., the saved version of the object) or removing any information associated with the masked areas. Instead, the presentation application may make those areas of the object appear transparent.
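As a concrete illustration of steps 2410 through 2414, the sketch below derives a tolerance from the cursor distance and runs a four-connected seed fill. It assumes the object is an RGB numpy array; the linear tolerance function and all names are illustrative stand-ins, not the application's actual code.

```python
from collections import deque
import numpy as np

def color_tolerance(distance, scale=0.5, max_tol=255.0):
    # Illustrative tolerance function: proportional to the cursor's
    # distance from the initial point, capped at a maximum (step 2412).
    return min(scale * distance, max_tol)

def seed_fill(image, seed, tolerance):
    """Return a boolean mask of the contiguous region around `seed` whose
    pixels are within `tolerance` of the seed color (step 2414)."""
    h, w, _ = image.shape
    seed_color = image[seed].astype(float)
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx]:
                # Color similarity as Euclidean distance in RGB space.
                if np.linalg.norm(image[ny, nx] - seed_color) <= tolerance:
                    mask[ny, nx] = True
                    queue.append((ny, nx))
    return mask
```

Re-running this as the pointer moves (step 2408) is what would let the highlighted region grow and shrink with the cursor.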
At step 2416, the presentation application may display the selected portion overlaid on the object. In some embodiments, the presentation application may display the selected portion using a suitable color over the object. The color chosen for this purpose may be a distinct color, such as purple. At step 2418, the presentation application may determine whether the selection has been confirmed for removal by the user. If, at step 2418, the presentation application determines that the selection has been confirmed, process 2400 may move to step 2420. At step 2420, the presentation application may mask the selected portion. This may have the effect of removing portions of the object that have not already been masked (e.g., in previous uses of the background removal tool, using the masking tool, etc.).
Returning to step 2418, if the presentation application instead determines that the selection has not been confirmed, process 2400 may return to step 2408 and may determine a new location of the user pointer. Thus, the presentation application can continuously and automatically update its selection for removal based on user manipulation (e.g., user movement of a pointer).
One skilled in the art would understand that
Referring now to
At step 2506, the presentation application may create a bitmap of the selected portion. In some embodiments, the presentation application may create a bitmap image where the selected portion is black while the remaining, unselected portions are white. Thus, creating a bitmap may involve producing a black and white bitmap of the selected image. At step 2508, the presentation application may convert the bitmap to a vector graphic of the selected portion. This conversion may involve performing any suitable tracing algorithm, such as the Potrace algorithm. The tracing algorithm may be based on a tracing library, and may produce a vector-based outline of the bitmap image. Thus, the resulting graphic from tracing the bitmap may be smoothened.
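A sketch of steps 2506 and 2508 follows: it writes the selected portion as a black-and-white portable bitmap and traces it with the Potrace command-line tool. Potrace here is a stand-in for whatever tracing library the application embeds, and the file names are illustrative; the point is that the traced outline, not the pixel mask, is what yields the smooth edges.

```python
import subprocess
import numpy as np

def write_pbm(mask, path):
    # Portable bitmap (P1): "1" marks the selected, black portion (step 2506).
    h, w = mask.shape
    rows = "\n".join(" ".join("1" if v else "0" for v in row) for row in mask)
    with open(path, "w") as f:
        f.write(f"P1\n{w} {h}\n{rows}\n")

# mask: boolean selection, e.g., from the seed fill sketched earlier.
mask = np.zeros((64, 64), dtype=bool)
mask[16:48, 16:48] = True
write_pbm(mask, "selection.pbm")

# Trace the bitmap into a vector outline (step 2508); -s requests SVG
# output. Requires the potrace CLI to be installed.
subprocess.run(["potrace", "selection.pbm", "-s", "-o", "selection.svg"],
               check=True)
```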
At step 2510, the presentation application may remove areas of the object corresponding to the vector graphic. Removing the portion may involve performing a Boolean addition of corresponding components of the vector graphic and the object. Because one or more previous masks for the object may exist (e.g., from using the masking tool), in some embodiments, the presentation application may remove a portion by combining the vector graphic with the previously computed masks. This combination may be determined by, for example, performing a Boolean OR function on all of the computed portions. The combined portion may therefore include all of the portions of the object that should be masked. In this case, the presentation application may perform the Boolean addition on the combined portion and the object. In other embodiments, the presentation application may remove the background by adding a mask on top of a previously masked object. Process 2500 may then end at step 2512.
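The combination described here reduces to a Boolean OR over the masks followed by an alpha-channel update, as in this illustrative numpy sketch (the array shapes and names are assumptions):

```python
import numpy as np

h, w = 64, 64
previous_mask = np.zeros((h, w), dtype=bool)   # e.g., from the masking tool
previous_mask[:8, :] = True
vector_mask = np.zeros((h, w), dtype=bool)     # rasterized vector graphic
vector_mask[16:48, 16:48] = True

# Boolean OR yields every portion of the object that should be hidden.
combined = previous_mask | vector_mask

# Hide the combined region by zeroing alpha; the source pixels survive,
# so each individual mask can later be removed to revert the object.
rgba = np.zeros((h, w, 4), dtype=np.uint8)
rgba[..., 3] = 255
rgba[combined, 3] = 0
```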
One skilled in the art would understand that
As described above, the background removal tool can automatically detect the boundary between the foreground and background of an object, and can mask the background from view. In some scenarios, however, a user may want to extract only a portion of the foreground, and not the entire foreground. For example, a user may want to extract one person from a group picture. Thus, the presentation application can provide a tool that allows a user to define an area of the foreground, and can remove the other portions of the object. These other portions that are removed may include the background of the object as well as other parts of the foreground not of interest to the user. Thus, a user may select between different types of masking tools provided by the presentation application, such as the foreground extraction tool and the background removal tool, based on the current needs of the user.
Although the foreground extraction tool is at times described as a tool for extracting a portion of an object's foreground, this is merely illustrative. It should be understood that the foreground extraction tool may allow a user to select and extract any portion of an object, whether background or foreground.
To extract at least a portion of an object's foreground or background, the presentation application can allow the user to define the perimeter of the foreground and extract the foreground or background based on the defined perimeter.
Display screen 2600 of
The features of the foreground/background extraction tool can be provided through image masking options areas 2604. Image masking options areas 2604 can include, for example, at least three tools that are activated using options icons 2606, 2608, and 2610. Option icon 2606 can enable the foreground extraction tool. Option icon 2608 can enable a tool similar to the background removal tool, and option icon 2610 can enable a pencil tool. Some other examples of options that may be included in image options areas 2604 are described below in connection with
When the foreground/background extraction tool is enabled, as shown in
One or more sides of the polygon may also be created by allowing a user to guide the pointer device along the perimeter of foreground 2614. The presentation application may automatically draw perimeter region 2612 in object 2602 where the pointer device passes. The user may choose between these two mechanisms or utilize both, in addition to any other line or polygon creation mechanism (e.g., choosing from a predefined shape), to draw perimeter region 2612. During or after the creation of perimeter region 2612, the presentation application may delete the fully or partially drawn polygon in response to the user selecting “Clear Path” option 2618, and the presentation application may display the original object. The user may then start the creation of a perimeter region over again, or may switch to using a different tool in the presentation application, for example. One skilled in the art would appreciate that other options may also be provided to the user, such as, for example, options to clear the most recently created side of the polygon, etc.
The user may change the thickness of perimeter region 2612 using slider 2616 of image masking options areas 2604. In some embodiments, the perimeter region defined by a user may be referred to as a “lasso,” and thus slider 2616 may be labeled, as shown in
The creation of perimeter region 2612 may be complete once the user has formed a closed polygon. For example, the presentation application may automatically determine that the perimeter of foreground 2614 is complete when the ends of perimeter region 2612 touch, thereby creating a closed polygon. For the purposes of the present invention, a polygon may be considered closed when a subsequent polygon vertex is placed on or near the first vertex. In addition, if the user is using computer equipment, the user may double click on the computer mouse to indicate that the user would like to close the polygon. In response to the double click, the presentation application may form a line from the last vertex created by the user to the first vertex. A completed perimeter region of the object in
A user-defined perimeter region, such as perimeter region 2702, can aid a presentation application in determining which portions of an object the user would like the application to consider as part of the foreground and as part of the background. For example, the presentation application may consider any area outside of perimeter region 2702 to be part of the background, the area enclosed by perimeter region 2702 to be part of the foreground, and may initially consider the area within perimeter region 2702 to be unknown. Thus, in some embodiments of the present invention, the presentation application initially associates the pixels of an object among three categories, sometimes called “pixel sets”: a foreground set, a background set, and an unknown set. Classifying an object in this manner is referred to herein as creating a “trimap” of the object. For simplicity, and where appropriate, a pixel identified as being part of the foreground is referred to herein as a “foreground pixel,” a pixel identified as being part of the background is referred to herein as a “background pixel,” and a pixel initially identified as being neither inside nor outside the lasso, but under the lasso, is herein referred to as an “unknown pixel.” By allowing the user to adjust the thickness or width of the lasso (e.g., using slider 2710), the presentation application dynamically changes the number of pixels in each of these pixel sets. For example, when the user increases the lasso width, the presentation application may identify more pixels as being part of the unknown pixel set and fewer pixels as being part of the background and/or foreground pixel sets.
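In code, building the trimap amounts to labeling three pixel sets. The sketch below assumes the lasso stroke and the region it encloses are already available as Boolean masks (rasterizing the polygon is omitted); the names and label values are illustrative.

```python
import numpy as np
from scipy.ndimage import binary_dilation

BACKGROUND, UNKNOWN, FOREGROUND = 0, 1, 2

def build_trimap(interior_mask, lasso_mask):
    """Label each pixel per the trimap described above: outside the lasso
    is background, the area it encloses is foreground, and pixels under
    the lasso itself start out unknown."""
    trimap = np.full(interior_mask.shape, BACKGROUND, dtype=np.uint8)
    trimap[interior_mask] = FOREGROUND
    trimap[lasso_mask] = UNKNOWN      # the lasso overrides both other sets
    return trimap

def widen_lasso(lasso_mask, extra_pixels):
    """Widening the lasso (slider 2710) grows the unknown set at the
    expense of the foreground and background sets."""
    return binary_dilation(lasso_mask, iterations=extra_pixels)
```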
The presentation application may process each unknown pixel to determine whether the unknown pixel is completely part of the background, completely part of the foreground, or may be associated with both background and foreground. The algorithm for making this determination will be described in greater detail below in connection with
In some embodiments, the presentation application may create the trimap of an object and process unknown pixels in response to receiving a user selection of extract option 2708. This approach may be advantageous because the presentation application starts processing unknown pixels only after the user has confirmed the shape of the perimeter region, which prevents the presentation application from having to discard and re-compute information if the user changes the shape of the polygon. In other embodiments, the presentation application creates a trimap and performs some or all of the processing steps as the user creates and/or refines the perimeter region. For example, when the user creates a side of the polygon, the presentation application may create a partial trimap for the side and may process the unknown pixels within the partial trimap.
In response to receiving a selection of extract option 2708, the presentation application may mask the areas of the object that are not part of, for example, the foreground. This may leave the pixels in the foreground set of the object trimap visible, as well as any pixels in the unknown set that are determined to be at least partially part of the foreground. In some embodiments, the degree of visibility (e.g., the opacity) of an unknown pixel may depend on the degree that the pixel is determined to be part of the foreground. One skilled in the art would appreciate that, rather than extracting the foreground, this feature could also be used to extract the background. For example, the presentation application may instead mask the pixels in the foreground set, and may leave pixels in the background set and any unknown pixels that are at least partially in the background set visible. Thus, in addition to providing extract option 2708, a second extract option could be provided for extracting the background.
One exemplary result of foreground extraction is illustrated in
The presentation application may allow a user to refine the result of foreground extraction. Thus, the user may change the result of extraction if the user is not satisfied with the result. The presentation application can provide, for example, one or more tools for this purpose. The tools may be set to either add pixels to the foreground or remove pixels from the foreground, and may do so one or more pixels at a time. The setting may be toggled, for example, in response to the user selecting an option key. A magic wand tool, for example, may be activated by selecting option 2810, and as described above, may enable a tool similar to the background removal tool. In particular, the magic wand tool may obtain an initial point of the object (e.g., in response to a mouse click) and a color tolerance (e.g., based on the current distance of a mouse pointer to the initial point), and may use a seed-fill algorithm to remove or add contiguous areas of similar color.
In addition, a pencil tool, for example, may be activated when the user selects option tool 2812 of
In some embodiments, the pencil tool may have two settings that can be adjusted by sliders 2814 and 2816 in image masking options areas 2806. Slider 2814 may adjust the size (in terms of pixels) of the pencil used to draw the added/removed portion, and slider 2816 may adjust the smoothness of the pencil. The smoothness of the pencil refers to the degree of effect the pencil has on the object. For example, when the pencil tool is set to change background pixels to foreground pixels, a pencil with a low smoothness setting may change a pixel that is fully part of the background to a pixel that is fully part of the foreground. On the other hand, a pencil with a high smoothness setting may change a pixel that is fully part of the background to a pixel that is 10% part of the foreground and 90% part of the background. The degree of effect of the pencil may also vary depending on how close a pixel is to the center of the pencil. For example, for the high smoothness setting, the pencil tool may alter pixels that are closer to the center by 50%, and may alter pixels that are at the edge of the pencil by 10%. Thus, the smoothness of the pencil may be perceived as a blur around the edge of the pencil. In some embodiments, image masking options window 2806 can provide preview window 2818 to illustrate the size and smoothness of the pencil tool based on the current setting of sliders 2814 and 2816.
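One plausible model of these two settings is a radial stamp whose per-pixel strength fades with distance from the center. The exact falloff curve is not specified above, so the linear falloff below is an assumption:

```python
import numpy as np

def pencil_stamp(size, smoothness):
    """Per-pixel strength of one pencil dab. `size` is the tip diameter in
    pixels (slider 2814); `smoothness` in [0, 1] (slider 2816) controls how
    much the effect fades from the center toward the edge of the tip."""
    radius = size / 2.0
    yy, xx = np.mgrid[:size, :size]
    dist = np.hypot(yy - radius + 0.5, xx - radius + 0.5)
    strength = np.clip(1.0 - smoothness * dist / radius, 0.0, 1.0)
    strength[dist > radius] = 0.0   # outside the round tip: no effect
    return strength                  # fraction by which each pixel's
                                     # foreground/background share changes
```

With smoothness 0 the stamp flips pixels fully (a hard pencil); with smoothness 1 the effect fades linearly to zero at the edge, which is perceived as the blur described above.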
As mentioned above, the foreground extraction tool provided by the presentation application can determine whether pixels are part of the foreground of an object or the background of the object.
Process 2900 may begin at step 2902. At step 2904, the presentation application may create a trimap in response to, for example, the user drawing a lasso in an object. In other embodiments, the lasso may be generated automatically by the presentation application. The object may be, for example, an image, picture, graphic, still frame from a video, etc.
At step 2906, the presentation application may calculate a set of resistances, which are referred to herein with the symbol omega (Ω), for each unknown pixel of the trimap. A resistance between two pixels may correspond to the degree of color difference between the two pixels. The presentation application may calculate the resistance by mapping the red-green-blue (RGB) value of each pixel to a point in a three-dimensional coordinate system (e.g., X-Y-Z coordinate system) and computing the Euclidean distance between the points. In other embodiments, any other method for calculating color resistance may also be used by the presentation application.
For each unknown pixel, the presentation application may need to determine the resistance to neighboring pixels in each of the eight directions (four cardinal directions and four diagonals). To obtain these resistances without computing duplicates, and thereby conserve processing power, the presentation application can calculate and store only four resistances for each unknown pixel at step 2906. For example, the presentation application may calculate resistances in the up, up-right, right, and down-right directions. If, for example, the down resistance were also calculated for a particular pixel, this would be a duplicate of the up resistance calculated for the pixel below that particular pixel. Therefore, computing resistances for four neighboring pixels may allow the presentation application to obtain the resistances between each neighboring pixel without computing duplicates.
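A vectorized sketch of step 2906 follows, assuming the object is an RGB numpy array. Resistances are Euclidean distances in RGB space, and only four directions are stored per pixel, since the opposite four can be read from neighboring pixels' entries; border handling is glossed over (np.roll wraps at the image edges).

```python
import numpy as np

STORED_DIRECTIONS = {"up": (-1, 0), "up_right": (-1, 1),
                     "right": (0, 1), "down_right": (1, 1)}

def pixel_resistances(image):
    """Map direction name -> (H, W) array of resistances between each pixel
    and its neighbor in that direction. A pixel's "down" resistance is the
    "up" entry of the pixel below it, so no duplicates are computed."""
    img = image.astype(float)
    res = {}
    for name, (dy, dx) in STORED_DIRECTIONS.items():
        # Align each pixel with its neighbor at (y + dy, x + dx).
        neighbor = np.roll(np.roll(img, -dy, axis=0), -dx, axis=1)
        res[name] = np.linalg.norm(neighbor - img, axis=2)
    return res
```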
With continuing reference to
Then, at step 2910, and for each unknown pixel, the presentation application can find a path of low resistance directed (or “guided”) from the unknown pixel towards the foreground pixels. That is, using the resistances computed at step 2906 and the vectors computed at step 2908, the presentation application can spawn a walker algorithm that walks pixel-by-pixel towards the nearest foreground pixels. Because the walkers are guided by the vectors to the closest background or foreground pixels, these vectors are referred to herein as “guidance vectors.” The algorithm can be terminated once the walker reaches a pixel that is either part of the foreground or the background. The walker algorithm can be associated with a free parameter, which defines the priority between the resistances and the guidance vectors. The free parameter, for example, may define how heavily the walker is guided by the resistances between pixels in relation to the guidance vectors associated with those pixels. The free parameter may be configured such that the walker travels the path of least resistance. Alternatively, the free parameter may be set such that a different outcome is achieved. Then, at step 2912, the presentation application can find a path of low resistance leading away from the unknown pixel and directed towards the background pixels, which may be found using a second walker algorithm. The paths of low or least resistance for the foreground and background walkers can be found using the process described below in connection with
At step 2914 of process 2900, the presentation application may compute an alpha value for each unknown pixel. The alpha value may correspond to the degree that the unknown pixel is part of the foreground. An alpha value of one may signify that the pixel is fully and only part of the foreground, an alpha value of zero may signify that the pixel is fully and only part of the background, and an alpha value between these values may indicate that the pixel is part of both the background and foreground to some degree. For example, an alpha value of 0.5 indicates that the pixel is equally part of the background as it is part of the foreground. As another example, an alpha value of 0.25 may signify that the pixel is mostly part of the background.
The presentation application can compute an alpha value based on the result of the paths found in steps 2910 and 2912. For example, the presentation application can compute an alpha value based on whether the walker algorithm directed toward the foreground pixels ended at a foreground pixel, whether the walker algorithm directed toward the background pixels ended at a background pixel, and the total resistance encountered during the walks. For example, when both walks end at a foreground pixel, the alpha value for that pixel may be set to one, and when both walks end at a background pixel, the alpha value for that pixel may be set to zero. The computation performed by the presentation application to obtain alpha values for each unknown pixel is described in greater detail below in connection with
Referring now to
If, at step 3008, the presentation application instead determines that the direction towards the pixel of least resistance does not oppose the direction of the guidance vector, process 3000 may move to step 3010. At step 3010, the walker may move to the pixel of least resistance, and may add this resistance to the resistance accumulator at step 3012. Thus, the resistance accumulator may be updated to reflect the total resistance accumulated by the walk up to the current point. Then, at step 3014, the presentation application may determine whether the new pixel, which was moved to at step 3010, is an unknown pixel in the trimap. If so, process 3000 may move back to step 3006 and may identify the pixel of least resistance for the new unknown pixel. Otherwise, the new pixel may be either part of the foreground or background, and process 3000 may end at step 3016.
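The walker loop of process 3000 might look like the sketch below. It is illustrative only: `resistance_to` and `guidance` are presumed supplied by the earlier steps, bounds checks are omitted, and because the branch taken when the least-resistance step opposes the guidance vector is not detailed above, the sketch simply falls back to stepping along the guidance vector in that case.

```python
import numpy as np

UNKNOWN = 1   # trimap label, matching the earlier sketch

def walk(start, trimap, resistance_to, guidance, max_steps=100000):
    """One walker from process 3000: starting at an unknown pixel, move to
    the neighboring pixel of least resistance, accumulating the resistance
    encountered (steps 3010-3012), until reaching a pixel that is no longer
    unknown (step 3014). Returns the terminal pixel and total resistance."""
    pos, total = tuple(start), 0.0
    for _ in range(max_steps):
        y, x = pos
        if trimap[y, x] != UNKNOWN:   # reached a foreground/background pixel
            break
        neighbors = [(y + dy, x + dx)
                     for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                     if (dy, dx) != (0, 0)]
        best = min(neighbors, key=lambda q: resistance_to(pos, q))
        step = np.subtract(best, pos)
        # Step 3008: if the least-resistance step opposes the guidance
        # vector, fall back to stepping along the guidance vector itself
        # (an assumption; the branch is not fully specified above).
        if np.dot(step, guidance[y, x]) < 0:
            best = (y + int(np.sign(guidance[y, x][0])),
                    x + int(np.sign(guidance[y, x][1])))
        total += resistance_to(pos, best)   # the resistance accumulator
        pos = best
    return pos, total
```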
Thus, process 3000 of
Process 3100 of
where A is the alpha value, Ωbg is the resistance of the path guided towards the background, and Ωfg is the resistance of the path guided towards the foreground. Process 3100 may then move to step 3112 and end.
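A ratio of the two accumulated resistances is consistent with these definitions and with the limiting cases described below (an alpha of one when both walks end at foreground pixels, zero when both end at background pixels). One such form, offered as an assumption rather than as the application's exact formula, is

$$A = \frac{\Omega_{bg}}{\Omega_{fg} + \Omega_{bg}}$$

so that a comparatively resistive path towards the background pushes the pixel's alpha towards the foreground.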
Returning to step 3108, in response to the presentation application determining that the foreground walker does not reach a foreground pixel, process 3100 may move to step 3114. In this case, both walkers have terminated at a background pixel, and the presentation application may consider the unknown pixel to be fully part of the background. Thus, at step 3114, the presentation application may set the alpha value of the pixel to zero. Process 3100 may then move to step 3112 and end.
Returning to step 3106 of process 3100, if the presentation application determines that the background walker does not reach a background pixel, process 3100 may move to step 3116. At step 3116, the presentation application may determine whether the foreground walker reaches a foreground pixel. If so, process 3100 may move to step 3118. In this case, both the background walker and foreground walker have reached foreground pixels, and the presentation application may consider the unknown pixel to be fully part of the foreground. Thus, at step 3118, the presentation application may set the alpha value to one. Process 3100 may then move to step 3112 and end.
Returning to step 3116, if the presentation application determines that the foreground walker does not reach a foreground pixel, process 3100 may move to step 3120. In this scenario, neither walker has reached its respective type of pixel. The presentation application may not be able to properly determine whether the unknown pixel is part of the foreground or background. Thus, at step 3120, the presentation application may set the alpha value of the pixel to 0.5. Process 3100 may then move to step 3112 and end.
Using the alpha value associated with each of the unknown pixels, and therefore the information on whether each of the pixels is part of the foreground or part of the background, the presentation application may be able to extract the foreground from an object. In some embodiments, the presentation application may trace the foreground or background pixels using a tracing algorithm (e.g., Potrace) to smoothen the areas of the object before performing the extraction. The result of extracting an object may be illustrated in
One skilled in the art would understand that
In some embodiments of the present invention, the presentation application may provide users with the ability to record audio for use in enhancing slide show presentations. The presentation application may allow a user to give presentations using slides that may have been created and edited in the presentation application. In some embodiments, the presentation application may allow a user to record a presentation, including the audio of the presentation, while the presentation is given. The slide show presentation with the audio can be played or exported to a different file format (e.g., QuickTime, etc.) using the timing of the recorded presentation. These features are described in greater detail below.
Referring now to
Display screen 3200 can include selectable record icon 3202 and selectable play icon 3204 in the toolbar of the presentation application. When a user selection of record icon 3202 is received, the presentation application may enter into presentation and/or presenter's mode to record a presentation of the slides currently open in the presentation application. For example, and as described above, the presentation application may switch to a presentation mode that displays the slide show presentation in a format suitable for viewing by an audience. In addition, or alternatively, the presentation application may switch to a presenter's mode that can be used by a presenter while giving a slide show presentation. While in presentation and/or presenter's mode, the presentation application may record the presentation given by the presenter.
In some embodiments, the presentation application may record the displayed slides of the slide show presentation by saving the timing between transitions in the presentation. That is, during the presentation, the transitions between, for example, the slides and/or object animations may progress by either manual manipulation from a presenter or based on timing settings predefined in edit mode. To record the slide show presentation, the presentation application may record the timing of each of these transitions. This allows the presentation application to recreate the visual component of the slide show presentation at a later time. The presentation application can use the recreated visual component to create a video file, such as an MPEG file.
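As a sketch of what recording the timing of each transition could look like (the names and structure are illustrative, not the application's internals):

```python
import time

class TransitionRecorder:
    """Record when each slide or build transition fires so the visual
    track of the presentation can be recreated, and rendered to video,
    after the fact."""
    def __init__(self):
        self._start = time.monotonic()
        self.events = []            # (seconds_since_start, transition_id)

    def mark(self, transition_id):
        self.events.append((time.monotonic() - self._start, transition_id))

recorder = TransitionRecorder()
recorder.mark("slide-1->slide-2")   # presenter advances past the first slide
```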
In some embodiments, the presentation application may record the displayed slides of the slide show presentation directly using, for example, a screen capturing program. For example, the screen capturing program may take a series of screen shots during the slide show presentation and may save the screen shots as a video file (e.g., an MPEG file).
In addition to or instead of saving the slides or slide transitions of the slide show presentation, the presentation application may perform audio recording to obtain a sound track that can accompany the slide show presentation. For example, the presentation application may enable an audio input device (e.g., included in user interface 102 of
In some embodiments, the application may provide a user with the ability to pause the recording of the presentation. This may allow the user to stop the recording of a presentation in the event of, for example, an interruption in the presentation (e.g., if an audience member asks a question). To pause the recording, the presentation application can pause the progression of the slide show presentation and the audio recording. If a screen capturing program is used for video recording, the presentation application may also pause the screen capturing program. If the presentation application records the slide show presentation by saving slide transitions, the presentation application may perform the steps described below in connection with
The presentation application may provide an interface that allows the presenter to pause the recording and progression of the presentation in the presenter's mode of the application. In the presenter's mode, a “record” selectable icon may be displayed when the presentation is being recorded, and a “pause” selectable icon may be displayed when the presentation is not being recorded. By selecting the “record” or the “pause” icon, depending on which is displayed, the presenter may direct the presentation application to change the recording state of the presentation. The presentation application may display the slide show presentation to an audience using the presentation mode, where neither the “record” nor the “pause” icon may be displayed.
The presentation application may create an audio-visual media file (e.g., an MPEG file) based on the recorded slide show presentation. The audio component of the audio-visual media file may be the audio recorded during the slide show presentation, and the visual component may be the slides of the slide show presentation. The presentation application may create the audio-visual media file automatically or in response to a user request. For example, in some embodiments, the presentation application may create the audio-visual media file automatically once the slide show presentation has ended (e.g., by combining the video file and the audio file). In some embodiments, the presentation application may create the audio-visual media file in response to a user request to export the recorded slide show presentation (described below).
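Combining the recorded video and audio into one audio-visual media file could be done with any muxer; the sketch below uses the ffmpeg command-line tool as an assumed stand-in for whatever the application actually employs.

```python
import subprocess

def mux_audio_video(video_path, audio_path, out_path):
    """Combine the recorded video and the recorded audio track into a
    single audio-visual media file using the ffmpeg CLI (a stand-in for
    whatever muxer the application actually uses)."""
    subprocess.run([
        "ffmpeg", "-y",
        "-i", video_path,   # visual component (e.g., screen capture or recreated slides)
        "-i", audio_path,   # sound track recorded during the presentation
        "-c:v", "copy",     # keep the video stream unchanged
        "-c:a", "aac",      # re-encode audio for broad container compatibility
        "-shortest",        # end the file at the shorter of the two streams
        out_path,
    ], check=True)
```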
With continued reference to
Selecting play icon 3204 may cause the presentation application to play the recorded slide show presentation from any suitable slide or from any point in the slide show presentation. For example, the presentation application may begin to play from the currently selected slide (e.g., the slide displayed in the slide canvas). As another example, the presentation application may begin playback from the beginning of the presentation regardless of which slide is currently selected, or from any other suitable playback position. The presentation application can ensure that any audio that is played during playback of the slide show presentation corresponds to the appropriate slide or slide transition.
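One plausible way to keep playback audio aligned with the selected slide is to consult the slide-change markers recorded in the audio track (markers are discussed further below). The helper below is a hypothetical illustration of looking up the audio offset for a given slide; the marker format is an assumption.

```python
def audio_offset_for_slide(markers, slide_index):
    """Return the audio-track offset (in seconds) at which playback should
    start for the requested slide. `markers` is a hypothetical list of
    (timestamp_seconds, slide_index) pairs recorded at each slide change."""
    for timestamp, slide in markers:
        if slide == slide_index:
            return timestamp
    return 0.0  # no marker for this slide: fall back to the beginning

markers = [(0.0, 0), (12.5, 1), (40.2, 2)]
print(audio_offset_for_slide(markers, 2))  # -> 40.2
```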
In some embodiments, the slide show presentation, including the audio recorded by the user, can be exported from the application to another file format. As illustrated in
Window 3400 of
Based on the selected program, the presentation application may provide window 3400 with various input mechanisms (e.g., drop-down menus, check boxes, etc.) that allow a user to define export settings. The export settings may include playback settings to define the way in which the progression of the exported presentation will be advanced (e.g., manually), format settings to set the quality (e.g., high, low) of the exported presentation, and audio settings to select the audio component of the exported presentation. For example, the presentation application may provide drop-down menu 3412, where a user can select one of a plurality of playback settings for the exported file. The user can select option 3414 to produce an exported presentation that is manually advanced, or the user may select option 3416 to produce an exported presentation with the timing of the recorded presentation. Using highlight region 3418, the user may choose to include or to exclude the recorded audio in the exported presentation.
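The export settings gathered by window 3400 might be modeled as a simple settings object; the sketch below is illustrative only, and the field names and defaults are assumptions.

```python
from dataclasses import dataclass

@dataclass
class ExportSettings:
    """Hypothetical container for the options exposed by window 3400;
    the field names and defaults are assumptions."""
    target: str = "QuickTime"    # selected program or source
    playback: str = "recorded"   # "manual" (option 3414) or "recorded" (option 3416)
    quality: str = "high"        # format setting for the exported presentation
    include_audio: bool = True   # whether to include the recorded sound track

settings = ExportSettings(playback="manual", include_audio=False)
```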
Window 3400 can include “Next . . . ” option 3420, which a user may select to direct the presentation application to export the slide show presentation. When a user selection of “Next . . . ” option 3420 is received, the presentation application can export the slide show presentation using the file type associated with the selected program and the export settings specified by the user. In some embodiments, the presentation application may first create a media file of a particular type (e.g., an MPEG file with the user-defined audio and visual components), and the presentation application can convert this media file to a different format compatible with the selected program or source. The presentation application may also perform any source-specific actions with the exported presentation. For example, if the presentation is exported to a QuickTime format, the presentation application may invoke the QuickTime media player to play the exported presentation. If the presentation is exported to an iPod format, the presentation application may initiate storage of the exported presentation into the storage of an iPod (if present).
In some embodiments, the presentation application may have the capability to add subtitles to an exported slide show. The presentation application may add the subtitles automatically, or window 3400 may include an option (not shown) for including subtitles in the exported slide show. In some embodiments, the presentation application can add subtitles to a video file by invoking an application capable of voice-to-text conversion. The text can then be applied to the slide show presentation with or instead of the recorded audio, and may be exported to the format specified by the user.
Referring now to
At step 3510, the presentation application may determine whether a transition has occurred. The transition may be a slide transition, an animation or effect of an object (e.g., build in, build out, or smart build), or any other suitable transition defined in the slide show. Some animations that may affect the decision at step 3510 are discussed in greater detail in the P5574 reference. If, at step 3510, the presentation application determines that a transition has not occurred, process 3500 may return to step 3510 and may continue waiting for a transition. If, at step 3510, the presentation application determines that a transition has occurred, process 3500 may move to step 3512. At step 3512, the presentation application may determine whether the slide show has ended. If the slide show has ended, process 3500 may move to step 3514. At step 3514, the presentation application may disable audio recording (e.g., may turn off a microphone), and process 3500 may end at step 3516.
If, from step 3512, the presentation application instead determines that the slide show has not ended, process 3500 may move to step 3518. At step 3518, the presentation application may associate the value of the timer with the transition that occurred at step 3510. The value of the timer may reflect the amount of time that has elapsed since the previous transition (e.g., the last animation or slide change). Therefore, saving the time lapse between each transition in this way may allow the presentation application to reproduce the timing of the slide show presentation at a later time.
At step 3520, the presentation application may determine whether the slide show presentation has changed to a new slide. If the presentation application determines that the slide show has not changed to the next slide, process 3500 may return to step 3508 and may reset the timer for timing the next transition. If, at step 3520, the presentation application instead determines that the slide show has changed to the next slide, the presentation application may move to step 3522. At step 3522, the presentation application may add a marker to the audio track to indicate the location of the slide change in the time progression of the slide show presentation. Therefore, if the presentation is played back from this new slide rather than from the first slide, the audio track can be played starting from the proper time. Process 3500 may return to step 3508 and the presentation application may reset the timer for timing the next transition in the slide show.
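Putting steps 3508 through 3522 together, a recording loop along the lines of process 3500 might look like the following sketch, where the event source and tuple format are assumptions made for illustration.

```python
import time

def record_slide_show(events):
    """Sketch of the recording loop of process 3500. `events` is an assumed
    iterable of (kind, transition_id) tuples where kind is "animation",
    "slide", or "end". Returns per-transition timings plus audio markers."""
    timings, slide_markers = [], []
    start = time.monotonic()
    last = start                                   # step 3508: start/reset the timer
    for kind, transition_id in events:
        if kind == "end":
            break                                  # steps 3512/3514: stop recording
        now = time.monotonic()
        timings.append((now - last, transition_id))           # step 3518: save the time lapse
        if kind == "slide":
            slide_markers.append((now - start, transition_id))  # step 3522: marker in audio timeline
        last = now                                 # step 3508: reset for the next transition
    return timings, slide_markers
```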
One skilled in the art would understand that
Referring now to
If, at step 3604, the presentation application instead determines that a pause request has been received, process 3600 may move to step 3608. At step 3608, the presentation application may disable audio recording. This may cause the recorded audio track to pause. At step 3610, the presentation application may pause the timer used to obtain timing information on slide transitions (discussed above). Thus, neither the audio nor the visual component of the recording may remain active when a pause command is received.
At step 3612, the presentation application may determine whether a resume request has been received. A resume request may be received when, for example, the presenter re-selects the icon on the presenter's mode display. If no resume request has been received, process 3600 may return to step 3612 and may wait for a resume request. If, at step 3612, the presentation application determines that a resume request has been received, process 3600 may move to step 3614. At step 3614, the presentation application may reactivate the timer, and at step 3616, the presentation application may enable audio recording. Thus, by re-activating both the audio recording and the timer at approximately the same time, the audio track and the slide show timing may continue to progress together. Process 3600 may then end at step 3606.
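A pausable timer of the kind used in process 3600 can be sketched as follows; the class is hypothetical and simply accumulates elapsed time across pause and resume cycles so that slide timing and the audio track stay in step.

```python
import time

class PausableTimer:
    """Hypothetical timer that accumulates elapsed time across pause and
    resume cycles, mirroring steps 3610 and 3614 of process 3600."""

    def __init__(self):
        self._elapsed = 0.0
        self._started = None

    def start(self):
        self._started = time.monotonic()

    def pause(self):
        if self._started is not None:
            self._elapsed += time.monotonic() - self._started
            self._started = None               # timer no longer running

    def resume(self):
        self._started = time.monotonic()

    def value(self):
        running = 0.0 if self._started is None else time.monotonic() - self._started
        return self._elapsed + running
```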
One skilled in the art would understand that
Referring now to
At step 3708, the presentation application may determine whether to export the slide show presentation using the timing from the recorded presentation. For example, the presentation application may determine whether the user has selected “Recorded Timing” option 3416 (
If, at step 3708, the presentation application determines that the timing of the recorded presentation should be used, process 3700 may move to step 3714. At step 3714, the presentation application may determine whether to include the recorded audio in the exported presentation. In some embodiments, this determination can be performed based on whether a user selection of checkbox 3418 (
Returning to step 3714, if the presentation application instead determines that the recorded audio should be used, process 3700 may move to step 3718 without disabling the audio. At step 3718, the presentation application may export the presentation to the selected program using the recorded timing. Because the selected program may be associated with a particular file format (e.g., resolution, compression, etc.) different from the current format, exporting the presentation may involve converting the slide show presentation to the format expected by the selected program. In some embodiments, exporting the slide show presentation may be performed in real-time. That is, the presentation application may play the slide show presentation, and while the slide show is played, the presentation application may save and convert the slide show presentation in the format of the selected program. In other embodiments, the presentation application may export the slide show presentation substantially faster than in real-time (e.g., 1.5×, 3×, or 10× faster), depending on the format converters available to the presentation application. Process 3700 may end at step 3712.
One skilled in the art would understand that
The presentation application may provide a plurality of features and functionalities, including those described above. Some of the features provided by a presentation application may not be supported by other applications or programs, or may not be supported by older versions of the presentation application. For example, when exporting a recorded slide show presentation to another program, as described above in connection with
In some embodiments, an overlay may be displayed by the presentation application when a saved slide show is opened. The presentation application may provide a Document Warnings overlay window to the user if any import errors are discovered when the slide show is opened.
Each warning 3804 of Document Warnings window 3802 may be selectable. To choose a particular warning/notification, the user may move highlight region 3806 to the warning or may directly select one of the warnings using an input device (e.g., a mouse or stylus). In response to receiving a selection of one of the warnings, the presentation application may display more information for the selected warning, if available, or may display options associated with the selected warning.
Each warning 3804 of Document Warnings window 3802 may list the slides in the slide show that are affected by the warning. For example, if warning 3804 is intended to notify the user of an unsupported font, warning 3804 may list all of the slides that include the unsupported font. Selecting a warning using highlight region 3806 may cause the presentation application to display one of the affected slides (e.g., the first affected slide). This may allow the user to quickly and easily navigate to an affected slide and determine the severity of the importing error. In these embodiments, the presentation application may display each of the affected slides in succession when the user selects a warning multiple times. In other embodiments, warning 3804 may list the slides that are affected by a particular import error, and may rely on the user to manually select a slide affected by the error.
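A warning entry that lists its affected slides and cycles through them on repeated selection might be modeled as in the sketch below; the DocumentWarning class and its fields are assumptions for illustration.

```python
from dataclasses import dataclass, field
from itertools import cycle

@dataclass
class DocumentWarning:
    """Hypothetical model of one entry in the Document Warnings window."""
    kind: str                 # e.g., "Substituted Font"
    message: str
    affected_slides: list = field(default_factory=list)

    def __post_init__(self):
        self._slides = cycle(self.affected_slides) if self.affected_slides else None

    def next_affected_slide(self):
        """Each repeated selection navigates to the next affected slide."""
        return next(self._slides) if self._slides else None

warning = DocumentWarning("Substituted Font", "Font not available", affected_slides=[2, 5, 9])
print(warning.next_affected_slide())  # -> 2
print(warning.next_affected_slide())  # -> 5
```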
In some embodiments, selecting a warning using highlight region 3806 may cause the presentation application to display several different correction options to the user (not shown in
If the user selects one of the substituted fonts that the presentation application displays in the new window or pull-down menu, each instance of the unsupported font may be replaced by the selected font. In some embodiments, choosing one of the plurality of correction options may direct the presentation application to temporarily replace all instances of the unsupported font with the selected font. The presentation application may allow the user to preview the corrections before finalizing the corrections, thereby providing an opportunity for the user to choose and preview a different correction option. Because selecting the warning message may also bring the user to a slide affected by the warning, as described above, this interface may provide a simple and effective way to correct import errors. The user may make any necessary corrections substantially using Document Warnings window 3802 (rather than manually going through each slide), and can rely on Document Warnings window 3802 to inform the user of any problems that may arise due to importing files from different sources.
When a warning is highlighted in Document Warnings window 3802, the user may direct the presentation application to remove the highlighted warning by selecting button 3808. For example, the user may choose to select button 3808 when the user no longer needs to see the warning message (e.g., has corrected the import error associated with the warning message). In response to receiving the selection, the presentation application can provide a display screen similar to display screen 3900 of
In some operating scenarios of the presentation application, a user may have several presentations open. In these scenarios, the presentation application may provide a different Document Warnings window for each open presentation. Each Document Warnings window may list the import notifications/warnings for its associated presentation. When one of the presentations is displayed in the foreground of the application, the Document Warnings window associated with that presentation may also be brought to the foreground. For example, when the user directs the presentation application to display the slides of a particular presentation, the presentation application may display the Document Warnings window associated with those slides as an overlay over the slides.
In some embodiments, the presentation application may automatically correct an import error. The presentation application may automatically correct an import error when, for example, it determines that there is only one suitable approach or a best approach for the type of unsupported import error. When this occurs, the warning displayed in Document Warnings window 3802 may be a warning that the automatic correction took place. For example, in some cases, the presentation application may automatically substitute an unsupported font with another font, and may display a warning in the Document Warnings window that a font has been substituted. This example may be illustrated by warning 3804, which may be of type “Substituted Font.” As another example, the presentation application may replace a corrupted object in a slide with a placeholder object, and may notify the user of this replacement. For example, if an image included in the presentation cannot be identified by the presentation application, the presentation application may display a placeholder image (e.g., of a question mark) in place of the unidentified image. This placeholder image or object may provide a user indication that there should be an object at the position, and may in some scenarios indicate the size of the original object.
As mentioned above, the presentation application may display different types of import errors. The user may determine what types of import errors to display in Document Warnings window 3802 (
Although, up to this point, Document Warnings window 3802 (
If, at step 4106, the presentation application instead determines that an import error has occurred, process 4100 may move to step 4110. At step 4110, the presentation application may determine whether the import error is the last import error that needs to be addressed. For example, the presentation application may determine whether all of the import errors associated with the document have been addressed (e.g., a warning has been displayed for each error). If the import error is the last import error, process 4100 may move to step 4108 and end.
If, at step 4110, the presentation application determines that the import error is not the last import error, process 4100 may move to step 4112. At step 4112, the presentation application may determine whether to perform automatic correction. The application may decide to automatically correct an import error if there is a logical correction strategy. For example, the application may automatically substitute an unsupported font with a font that appears nearly identical. The choice of whether to automatically correct a particular error may be a design choice made by designers of the presentation application. For example, the presentation application may keep track of a set of typical unsupported features and an approach to automatically correct each of the unsupported features in the set.
If, at step 4112, the presentation application determines that automatic correction should be performed, process 4100 may move to step 4114. At step 4114, the presentation application may perform the automatic correction associated with the import error (e.g., automatic font replacement), and at step 4116, the presentation application may display an import warning informing the user of the automatic correction. For example, for an automatically replaced font type, the presentation application may display a warning message of type “Substituted Font,” as described above. Process 4100 may then move back to step 4110 to address another import error, if necessary.
Returning to step 4112, if the presentation application instead determines that automatic correction will not be performed, process 4100 may move to step 4118. At step 4118, the presentation application may perform correction based on a correction option selected by the user. The correction option may be selected by the user from a plurality of options in, for example, a pull-down menu associated with a Document Warnings window. The application may use a placeholder correction strategy for the import error until a user selection of a correction strategy is received. For example, for an unsupported font, the application may use a default font face until a user selects a different font, at which time the presentation application may replace the default font with the selected font. Thus, even if the presentation application bypasses step 4118 (e.g., by user request), the text may still be associated with a font supported by the presentation application—the default font. Process 4100 may then return to step 4110 to address another import error, if necessary.
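The correction logic of steps 4112 through 4118 can be summarized as a dispatch over known error types, with a placeholder applied until a user choice arrives. The sketch below is illustrative only; the error-type names and fix strategies are assumptions.

```python
# Hypothetical mapping from known unsupported features to automatic fixes
# (steps 4112 and 4114); anything else waits for a user-selected correction
# (step 4118), with a placeholder applied in the meantime.
AUTO_FIXES = {
    "unsupported_font": lambda err: dict(err, font="Helvetica"),    # assumed near-identical substitute
    "corrupted_object": lambda err: dict(err, object="placeholder"),
}

DEFAULT_CORRECTION = "default"

def correct_import_error(error, user_choice=None):
    fix = AUTO_FIXES.get(error["type"])
    if fix is not None:
        return fix(error), f"Automatically corrected: {error['type']}"   # step 4116 warning text
    chosen = user_choice if user_choice is not None else DEFAULT_CORRECTION
    return dict(error, resolution=chosen), f"Awaiting user correction: {error['type']}"

corrected, warning = correct_import_error({"type": "unsupported_font", "font": "FancyScript"})
print(warning)  # -> Automatically corrected: unsupported_font
```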
One skilled in the art would understand that
If, from step 4208, the presentation application instead determines that a user selection of a warning message has been received, process 4200 may move to step 4212. At step 4212, the presentation application may go to the slide affected by the warning message. For example, the application may show the affected slide in the slide canvas (e.g., slide canvas 210 of
One skilled in the art would understand that
Thus, the foregoing describes systems and methods for enhancing displayed objects, such as videos and images, within an application. Those skilled in the art will appreciate that the invention can be practiced by other than the described embodiments, which are presented for the purpose of illustration rather than of limitation.
This application is a continuation of U.S. patent application Ser. No. 12/187,262, entitled “Background Removal Tool for a Presentation Application,” filed on Aug. 6, 2008, which claims the benefit of U.S. Provisional Application Nos. 60/954,297, filed Aug. 6, 2007 and 60/970,167, filed Sep. 5, 2007. This application is also related to U.S. application Ser. No. 12/187,174, filed Aug. 6, 2008; Ser. No. 12/187,264, filed Aug. 6, 2008; and Ser. No. 12/187,263, filed Aug. 6, 2008. All of these applications are hereby incorporated by reference herein in their entirety.