The invention is directed towards media editing and viewing applications. Specifically, the invention is directed towards a navigation tool for such applications.
Video editing applications provide film producers, amateur users, etc. with tools to create video presentations (e.g., films). These applications give users the ability to edit, combine, transition, overlay, and piece together different video clips (along with other media content, such as audio) in a variety of ways to create a video presentation. As such, a video presentation is generally composed of a number of video clips. In general, a video clip is a single, unbroken video that is imported individually into the video editing application used to create the video presentation.
Once created by compositing the various video clips, video presentations can be both edited and viewed. Post-composition editing can include modifying the color properties of the video clips (i.e., color correction), adding special effects to the video clips, etc. After the video presentation is finalized, it can be distributed in various forms for viewing (e.g., as an electronic file, as a DVD or Blu-Ray disc, etc.).
Both in the post-composition editing stage and the viewing stage, users will want to navigate the video presentation in order to easily jump to a particular point in the presentation. In editing, a user might want to color correct a particular video clip and would need a way to navigate through the potential hundreds, if not thousands, of video clips that make up the presentation. Precisely finding and accessing a particular video clip can be difficult.
Similarly, users watching a video will often want to jump to a particular scene. For navigating a video while viewing it on a digital media player, current players generally only provide a track in the time dimension with a playhead that can be moved in order to jump to a point in the video presentation. However, the typical track just shows a user the time, so in order to reposition the playhead the user is required to guess at the time in the video presentation at which their desired content appears. Attempting to find a particular scene or video clip in this manner can be a tedious process.
DVD players generally provide a scenes menu that has the movie or other video content broken down into scenes (often on the order of a few dozen scenes). A user can access the scenes menu, where each scene is often represented by an image. For the typical DVD, approximately four such images are shown at one time, and a user can jump through the groups of four to find the scene they desire. However, scenes can be on the order of four or five minutes long, and any more precise navigation within a scene must be done with the standard fast-forward and rewind functions. For all of the above applications, tools for more robust navigation of video presentations are needed.
For a composite video presentation formed by an ordered series of video clips, some embodiments of the invention provide a novel navigation tool for representing the video clips and navigating through them. Some embodiments select a representative video picture from each video clip in the video presentation and use the representative video pictures to generate the navigation tool.
In some embodiments, exactly one representative video picture from each clip (e.g., a video picture based on location within the clip, a video picture based on analysis of the clip, etc.) is selected for the navigation tool. The selected video pictures are then arranged in a user interface to generate the navigation tool. In some embodiments, the navigation tool is a rectangular display area (e.g., a scrollbar) and the selected video pictures are ordered in the display area to match the order within the video presentation of the video clips they represent.
Different embodiments use the navigation tool to navigate through the video presentation differently. For instance, the selection of a particular video picture in the navigation tool in some embodiments causes the display of the video presentation in a separate display area (for viewing, editing, etc.), starting from the video clip represented by the selected video picture. In such cases, each representative video picture acts as a user interface tool that links to a particular point in the video presentation.
Instead of navigating through the video presentation directly, other embodiments use the navigation tool to navigate through a larger set of representations of the video clips in the video presentation. In order to do this, some embodiments display the navigation tool as a scrollbar in a first display area, while displaying, in a second display area, a set of thumbnail images (in some cases, the same images as in the scrollbar) for a user to scroll through. In some embodiments, the first display area has a defined size in which all of the selected representative video pictures are displayed at once. Accordingly, the selected video pictures are compressed in size so as to fit in the first display area. In some cases, the video pictures are compressed to the point that the content is no longer discernible by a human eye. However, the navigation tool will still illustrate the prominent colors of the clips, from which a human can discern the color rhythm of the video project (i.e., sections that are “warm” (light colors) or “cool” (dark colors)). The video pictures in the second display area are displayed in a size large enough that the content is discernible by a human (e.g., as thumbnails).
Some embodiments display a window over the navigation tool to indicate which video clips have video pictures presently displayed in the second display area. This window can be dragged along the navigation tool by the user in order to scroll through the video clips in the second display area. In some embodiments, a user can select a particular video clip for display in a separate display area either using the navigation tool (as described above) or by selecting one of the larger video pictures in the second display area.
The navigation tool as described can be used in a variety of different applications. For instance, the navigation tool of some embodiments is provided in a video-editing application. A user can use the navigation tool to select a video clip for editing (either by selecting directly from the navigation tool or by scrolling through video clips in a second display area). Once the video clip is selected, a user can use editing tools to edit various aspects of the selected clip (e.g., color correction, adding special effects, etc.).
In addition to use in editing applications, the navigation tool can be used in applications for playing a completed composite video presentation. For instance, navigation tools that include one representative video picture per video clip may be used in a DVD player application. In such applications, the video clips are defined as in the editing process in some cases, while in other cases the video clips are defined based on the scenes of the DVD. A viewer of the DVD can use the navigation tool to navigate to a particular scene in the movie (or other video content) that is stored on the DVD.
Similarly, the navigation tool may be provided in applications that play video files other than from DVDs (e.g., personal computer applications such as Quicktime®, applications on handheld devices such as cell phones or media players, etc.) or as the interface for interacting with machines designed specifically for playing DVDs, Blu-Ray® discs, etc. Finally, the navigation tool can be used to navigate through images rather than video. For instance, if a digital photo album groups photos into “events” or “film rolls” (sets of images that are logically grouped together), then one image from each of several sets could be selected and used to generate the navigation tool of some embodiments.
The novel features of the invention are set forth in the appended claims. However, for purpose of explanation, several embodiments of the invention are set forth in the following figures.
In the following description, numerous details are set forth for purpose of explanation. However, one of ordinary skill in the art will realize that the invention may be practiced without the use of these specific details. For instance, some portions of the application refer to examples of video clips that are composed of a number of frames. One of ordinary skill will recognize that video clips could be composed of fields or other units of digital video as well.
For a composite video presentation formed by an ordered series of video clips, some embodiments of the invention provide a novel navigation tool for representing the video clips and navigating through them. Some embodiments select a representative video picture from each video clip in the video presentation and use the representative video pictures to generate the navigation tool.
As stated, the video presentation is composed of an ordered series of video clips. Each video clip, in some embodiments, is a series of consecutive ordered video pictures (e.g., frames or fields of video). In some embodiments, a video clip is defined as a series of ordered video pictures that is individually imported into the video presentation.
In some embodiments, exactly one representative video picture from each clip (e.g., a video picture based on location within the clip, a video picture based on analysis of the clip, etc.) is selected for the navigation tool. The selected video pictures are then arranged in a user interface to generate the navigation tool. In some embodiments, the navigation tool is a rectangular display area (e.g., a scrollbar) and the selected video pictures are ordered in the display area to match the order within the video presentation of the video clips they represent.
Different embodiments use the navigation tool to navigate through the video presentation differently. For instance, the selection of a particular video picture in the navigation tool in some embodiments causes the display of the video presentation in a separate display area (for viewing, editing, etc.), starting from the video clip represented by the selected video picture. In such cases, each representative video picture acts as a user interface tool that links to a particular point in the video presentation.
Instead of navigating through the video presentation directly, other embodiments use the navigation tool to navigate through a larger set of representations of the video clips in the video presentation. In order to do this, some embodiments display the navigation tool as a scrollbar in a first display area, while displaying, in a second display area, a set of thumbnail images (in some cases, the same images as in the scrollbar) for a user to scroll through. This is the case in
As shown in
In some embodiments, as shown in
As in the example of
The navigation tool as described can be used in a variety of different applications. For instance, the navigation tool of some embodiments is provided in a video-editing application. A user can use the navigation tool to select a video clip for editing (either by selecting directly from the navigation tool or by scrolling through video clips in a second display area). Once the video clip is selected, a user can use editing tools to edit various aspects of the selected clip (e.g., color correction, adding special effects, etc.).
In addition to use in editing applications, the navigation tool can be used in applications for playing a completed composite video presentation. For instance, navigation tools that include one representative video picture per video clip may be used in a DVD player application. In such applications, the video clips are defined as in the editing process in some cases, while in other cases the video clips are defined based on the scenes of the DVD. A viewer of the DVD can use the navigation tool to navigate to a particular scene in the movie (or other video content) that is stored on the DVD.
Similarly, the navigation tool may be provided in applications that play video files other than from DVDs (e.g., personal computer applications such as Quicktime®, applications on handheld devices such as cell phones or media players, etc.) or as the interface for interacting with machines designed specifically for playing DVDs, Blu-Ray® discs, etc. Finally, the navigation tool can be used to navigate through images rather than video. For instance, if a digital photo album groups photos into “events” or “film rolls” (sets of images that are logically grouped together), then one image from each of several sets could be selected and used to generate the navigation tool of some embodiments.
Several more detailed embodiments of the invention are described in the sections below. Section I provides a description of various ways to generate the navigation tool of some embodiments. Section II describes various ways to navigate a video presentation using the navigation tool. Section III provides descriptions of various applications that use the navigation tool. Section IV follows with a description of the software architecture of one such application, a media-editing application. Finally, Section V describes a computer system which implements some embodiments of the invention.
As mentioned above, some embodiments provide a novel navigation tool for navigating through video clips in a video presentation.
A non-linear editing application can include multiple overlapping tracks of video. For instance, Clip 1-1 (306) and Clip 2-1 (308) overlap in the composite presentation, and Clip 1-2 (307) and Clip 2-3 (310) also overlap. Such overlaps might occur when there is animation (credits, graphics, etc.) displayed over a main video clip. Such overlaps can also occur from a user simply assigning multiple video clips to different tracks over the same time period. In some embodiments, the non-linear editor has a hierarchy of tracks such that if a first clip is assigned to one track and a second clip is assigned to a second track over the same time period as the first clip, the non-linear editor renders whichever clip is on a track that is higher in the hierarchy.
For the purpose of generating the navigation tool, some embodiments require that the clips be arranged without overlap. Accordingly, some embodiments define clips for the navigation tool based on the clips in the non-linear editor. In the example of
Different embodiments might define the clips differently, however. For instance, some embodiments allow a user to further refine the breaks between video clips in the non-linear editing application. For instance, a user might know that Clip 2-1 (308) and Clip 2-2 (309) were shot from the same location and are intended to look like one video clip in the rendered final presentation. As such, the user could manually remove the boundary between Clip B (317) and Clip C (318) such that these would be combined into one video clip. In some embodiments, a video presentation will have a main track that is composed of video clips that are successively played and various other tracks that include insets, overlaid graphics, animations, and text, etc. In such cases, some embodiments treat only the video clips in the main track as the video clips from which the navigation tool is generated, ignoring all the various insets and overlays.
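By way of illustration, the resolution of overlapping tracks into a single ordered series of clips might be sketched as follows. The data layout (each clip as a name, start time, end time, and track number) and the convention that a lower track number is higher in the hierarchy are assumptions for the example, not part of the specification.

```python
def flatten_tracks(clips):
    """Resolve overlapping clips into an ordered, non-overlapping series.

    Each clip is (name, start, end, track); a lower track number is
    assumed to be higher in the rendering hierarchy. Returns a list of
    (name, start, end) segments covering the presentation in order.
    """
    # Every clip boundary is a potential segment boundary.
    points = sorted({t for _, s, e, _ in clips for t in (s, e)})
    segments = []
    for s, e in zip(points, points[1:]):
        # Of the clips covering this interval, keep the one on the
        # highest-priority track.
        covering = [c for c in clips if c[1] <= s and c[2] >= e]
        if not covering:
            continue
        name = min(covering, key=lambda c: c[3])[0]
        # Merge with the previous segment if it comes from the same clip.
        if segments and segments[-1][0] == name and segments[-1][2] == s:
            segments[-1] = (name, segments[-1][1], e)
        else:
            segments.append((name, s, e))
    return segments
```

A clip that is entirely covered by a higher-priority clip simply produces no segment, so the resulting series is always free of overlap.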
Returning to process 200, the process next selects (at 210) a video clip. In some embodiments, video clips are selected successively (i.e., referring to video presentation 150 of
Next, the process selects (at 215) a representative video picture (i.e., a frame, field, or other unit of digital video) for the selected clip to be used in generating the navigation tool. Different embodiments select the representative video picture for a particular clip differently. Some embodiments select the representative video picture based on its location in the video clip. For instance, some embodiments always select the first video picture of a clip as the representative video picture for the clip. Other embodiments select the middle video picture of a clip (i.e., if the clip has 99 video pictures, the 50th video picture is the middle) as the representative frame. Still other embodiments select a video picture that is a particular number of video pictures into the video clip. For instance, the first video picture of some clips might not be very representative of the color properties of the clip, so the 50th (or 100th, 300th, etc.) might be chosen instead.
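The position-based selection strategies just described might be sketched as follows; the strategy names and the keyword interface are illustrative, not part of the specification.

```python
def representative_by_position(clip_frames, strategy="first", offset=50):
    """Pick a representative frame from a clip by its position.

    clip_frames is the ordered list of a clip's frames; strategy is one
    of "first", "middle", or "offset" (the frame a fixed number of
    frames into the clip, clamped to the clip's length).
    """
    if strategy == "first":
        return clip_frames[0]
    if strategy == "middle":
        # e.g., for a 99-frame clip, index 49 is the 50th (middle) frame
        return clip_frames[len(clip_frames) // 2]
    if strategy == "offset":
        return clip_frames[min(offset, len(clip_frames) - 1)]
    raise ValueError("unknown strategy: %s" % strategy)
```

Clamping the offset to the clip length keeps the "offset" strategy well defined even for clips shorter than the chosen offset.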
Alternatively, the representative video picture may be selected based on an analysis of one or more properties of the video pictures of the clips. For instance, color properties of the clips might be analyzed, and a frame most representative of the colors of the various frames in the video clip would be selected. Analysis of the objects of the video picture is also done in some embodiments in order to determine a representative video picture. For example, some embodiments could determine the various content objects (e.g., people, cars, buildings, trees, etc.) present in a particular video clip, then select a video picture that best represents the content objects. Some embodiments select all representative video pictures in the same way (e.g., the first video picture from each video clip), while other embodiments use different techniques for the various representative video pictures.
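One plausible reading of "a frame most representative of the colors of the various frames" is the frame whose average color is closest to the clip-wide average, as in the following sketch. The specification does not fix the metric; the pixel representation and distance measure here are assumptions.

```python
def representative_by_color(frames):
    """Pick the frame whose average color best matches the whole clip.

    Each frame is given as a list of (r, g, b) pixel tuples.
    """
    def mean_color(pixels):
        n = len(pixels)
        return tuple(sum(p[i] for p in pixels) / n for i in range(3))

    frame_means = [mean_color(f) for f in frames]
    clip_mean = mean_color(frame_means)  # mean of the per-frame means

    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(c, clip_mean))

    best = min(range(len(frames)), key=lambda i: dist(frame_means[i]))
    return frames[best]
```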
Next, process 200 determines whether there are more video clips in the video presentation that have not been analyzed. When there are more video clips to analyze, the process proceeds to 210 to select another video clip. When representative frames have been selected for all video clips in the video presentation, the process generates (at 225) a navigation tool from the selected frames. In some embodiments, generating the navigation tool entails compressing the selected video pictures to fit into a predetermined area such that all of the video pictures are displayed at once.
As shown, process 400 receives (at 405) a set of images for the navigation tool. In some embodiments, the received images are representative video pictures for the various video clips in a composite video presentation. As described above, the representative video picture for a particular video clip could be based on position in the video clip, properties of the video pictures of the video clip, etc. The received images could also be selected photographs from a number of digital photo albums (i.e., a set of images grouped together as representing one film roll or event).
Next, the process 400 identifies (at 410) dimensions of the navigation tool. Some embodiments require all of the images in the navigation tool to be displayed concurrently in a fixed area. Accordingly, the process identifies the dimensions of the fixed area in which the navigation tool is to be displayed. The area is determined in some embodiments by the application for which the navigation tool is to be used. For instance, in a media-editing application, the navigation tool might take up only a portion of the display area for the application, whereas in a video playing application the navigation tool might take up the entire bottom of the display of the application.
Once the dimensions of the navigation tool are identified, the process determines (at 415) the size for each image in the navigation tool. In some embodiments, the navigation tool is a horizontal arrangement of all of the images in the set of images. When the set of images is the set of representative video pictures for all of the video clips in a video presentation, some such embodiments arrange the video pictures in the order of the video clips they represent, such that the video picture for the first video clip in the presentation is at the left end of the navigation tool and the video picture for the last video clip in the presentation is at the right end. In such embodiments, the width for each image WI is the total width W for the navigation tool divided by the total number of images N, such that WI=W/N. The height of each image is the same as the height of the navigation tool.
Similarly, some embodiments arrange the images vertically. When the set of images is the set of representative video pictures for all of the video clips in a video presentation, some such embodiments arrange the video pictures such that the video picture for the first video clip in the presentation is at the top of the navigation tool and the video picture for the last video clip in the presentation is at the bottom of the navigation tool. In such embodiments, the height for each image is the total height of the navigation tool divided by the total number of images and the width of each image is the width of the navigation tool.
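The per-image dimensions for the horizontal and vertical arrangements follow directly from WI=W/N, and might be computed as in this sketch:

```python
def image_size(tool_width, tool_height, n_images, horizontal=True):
    """Per-image (width, height) so that N images tile the navigation tool.

    For a horizontal tool each image is W/N wide at full height; for a
    vertical tool each image is full width and H/N tall.
    """
    if horizontal:
        return tool_width / n_images, tool_height
    return tool_width, tool_height / n_images
```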
Other embodiments arrange the images differently. For example, the images could be arranged in two dimensions (e.g., in a square). Or the images could be arranged primarily horizontally, but in two rows stacked on top of each other. Furthermore, some embodiments determine a different size for each image in the navigation tool (e.g., based on the length of the video clip represented by the image).
After the size for each image is determined, process 400 compresses (at 420) the images to the determined size or sizes. Obviously, in scaling an image by large scale factors (e.g., 10, 100, 500, etc., depending on the allotted dimensions of the navigation tool and the number of images that must fit in the navigation tool) a great deal of visual data will be lost. To scale the images, some embodiments use standard scaling/compression mechanisms that are regularly used in image editing software.
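As a stand-in for the standard scaling mechanisms the text mentions, a nearest-neighbor downscale over a plain pixel grid can be sketched as follows; a real application would use its image library's resize routine.

```python
def downscale(pixels, new_w, new_h):
    """Nearest-neighbor downscale of a 2D pixel grid (a list of rows).

    Keeps one source pixel per destination pixel, which is why most of
    the visual data is lost at large scale factors.
    """
    src_h, src_w = len(pixels), len(pixels[0])
    return [
        [pixels[y * src_h // new_h][x * src_w // new_w] for x in range(new_w)]
        for y in range(new_h)
    ]
```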
Next, the process displays (at 425) the navigation tool.
While it is more difficult to discern the content of any particular image in navigation tool 500 as compared to navigation tool 600 (compare, e.g., image 510 with image 610), navigation tool 500 illustrates the color rhythm of the images (which, in some cases, corresponds to the color rhythm of a video presentation) quite well. The images clearly go from generally lighter to a dark section in the middle to a substantially lighter ending. The use of these navigation tools to actually navigate through a video presentation or other series of images will be described in detail in the next section.
II. Navigating with the Navigation Tool
The navigation tool of some embodiments can be used in different ways to navigate a video presentation. For instance, some embodiments use the navigation tool as a scrollbar that enables scrolling through the video clips of a video presentation. Some embodiments use the navigation tool to enable selection of a video clip (for viewing, editing, etc.), and some embodiments combine the scrolling and selection features.
Next, the process displays (at 715) a subset of scrollable thumbnail images for the video presentation and displays (at 720) the navigation tool. While process 700 is shown as displaying the thumbnails before displaying the navigation tool, one of ordinary skill in the art will recognize that some embodiments might display the navigation tool before displaying the thumbnails, or display them both at the same time. Some embodiments also display, in a separate display area, the video presentation starting from the first video clip.
Process 700 next displays (at 725) a window over the portion of the navigation tool that corresponds to the displayed subset of thumbnail images. The window, in some embodiments, acts like a thumb or elevator of a scrollbar—i.e., a knob that can be dragged by a user to scroll through the thumbnails.
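The extent of such a window can be derived from which thumbnails are visible, as in this sketch (the parameter names are illustrative):

```python
def window_extent(first_visible, n_visible, n_images, tool_width):
    """Left edge and width of the scroll window over the navigation tool.

    The window covers the images whose thumbnails are currently shown
    in the second display area, like the thumb of a scrollbar.
    """
    per_image = tool_width / n_images
    return first_visible * per_image, n_visible * per_image
```

Because the per-image width shrinks as the number of images grows, the window covering a fixed number of visible thumbnails shrinks along with it.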
In embodiments that display the window over all of the images in the navigation tool that correspond to the displayed subset of thumbnails (as is the case with window 825), the window will vary in size along with the size of the images that make up the navigation tool. By comparison,
Some embodiments display a window that only covers one of the images in the navigation tool rather than the entire set of images that is displayed in the larger view.
After the initial display of the navigation tool, window, and portion of the video presentation, process 700 determines (at 730) whether any input has been received to navigate through the thumbnail images. When no such input has been received, the process proceeds to 740, which is described below. When input is received to navigate through the thumbnails, the process displays (at 735) a new subset of the thumbnail images. In embodiments such as those shown in
The bottom portion of
The received input to navigate through the thumbnail images can be in a variety of different forms in some embodiments. In some embodiments, a user can select (e.g., with a cursor control device such as a mouse, touchpad, etc.) the window and move the window along the navigation tool, thereby resulting in the scrolling of the thumbnail images. For instance, in
Some embodiments also allow a user to use scroll arrows, such as scroll arrows 835 and 840. Selecting the left scroll arrow will cause the window to move to the left as the thumbnail images scroll to the right. Selecting the right scroll arrow will cause the window to move to the right as the thumbnail images scroll to the left. For instance, in
Other mechanisms for navigating through the thumbnail images are available in some embodiments as well. A user can select an image in the navigation tool (e.g., by moving a cursor over the image and pressing a cursor control button) in order to jump the displayed larger images to that particular image. Some embodiments also allow a user to place the cursor in the display area where the larger images are displayed and drag the images in order to scroll through them.
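Each of these input mechanisms (dragging the window, scroll arrows, clicking an image) ultimately reduces to choosing the index of the first thumbnail to display, which might be computed as follows; the rounding and clamping choices are assumptions for the sketch.

```python
def visible_range_from_window(window_left, tool_width, n_images, n_visible):
    """Index of the first thumbnail to show for a given window position."""
    per_image = tool_width / n_images
    first = round(window_left / per_image)
    # Clamp so the visible subset never runs past the last image.
    return max(0, min(first, n_images - n_visible))
```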
When process 700 determines that no input is received to navigate through the thumbnails, the process determines (at 740) whether a selection of a video clip is received. When input selecting a video clip is received, the process displays (at 745) the selected video clip in a separate display area. In some embodiments, displaying the selected video clip entails playing the video presentation starting from the beginning of the selected video clip.
Some embodiments enable a user to select an image or video clip for display in the separate display area (e.g., display area 1210) directly from the navigation tool itself. For instance, in some embodiments, a user can perform a selection command (e.g., clicking, double-clicking, etc.) with a cursor over a particular image in the navigation tool such that a version of the particular image will be displayed in the separate display area. This is also the case in some embodiments that do not have a scrolling series of images such as images 1205.
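Because the images tile the navigation tool at uniform width, mapping a cursor position to the image (and thus the video clip) under it is a simple division, as in this sketch:

```python
def clip_at(click_x, tool_left, tool_width, n_images):
    """Index of the navigation-tool image under a cursor click.

    Selecting that image causes the corresponding video clip to be
    displayed in the separate display area.
    """
    per_image = tool_width / n_images
    index = int((click_x - tool_left) / per_image)
    # Clamp clicks on the tool's right edge to the last image.
    return max(0, min(index, n_images - 1))
```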
Returning to
Some embodiments enable a user to interact with the navigation tool by using a touchscreen. In such embodiments, the navigation tool (and video presentation) are displayed on the touchscreen. Rather than using a cursor controlled by a cursor controller such as a mouse, the user interacts directly with the touchscreen (in fact, in some embodiments, there is no cursor). For instance, to navigate through the thumbnails in some embodiments, the user places a finger over the window and moves the window to the right or left. Similarly, the user can select a video clip by double-tapping on the screen over a particular thumbnail in some embodiments (or similar selection input).
One of ordinary skill in the art will recognize that other processes similar to process 700 are possible. For instance, some embodiments do not display the scrolling thumbnail images, and instead use the navigation tool to navigate through the video presentation directly. Process 700 and other, similar processes can be used by a variety of different applications to navigate a video presentation. The following section describes a number of these applications in greater detail.
In some embodiments, the navigation tool as described above is incorporated into a larger application. For instance, when the navigation tool is for navigating video, it could be incorporated into a digital media player (e.g., Apple Quicktime® or Microsoft Windows Media Player®), a video editing application (e.g., Apple Color®), or even the user interface of a specialized electronics device such as a DVD player or Blu-Ray® player. When the navigation tool is for navigating through groups of images, it could be incorporated into a digital photobook (e.g., Apple iPhoto®) or an image-editing application (e.g., Apple Aperture® or Adobe Photoshop®).
The set of selectable clips 1410 includes a set of clips that are available to a user for editing. In some cases, the clips 1410 are clips within a video presentation that have already been edited, while other clips are selectable only through navigation tool 1430. For shorter video presentations, some embodiments display all of the clips in the video presentation among clips 1410. Other embodiments have different selection criteria for determining which video clips should be present among clips 1410.
The first set of color correction tools 1415 includes color wheels and sliders for editing shadow, midtone, and highlight in the image displayed in editing window 1405. The second set of color correction tools 1420 includes curves for adjusting red, green, blue, and luma values. Adjusting the red curve, for instance, will only affect the red values of the pixels in the image displayed in editing window 1405. When a region of interest is defined for the image, then adjusting the red curve will only affect the red values of the pixels in the region of interest. The indicator graphs 1425 illustrate the spread of color values throughout the image displayed in editing window 1405.
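The channel-restricted behavior of the curves, including their restriction to a region of interest, might be sketched as follows; the curve representation (a function mapping 0-255 to 0-255) and the region as a set of pixel indices are assumptions for the example.

```python
def apply_red_curve(pixels, curve, region=None):
    """Apply a tone curve to the red channel only.

    pixels is a list of (r, g, b) tuples; curve maps a red value to a
    new value; region, if given, is a set of pixel indices (the region
    of interest) outside of which the adjustment is not applied.
    """
    out = []
    for i, (r, g, b) in enumerate(pixels):
        if region is None or i in region:
            r = min(255, max(0, curve(r)))  # clamp to the valid range
        out.append((r, g, b))
    return out
```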
As shown in
In some embodiments, an edit made to the image displayed in editing window 1405 is in fact made for the entire video clip of which the image is a part. While this is not necessarily difficult for edits applied to the entire image, some embodiments include selection tools that allow a user to select a region of interest and only apply edits to the region of interest. Such a region will often move throughout the course of a video clip. For instance, if the region of interest is a person, then the person may move during the course of the video clip. Some embodiments can recognize this motion (via edge detection, color value comparisons, or other techniques) and move the region of interest along with a particular object or objects in a video clip.
The navigation tool 1430 includes a representative video picture from each of the video clips in the video presentation. In some embodiments, navigation tool 1430 is generated according to processes 200 and 400 and can be used to navigate the video presentation according to process 700, all of which are described above. A user can use navigation tool 1430 to select a particular video clip for editing in some embodiments. Some embodiments generate the navigation tool when a user selects the particular video presentation for editing, whereas other embodiments generate a navigation tool beforehand (or at least determine the representative video pictures) and store the navigation tool with the video presentation.
One of ordinary skill will recognize that the media-editing tools and processes that are described above can be incorporated into any media-editing application by way of a plug-in, applet, or direct function incorporated within the application itself. Accordingly, different image-editing applications (e.g., Apple Aperture®, Apple iPhoto®, Adobe Photoshop®, Adobe Lightroom®, etc.) or video-editing applications (e.g., Apple Final Cut Pro®, Apple Color®, Avid®, etc.) may each implement one or more of the media-editing tools described herein. Additionally, the media-editing tools and processes described above and below can be incorporated within the functionality of any other application (e.g., digital photo albums, etc.), or within an operating system (e.g., Microsoft Windows®, Apple Mac OS®, etc.).
Furthermore, one of ordinary skill will recognize that many image- and video-editing features not shown in 1400 may also be part of a media-editing application that incorporates the invention. For instance, some embodiments might have other color correction tools, such as tools to change saturation, hue, balance, etc., or might have tools for adding various effects to an image or a region of interest of an image.
One of ordinary skill will recognize that the media player features that are described above can be incorporated into any media player application by way of a plug-in, applet, or direct function incorporated within the application itself. Accordingly, different media player applications (e.g., Apple QuickTime®, Real Player®, Microsoft Windows Media Player®, etc.) may each implement one or more of the media player tools described herein. Additionally, the media player tools and processes described above can be incorporated within the functionality of any other application (e.g., media-editing applications, etc.), or within an operating system (e.g., Microsoft Windows®, Apple Mac OS®, etc.). Furthermore, one of ordinary skill will recognize that many media player features not shown in application 1700 may also be part of a media player application that incorporates the invention.
In some embodiments, the processes described above are implemented as software running on a particular machine, such as a computer or a handheld device, or stored in a computer readable medium.
Media-editing application 1900 includes a user interface (UI) interaction module 1905, a navigation tool generator 1910, a navigation module 1915, an editing module 1920, and a preview generator 1925. The navigation tool generator 1910 includes a video picture selector 1930 and a video picture compressor 1935. The media-editing application also includes content storage 1940. In some embodiments, other storages are present as well, which may be part of the same physical storage as content storage 1940 or stored separately.
The UI interaction module 1905 generates some user interface items, such as the various color correction tools described above with respect to
A user interacts with the user interface via input devices (not shown). The input devices, such as cursor controllers (mouse, tablet, touchpad, etc.) and keyboards, send signals to the cursor controller driver 1955 and keyboard driver 1960, which translate those signals into user input data that is provided to the UI interaction module 1905. The UI interaction module uses the user input data to modify the displayed user interface items. The UI interaction module also passes data on user interactions to the navigation module 1915 and the navigation tool generator 1910.
Navigation tool generator 1910 generates a navigation tool for a video presentation. The navigation tool generator 1910 receives, from UI interaction module 1905, a selection of a video presentation for which a navigation tool is to be generated. The video picture selector 1930 receives information about the video presentation from content storage 1940 and selects one video picture for each video clip in the video presentation. These selected video pictures are sent to the video picture compressor 1935, which determines the dimensions of the video pictures and compresses them to the appropriate size. In some embodiments, processes 200 and 400 are performed at least partially by navigation tool generator 1910. The generated navigation tool is sent to the navigation module 1915 for use in navigating the video project and to the display module 1965 for display.
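The two-stage generator pipeline can be sketched as follows. This is a hypothetical illustration, assuming simplified clip and picture structures (dictionaries with `frames`, `width`, and `height` keys); the selection heuristic (first frame) and the scaling rule stand in for whatever selection and compression processes a given embodiment uses.

```python
# Hypothetical sketch of the navigation tool generator's two stages:
# a video picture selector that picks one representative picture per
# clip, and a video picture compressor that scales each picture to
# thumbnail dimensions. Data structures are illustrative assumptions.

def select_picture(clip):
    """Pick a representative picture for a clip; here, simply its
    first frame (other embodiments might pick a middle frame)."""
    return clip["frames"][0]

def compress(picture, max_w, max_h):
    """Scale a picture's dimensions to fit within max_w x max_h while
    preserving its aspect ratio; never scale up."""
    w, h = picture["width"], picture["height"]
    scale = min(max_w / w, max_h / h, 1.0)
    return {"width": round(w * scale), "height": round(h * scale)}

def generate_navigation_tool(presentation, max_w=64, max_h=36):
    """Produce one compressed representative picture per clip, in
    presentation order, for display in the navigation tool."""
    return [compress(select_picture(clip), max_w, max_h)
            for clip in presentation["clips"]]
```

The resulting ordered list of thumbnails is what the navigation module and display module then consume.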
The navigation module 1915 receives information from the UI interaction module in order to navigate a video presentation using a navigation tool. For instance, the information could be a selection of a window to move along the navigation tool, a selection of scroll arrows, or a selection of a particular video picture. The navigation module translates this information and sends the translated information to the display module to modify the display of the navigation tool and the associated thumbnail images and/or video presentation.
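A minimal sketch of this translation step is given below. The event and tool structures are hypothetical assumptions introduced for illustration; the point is only that each kind of user interaction is mapped to a corresponding display instruction.

```python
# Hypothetical sketch of how a navigation module might translate user
# input data from the UI interaction module into instructions for the
# display module. Event and tool structures are illustrative.

def handle_input(event, tool):
    """Translate a UI event on the navigation tool into a display update."""
    if event["type"] == "select_picture":
        # Selecting a clip's representative picture jumps playback to
        # the start of that clip.
        clip = tool["clips"][event["index"]]
        return {"action": "jump", "time": clip["start"]}
    if event["type"] == "scroll":
        # Scroll arrows shift the visible window of the navigation tool.
        return {"action": "shift_window", "delta": event["delta"]}
    if event["type"] == "drag_window":
        # Dragging moves the window along the tool to a new position.
        return {"action": "move_window", "position": event["position"]}
    raise ValueError("unrecognized navigation event")
```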
The editing module 1920 performs the actual editing of the media content (i.e., videos, images, etc.), which is stored in content storage 1940. The editing module 1920 receives information from the UI interaction module 1905, such as input affecting color correction tools and other editing tools. After editing the content, the editing module 1920 stores the edited content in content storage 1940.
Preview generator 1925 enables the output of audio and video from the media-editing application. The preview generator 1925, based on information from the editing module 1920 (and, in some embodiments, other modules), sends information about how to display each pixel of a video or image to the display module 1965.
While many of the features have been described as being performed by one module (e.g., the navigation module 1915 or video picture compressor 1935), one of ordinary skill would recognize that the functions might be split up into multiple modules, and the performance of one feature might even require multiple modules.
Process 2000 then defines (at 2010) a second display area for displaying a navigation tool for navigating the video presentation. The process then defines (at 2015) a module for selecting a representative image from each video clip in a video presentation, such as video picture selector 1930. The process also defines (at 2020) a module for compressing the selected images to generate a navigation tool, such as video picture compressor 1935. Next, the process defines (at 2025) a navigation tool for receiving user input to navigate through a video presentation. One example of such a navigation tool is navigation tool 1430 of
The process next stores (at 2035) the defined elements (i.e., the defined modules, UI items, etc.) on a computer readable storage medium. As mentioned above, in some embodiments the computer readable storage medium is a distributable CD-ROM. In some embodiments, the medium is one or more of a solid-state device, a hard disk, a CD-ROM, or other non-volatile computer readable storage medium. One of ordinary skill in the art will recognize that the various modules and UI items defined by process 2000 are not exhaustive of the modules and UI items that could be defined and stored on a computer readable storage medium for an editing application incorporating some embodiments of the invention.
Many of the above-described features and applications are implemented as software processes that are specified as a set of instructions recorded on a computer readable storage medium (also referred to as computer readable medium). When these instructions are executed by one or more computational element(s) (such as processors or other computational elements like ASICs and FPGAs), they cause the computational element(s) to perform the actions indicated in the instructions. Computer is meant in its broadest sense, and can include any electronic device with a processor. Examples of computer readable media include, but are not limited to, CD-ROMs, flash drives, RAM chips, hard drives, EPROMs, etc. The computer readable media do not include carrier waves and electronic signals passing wirelessly or over wired connections.
In this specification, the term “software” is meant to include firmware residing in read-only memory or applications stored in magnetic storage which can be read into memory for processing by a processor. Also, in some embodiments, multiple software inventions can be implemented as sub-parts of a larger program while remaining distinct software inventions. In some embodiments, multiple software inventions can also be implemented as separate programs. Finally, any combination of separate programs that together implement a software invention described here is within the scope of the invention. In some embodiments, the software programs when installed to operate on one or more computer systems define one or more specific machine implementations that execute and perform the operations of the software programs.
The bus 2105 collectively represents all system, peripheral, and chipset buses that communicatively connect the numerous internal devices of the computer system 2100. For instance, the bus 2105 communicatively connects the processor 2110 with the read-only memory 2130, the GPU 2120, the system memory 2125, and the permanent storage device 2135.
From these various memory units, the processor 2110 retrieves instructions to execute and data to process in order to execute the processes of the invention. In some embodiments, the processor comprises a Field Programmable Gate Array (FPGA), an ASIC, or various other electronic components for executing instructions. Some instructions are passed to and executed by the GPU 2120. The GPU 2120 can offload various computations or complement the image processing provided by the processor 2110. In some embodiments, such functionality can be provided using CoreImage's kernel shading language.
The read-only-memory (ROM) 2130 stores static data and instructions that are needed by the processor 2110 and other modules of the computer system. The permanent storage device 2135, on the other hand, is a read-and-write memory device. This device is a non-volatile memory unit that stores instructions and data even when the computer system 2100 is off. Some embodiments of the invention use a mass-storage device (such as a magnetic or optical disk and its corresponding disk drive) as the permanent storage device 2135.
Other embodiments use a removable storage device (such as a floppy disk, flash drive, or ZIP® disk, and its corresponding disk drive) as the permanent storage device. Like the permanent storage device 2135, the system memory 2125 is a read-and-write memory device. However, unlike storage device 2135, the system memory is a volatile read-and-write memory, such as a random access memory. The system memory stores some of the instructions and data that the processor needs at runtime. In some embodiments, the invention's processes are stored in the system memory 2125, the permanent storage device 2135, and/or the read-only memory 2130. For example, the various memory units include instructions for processing multimedia items in accordance with some embodiments. From these various memory units, the processor 2110 retrieves instructions to execute and data to process in order to execute the processes of some embodiments.
The bus 2105 also connects to the input and output devices 2140 and 2145. The input devices enable the user to communicate information and select commands to the computer system. The input devices 2140 include alphanumeric keyboards and pointing devices (also called “cursor control devices”). The output devices 2145 display images generated by the computer system. For instance, these devices display a GUI. The output devices include printers and display devices, such as cathode ray tubes (CRT) or liquid crystal displays (LCD).
Finally, as shown in
Any or all components of computer system 2100 may be used in conjunction with the invention. For instance, in some embodiments the rendering of frames is performed by the GPU 2120 instead of the CPU 2110. Similarly, other image editing functions can be offloaded to the GPU 2120, where they are executed before the results are passed back into memory or the processor 2110. However, a common limitation of the GPU 2120 is the number of instructions that the GPU 2120 is able to store and process at any given time. Therefore, some embodiments adapt instructions for implementing processes so that these processes fit onto the instruction buffer of the GPU 2120 for execution locally on the GPU 2120. Additionally, some GPUs 2120 do not contain sufficient processing resources to execute the processes of some embodiments, and therefore the CPU 2110 executes the instructions. One of ordinary skill in the art would appreciate that any other system configuration may also be used in conjunction with the present invention.
Some embodiments include electronic components, such as microprocessors, storage and memory that store computer program instructions in a machine-readable or computer-readable medium (alternatively referred to as computer-readable storage media, machine-readable media, or machine-readable storage media). Some examples of such computer-readable media include RAM, ROM, read-only compact discs (CD-ROM), recordable compact discs (CD-R), rewritable compact discs (CD-RW), read-only digital versatile discs (e.g., DVD-ROM, dual-layer DVD-ROM), a variety of recordable/rewritable DVDs (e.g., DVD-RAM, DVD-RW, DVD+RW, etc.), flash memory (e.g., SD cards, mini-SD cards, micro-SD cards, etc.), magnetic and/or solid state hard drives, read-only and recordable Blu-ray discs, ultra density optical discs, any other optical or magnetic media, and floppy disks. The computer-readable media may store a computer program that is executable by at least one processor and includes sets of instructions for performing various operations. Examples of hardware devices configured to store and execute sets of instructions include, but are not limited to, application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), programmable logic devices (PLDs), ROM, and RAM devices. Examples of computer programs or computer code include machine code, such as produced by a compiler, and files including higher-level code that are executed by a computer, an electronic component, or a microprocessor using an interpreter.
As used in this specification and any claims of this application, the terms “computer”, “server”, “processor”, and “memory” all refer to electronic or other technological devices. These terms exclude people or groups of people. For the purposes of the specification, the terms display or displaying means displaying on an electronic device. As used in this specification and any claims of this application, the terms “computer readable medium” and “computer readable media” are entirely restricted to tangible, physical objects that store information in a form that is readable by a computer. These terms exclude any wireless signals, wired download signals, and any other ephemeral signals.
While the invention has been described with reference to numerous specific details, one of ordinary skill in the art will recognize that the invention can be embodied in other specific forms without departing from the spirit of the invention. Thus, one of ordinary skill in the art would understand that the invention is not to be limited by the foregoing illustrative details, but rather is to be defined by the appended claims.