Digital graphic design, image editing, audio editing, and video editing applications (hereafter collectively referred to as media content editing applications or media-editing applications) provide graphic designers, media artists, and other users with the necessary tools to create a variety of media content. Examples of such applications include Final Cut Pro® and iMovie®, both sold by Apple® Inc. These applications give users the ability to edit, combine, transition, overlay, and piece together different media content in a variety of manners to create a resulting media project. The resulting media project specifies a particular sequenced composition of any number of text items, audio clips, images, or video clips that is used to create a media presentation.
Various media-editing applications facilitate such composition through electronic means. Specifically, a computer or other electronic device with a processor and computer readable storage medium executes the media content editing application. In so doing, the computer generates a graphical interface whereby editors digitally manipulate graphical representations of the media content to produce a desired result.
In many cases, media content is recorded by a video recorder coupled to multiple microphones. Multi-channel microphones are used to capture sound from an environment as a whole. They add lifelike realism to a recording because they capture the left-to-right position of each source of sound. Multi-channel recordings can also convey the depth or distance of each source and provide a spatial sense of the acoustic environment. To further enhance the media content, editors use media-editing applications to decode and process the audio content of media clips to produce desired effects. One example of such decoding is producing an audio signal with additional channels from the multi-channel recording (e.g., converting a two-channel recording into a five-channel audio signal). With the undecoded and decoded signals, editors are able to author desired audio effects representing motion through certain advanced sound processing. The media-editing application may further save these authored audio effects as presets for future application to media clips.
Some embodiments of the invention provide several selectable presets that produce panning behaviors in media content items (e.g., audio clips, video clips, etc.). The panning presets are applied to media clips to produce behaviors such as sound panning in a particular direction or along a predefined path. Other preset panning behaviors include transitioning audio between qualitative settings (e.g., from outputting primarily stereo audio to outputting primarily ambient audio). By utilizing panning presets to produce desired effects, a user is able to incorporate into media clips high-level behaviors that would otherwise require extensive keyframe editing to produce.
In order to produce the desired effects, audio portions of media clips are generally recorded by multiple microphones. Multiple channels recording the same event will produce similar audio content; however, each channel will have certain distinct characteristics (e.g., timing delay and sound level). These multi-channel recordings are subsequently decoded to produce additional channels in some embodiments. This technique of producing multi-channel surround sound is often referred to as upmixing. Asymmetrically outputting the decoded audio signals to a multi-channel speaker system creates an impression of sound being heard from various directions. Additionally, the application of panning presets to further modulate the individual outputs to each channel of the multi-channel speaker system as a function of time provides a sense of movement to the listening audience.
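By way of a rough illustration, the following Python sketch derives a five-channel signal from a stereo pair using a simple passive sum/difference matrix. It is a minimal sketch of the upmixing idea only; actual surround decoders are considerably more sophisticated, and the function name and channel layout here are hypothetical.

```python
import numpy as np

def upmix_stereo_to_five(left: np.ndarray, right: np.ndarray):
    """Derive a five-channel signal from a stereo pair.

    A simple passive-matrix sketch: the center channel carries the
    content common to both microphones, while the surround channels
    carry the difference (the 'ambient' content that differs between
    the two channels of the recording).
    """
    center = 0.5 * (left + right)   # correlated (direct) content
    ambient = 0.5 * (left - right)  # decorrelated (ambient) content
    left_surround = ambient
    right_surround = -ambient
    return left, right, center, left_surround, right_surround
```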
In some embodiments, the sound reproduction quality of an audio source is enriched by altering several audio parameters of the decoded audio signal before the signal is sent to the multiple channels (i.e., at the time when the media is authored/edited or at run-time). In some embodiments, an up/down mixer provides adjustments for these parameters, which include balance (also referred to as original/decoded), which selects the amount of original versus decoded signal; front/rear bias (also referred to as ambient/direct); left/right (L/R) steering speed; and left surround/right surround (Ls/Rs) width (also referred to as surround width). The parameters further include advanced settings such as rotation; width (also referred to as stereo spread); collapse (also referred to as attenuate/collapse), which selects the amount of collapsing versus attenuating panning; center bias (also referred to as center balance); and low frequency effects (LFE) balance. The alteration of these parameters enhances the audio experience of an audience by replicating audio qualities (e.g., echo/reverberation, width, direction, etc.) representative of live experiences.
In some embodiments, the above-listed parameters are interdependent on one another within a panning preset. For example, within a particular panning preset, changing the rotation value of a first set of parameter values causes a change in one or more of the other parameter values, thus resulting in a second set of parameter values. The sets of parameter values are represented as states of the particular panning preset, and each panning preset includes at least two states. In some embodiments, additional sets of parameter values are provided by the user at the time of authoring or interpolated from the two or more defined states of the effect.
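One plausible way to model such states is as an ordered sequence of parameter snapshots. The following sketch is illustrative only; the type and field names are hypothetical, and the numbers loosely mirror the first two states of the Fly: Back to Front example discussed later in this document.

```python
from dataclasses import dataclass

@dataclass
class PanState:
    balance: float          # original vs. decoded mix
    front_rear_bias: float  # ambient vs. direct
    lr_steering_speed: float
    ls_rs_width: float

# A preset is an ordered sequence of at least two such states; values
# between states are interpolated at authoring time or at run time.
fly_back_to_front = [
    PanState(balance=20, front_rear_bias=-100, lr_steering_speed=50, ls_rs_width=3.0),
    PanState(balance=0, front_rear_bias=-100, lr_steering_speed=50, ls_rs_width=2.25),
]
```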
The combination of directional and qualitative characteristics provided by the audio processing and panning presets provides a dynamic sound field experience to the audience. The multi-channel speaker system encircling the audience outputs sound from all directions of the sound field. Altering the asymmetry of the audio output in the multi-channel speaker system as a function of time creates a desired panning effect. While soundtracks of movies represent a major use of such processing techniques, the multi-channel application is used to create audio environments for a variety of purposes (e.g., music, dialog, ambience, etc.).
The preceding Summary is intended to serve as a brief introduction to some embodiments of the invention. It is not meant to be an introduction or overview of all inventive subject matter disclosed in this document. The Detailed Description that follows and the Drawings that are referred to in the Detailed Description will further detail the embodiments described in the Summary as well as other embodiments. Accordingly, to understand all the embodiments described by this document, a full review of the Summary, Detailed Description and the Drawings is needed. Moreover, the claimed subject matters are not to be limited by the illustrative details in the Summary, Detailed Description and the Drawings, but rather are to be defined by the appended claims, because the claimed subject matters can be embodied in other specific forms without departing from the spirit of the subject matters.
For purpose of explanation, several embodiments of the invention are set forth in the following figures.
In the following description, numerous details are set forth for purpose of explanation. However, one of ordinary skill in the art will realize that the invention may be practiced without these specific details. For instance, many of the examples illustrate the application of particular panning presets to audio signals of media clips. One of ordinary skill will realize that these are merely illustrative examples, and that the invention can apply to a variety of panning presets that involve the adjustment of several audio parameters to achieve a desired effect. Furthermore, many of the examples illustrate processing audio signals that have been decoded to a five-channel signal. One of ordinary skill will realize that the same processing may also be performed on audio signals that include a variety of channels (e.g., two-channel stereo, seven-channel surround, nine-channel surround, etc.). In other instances, well-known structures and devices are shown in block diagram form in order not to obscure the description of the invention with unnecessary details.
In some embodiments of the invention, application of the panning presets is performed by a computing device. Such a computing device can be an electronic device that includes one or more integrated circuits (IC) or a computer executing a program, such as a media-editing application.
I. Overview
A. Static Panning Preset
At the first stage 105, the user selects a media clip 130 from a library 125 of media clips. Upon making a selection, the media clip is placed in the media clip track 145 in the timeline area 140. The media clip track 145 includes a playhead 155 indicating the progress of playback of the media clip.
The UI also provides a drop-down menu 150 from which the user selects the panning presets. The selection is received through a user selection input such as input received from a cursor controller (e.g., a mouse, touchpad, trackpad, etc.), from a touchscreen (e.g., a user touching a UI item on a touchscreen), from keyboard input (e.g., a hotkey or key sequence), etc. The term user selection input is used throughout this specification to refer to at least one of the preceding ways of making a selection or pressing a button through a user interface.
As shown in the second stage 110, the user selects the “Music” panning preset 165, which is subsequently highlighted. The Music preset in this example does not provide any adjustable values. Accordingly, a single set of audio parameters representing the Music preset is applied throughout the media clip. The third stage 115 and the fourth stage 120 illustrate the progression of playback of the media clip to which the Music preset has been applied.
B. Static Panning Preset with Single Adjustable Value
At the first stage 205, the user selects a media clip 230 from a library 225 of media clips. Upon making a selection, the media clip is placed in the media clip track 245 in the timeline area 240. The media clip track 245 includes a playhead 255 indicating the progress of playback of the media clip.
The UI also provides a drop-down menu 250 from which the user selects the panning preset. As shown in the second stage 210, the user selects the “Ambience” panning preset 265, which is subsequently highlighted. Upon selection of a panning preset, a slider control 275 and a slider track 270 are provided to the user for setting a level of effect. By selecting the amount value, the user indicates a particular level of audio effect that the user would like to have applied throughout the media clip. In this example, the user sets the amount value by setting the slider control 275 at the middle of the slider track 270. Accordingly, a set of audio parameters corresponding to the position of the slider control 275 is applied to the entire media clip. If the user changes the position of the slider control 275 to indicate a selection of a different level of audio effect, a different set of audio parameters corresponding to the newly selected slider control position is applied throughout the media clip to produce the desired audio effect. The third stage 215 and the fourth stage 220 illustrate the progression of playback of the media clip to which the Ambience preset has been applied. Since the same audio effect is applied throughout the media clip, the position of the slider control 275 does not change during playback.
C. Dynamic Panning Preset
At the first stage 305, the user selects a media clip 330 from a library 325 of media clips. Upon making a selection, the media clip is placed in the media clip track 345 in the timeline area 340. The media clip track 345 includes a playhead 355 indicating the progress of playback of the media clip.
The UI also provides a drop-down menu 350 from which the user selects the panning preset. The selection of the “Fly: Front to Back” preset 365 is shown in the second stage 310, and the selected preset is subsequently highlighted. Upon selection of the preset, a slider control 375 representing a position along the path of the pan effect is provided along a slider track 370 to track the progress of the pan effect.
Application of the Fly: Front to Back preset is performed dynamically throughout the media clip. Specifically, different sets of parameters representing different states of the panning preset are successively applied to the media clip as a function of time to produce the desired effect. The progression of the pan effect is illustrated by the progression from the third stage 315 to the fourth stage 320. At the third stage, the playhead 355 is shown to be at the beginning of the selected media clip. The position of the playhead along the media clip corresponds to the position of the slider control 375 on the slider track 370.
As playback of the media clip progresses as illustrated in the fourth stage 320, the playhead 355 and the slider control 375 are shown to progress proportionally along their respective tracks. Each progression in the position of the slider control 375 represents the application of a different set of parameter values. As described above, the successive application of different sets of parameter values produces the panning effect in this example.
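A minimal sketch of this time-driven application, assuming each state is stored as a dictionary of parameter values and playback progress is normalized to the range 0 to 1, might look as follows. The linear interpolation between neighboring states is an assumption; other interpolation functions may be used.

```python
def state_at(states, progress):
    """Return the interpolated parameter set for a playback progress
    value in [0, 1], where each state is a dict of parameter values.

    Sweeping progress from 0 to 1 successively applies the preset's
    states, producing the panning behavior described above."""
    if progress <= 0.0:
        return dict(states[0])
    if progress >= 1.0:
        return dict(states[-1])
    pos = progress * (len(states) - 1)
    i = int(pos)          # index of the state just behind the playhead
    t = pos - i           # fractional distance to the next state
    a, b = states[i], states[i + 1]
    return {k: (1 - t) * a[k] + t * b[k] for k in a}
```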
D. Multi-Channel Content Generation
In the following description, stereo microphones are used to produce multi-channel recordings for purpose of explanation. Furthermore, the decoding performed in the following description generates five-channel audio signals, also for the purpose of explanation. However, one of ordinary skill in the art will realize that multi-channel recordings may be captured by several additional microphones in a variety of different configurations, and the decoding of the multi-channel recording may generate audio signals with more than five channels. Accordingly, the invention may be practiced without the use of these specific details.
The audio recording and processing in this figure are shown in four different stages 405-420. In this example, the first stage 405 shows a live event being recorded by multiple microphones 445. The event being recorded includes three performers—a guitarist 425, a vocalist 430, and a keyboardist 435—playing music before an audience 440 in a concert hall. The predominant sources of audio in this example are provided by the vocalist(s) and the instruments being played. Ambient sound is also picked up by the multiple microphones 445. Sources of the ambient sound include crowd noise from the audience 440 as well as reverberations and echoes that bounce off objects and walls within the concert hall.
The second stage 410 conceptually illustrates a multi-channel reproduction (e.g., m channels in this example) of the multi-channel recording of the event. In a multi-channel system, asymmetric output of audio channels to several discrete speakers 450 is used to create the impression of sound heard from various directions. The asymmetric output of the audio channels reproduces the asymmetric recording captured by the multiple microphones 445 during the event. Without further processing of the multi-channel recording (to simulate additional channels and effects), multi-channel playback through a multi-speaker system (e.g., two-channel playback in a two-speaker system) simply provides a direct reproduction of the recorded event.
In order to further enhance the audio experience, a panning effect is applied to the recorded audio signal in the third stage 415. The panner 455 applies a user-selected panning preset to the multi-channel audio signal in some embodiments. Applying a panning preset involves adjusting audio parameters of the audio signal as a function of time to alter the symmetry and quality of the multi-channel signal. For example, a user can pan a music source across the front of the sound field by selecting a Rotation preset to be applied to a decoded media clip. The Rotation preset adjusts the sets of audio parameters to modulate the individual outputs to each channel of the multi-channel speaker system over time to produce a sense of movement in the reproduced sound.
The fourth stage 420 conceptually illustrates the sound field 460 as represented by the user interface (UI) of the media-editing application. The sound field shows the representative positions of the speakers in a five-channel system. The speakers include a left front channel 465, a right front channel 475, a center channel 470, a left surround channel 485, and a right surround channel 480. While this stage illustrates audio being output equally by all the channels, each channel is capable of operating independently of the remaining channels. Thus, applying panning presets to modify the sets of audio parameters of an audio signal over time produces a specified effect. Several illustrative examples of these effects and their graphical UI representations are shown in detail below.
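As an illustration of how asymmetric per-channel output can place a sound at a position in such a five-channel field, the following sketch computes constant-power gains from a pan angle. The speaker azimuths and the gain law are assumptions made for illustration, not the application's actual panner.

```python
import math

# Nominal speaker azimuths (degrees): 0 is straight ahead, negative is
# left. These follow a common ITU-style placement, which the
# application's layout may or may not match.
SPEAKERS = {"L": -30, "C": 0, "R": 30, "Ls": -110, "Rs": 110}

def channel_gains(pan_azimuth_deg: float) -> dict:
    """Rough gain sketch: each speaker's gain falls off with its
    angular distance from the panned source, and gains are normalized
    so total power stays constant as the source moves."""
    raw = {}
    for name, az in SPEAKERS.items():
        diff = abs((pan_azimuth_deg - az + 180) % 360 - 180)  # wrap to [0, 180]
        raw[name] = max(0.0, math.cos(math.radians(diff)))    # zero beyond 90 degrees
    norm = math.sqrt(sum(g * g for g in raw.values())) or 1.0
    return {name: g / norm for name, g in raw.items()}
```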
E. Keyframe Editing
Keyframes are useful for editing sound output of media clips in certain situations. Traditionally, keyframes have been utilized in making low-level adjustments to different audio parameters.
The location in the sound field to which the audio is panned during the clip is represented by a graph 530. At a first state 535, the keyframe editor shows the audio panned to the center of the sound field at the start of the media clip. As the media clip progresses down the timeline 565, the keyframe editor shows that the sound is panned from the center of the sound field to the right front, as indicated by the second state 540. Between the second state 540 and the third state 545, the graph 530 on the keyframe editor indicates that the sound is panned from the right front to the left rear (i.e., left surround) of the sound field. At the fourth state 550, the sound is panned back to the center of the sound field.
The keyframe editor further shows user markers 555 placed by a user to indicate the start and end of a segment of the media clip to which the user chooses to apply the panning effect. These markers may also be used in conjunction with a panning preset to provide a start point and an end point that facilitate scaling of the panning preset to the indicated segment. For example, if the user would like to apply a Fly: Left Surround to Right Front panning preset to only a portion of the media clip, the user drops markers on the media clip indicating the beginning and end points of the segment to which the user would like the panning preset to be applied. Without beginning and end markers, the media-editing application applies the panning preset over the duration of the entire clip by default.
While keyframe editing provides precise low-level control over individual audio parameters, producing a complex panning behavior in this manner requires the user to manually define and adjust many keyframes.
With panning presets, the user is able to choose high-level behaviors to be applied to a media clip without having to get involved with the low-level complexities of producing such effects. Furthermore, panning presets are made available for application to a variety of different media clips by simply selecting the preset in the UI. While the keyframe editor might still be utilized to indicate the start and end of a segment to be processed, panning presets eliminate the need to use keyframe editing for most other functionalities.
F. Selection of Panning Presets
The user selects a panning preset from, e.g., a drop-down list 645 of panning presets. In this figure, the user selects the “Ambience” panning preset 650, which is subsequently highlighted. Upon selection of a panning preset, the Amount value is set to a default value (e.g., 50.0 in this example).
G. Application of Panning Presets
After retrieving the snapshots, process 700 applies (at 720) the retrieved values of each snapshot to the corresponding parameters of an audio clip at different instances in time to create the desired effect. In some embodiments, the predefined sets of parameter values are displayed successively in the UI as the user scrolls through the different states of the selected panning preset. After the sets of parameter values are displayed, the process ends.
H. Panning Preset Outputs
In the first step 810, the airplane is shown to be approaching from the left rear of the sound field. As the airplane approaches from this direction, the audio signal is being output predominantly by the left surround channel 855 with some audio being output by the right surround channel 850 to provide some ambience in the airplane sound. Outputting audio primarily from the left surround creates the sense that the airplane is approaching from the left rear of the sound field 805.
As the airplane approaches the middle of the sound field 805 in the second step 815, the amplitude of the left surround channel 855 is attenuated slightly while the amplitudes of the left front channel 835 and right front channel 845 are increased. This effect creates the sense that while the airplane is still approaching from the rear, it is nearing a position directly in the middle of the sound field 805.
At the third step 820, the airplane has passed overhead and continues to fly in the direction of the right front channel. Accordingly, the amplitudes of the left surround 855 and right surround 850 channels continue to be attenuated while the amplitude of the center channel 840 is amplified to produce the effect that the airplane is now moving towards the front of the sound field. To produce the effect that the airplane continues to fly towards the right front of the sound field 805, the amplitudes of the left front channel 835 and the two surround channels 850 and 855 are progressively attenuated while the right front channel 845 becomes the predominant audio channel via amplification, as shown in the fourth step 825.
The modulation of each audio channel in this manner produces the impression of the airplane flying from the left rear to the right front of the sound field.
I. User Interface Depiction of Panning Preset
Furthermore, as described above, the UI of the media-editing application graphically depicts the progression of a panning preset through a sequence of states, with a puck indicating the position of the sound source within the sound field.
At the first state 905, the puck is located in front of the left surround speaker 955, indicating that the panning preset is manipulating the input audio signals to originate from the left surround speaker. The first state 905 further shows that the multiple channels of the input audio are collapsed and output by the left surround speaker only, thus producing an effect that the source of the audio is off in the distance in the rear left direction.
At the second state 910, the panning preset automatically relocates the puck 960 closer to the center of the sound field 930 to indicate that the source of the audio is approaching the audience. As the source of the audio approaches the middle of the sound field 930, the amplitude of the left surround channel 955 is attenuated while the amplitudes of the right surround channel 950 and the left front channel 935 are increased. This effect creates the sense that while the source of the sound is approaching from the rear, the source is nearing a position directly in the middle of the sound field 930.
At the third state 915, the panning preset places the source of the sound directly in the center of the sound field 930, as indicated by the position of the puck 960. In order to produce this effect, the amplitudes of each of the five channels are adjusted to be identical. The third state 915 further shows that each source channel is being output by its respective output channel (i.e., the center source channel is output by the center speaker, the left front source channel is output by the left front speaker, etc.).
At the fourth state 920, the panning preset continues to move the source of the sound in the direction of the right front channel. This progression is again indicated by the change in the position of the puck 960. At this state, the amplitude of the right front channel is shown as being greater than that of the remaining channels, and the amplitudes of the surround channels are shown to have been attenuated, particularly the left surround channel. The panning preset adjusts the audio parameters to reflect these characteristics to produce an effect that the source of the sound has passed the center point of the sound field 930 and is now moving away in the direction of the right front channel.
At the fifth state 925, the puck 960 indicates that the panning preset has manipulated the input audio signal to be output only by the right front channel. Outputting all of the collapsed source channels to just the right front channel produces an effect that the source of the audio is off in the distance in the right front direction.
The five states 905-925 also illustrate how the panning preset automatically adjusts certain non-positional parameters to enhance the audio experience for the audience. During the second through fifth states, the value of the Collapse parameter is manipulated by the panning preset. Altering the Collapse parameter relocates the source sound. For example, when a source sound containing ambient noise is collapsed, the ambient noise that is generally output to the rear surround channels is redistributed to the speaker indicated by the position of the puck 960. In the first state 905, the left surround channel outputs the collapsed audio signal, and by the fifth state 925, the right front channel outputs the collapsed audio signal.
Furthermore, the second through fifth states also indicate an adjustment to the Balance parameter made by the panning preset. The Balance parameter is used to adjust the mix of decoded and undecoded audio signals. The lower the Balance value, the more the output signal comprises undecoded original audio. The higher the Balance value, the more the output signal comprises decoded surround audio. The example in these states shows the panning preset adjusting the Balance value as the source of the sound moves through the sound field.
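A minimal sketch of the Balance mix, assuming a numeric range of −100 (all original) to +100 (all decoded), which is an assumption rather than a range stated above, might look like this:

```python
def mix_balance(original, decoded, balance: float):
    """Crossfade between undecoded (original) and decoded surround
    audio. The endpoint convention (-100 = all original, +100 = all
    decoded) is an illustrative assumption."""
    t = (balance + 100) / 200.0  # map [-100, 100] -> [0, 1]
    return [(1 - t) * o + t * d for o, d in zip(original, decoded)]
```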
II. Authoring Panning Presets
A. Creating Snapshots for Panning Presets
After saving a snapshot, the process determines (at 1015) whether additional snapshots are required to perform an interpolation. When additional snapshots are required, process 1000 returns to 1005 to receive a next set of user-determined audio parameter values. The user assigns a numerical value to each parameter to represent a desired effect for another instance in time. The values are then saved (at 1010) as a next snapshot.
When a required number of snapshots have been saved, the process receives (at 1020) an interpolation function for determining interdependent audio parameters based on the saved snapshots. With the interpolation function, process 1000 optionally determines (at 1025) additional sets of audio parameters based on the saved snapshots in some embodiments. The additional interpolated sets of audio parameters are subsequently saved along with the saved snapshots (at 1030) as a custom preset.
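Illustratively, the outcome of this authoring flow can be sketched as follows; the dictionary layout and the library argument are hypothetical:

```python
def author_preset(name, snapshots, interp_fn, library):
    """Sketch of the authoring flow described above: at least two
    user-defined snapshots (dicts of parameter values) plus a chosen
    interpolation function are packaged and saved as a custom preset
    for later application to media clips."""
    if len(snapshots) < 2:
        raise ValueError("interpolation requires at least two snapshots")
    library[name] = {"states": list(snapshots), "interp": interp_fn}
    return library[name]
```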
As described above, the authoring process saves several snapshots of user-defined parameter values. The following example illustrates a custom preset defined by three snapshots.
The combination of parameter values represented by each snapshot produces a desired audio effect for an instance in time. In this example, each snapshot is broken down into three separate parameter components so that a different interpolation function may be determined for each of the different parameter graphs.
The first graph 1105 shows Balance values for each of the three snapshots. The first graph 1105 further shows a first curve 1120 that graphically represents an interpolation function on which interpolated Balance values lie (e.g., i1, i2, and i3). The second graph 1110 shows Front/Rear Bias values for each of the three snapshots. The second graph 1110 also includes a second curve 1125 that graphically represents an interpolation function on which interpolated Front/Rear Bias values lie (e.g., i4, i5, and i6). Similarly, a third curve 1130 of the third graph 1115 represents an interpolation function on which interpolated Ls/Rs Width values (e.g., i7, i8, and i9) lie. The Ls/Rs Width values for each of the three snapshots also lie on the third curve 1130.
Having determined an interpolation function for each of the three parameters, the media-editing application may interpolate values at run time to be applied to media clips in order to produce the desired effect. In other words, for each value along the X-axis, there exists a parameter value on each of the three Y-axes, and together these values form a snapshot of parameter values that produces a desired effect for an instance in time. Thus, successive application of these snapshots to a media clip will produce the desired audio behavior.
This example shows values interpolated by using linear mathematical curves for purpose of explanation; however, one of ordinary skill in the art will recognize that more elaborate interpolation functions may be used. In some embodiments, non-linear mathematical functions (e.g., a sine wave function) are utilized as the interpolation function. After the sets of user-defined parameter values have been determined, they are saved as a preset. Once saved, the preset is made available for use by the user at a later time.
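For illustration, the two interpolation styles mentioned above might be sketched as follows; both function names and the sample values are illustrative only:

```python
import math

def lerp(a, b, t):
    """Linear interpolation between two snapshot values."""
    return (1 - t) * a + t * b

def sine_ease(a, b, t):
    """A non-linear alternative: eases in and out along a sine curve."""
    s = 0.5 - 0.5 * math.cos(math.pi * t)
    return (1 - s) * a + s * b

# Interpolating a Balance value one quarter of the way between two states:
lerp(-100, 0, 0.25)       # -75.0
sine_ease(-100, 0, 0.25)  # about -85.4
```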
III. Application of Panning Presets
A. Dynamic Panning Presets
After the media clip and the panning preset selections are received, the process determines (at 1215) the length/duration of a segment of the media clip to which the selected preset is applied. In some embodiments, the length/duration of the segment is indicated by user markers placed by the user on the media clip track in the UI. However, if the user does not specify a length, the process applies the selected preset to the entire media clip by default. In order to provide proper application of the panning preset, the panning effect of the preset is scaled (at 1220) to fit the length/duration of the selected segment. The panning preset is then applied (at 1225) to the media clip. The step of applying the panning preset to the media clip is described in further detail below.
For example, when the “Fly: Left Surround to Right Front” preset is applied to a segment of a longer duration, the effects of the preset are scaled such that the progression of the effect is gradually applied throughout the duration of the segment. For a segment of a shorter duration, the scaling is proportionally applied to the media clip. Since the latter example provides a shorter time frame over which the effect is performed, the progression of the effect occurs at a quicker rate than for the segment of a longer duration, thus producing the effect that the sound source is moving more quickly through the sound field than in the former example. Accordingly, the scaling of the preset to the user-indicated duration is used to produce different qualities of the same preset effect.
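A minimal sketch of this scaling, mapping clip time to pan-effect progress for a marked segment (and defaulting to the whole clip when no markers are set), might look like this:

```python
def segment_progress(clip_duration, start=None, end=None):
    """Return a function mapping clip time to pan-effect progress.

    Markers define the segment; without markers the preset spans the
    whole clip, matching the default described above. A shorter
    segment compresses the same state progression into less time, so
    the pan sweeps through the sound field more quickly."""
    seg_start = start if start is not None else 0.0
    seg_end = end if end is not None else clip_duration
    span = max(seg_end - seg_start, 1e-9)  # guard against zero-length segments

    def progress(t):
        return min(1.0, max(0.0, (t - seg_start) / span))

    return progress
```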
Once the preset is applied, a media clip with modified audio tracks is produced and output (at 1230) by the media-editing application. The process then ends. Upon playback of the output media content, the user is provided an audio experience that reflects the effect intended by the panning preset.
As shown, process 1300 determines (at 1305) the groups of audio parameters that correspond to the selected panning preset. Each panning preset includes several snapshots storing audio parameters representing different states along a panning path. For example, in the “Fly: Left Surround to Right Front” preset described above, each snapshot stores the parameter values that place the sound source at a different position along the path from the left surround channel to the right front channel. The process successively applies these snapshots to the media clip as a function of time.
When all snapshots have been applied, process 1300 outputs (at 1320) the media clip with a modified audio track that includes the behavior of the panning preset. After the media clip has been output, process 1300 ends.
B. Static Panning Presets
In certain instances of media editing, a user may choose to apply a static panning preset to create a consistent audio effect throughout a media clip. An example application of a static panning preset is setting a constant ambience level throughout an entire media clip. In this example, a user selects a preset and a value of the preset to be applied throughout the media clip.
The first stage 1405 illustrates that the user has selected the “Ambience” preset, as indicated by the Mode selection box 1425. When the user selects “Ambience”, the state value indicated by the Amount value box 1445 is set to a default level (e.g., 0 in this example). This state value is also represented by the position of the slider control 1440 along the slider track 1435. In this example, the range of the slider track 1435 goes from a value of −100 to 100. The slider control 1440 indicates that the Ambience preset is set to a first state represented by an Amount value of 0.0. To change the state value, the user drags the slider control 1440 along the slider track 1435 to a position that represents a desired state value. The numerical value in the Amount value box 1445 automatically changes based on the position of the slider control. Sliding the slider control to the left causes a selection of a lower state value, and sliding the slider control to the right causes a selection of a higher state value. In some embodiments, the user selects the state value by directly entering a numerical value into the Amount value box 1445. When the user inputs a numerical value, the slider control 1440 is repositioned on the slider track 1435 accordingly.
The second stage 1410 indicates that a state with a state value of −50.0 has been selected, as indicated by the numerical value in the Amount value box 1445. This state value is also reflected by the position of the slider control 1440 along the slider track 1435. As mentioned above, the states of the panning preset are selected by dragging the slider control 1440 along the slider track 1435 to the desired position to select a desired state value. Upon moving the slider control 1440 to a new position on the slider track 1435, the Amount value box 1445 is automatically updated to reflect the state value. If, however, the user selects a state via the Amount value box 1445, the position of the slider control 1440 on the slider track 1435 is automatically updated to reflect the new value.
The third stage 1415 shows the selection of a state having an Amount value of 50.0. The figure shows that the slider control 1440 has moved to a state value of 50.0, which is represented by the three-quarter position of the slider control 1440 along the slider track 1435. The Amount value box 1445, as described above, automatically updates to reflect the numerical value of the state.
The position of the slider control 1440 along the slider track 1435 and the state value displayed in the Amount value box 1445 provide a higher-level abstraction of the panning preset effect in some embodiments. In this abstraction, the user selects only a single state value, and the media-editing application translates that value into the underlying set of audio parameters that produces the corresponding effect.
The Ambience preset is an example of a preset for which a user chooses one state value to be applied throughout the media clip. For example, a user who wishes to have a certain level of ambience applied throughout an entire scene selects the Ambience preset and chooses a state value that provides the level of ambience desired. The media-editing application subsequently applies the level of the Ambience preset indicated by the user throughout the entire clip.
In some embodiments, the state value of the preset is selected by a slider control on a slider track and indicates the amount of the panning preset effect to be applied. For instance, the state value is entered as a quantity (e.g., 0 to 100, −180° to +180°, etc.). The state value of a particular panning preset represents a level of the effect (e.g., levels of ambience, dialog, music, etc.) or a location along a path of panning (e.g., positions for circle, rotate, or fly patterns). For example, a state value of −100 in an ambience preset (graphically represented by the slider control being at the far left of the slider track) provides no ambient signals to the rear surround channels. As the state value is increased (graphically represented by the slider control moving to the right), more and more ambient noise is decoded from the source and biased to the rear surround channels.
In the case of a Circle preset, which rotates a source audio in the distance around a sound field, a state value of −180° provides sound from the back of the sound field. As the state value is increased (graphically represented by the slider control moving to the right), the source of the audio rotates around the sound field in a clockwise direction. State values of −90°, 0°, 90°, and 180° move the source audio to the left side of the sound field, the front of the sound field, the right side of the sound field, and behind the sound field, respectively.
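This angle-to-position mapping can be sketched as follows; the coordinate convention is an assumption made for illustration:

```python
import math

def circle_position(state_value_deg: float, radius: float = 1.0):
    """Map a Circle-preset state value to an (x, y) spot in the sound
    field: -180/180 behind the listener, -90 left, 0 front, +90 right,
    rotating clockwise as the value increases, per the description
    above."""
    theta = math.radians(state_value_deg)
    x = radius * math.sin(theta)  # left (-) to right (+)
    y = radius * math.cos(theta)  # rear (-) to front (+)
    return x, y
```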
After a selection of the panning preset and the state value, process 1500 determines (at 1515) the set of audio parameters that correspond to the selected panning preset at the selected state value. As discussed above, each set of audio parameter values in the panning preset represents a different state of the panning preset. In process 1500, a particular state corresponding to a particular set of parameter values of the panning preset is selected. This particular set of audio parameters determined from the particular state of the preset is applied (at 1520) throughout the selected media clip. After applying the set of audio parameters, the process outputs the modified media reflecting the panning effect. The process then ends.
C. User Interface for Panning Preset
Furthermore, as described above, the UI depicts the states of a panning preset within a graphical sound field. The following example illustrates five states of the Ambience preset.
The Ambience preset is used to select a level of ambient sound that the user desires. Each of the five states 1605-1625 contains a different set of audio parameters to produce a different level of ambience effect when applied to a media clip. When a user selects states with progressively more ambience, the Ambience preset biases more and more decoded sound to the surround channels by moving the source from the center channel to the rear surround channels. The Ambience preset produces an effect of sound being all around an audience as opposed to just coming from the front of the sound field.
The first state 1605 of the Ambience preset places the source of the sound directly in the center of the sound field 1630, as indicated by the position of the puck 1660. The five channels are also shown to have identical amplitudes. At this state, the Ambience preset produces sound all around the audience.
The second state 1610 illustrates a UI representation of an Ambience value that is higher than that of the first state 1605. As the Ambience value is increased, the Ambience preset starts to introduce ambient sound into the sound field 1630 by utilizing more decoded audio signals that are output to the surround channels. This behavior is indicated by both the Balance parameter value and the Front/Rear Bias parameter value being increased to −50 as compared to the first state. The Ambience preset also decreases the Center Bias value, which reduces the audio signals sent to the center channel and adds those signals to the front left and right channels.
The third state 1615 illustrates a UI representation of an Ambience value that is higher than that of the second state 1610. Increasing the Ambience value further causes the amount of decoded audio signals used to increase, as indicated by the rise in the Balance parameter value to 0. The increase in the Ambience value also causes the preset to set the Front/Rear Bias to 0, thereby further biasing the audio signals to the rear surround channels.
The fourth state 1620 illustrates a UI representation of an Ambience value that is higher than that of the third state 1615. To create even more of an ambient effect than the third state, the Ambience preset decreases the Center Bias value to −80 to further reduce the audio signals sent to the center channel. The audio signals reduced at the center channel are added to the front left and right channels. The fourth state 1620 also shows that the puck has been shifted farther back in the sound field, thus indicating that the origination point of the combination of the source channels has been moved towards the back of the sound field 1630.
The fifth state 1625 of the Ambience preset represents the highest level of ambience that a user can select. The fifth state 1625 differs from the fourth state 1620 only by the position of the puck. The puck in the fifth state 1625 is located all the way at the back of the sound field, thus indicating that the origination point of the combination of the source channels is at the absolute rear of the sound field 1630. Other than the puck position, the fourth and fifth states have similar audio parameters.
The depiction of the five states illustrates how the Ambience preset coordinates several interdependent audio parameters to produce progressively higher levels of ambience.
In this example, the Ambience preset includes five instances of snapshots (e.g., State 1, State 2, State 3, State 4, and State 5). Each snapshot represents a different level of ambience effect. The first graph 1705 plots Balance values extracted from the snapshots in each of the five instances; the second graph 1710 plots Front/Rear Bias values extracted from the snapshots in each of the five instances; and the third graph 1715 plots Center Bias values extracted from the snapshots in each of the five instances.
By breaking down each of the snapshots into three separate parameter components and plotting the values for each snapshot, an interpolation function may be determined for each of the different parameter graphs. In other words, plotting the parameter values of each snapshot on their respective parameter graphs provides an illustration as to how the interpolation function may fit in the plot.
The first graph 1705 shows Balance values for each of the five snapshots and a first curve 1720 that connects the values. The second graph 1710 shows Front/Rear Bias values for each of the five snapshots as well as a second curve 1725 that connects the values. Similarly, a third curve 1730 of the third graph 1715 connects each of the five Center Bias values derived from the snapshots. Each of the three curves provides a graphical representation of a continuous function that indicates the parameter value for each snapshot as well as for those values on the X-axis in between the snapshots. These curves further represent the parameter values that would be applied to a media clip at runtime when an Ambience preset is selected. Successive application of the values represented by the curves will produce an audio behavior of fading from stereo sound from the front channels to ambient sound from the rear surround channels. A user utilizing the preset to process an audio clip is relieved of the task of manually setting each individual parameter value in order to achieve the desired audio behavior. While this example of an Ambience preset shows five sets of audio parameters, several additional sets may be predefined and applied to media clips.
D. Interdependence of Parameter Values
As shown in the figure, process 1800 receives (at 1805) a media clip to be processed (e.g., the user selects the media clip the user would like to process with the panning preset). Next, the process receives (at 1810) a panning preset selection. After selecting the media clip, the user selects the panning preset he wishes to apply to the media clip. This selection is made from a drop-down box in some embodiments. The presets from which the user selects include standard presets that are provided with the media-editing application as well as customized presets authored by the user or shared by other users of the media-editing application.
Once the selections of the media clip and the panning preset are received, the process receives (at 1815) a user selection of a new value of an audio parameter within the panning preset. The user makes the selection of the audio parameter value by moving a slider control along a slider track to the position of the desired parameter value. In some embodiments, the user selects the parameter value by entering an amount value directly into an amount value box. Since a panning preset selection was received (at 1810), the value the user selects for each parameter is constrained to the range of that parameter for the selected preset. For example, the Balance value range of the Ambience preset runs from −100 to 0. Thus, the user would not be permitted to adjust the Balance to a value greater than 0, since that value does not exist in any of the states of the Ambience preset. In some embodiments, selecting an audio parameter value outside of the range specified by the preset causes the media-editing application to exit the selected panning preset.
When a new audio parameter value is selected, the process determines (at 1820) the interdependence of the remaining audio parameters to the user selected parameter value. For a particular preset, different audio parameters have different ranges of values that correspond to a specific state or states of the particular preset. As described above, the Ambience preset has a Balance value that ranges from −100 to 0. Each Balance value within that range has an associated set of parameters whose values are interdependent on the Balance value. For example, if the user selects a Balance value of 0 for the Ambience preset, the process determines from the interdependence of the Front/Rear Bias parameter on the Balance parameter that the Front/Rear Bias must also be set to 0. Process 1800 makes this determination for all audio parameters of the selected preset and adjusts (at 1825) the values of the remaining audio parameters according to the determination. While the example provided above only illustrates the interdependence of one additional audio parameter, several parameters are interdependent on the first parameter in some embodiments. Upon adjusting the remaining audio parameters, the process applies (at 1830) the adjusted audio parameters to the media clip. After the application of the audio parameters, the process ends.
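One plausible mechanism for this interdependence is to treat the user's new value as a lookup into the preset's ordered states, clamping it to the preset's range and interpolating every other parameter between the bracketing states. The following sketch is illustrative only; the data layout is hypothetical.

```python
def set_parameter(preset, param, value):
    """When the user sets one parameter, snap the remaining parameters
    to the preset state (defined or interpolated) that matches it.

    `preset["states"]` is an ordered list of parameter dicts; the new
    value is clamped to the range the preset defines for `param`, per
    the constraint described above."""
    states = preset["states"]
    lo = min(s[param] for s in states)
    hi = max(s[param] for s in states)
    value = min(hi, max(lo, value))
    # Find neighboring states bracketing the requested value and
    # linearly interpolate every other parameter between them.
    for a, b in zip(states, states[1:]):
        lo_s, hi_s = sorted((a[param], b[param]))
        if lo_s <= value <= hi_s:
            t = 0.0 if a[param] == b[param] else (value - a[param]) / (b[param] - a[param])
            return {k: (1 - t) * a[k] + t * b[k] for k in a}
    return dict(states[-1])
```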
At a first state 1905 of the Create Space preset, the Balance value is set at −100, the Front/Rear Bias value is set at −20, the L/R Steering speed is set at 50, and the Ls/Rs Width is set at 0. When a user manually selects a new value for a first parameter of the panning preset, the panning preset adjusts the values of all the parameters within the preset that are interdependent on the first parameter. In this example, the user selects −40 as a new Balance value. In response to the manual selection by the user, the panning preset automatically increases the Ls/Rs Width to 0.25. By adjusting the Ls/Rs Width in response to the user selection, the preset creates a combination of the two parameters that represents a second state 1910 of the Create Space preset.
A third state 1915 shows the user having selected a Balance value of −20. As a new Balance value is again selected by the user, the panning preset causes the Ls/Rs Width value to adjust in response to this new selection. For the Create Space preset, a Balance value of −20 corresponds to an Ls/Rs Width value of 1.5. Similarly, a fourth state 1920 shows the user having selected a Balance value of −10. In response to the selection, the panning preset causes the Ls/Rs Width value to adjust to 2.0.
As discussed above, the preset maintains a relationship between the interdependent parameters so that a selection of a new value to a first parameter causes an automatic adjustment to the values of other interdependent parameters. Each resulting combination of interdependent parameters in this example represents a state of the selected panning preset.
At a first state 2005 of the Fly: Back to Front preset, the Balance value is set at 20, the Front/Rear Bias value is set at −100, the L/R Steering speed is set at 50, and the Ls/Rs Width is set at 3.0. When the panning preset progresses from a first state 2005 to a second state 2010, or when a user manually selects a new value for a first parameter of the panning preset, the panning preset automatically adjusts the values of all the remaining interdependent parameters. In this example, the second state 2010 shows that a new Balance value of 0 has been selected. In response to the new Balance value, the panning preset automatically reduces the Ls/Rs Width to 2.25 as a result of the interdependence of the parameters. The panning preset adjusts the Ls/Rs Width in response to the new Balance value to create a combination of the two parameters that represents the second state 2010 of the Fly: Back to Front preset.
A third state 2015 shows the Balance value set at −20 and the interdependent Ls/Rs Width value having been adjusted to 1.5 to correspond to the new Balance value. At a fourth state 2020, where the Balance value is set at −70, the panning preset causes not only an adjustment to the Ls/Rs Width value, but also an adjustment to the Front/Rear Bias value. For this preset, a Balance value of −70 corresponds to an Ls/Rs Width value of 2.438 and a Front/Rear Bias value of −62.5.
The fourth state further illustrates that the interdependence of two parameter values is not only non-linear, but that the interdependence of two parameters switches from a positive correlation to a negative correlation in some embodiments. This is shown by the change in parameter values during the progression from the second to third states and from the third to fourth states. As shown in the progression from the second state 2010 to the third state 2015, the selection of a lower Balance value results in a lower Ls/Rs Width value. However, the progression from the third state 2015 to the fourth state 2020 shows that further reducing the Balance value to −70 causes the Ls/Rs Width value to increase to 2.438. The complexity of the relationships of audio parameters between different states of a panning preset shown in this example indicates the difficulties a user would face in trying to keyframe this behavior. Providing presets to process media clips enables a user to forgo the intricacies of low-level controls and simply apply high-level behaviors to achieve the desired effects.
The rationale behind the variety of correlations between interdependent parameters is explained by the derivation of the sets of parameter values. The above examples describe the interdependent adjustments in the audio parameter values as changes in one parameter causing changes in another; however, these representations are provided as high-level abstractions to facilitate user operation of the media-editing application. The sets of audio parameter values are in fact discrete instances of different states of a panning preset. Each state of a panning preset is defined by a set of audio parameter values that produces the audio effect represented by that state of the panning preset (e.g., a state value of −90° for a Circle preset causes the panning preset to set the parameter values such that the sound source appears to originate from the left side of the sound field). Each additional state is defined by an additional set of audio parameters that produces a corresponding audio effect. Furthermore, since states are not continuously defined for every state value of a particular panning preset, the parameter values corresponding to additional states must be interpolated by applying a mathematical function based on the parameter values of the states that have been defined. Accordingly, each state value corresponds to a snapshot of audio parameters, either defined or interpolated, that produces the audio effect of the panning preset for that state value.
IV. Software Architecture
Many of the above-described features and applications are implemented as software processes that are specified as a set of instructions recorded on a computer readable storage medium (also referred to as a computer readable medium). When these instructions are executed by one or more computational elements (such as processors or other computational elements like application-specific integrated circuits (ASICs) and field programmable gate arrays (FPGAs)), they cause the computational elements to perform the actions indicated in the instructions. “Computer” is meant in its broadest sense, and can include any electronic device with a processor. Examples of computer readable media include, but are not limited to, CD-ROMs, flash drives, RAM chips, hard drives, EPROMs, etc. The computer readable media do not include carrier waves and electronic signals passing wirelessly or over wired connections.
In this specification, the term “software” is meant to include firmware residing in read-only memory or applications stored in magnetic storage which can be read into memory for processing by a processor. Also, in some embodiments, multiple software inventions can be implemented as sub-parts of a larger program while remaining distinct software inventions. In some embodiments, multiple software inventions can also be implemented as separate programs. Finally, any combination of separate programs that together implement a software invention described here is within the scope of the invention. In some embodiments, the software programs when installed to operate on one or more computer systems define one or more specific machine implementations that execute and perform the operations of the software programs.
In some embodiments, the processes described above are implemented as software running on a particular machine, such as a computer or a handheld device, or stored in a computer readable medium.
Media-editing application 2100 includes a user interface (UI) interaction module 2105, a panning preset processor 2110, editing engines 2150 and a rendering engine 2190. The media-editing application also includes intermediate media data storage 2125, preset storage 2155, project data storage 2160, and other storages 2165. In some embodiments, the intermediate media data storage 2125 stores media clips that have been processed by modules of the panning preset processor, such as the imported media clips that have had panning presets applied. In some embodiments, storages 2125, 2155, 2160, and 2165 are all stored in one physical storage 2190. In other embodiments, the storages are in separate physical storages, or three of the storages are in one physical storage, while the fourth storage is in a different physical storage.
The peripheral device driver 2175 may include a driver for accessing an external storage device 2115 such as a flash drive or an external hard drive. The peripheral device driver 2175 delivers the data from the external storage device to the UI interaction module 2105. The peripheral device driver 2175 may also include a driver for translating signals from a keyboard, mouse, touchpad, tablet, touchscreen, etc. A user interacts with one or more of these input devices, which send signals to their corresponding device drivers. The device driver then translates the signals into user input data that is provided to the UI interaction module 2105.
The present application describes a graphical user interface that provides users with numerous ways to perform different sets of operations and functionalities. In some embodiments, these operations and functionalities are performed based on different commands that are received from users through different input devices (e.g., keyboard, trackpad, touchpad, mouse, etc.). For example, the present application illustrates the use of a cursor in the graphical user interface to control (e.g., select, move) objects in the graphical user interface. However, in some embodiments, objects in the graphical user interface can also be controlled or manipulated through other controls, such as touch control. In some embodiments, touch control is implemented through an input device that can detect the presence and location of touch on a display of the input device. An example of a device with such functionality is a touch screen device (e.g., as incorporated into a smart phone, a tablet computer, etc.). In some embodiments with touch control, a user directly manipulates objects by interacting with the graphical user interface that is displayed on the display of the touch screen device. For instance, a user can select a particular object in the graphical user interface by simply touching that particular object on the display of the touch screen device. As such, when touch control is utilized, a cursor may not even be provided for enabling selection of an object of a graphical user interface in some embodiments. However, when a cursor is provided in a graphical user interface, touch control can be used to control the cursor in some embodiments.
The UI interaction module 2105 also manages the display of the UI, and outputs display information to the display module 2185. The display module 2185 translates the output of a user interface for an audio/visual display device. That is, the display module 2185 receives signals (e.g., from the UI interaction module 2105) describing what should be displayed and translates these signals into pixel information that is sent to the display device. The display module 2185 also receives signals from a rendering engine 2190 and translates these signals into pixel information that is sent to the display device. The display device may be an LCD, plasma screen, CRT monitor, touchscreen, etc.
The network connection interface 2180 enables the device on which the media-editing application 2100 operates to communicate with other devices (e.g., a storage device located elsewhere in the network that stores the media clips) through one or more networks. The networks may include wireless voice and data networks such as GSM and UMTS, 802.11 networks, wired networks such as Ethernet connections, etc.
The UI interaction module 2105 of media-editing application 2100 interprets the user input data received from the input device drivers and passes it to various modules in the panning preset processor 2110, including the custom preset save function 2120, the parameter control module 2135, the preset selector module 2140, and the parameter interpolation module 2145. As noted above, the UI interaction module also manages the display of the UI and outputs this display information to the display module 2185. The UI display information may be based on information from the editing engines 2150, from the preset storage 2155, or directly from input data (e.g., when a user moves an item in the UI that does not affect any of the other modules of the application 2100).
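To make this routing concrete, the following minimal Python sketch models a UI interaction module that forwards interpreted input data to the appropriate preset-processor module. The class, method, and event names are hypothetical stand-ins for illustration, not identifiers from the application itself.

```python
# Hypothetical sketch of input routing into the panning preset processor.
class PanningPresetProcessor:
    def __init__(self):
        # Handlers keyed by the kind of user input data they consume.
        self.handlers = {
            "save_custom_preset": self.save_custom_preset,
            "adjust_parameter": self.adjust_parameter,
            "select_preset": self.select_preset,
        }

    def save_custom_preset(self, payload):
        print(f"saving custom preset: {payload}")

    def adjust_parameter(self, payload):
        print(f"adjusting parameter: {payload}")

    def select_preset(self, payload):
        print(f"selecting preset: {payload}")


class UIInteractionModule:
    """Interprets driver events and passes them to processor modules."""

    def __init__(self, processor):
        self.processor = processor

    def handle_input(self, event_kind, payload):
        handler = self.processor.handlers.get(event_kind)
        if handler is None:
            # Input that only affects the UI display, not other modules.
            print(f"UI-only event: {event_kind}")
        else:
            handler(payload)


ui = UIInteractionModule(PanningPresetProcessor())
ui.handle_input("select_preset", {"name": "pan left to right"})
```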
The editing engines 2150 receive media clips (from an external storage via the UI interaction module 2105 and the operating system 2170) and store the media clips in the intermediate media data storage 2125. The editing engines 2150 also fetch the media clips and adjust the audio parameter values of the media clips. Each of these functions fetches media clips from the intermediate media data storage 2125 and performs a set of operations on the fetched data (e.g., determining segments of media clips and applying presets) before storing a set of processed media clips back into the intermediate media data storage 2125.
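The fetch-process-store cycle described here might look like the following sketch, in which a plain dictionary stands in for the intermediate media data storage 2125 and the operation is a caller-supplied function; all names are illustrative assumptions.

```python
# Hypothetical fetch -> process -> store cycle of an editing engine.
intermediate_storage = {}  # stands in for intermediate media data storage 2125

def import_clip(clip_id, samples):
    intermediate_storage[clip_id] = {"samples": list(samples), "gain": 1.0}

def apply_operation(clip_id, operation):
    clip = intermediate_storage[clip_id]       # fetch the media clip
    processed = operation(clip)                # perform a set of operations
    intermediate_storage[clip_id] = processed  # store the processed clip back

import_clip("clip-1", [0.1, 0.4, -0.2])
apply_operation("clip-1", lambda c: {**c, "gain": 0.5})
print(intermediate_storage["clip-1"]["gain"])  # 0.5
```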
The parameter interpolation module 2145 retrieves audio parameter values from sets of media clips in the intermediate media data storage 2125 and interpolates additional parameter values. Upon completion of the interpolation operation, the parameter interpolation module 2145 saves the interpolated values into the preset storage 2155.
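The specification does not spell out the interpolation method; a minimal sketch, assuming simple linear interpolation between (time, value) keyframes such as pan positions, could look like this:

```python
def interpolate_parameter(keyframes, t):
    """Linearly interpolate a parameter value at time t from (time, value) keyframes."""
    keyframes = sorted(keyframes)
    if t <= keyframes[0][0]:
        return keyframes[0][1]
    if t >= keyframes[-1][0]:
        return keyframes[-1][1]
    for (t0, v0), (t1, v1) in zip(keyframes, keyframes[1:]):
        if t0 <= t <= t1:
            return v0 + (v1 - v0) * (t - t0) / (t1 - t0)

# Pan position moves from full left (-1.0) to full right (+1.0) over 4 seconds.
print(interpolate_parameter([(0.0, -1.0), (4.0, 1.0)], 1.0))  # -0.5
```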
The preset selector module 2140 selects panning presets to apply to media clips fetched from storage. The parameter control module 2135 receives a command from the UI interaction module 2105 and modifies the audio parameters of the media clips. The editing engines 2150 then compile the result of applying the panning preset and modulating the parameters, and store that information in the project data storage 2160. The media-editing application 2100 in some embodiments retrieves this information and determines where to output the media clips.
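As one illustration of how an applied panning preset might modulate per-channel output levels over time, the sketch below sweeps a constant-power stereo pan from left to right across a clip. The panning law and function names are assumptions for the example, not details taken from the specification.

```python
import math

def constant_power_pan(pan):
    """Constant-power stereo panning law; pan in [-1, 1] maps to (left, right) gains."""
    theta = (pan + 1.0) * math.pi / 4.0
    return math.cos(theta), math.sin(theta)

def apply_left_to_right_preset(samples):
    """Apply a hypothetical 'pan left to right' preset across a clip's samples."""
    out = []
    n = len(samples)
    for i, s in enumerate(samples):
        pan = -1.0 + 2.0 * (i / max(n - 1, 1))  # sweep the pan across the clip
        left, right = constant_power_pan(pan)
        out.append((s * left, s * right))
    return out

# First sample is fully left, last sample fully right.
print(apply_left_to_right_preset([1.0, 1.0, 1.0])[0])  # (1.0, 0.0)
```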
While many of the features have been described as being performed by one module (e.g., the editing engines 2150 and the parameter interpolation module 2145), one of ordinary skill in the art will recognize that the functions described herein might be split up into multiple modules. Similarly, functions described as being performed by multiple different modules might be performed by a single module in some embodiments (e.g., audio detection, data reduction, noise filtering, etc.).
V. Graphical User Interface
The clip library 2205 includes a set of folders through which a user accesses media clips (e.g., video clips, audio clips, etc.) that have been imported into the media-editing application. Some embodiments organize the media clips according to the device (e.g., physical storage device such as an internal or external hard drive, virtual storage device such as a hard drive partition, etc.) on which the media represented by the clips are stored. Some embodiments also enable the user to organize the media clips based on the date the media represented by the clips was created (e.g., recorded by a camera). As shown, the clip library 2205 includes media clips from both the intermediate media data storage 2125 and the project data storage 2160.
Within a storage device and/or date, users may group the media clips into “events”, or organized folders of media clips. For instance, a user might give the events descriptive names that indicate what media is stored in the event (e.g., the “New Event 2-8-09” event shown in clip library 2205 might be renamed “European Vacation” as a descriptor of the content). In some embodiments, the media files corresponding to these clips are stored in a file storage structure that mirrors the folders shown in the clip library.
Within the clip library, some embodiments enable a user to perform various clip management actions. These clip management actions may include moving clips between events, creating new events, merging two events together, duplicating events (which, in some embodiments, creates a duplicate copy of the media to which the clips in the event correspond), deleting events, etc. In addition, some embodiments allow a user to create sub-folders of an event. These sub-folders may include media clips filtered based on tags (e.g., keyword tags). For instance, in the “New Event 2-8-09” event, all media clips showing children might be tagged by the user with a “kids” keyword, and then these particular media clips could be displayed in a sub-folder of the event that filters clips in this event to only display media clips tagged with the “kids” keyword.
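A keyword-based sub-folder of this kind reduces to filtering an event's clips by tag. A minimal sketch, with hypothetical clip records standing in for the event's contents, might be:

```python
# Hypothetical clip records for an event such as "New Event 2-8-09".
event_clips = [
    {"name": "beach.mov", "tags": {"kids", "outdoor"}},
    {"name": "museum.mov", "tags": {"indoor"}},
    {"name": "park.mov", "tags": {"kids"}},
]

def keyword_subfolder(clips, keyword):
    """Return only the clips tagged with the given keyword."""
    return [c for c in clips if keyword in c["tags"]]

print([c["name"] for c in keyword_subfolder(event_clips, "kids")])
# ['beach.mov', 'park.mov']
```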
The clip browser 2210 allows the user to view clips from a selected folder (e.g., an event, a sub-folder, etc.) of the clip library 2205. As shown in this example, the folder “New Event 2-8-09” is selected in the clip library 2205, and the clips belonging to that folder are displayed in the clip browser 2210. Some embodiments display the clips as thumbnail filmstrips, as shown in this example. By moving a cursor (or a finger on a touchscreen) over one of the thumbnails (e.g., with a mouse, a touchpad, a touchscreen, etc.), the user can skim through the clip. That is, when the user places the cursor at a particular horizontal location within the thumbnail filmstrip, the media-editing application associates that horizontal location with a time in the associated media file, and displays the image from the media file for that time. In addition, the user can command the application to play back the media file in the thumbnail filmstrip.
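The skimming behavior amounts to mapping the cursor's horizontal offset within the filmstrip to a fraction of the clip's duration. A small sketch of that mapping, with assumed pixel coordinates, follows:

```python
def skim_time(cursor_x, thumb_left, thumb_width, clip_duration):
    """Map a horizontal cursor position in a thumbnail filmstrip to a media time."""
    fraction = (cursor_x - thumb_left) / thumb_width
    fraction = min(max(fraction, 0.0), 1.0)  # clamp to the filmstrip bounds
    return fraction * clip_duration

# Cursor halfway across a 200-pixel filmstrip of a 60-second clip.
print(skim_time(cursor_x=150, thumb_left=50, thumb_width=200, clip_duration=60.0))  # 30.0
```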
In addition, the thumbnails for the clips in the browser display an audio waveform underneath the clip that represents the audio of the media file. In some embodiments, as a user skims through or plays back the thumbnail filmstrip, the audio plays as well.
Many of the features of the clip browser are user-modifiable. For instance, in some embodiments, the user can modify one or more of the thumbnail size, the percentage of the thumbnail occupied by the audio waveform, whether audio plays back when the user skims through the media files, etc. In addition, some embodiments enable the user to view the clips in the clip browser in a list view. In this view, the clips are presented as a list (e.g., with clip name, duration, etc.). Some embodiments also display a selected clip from the list in a filmstrip view at the top of the browser so that the user can skim through or playback the selected clip.
The timeline 2215 provides a visual representation of a composite presentation (or project) being created by the user of the media-editing application. Specifically, it displays one or more geometric shapes that represent one or more media clips that are part of the composite presentation. The timeline 2215 of some embodiments includes a primary lane (also called a “spine”, “primary compositing lane”, or “central compositing lane”) as well as one or more secondary lanes (also called “anchor lanes”). The spine represents a primary sequence of media which, in some embodiments, does not have any gaps. The clips in the anchor lanes are anchored to a particular position along the spine (or along a different anchor lane). Anchor lanes may be used for compositing (e.g., removing portions of one video and showing a different video in those portions), B-roll cuts (i.e., cutting away from the primary video to a different video whose clip is in the anchor lane), audio clips, or other composite presentation techniques.
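One plausible data model for a timeline with a gapless spine and anchored secondary lanes, sketched in Python with hypothetical type names, is shown below:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Clip:
    name: str
    duration: float

@dataclass
class AnchoredClip:
    clip: Clip
    anchor_offset: float  # position along the spine where the clip is anchored

@dataclass
class Timeline:
    spine: List[Clip] = field(default_factory=list)  # primary lane, no gaps
    anchor_lanes: List[List[AnchoredClip]] = field(default_factory=list)

timeline = Timeline()
timeline.spine.append(Clip("interview", 12.0))
timeline.anchor_lanes.append([AnchoredClip(Clip("b-roll", 4.0), anchor_offset=3.0)])
print(sum(c.duration for c in timeline.spine))  # total spine duration: 12.0
```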
The user can add media clips from the clip browser 2210 into the timeline 2215 in order to add the clip to a presentation represented in the timeline. Within the timeline, the user can perform further edits to the media clips (e.g., move the clips around, split the clips, trim the clips, apply effects to the clips, etc.). The length (i.e., horizontal expanse) of a clip in the timeline is a function of the length of media represented by the clip. As the timeline is broken into increments of time, a media clip occupies a particular length of time in the timeline. As shown, in some embodiments the clips within the timeline are shown as a series of images. The number of images displayed for a clip varies depending on the length of the clip in the timeline, as well as the size of the clips (as the aspect ratio of each image will stay constant).
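Because each image keeps a constant aspect ratio, the number of images shown for a clip can be estimated from the clip's horizontal expanse and the thumbnail height. A sketch of that calculation, with assumed zoom and size parameters, follows:

```python
def filmstrip_image_count(clip_seconds, pixels_per_second, thumb_height, aspect_ratio=16 / 9):
    """Estimate how many images fit across a clip's span in the timeline."""
    clip_width = clip_seconds * pixels_per_second  # horizontal expanse of the clip
    thumb_width = thumb_height * aspect_ratio      # aspect ratio stays constant
    return max(1, int(clip_width // thumb_width))

print(filmstrip_image_count(clip_seconds=10, pixels_per_second=50, thumb_height=60))  # 4
```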
As with the clips in the clip browser, the user can skim through the timeline or play back the timeline (either a portion of the timeline or the entire timeline). In some embodiments, the playback (or skimming) is not shown in the timeline clips, but rather in the preview display area 2220.
In some embodiments, the preview display area 2220 (also referred to as a “viewer”) displays images from video clips that the user is skimming through, playing back, or editing. These images may be from a composite presentation in the timeline 2215 or from a media clip in the clip browser 2210. In this example, the user has been skimming through the beginning of video clip 2240, and therefore an image from the start of this media file is displayed in the preview display area 2220. As shown, some embodiments will display the images as large as possible within the display area while maintaining the aspect ratio of the image.
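Displaying an image as large as possible while maintaining its aspect ratio is a standard aspect-fit computation; a minimal sketch:

```python
def aspect_fit(image_w, image_h, view_w, view_h):
    """Scale an image as large as possible inside a view without distorting it."""
    scale = min(view_w / image_w, view_h / image_h)
    return image_w * scale, image_h * scale

# A 1920x1080 frame shown in an 800x600 preview area is letterboxed to 800x450.
print(aspect_fit(1920, 1080, 800, 600))  # approximately (800.0, 450.0)
```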
The inspector display area 2225 displays detailed properties about a selected item and allows a user to modify some or all of these properties. In some embodiments, the inspector displays the composite audio output information related to a user-selected panning preset for a selected clip. In some embodiments, the clip that is shown in the preview display area 2220 is selected, and thus the inspector display area 2225 displays the composite audio output information about media clip 2240. This information includes the audio channels and audio levels to which the audio data is output. In some embodiments, different composite audio output information is displayed depending on the panning preset selected, as discussed in detail above.
The additional media display area 2230 displays various types of additional media, such as video effects, transitions, still images, titles, audio effects, standard audio clips, etc. In some embodiments, the set of effects is represented by a set of selectable UI items, each selectable UI item representing a particular effect. In some embodiments, each selectable UI item also includes a thumbnail image with the particular effect applied. The display area 2230 is currently displaying a set of effects for the user to apply to a clip. In this example, several video effects are shown in the display area 2230.
The toolbar 2235 includes various selectable items for editing, modifying what is displayed in one or more display areas, etc. The right side of the toolbar includes various selectable items for modifying what type of media is displayed in the additional media display area 2230. The illustrated toolbar 2235 includes items for video effects, visual transitions between media clips, photos, titles, generators and backgrounds, etc. In addition, the toolbar 2235 includes an inspector selectable item that causes the display of the inspector display area 2225 as well as the display of items for applying a retiming operation to a portion of the timeline, adjusting color, and other functions.
The left side of the toolbar 2235 includes selectable items for media management and editing. Selectable items are provided for adding clips from the clip browser 2210 to the timeline 2215. In some embodiments, different selectable items may be used to add a clip to the end of the spine, add a clip at a selected point in the spine (e.g., at the location of a playhead), add an anchored clip at the selected point, perform various trim operations on the media clips in the timeline, etc. The media management tools of some embodiments allow a user to mark selected clips as favorites, among other options.
One of ordinary skill in the art will also recognize that the set of display areas shown in the GUI 2200 is one of many possible configurations for the GUI of some embodiments. For instance, in some embodiments, the presence or absence of many of the display areas can be toggled through the GUI (e.g., the inspector display area 2225, additional media display area 2230, and clip library 2205). In addition, some embodiments allow the user to modify the size of the various display areas within the UI. For instance, when the display area 2230 is removed, the timeline 2215 can increase in size to include that area. Similarly, the preview display area 2220 increases in size when the inspector display area 2225 is removed.
VI. Electronic System
Many of the above-described features and applications are implemented as software processes that are specified as a set of instructions recorded on a computer readable storage medium (also referred to as computer readable medium). When these instructions are executed by one or more computational or processing unit(s) (e.g., one or more processors, cores of processors, or other processing units), they cause the processing unit(s) to perform the actions indicated in the instructions. Examples of computer readable media include, but are not limited to, CD-ROMs, flash drives, random access memory (RAM) chips, hard drives, erasable programmable read only memories (EPROMs), electrically erasable programmable read-only memories (EEPROMs), etc. The computer readable media do not include carrier waves and electronic signals passing wirelessly or over wired connections.
In this specification, the term “software” is meant to include firmware residing in read-only memory or applications stored in magnetic storage which can be read into memory for processing by a processor. Also, in some embodiments, multiple software inventions can be implemented as sub-parts of a larger program while remaining distinct software inventions. In some embodiments, multiple software inventions can also be implemented as separate programs. Finally, any combination of separate programs that together implement a software invention described here is within the scope of the invention. In some embodiments, the software programs, when installed to operate on one or more electronic systems, define one or more specific machine implementations that execute and perform the operations of the software programs.
The electronic system 2300 of some embodiments includes a bus 2305, processing unit(s) 2310, a graphics processing unit (GPU) 2315, a system memory 2320, a read-only memory 2330, a permanent storage device 2335, input devices 2340, and output devices 2345. The bus 2305 collectively represents all system, peripheral, and chipset buses that communicatively connect the numerous internal devices of the electronic system 2300. For instance, the bus 2305 communicatively connects the processing unit(s) 2310 with the read-only memory 2330, the GPU 2315, the system memory 2320, and the permanent storage device 2335.
From these various memory units, the processing unit(s) 2310 retrieves instructions to execute and data to process in order to execute the processes of the invention. The processing unit(s) may be a single processor or a multi-core processor in different embodiments. Some instructions are passed to and executed by the GPU 2315. The GPU 2315 can offload various computations or complement the image processing provided by the processing unit(s) 2310. In some embodiments, such functionality can be provided using CoreImage's kernel shading language.
The read-only-memory (ROM) 2330 stores static data and instructions that are needed by the processing unit(s) 2310 and other modules of the electronic system. The permanent storage device 2335, on the other hand, is a read-and-write memory device. This device is a non-volatile memory unit that stores instructions and data even when the electronic system 2300 is off. Some embodiments of the invention use a mass-storage device (such as a magnetic or optical disk and its corresponding disk drive) as the permanent storage device 2335.
Other embodiments use a removable storage device (such as a floppy disk, flash memory device, etc., and its corresponding disk drive) as the permanent storage device. Like the permanent storage device 2335, the system memory 2320 is a read-and-write memory device. However, unlike storage device 2335, the system memory 2320 is a volatile read-and-write memory, such as random access memory. The system memory 2320 stores some of the instructions and data that the processor needs at runtime. In some embodiments, the invention's processes are stored in the system memory 2320, the permanent storage device 2335, and/or the read-only memory 2330. For example, the various memory units include instructions for processing multimedia clips in accordance with some embodiments. From these various memory units, the processing unit(s) 2310 retrieves instructions to execute and data to process in order to execute the processes of some embodiments.
The bus 2305 also connects to the input devices 2340 and output devices 2345. The input devices 2340 enable the user to communicate information and select commands to the electronic system. The input devices 2340 include alphanumeric keyboards and pointing devices (also called “cursor control devices”), cameras (e.g., webcams), microphones or similar devices for receiving voice commands, etc. The output devices 2345 display images generated by the electronic system or otherwise output data. The output devices 2345 include printers and display devices, such as cathode ray tubes (CRT) or liquid crystal displays (LCD), as well as speakers or similar audio output devices. Some embodiments include devices such as a touchscreen that function as both input and output devices.
Finally, the bus 2305 also couples the electronic system 2300 to a network through a network adapter (not shown). In this manner, the computer can be a part of a network of computers (such as a local area network, a wide area network, or an intranet), or a network of networks, such as the Internet. Any or all components of the electronic system 2300 may be used in conjunction with the invention.
Some embodiments include electronic components, such as microprocessors, storage and memory that store computer program instructions in a machine-readable or computer-readable medium (alternatively referred to as computer-readable storage media, machine-readable media, or machine-readable storage media). Some examples of such computer-readable media include RAM, ROM, read-only compact discs (CD-ROM), recordable compact discs (CD-R), rewritable compact discs (CD-RW), read-only digital versatile discs (e.g., DVD-ROM, dual-layer DVD-ROM), a variety of recordable/rewritable DVDs (e.g., DVD-RAM, DVD-RW, DVD+RW, etc.), flash memory (e.g., SD cards, mini-SD cards, micro-SD cards, etc.), magnetic and/or solid state hard drives, read-only and recordable Blu-Ray® discs, ultra density optical discs, any other optical or magnetic media, and floppy disks. The computer-readable media may store a computer program that is executable by at least one processing unit and includes sets of instructions for performing various operations. Examples of computer programs or computer code include machine code, such as is produced by a compiler, and files including higher-level code that are executed by a computer, an electronic component, or a microprocessor using an interpreter.
While the above discussion primarily refers to microprocessor or multi-core processors that execute software, some embodiments are performed by one or more integrated circuits, such as ASICs or FPGAs. In some embodiments, such integrated circuits execute instructions that are stored on the circuit itself. In addition, some embodiments execute software stored in programmable logic devices (PLDs), ROM, or RAM devices.
As used in this specification and any claims of this application, the terms “computer”, “server”, “processor”, and “memory” all refer to electronic or other technological devices. These terms exclude people or groups of people. For the purposes of the specification, the terms “display” or “displaying” mean displaying on an electronic device. As used in this specification and any claims of this application, the terms “computer readable medium,” “computer readable media,” and “machine readable medium” are entirely restricted to tangible, physical objects that store information in a form that is readable by a computer. These terms exclude any wireless signals, wired download signals, and any other ephemeral signals.
While the invention has been described with reference to numerous specific details, one of ordinary skill in the art will recognize that the invention can be embodied in other specific forms without departing from the spirit of the invention. In addition, a number of the figures conceptually illustrate processes. The specific operations of these processes may not be performed in the exact order shown and described, may not be performed in one continuous series of operations, and may be performed differently in different embodiments. Thus, one of ordinary skill in the art would understand that the invention is not limited by the foregoing illustrative details.
The present application claims the benefit of U.S. Provisional Patent Application 61/443,711, entitled, “Panning Presets”, filed Feb. 16, 2011, and U.S. Provisional Patent Application 61/443,670, entitled, “Audio Panning with Multi-Channel Surround Sound Decoding”, filed Feb. 16, 2011. The contents of U.S. Provisional Patent Application 61/443,711 and U.S. Provisional Patent Application 61/443,670 are hereby incorporated by reference.