The present disclosure relates to audio processing. The illustrative embodiments will be described in the context of processing audio associated with producing a movie.
Modern video editing systems, including those used professionally in the film and television industry, are typically software applications that are used to assemble a production made up of one or more scenes from a collection of constituent elements in the form of digital files and/or data streams. Video editing systems allow these constituent elements—which may include, inter alia, video files, images, animations, titles, audiovisual clips, audio files and associated metadata—to be imported and edited before being merged into the final production.
Digital movie production often uses multiple audio tracks for a scene being produced. For example, separate audio tracks might be used for:
a) dialogue—possibly one per character;
b) single background sounds or groups of background sounds from different sources;
c) sound effects;
d) music;
e) voiceover and/or overdubbing.
Audio production is typically handled using one or more computers running digital audio workstation software. However, in some cases a scene being produced can include hundreds or even thousands of audio tracks. Therefore, in order to handle large productions, conventional mixing studios are forced to construct complex hardware arrangements with multiple interlinked systems, including multiple hardware systems providing hardware-based processing acceleration, which are linked to digital mixing consoles. This leads to very complex workflows, as well as high hardware costs.
Despite such systems being fully digital and software-based, most large-scale audio post production systems today are still modelled on the conventions of their original analogue predecessors.
The Applicant's video editing system known as DaVinci Resolve® is an example of a modern video editing system that is extensively used in the professional environment. The functionality of DaVinci Resolve® can conveniently be divided into a number of separate functions/tasks that go into editing a video production. These functions are:
i) media management and clip organization;
ii) non-linear video editing;
iii) VFX design;
iv) color correction and grading;
v) sound editing/digital audio workstation functionality similar to that provided by stand-alone systems noted above; and
vi) final rendering or output.
Other video editing software applications may include some or all of these functions, and some may include other functions.
The present inventor has determined that new systems and methods are needed to better suit the needs of modern audio production, particularly in the context of video editing, or that at least provide useful alternatives to the existing systems and methods.
In the present specification, the words movie and video are not intended to be limited to moving images captured with a camera, but include any other technique for generating video media, including but not limited to animation, film scanning, and generating 2D or 3D images from a game engine, rendering engine, graphics engine or other visual development tool. Movies or video may or may not include one or more associated audio tracks, e.g., captured with or rendered for the images.
The systems, devices, methods and approaches described in this section, and components thereof, are known to the inventor. Therefore, unless otherwise indicated, it should not be assumed that merely by virtue of their inclusion in this section any of such systems, devices, methods, approaches or their components described are:
citable as prior art;
ordinarily known to a person of ordinary skill in the art;
part of the common general knowledge in the art; or
understood, regarded as relevant, and/or combined with other pieces of information by a skilled person in the art.
In a first aspect there is provided a method of performing a plurality of processing operations on a plurality of audio entities using a computer system having multiple data processing units; one possible allocation policy is sketched in code after the following list. The method may include:
allocating each data processing operation to one of said data processing units, such that said data processing operation is performed on said one of said data processing units; wherein said allocation is based at least partly on an expected execution time for the data processing operation on said one of said data processing units to which it is allocated;
performing said plurality of said audio processing operations on said plurality of audio entities according to said allocation; and
outputting processed audio.
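By way of non-limiting illustration only, the following minimal C++ sketch shows one greedy allocation policy consistent with this aspect: each operation is assigned to the data processing unit on which its expected finish time (the unit's accumulated load plus the operation's expected execution time on that unit) is smallest. The structure names and microsecond units are illustrative assumptions, not a prescribed implementation.

```cpp
#include <vector>

// Illustrative sketch of an expected-execution-time-based allocator.
// Each operation carries an expected execution time per processing unit;
// a simple greedy policy assigns it to the unit that finishes it earliest.
struct Operation {
    // expectedTimeUs[u] = expected execution time (microseconds) on unit u.
    std::vector<double> expectedTimeUs;
    int allocatedUnit = -1;
};

void allocate(std::vector<Operation>& ops, int numUnits) {
    std::vector<double> unitBusyUntilUs(numUnits, 0.0);  // accumulated load per unit
    for (Operation& op : ops) {
        int best = 0;
        double bestFinish = unitBusyUntilUs[0] + op.expectedTimeUs[0];
        for (int u = 1; u < numUnits; ++u) {
            double finish = unitBusyUntilUs[u] + op.expectedTimeUs[u];
            if (finish < bestFinish) { bestFinish = finish; best = u; }
        }
        op.allocatedUnit = best;
        unitBusyUntilUs[best] += op.expectedTimeUs[best];
    }
}
```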
Allocating each data processing operation to one of said data processing units can further include determining dependencies between processing operations such that a processing operation that is dependent upon an output from another processing operation is performed after said other processing operation.
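A dependency-respecting ordering can be computed, for example, with a topological sort; the sketch below (Kahn's algorithm, with the deps representation being an assumption) yields an order in which every operation appears after the operations it depends on.

```cpp
#include <queue>
#include <vector>

// Illustrative dependency ordering (Kahn's topological sort).
// deps[i] lists the operations that operation i depends on.
std::vector<int> dependencyOrder(const std::vector<std::vector<int>>& deps) {
    const int n = static_cast<int>(deps.size());
    std::vector<std::vector<int>> dependents(n);
    std::vector<int> pending(n, 0);
    for (int i = 0; i < n; ++i) {
        pending[i] = static_cast<int>(deps[i].size());
        for (int d : deps[i]) dependents[d].push_back(i);
    }
    std::queue<int> ready;
    for (int i = 0; i < n; ++i) if (pending[i] == 0) ready.push(i);
    std::vector<int> order;
    while (!ready.empty()) {
        int op = ready.front(); ready.pop();
        order.push_back(op);
        for (int next : dependents[op])
            if (--pending[next] == 0) ready.push(next);
    }
    return order;  // a result smaller than n would indicate a dependency cycle
}
```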
Allocating each data processing operation to one of said data processing units can include identifying one or more realtime processing operations that must be performed in a predetermined time period, and prioritizing the performance of said realtime processing operations during allocation. Prioritizing the performance of said realtime processing operations may include allocating said realtime processing operations to be performed before non-realtime processing operations. Prioritizing the performance of said realtime processing operations may include allocating realtime processing operations such that they are to be performed on data processing units separate from those performing non-realtime data processing operations.
In some embodiments, the method can include determining a revised allocation of each data processing operation to one of said data processing units. The revised allocation can be determined periodically, continuously, or in response to a re-allocation event.
In some embodiments, the method can include allocating some or each data processing operation to one of said data processing units according to said revised allocation. Allocating some or each data processing operation to one of said data processing units according to said revised allocation could be performed either or both of periodically or in response to a re-allocation event.
A re-allocation event could include any of the following events:
the plurality of processing operations to be performed changes;
the plurality of audio entities changes;
an actual execution time of one or more processing operations on its allocated processing unit differs from a corresponding expected execution time by a predetermined amount;
said plurality of said audio processing operations to be performed on said audio entities are not completed in a predetermined time period using a current allocation;
it is determined that said plurality of said audio processing operations to be performed on said audio entities cannot be completed in a predetermined time period using a current allocation;
an alternative allocation has been identified that improves overall processing time or efficiency by a predetermined amount;
the number and/or permitted utilization of processing units in the computer system has changed.
The actual execution time of one or more processing operations on its allocated processing unit can be considered to differ from a corresponding expected execution time by a predetermined amount in the event that an average actual execution time differs from the current expected execution time by a threshold amount.
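As a hedged illustration of this check, a running average of actual execution times for an operation can be maintained and compared against the current expected time; the structure names and symmetric threshold below are assumptions.

```cpp
// Illustrative re-allocation trigger: a running average of actual execution
// times is compared against the current expected time; deviating by more
// than the threshold flags a re-allocation event.
struct TimingStats {
    double averageActualUs = 0.0;
    double sampleCount = 0.0;
    void record(double actualUs) {
        sampleCount += 1.0;
        averageActualUs += (actualUs - averageActualUs) / sampleCount;
    }
};

bool reallocationNeeded(const TimingStats& stats, double expectedUs,
                        double thresholdUs) {
    double deviation = stats.averageActualUs - expectedUs;
    return deviation > thresholdUs || deviation < -thresholdUs;
}
```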
In embodiments herein, the predetermined time period may be the duration of an audio slice being processed, or the duration of an audio slice being processed minus a safety margin.
Determining an expected execution time for a data processing operation on said one of said data processing units can include accessing an execution time database containing expected execution time data. The expected execution time data can include one or more of:
standardized execution time data for a plurality of processing operations; and
customized execution time data for a plurality of processing operations that indicate an expected execution time for processing operations on said computer system.
The method may further include:
determining an actual execution time for a processing operation; and
updating the customized execution time data.
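One way to obtain an actual execution time, shown here as a non-authoritative sketch, is to time the operation with a steady clock and pass the result to whatever routine updates the customized execution time data; updateCustomizedTime below is a hypothetical hook, not a defined API.

```cpp
#include <chrono>

// Illustrative measurement of an actual execution time: the operation is
// timed with a monotonic clock and the measured duration is fed back into
// the customized execution time data via a caller-supplied hook.
template <typename Operation, typename Update>
void timeAndRecord(Operation&& runOperation, Update&& updateCustomizedTime) {
    auto start = std::chrono::steady_clock::now();
    runOperation();
    auto end = std::chrono::steady_clock::now();
    double actualUs =
        std::chrono::duration<double, std::micro>(end - start).count();
    updateCustomizedTime(actualUs);  // hypothetical hook into the database
}
```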
In some embodiments, performing said plurality of said audio processing operations includes, upon completion of a preceding processing operation on an audio entity by a processing unit, signaling said completion to another processing unit to which a succeeding processing operation that is dependent upon the preceding processing operation has been allocated.
A further aspect of the present disclosure provides a method of performing a plurality of processing operations on a plurality of audio entities using a computer system having multiple data processing units.
The method may include:
determining an expected execution time for a data processing operation on said one of said data processing units by accessing an execution time database containing expected execution time data;
allocating each data processing operation to one of said data processing units, such that said data processing operation is performed on said one of said data processing units; wherein said allocation is based at least partly on an expected execution time for the data processing operation on said one of said data processing units to which it is allocated;
performing said plurality of said audio processing operations on said plurality of audio entities according to said allocation;
outputting processed audio;
determining an actual execution time for at least one processing operation; and
updating said execution time database.
In some embodiments, the execution time database may include at least customized execution time data for a plurality of processing operations that indicate an expected execution time for processing operations on said computer system. The method may include updating the customized execution time data for at least one processing operation using said determined actual execution time.
In some embodiments, the method may further comprise determining a revised allocation of each data processing operation to one of said data processing units using the updated execution time database.
In some embodiments, the method may include allocating some or each data processing operation to one of said data processing units according to said revised allocation; performing said plurality of said audio processing operations on said plurality of audio entities according to said revised allocation; and outputting processed audio.
In some embodiments, allocating each data processing operation to one of said data processing units may include identifying one or more realtime processing operations that must be performed in a predetermined time period, and prioritizing the performance of said realtime processing operations during allocation. Prioritizing the performance of said realtime processing operations may include one or more of allocating said realtime processing operations to be performed before non-realtime processing operations, and allocating realtime processing operations such that they are to be performed on data processing units separate from those performing non-realtime data processing operations.
In a further aspect of the present disclosure, there is provided a method of performing a plurality of processing operations on a plurality of audio entities using a computer system having multiple data processing units. The method may include:
determining whether each processing operation is a realtime processing operation that must be performed in a predetermined time period or a non-realtime processing operation;
allocating each realtime data processing operation to one of said data processing units, such that said realtime data processing operation is performed on said one of said data processing units within the predetermined time period; wherein said allocation is based at least partly on an expected execution time for the realtime data processing operation on said one of said data processing units to which it is allocated;
allocating each non-realtime data processing operation to one of said data processing units, such that said non-realtime data processing operation is performed on said one of said data processing units; wherein said allocation is based at least partly on an expected execution time for the non-realtime data processing operation on said one of said data processing units to which it is allocated;
performing said plurality of said audio processing operations on said plurality of audio entities according to said allocation; and
outputting processed audio.
In some embodiments, the method may include allocating said realtime processing operations before the allocation of non-realtime processing operations.
In some embodiments, the method may include allocating realtime processing operations such that they are to be performed on data processing units separate from those performing non-realtime data processing operations.
In some embodiments, a non-realtime processing operation may be performed in a time period twice as long as the predetermined time period.
In some embodiments, the multiple data processing units may include one or more data processing units that are high speed processing units, and one or more data processing units that are low speed processing units. The method may comprise preferentially allocating realtime processing operations to said high speed data processing units.
In some embodiments, at least the expected execution time for the realtime data processing operations may be stored in an execution time database. The expected execution time for the non-realtime data processing operations may additionally be stored in said execution time database.
The method may include determining an actual execution time for at least one realtime data processing operation and updating said execution time database.
The method may include determining an actual execution time for at least one non-realtime data processing operation and updating said execution time database.
In some embodiments, the method may include determining a revised allocation of at least each realtime data processing operation to one of said data processing units using the updated execution time database. In some embodiments, the method may include determining a revised allocation of at least some non-realtime data processing operations to one of said data processing units using the updated execution time database.
In some embodiments, the method may include allocating some or each realtime data processing operation to one of said data processing units according to said revised allocation; performing said plurality of said audio processing operations on said plurality of audio entities according to said revised allocation; and outputting processed audio. In some embodiments, the method may also include allocating some or each non-realtime data processing operation to one of said data processing units according to said revised allocation.
In another aspect, the present disclosure provides a method of performing a plurality of processing operations on a plurality of audio entities using a computer system having multiple data processing units. The method may include:
allocating each data processing operation to one of said data processing units, such that said data processing operation is performed on said one of said data processing units; wherein said allocation is based at least partly on an expected execution time for the data processing operation on said one of said data processing units to which it is allocated;
performing said plurality of said audio processing operations on said plurality of audio entities according to said allocation;
outputting processed audio; and
determining a revised allocation of each data processing operation to one of said data processing units.
In some embodiments, said revised allocation may be determined one or more of: periodically, continuously, or in response to a re-allocation event.
In some embodiments, the method may include allocating some or each data processing operation to one of said data processing units according to said revised allocation; performing said plurality of said audio processing operations on said plurality of audio entities according to said revised allocation; and outputting processed audio.
In some embodiments, allocating some or each data processing operation to one of said data processing units according to said revised allocation may be performed either or both of periodically or in response to a re-allocation event.
For example, a re-allocation event may be any one of the following events:
the plurality of processing operations to be performed changes;
the plurality of audio entities changes;
an actual execution time of one or more processing operations on its allocated processing unit differs from a corresponding expected execution time by a predetermined amount;
said plurality of said audio processing operations to be performed on said audio entities are not completed in a predetermined time period using a current allocation;
it is determined that said plurality of said audio processing operations to be performed on said audio entities cannot be completed in a predetermined time period using a current allocation;
an alternative allocation has been identified that improves overall processing time or efficiency by a predetermined amount;
the number and/or permitted utilization of processing units in the computer system has changed.
In another aspect, the present disclosure provides a method of processing a plurality of audio entities using a computer system having multiple data processing units.
The method may include:
performing a plurality of audio processing operations on said plurality of audio entities according to a predetermined allocation, said predetermined allocation defining which data processing unit is to perform each data processing operation on each audio entity;
outputting processed audio;
determining an actual execution time for at least one processing operation performed on one audio entity by one data processing unit;
updating an execution time database to include said actual execution time; and
determining a revised allocation of each data processing operation to one of said data processing units using the updated execution time database, wherein said revised allocation is based at least partly on an expected execution time for the data processing operation on said one of said data processing units to which it is allocated.
In some embodiments, said revised allocation may be determined one or more of: periodically, continuously, or in response to a re-allocation event.
In some embodiments, the method may include allocating some or each data processing operation to one of said data processing units according to said revised allocation; performing a plurality of audio processing operations on said plurality of audio entities according to said revised allocation; and outputting processed audio.
Allocating some or each data processing operation to one of said data processing units according to said revised allocation may be performed either or both of periodically or in response to a re-allocation event.
In a further aspect, an audio processing system is disclosed. The audio processing system includes multiple data processing units, said audio processing system being configured to perform processing operations on a plurality of audio entities, wherein each audio entity has at least one data processing operation performed on it, the audio processing system including a control unit arranged to allocate each data processing operation to one of said data processing units, such that said data processing operation is performed on said one of said data processing units; wherein the control unit performs said allocation at least partly on the basis of an expected execution time for the data processing operation on said one of said data processing units to which it is allocated.
The control unit may be further arranged to cause the audio processing system to perform a method according to an embodiment of any of the foregoing aspects of the disclosure.
The control unit may be arranged to determine dependencies between processing operations such that a processing operation that is dependent upon an output from another processing operation is performed after said other processing operation.
The control unit may be arranged to identify one or more realtime processing operations that must be performed in a predetermined time period, and prioritize the performance of said realtime processing operations during allocation.
The control unit may allocate said realtime processing operations to processing units such that said realtime processing operations are performed before non-realtime processing operations.
The control unit may generate a revised allocation of each data processing operation to one of said data processing units.
The control unit may allocate some or each data processing operation to one of said data processing units according to said revised allocation either or both of periodically or in response to a re-allocation event.
A re-allocation event may be any one of the following events:
the plurality of processing operations to be performed changes;
the plurality of audio entities changes;
an actual execution time of one or more processing operations on its allocated processing unit differs from a corresponding expected execution time by a predetermined amount;
said plurality of said audio processing operations to be performed on said audio entities are not completed in a predetermined time period using a current allocation;
it is determined that said plurality of said audio processing operations to be performed on said audio entities cannot be completed in a predetermined time period using a current allocation;
an alternative allocation has been identified that improves overall processing time or efficiency by a predetermined amount;
the number and/or permitted utilization of processing units in the computer system has changed.
Embodiments may further include an execution time database containing expected execution time data.
Embodiments may further include an execution monitoring component configured to determine an actual execution time for a processing operation, and update the execution time database.
In some embodiments, a processing unit that performs a preceding processing operation on an audio entity signals completion of the preceding processing operation to another processing unit to which a succeeding processing operation that is dependent upon the preceding processing operation has been allocated.
In a further aspect, there is provided a non-transitory computer readable medium configured to carry instructions, which when executed by a computer system, cause the computer system to perform a method as described in any example herein. The instructions may implement digital audio workstation software. The instructions may implement audio processing functions in video editing software, e.g., a non-linear editor.
In the present specification, an audio entity may comprise any one of: an audio track; an audio bus; an audio file; or a stem.
In the present specification, an audio bus can comprise a plurality of audio tracks, stems or busses, or a combination thereof, which are combined into a single audio entity.
In the present specification, a data processing unit can include any one or more of a computer processor, a computer processor core, a sound processor, an FPGA, or a hardware acceleration card.
In the present specification, examples of processing operations include the following (one representative operation is sketched in code after this list):
reading or writing an audio entity from memory, including operations such as audio entity playback (e.g., track playback), audio entity recording (e.g., track recording), and black-box recording;
level control, including operations such as input trim, output level, and phase control;
mixing, including operations such as single in-line mixing, multi-tier sub-mixing with a combiner for larger mixes, mono and multi-format mixing of audio elements, panning signals in 0, 1, 2 and 3 planes, and mixing in track-to-bus, bus-to-bus, and bus-to-speaker scenarios;
audio metering and analysis, including measurements such as sample PPM, true peak, RMS, loudness, spectrum, and phase;
third party audio plug-in processing;
tonal control, including operations such as static and dynamic audio equalization, static and dynamic audio filtering, and distortion generators;
dynamics processing, including use of tools such as an expander, single and multiband compressors, and limiters;
time and pitch processing, including operations such as pitch change and pitch correction;
audio signal generation and synthesis, including generating mono and stereo sinewaves, white and pink noise, and time code;
restoration operations such as noise reduction, de-essing, de-humming, and stereo width control;
audio device emulation and simulation of devices such as an optical compressor;
audio element categorization, including to indicate dialog vs non-dialog and other characteristics, tagging, or metadata updating;
time-based balancing of audio signal phase, including to compensate for delays in external signal paths, to compensate for delays in internal processing, and to facilitate look-ahead;
spatial enhancers, such as delay, echo, reverb, flanger, chorus, and modulator;
integration of internal and third party audio rendering technologies; and
audio I/O management, including operations such as handling of asynchronous input and output environments (e.g., 44K1 in, 48K out), handling of semi-synchronous input and output environments (e.g., 48K in, 48K out), application of dither, and application of catch-all limiting.
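As a concrete, non-limiting illustration of one operation from the list above, level control can be realized by converting a gain in decibels to a linear factor and scaling each sample of a block; the function name is illustrative.

```cpp
#include <cmath>
#include <vector>

// Illustrative level-control operation on one block of samples: a gain in
// decibels is converted to a linear factor and applied to every sample.
void applyGain(std::vector<float>& block, float gainDb) {
    const float linear = std::pow(10.0f, gainDb / 20.0f);
    for (float& sample : block) sample *= linear;
}
```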
While the aspect(s) disclosed herein are amenable to various modifications and alternative forms, specific embodiments are shown by way of example in the drawings and are described in detail. It should be understood, however, that the drawings and detailed description are not intended to limit the disclosure(s) to the particular form disclosed. Furthermore, all alternative combinations of two or more of the individual features mentioned or evident from the text or drawings comprise additional aspects or inventive disclosures, which may form the subject of claims.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present disclosure. It will be apparent, however, that the present disclosure may be practiced without these specific details. In some instances, well-known structures and devices are shown in block diagram form in order to avoid unnecessary obfuscation of salient details.
As will be appreciated, the method is performed in a digital environment so the audio entities are made up of samples having a particular sampling rate. Such audio is typically stored in a data storage system (either local or remote) and then loaded into memory prior to processing. Usually digital audio will be processed in blocks of samples, the size of which is usually dependent on the hardware capabilities of the computer system performing the processing. For example, blocks will typically range in size from 32 to 512 samples, but may be longer or shorter. The processing of data in blocks means that latency is introduced into the audio processing as all of the audio samples of a block need to be read then processed together prior to output. This introduces a critical time element into the processing of the audio stream insofar as it is necessary to have a continuous output, so all processing of a block must be completed before output of the previous block has concluded. For example, if the audio is recorded with a sample rate of 48 kHz and a block contains 512 samples this represents a time slice of approximately 10.6 milliseconds. This necessarily means that all processing operations on the next 512 samples of each audio entity must be completed within 10.6 milliseconds so that a continuous output can be generated. On average this means that each sample must be processed within 20.8 microseconds. Latency can be reduced by making the processing block smaller. But there are trade-offs in doing so because more blocks need to be processed. For example, processing overhead increases (e.g., there is more switching between tasks such as reads and writes from memory, etc.), and the risk of failing to complete processing within a time slice increases.
Accordingly the description of the present embodiment will assume a block size of 512 samples of audio at 48 kHz sample rate, meaning a time slice is approximately 10.6 ms, although this should not be considered to be limiting on the present disclosure. Also, in preparation for processing a given audio entity, several blocks of data to be processed in the future will first be loaded into a track cache buffer prior to processing according to the present disclosure.
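The block-timing arithmetic above can be reproduced in a few lines; this standalone sketch simply derives the approximately 10.6 ms time slice and the average per-sample budget quoted earlier.

```cpp
#include <cstdio>

// Worked example of the block-timing arithmetic: a 512-sample block at a
// 48 kHz sample rate gives the time slice within which all processing on
// the next block must complete, and the implied average per-sample budget.
int main() {
    const double sampleRateHz = 48000.0;
    const double blockSamples = 512.0;
    const double sliceMs = blockSamples / sampleRateHz * 1000.0;  // ~10.67 ms
    const double perSampleUs = sliceMs * 1000.0 / blockSamples;   // ~20.8 us
    std::printf("time slice: %.2f ms, per-sample budget: %.1f us\n",
                sliceMs, perSampleUs);
    return 0;
}
```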
As noted above, the audio being processed may include a large number of audio entities. The audio entities can include audio tracks, audio files, audio buses, stems or other entities encoding audio data. In some cases, the plurality of audio entities will include a mixture of types of audio entities. An audio bus can be defined by the combination of other audio entities such as files, tracks, stems or even other buses.
The method 100 includes, at step 106, allocating each data processing operation to one of the data processing units. This allocation is based, at least partly, on an expected execution time for the data processing operation on the processing unit to which it is allocated. After allocation is performed in step 106, the processing units perform their allocated processing operations on the relevant audio entity and processed audio is output. Output can include output for playback by speakers or the like, writing a processed audio file to a data storage device, or another output event. Step 108 is performed on a block-by-block basis on the necessary audio entities until the whole audio processing sequence is completed.
Typically each audio entity will have at least one data processing operation performed on it during a time slice. In some embodiments, even when a given track has no audio output or is otherwise inactive in a given time slice (e.g., a track has zero volume, or a track is not in use) the audio entity will still be processed. This situation is one reason why audio processing associated with movies is a significant technical challenge. Even though a particular sound might only be used for a few seconds of the movie, the audio entity associated with it may still be processed throughout the whole production. This is a factor in the accumulation of such large numbers of audio entities to be processed. However, in some embodiments, audio entities that are inactive during a time slice may be treated with a lower priority than active audio entities, or potentially excluded from processing during a given time slice in order to minimize processing load.
The single core of the computer system is controlled by software to perform audio processing as follows. The order of processing progresses downward, as indicated by increasing time units in the second column. However, it should be noted that the time periods indicated in
The first processing operation performed is playback of Track 1, i.e., reading Track 1 from the track cache buffer or other memory. Track 1 is then processed in time period 2 by application of a second processing operation. Track 1 is subsequently added to Bus 1 in time period 3 (which typically involves writing Track 1 to a suitable buffer or memory location).
The next processing operation performed (in time period 4) is playback of Track 2. Track 2 is then processed in time period 5 in a second processing operation. Track 2 is then added to Bus 1. This process continues with Tracks 3 and 4. Track 3 is played back from the track cache buffer in time period 7. Track 3 is then processed in time period 8 in a second processing operation applicable to it. Track 3 is then added to Bus 1. Track 4 is played back from the track cache buffer in time period 10. Track 4 is then processed in time period 11 in a second processing operation applicable to it. Track 4 is then added to Bus 1.
Next (in time period 13) the Bus (which is now the relevant audio entity—instead of individual tracks) is processed according to a processing operation. It is then read into an output buffer as the final processing operation on Bus 1. The output is now ready for final rendering, e.g., audio playback or writing to a file for storage. The processing operations performed on each track in this example may be the same as that performed on one or more of the other tracks, or different to one or more of the processing operations performed on the other tracks, or they may have different parameters applied by a user to the tracks.
In this example, since there is a single processing unit, the order in which the processing operations are performed is not critical except that dependency of processing operations must be observed. That is, a processing operation (and succeeding processing operations) that is dependent on the output of (at least) another processing operation (a preceding processing operation) must be performed on the audio entity after the completion of the preceding processing operation. For example, the “Track 1 processing” (time period 2) operation must occur after “Track 1 playback” (time period 1) as the track must be available prior to other processing occurring. Also the “Bus 1 Processing” must occur after all tracks comprising Bus 1 are added to the bus. However, there is no dependency between the processing of Track 1 and the processing of any of the other tracks. So the Track 4 processing steps may all occur before the Track 1 processing steps, or be interleaved with them, so long as those processing operations applicable to each Track which have a dependency on a preceding processing operation occur first. As will be seen in connection with
Because the quad-core processor of
In the first time period, “Playback” of Track 1 is performed by Core 1; “Playback” of Track 2 is performed by Core 2, “Playback” of Track 3 is performed by Core 3, and “Playback” of Track 4 is performed by Core 4. These processing operations occur in parallel. Next, in the second time period “Track 1 Processing” is performed by Core 1, “Track 2 Processing” is performed by Core 2, “Track 3 Processing” is performed by Core 3 and “Track 4 Processing” is performed by Core 4. Note that the “Track X Processing” steps are dependent upon the performance of the “Track X Playback” step concluding and thus are performed after conclusion of the corresponding playback step.
Next, in time periods 3 to 6, Core 4 is allocated the processing operations of adding each track to Bus 1. These processing operations are dependent on each other in the sense that they need to be performed by the same processing unit. These steps are performed in numerical order of the track number for convenience, but need not be; as will be explained below, it may be advantageous to execute these processing operations in a different order. Next, in time period 7, Core 4 executes the Bus 1 Processing operation on the contents of Bus 1. Core 4 then performs the processing operation “Bus 1 Output” in time period 8 and the output is ready for downstream use. As would be expected, using multiple cores in parallel results in faster processing than the equivalent processing operations performed on a single processing unit having otherwise equivalent performance.
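As a worked comparison, assuming equal-length time periods and that the final bus output occupies period 14 in the single-core case: the single-core schedule takes 14 time periods (three per track for four tracks, plus bus processing and bus output), whereas the quad-core schedule completes in 8 periods, a speedup of 14/8 = 1.75x. The speedup falls short of the ideal factor of four because the four add-to-bus operations, the bus processing and the bus output all serialize on Core 4.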
Embodiments of the disclosure provide a method of performing a plurality of processing operations on a plurality of audio entities using a computer system having multiple data processing units, such as the quad core system of
A simple example can be discussed in relation to
It will also be noted that
In order to avoid a risk that a preceding processing operation is not completed before a dependent processing operation begins, the allocation process can include a safety margin. This can be done by adding a safety margin of a particular duration to the expected completion time of each processing operation. For example, a safety margin of 0.4 microseconds may be added to account for variations in switching time, variation in actual execution time, or other delays. Instead of, or in addition to, this, the computer system according to an embodiment may employ signaling between processing units to indicate completion of a preceding operation. For example, in
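A minimal sketch of such completion signaling, assuming a shared-memory system and standard library primitives, is a condition variable raised by the unit that completes the preceding operation and waited on by the unit allocated the dependent operation; the class name is illustrative.

```cpp
#include <condition_variable>
#include <mutex>

// Illustrative completion signaling between processing units: the unit
// performing the preceding operation raises a flag; the unit allocated the
// dependent operation blocks on it rather than relying on timing alone.
class CompletionSignal {
public:
    void signalDone() {
        { std::lock_guard<std::mutex> lock(mutex_); done_ = true; }
        cv_.notify_all();
    }
    void waitUntilDone() {
        std::unique_lock<std::mutex> lock(mutex_);
        cv_.wait(lock, [this] { return done_; });
    }
private:
    std::mutex mutex_;
    std::condition_variable cv_;
    bool done_ = false;
};
```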
In order to allocate each data processing operation to one of said data processing units in a way that takes the expected execution time for the data processing operation into account, it is necessary for the computer system to have an estimate of the execution time of each processing operation. This can be achieved by having an execution time database containing expected execution time data for each data processing operation. The execution time database can take several forms.
In its simplest form, the execution time database can include standardized execution time data for processing operations. This may include a standard expected execution time for each processing operation type, or class of processing operations. These standard expected execution times may be tailored to the hardware configuration of the computer system being used, or be generic in the sense that they are not system-specific. Such standardized execution time data can be provided by the computer system or software supplier, based on empirical testing of representative systems or theoretical estimates. The expected execution time data (whether standardized or customized) can include one or more of: a minimum execution time, maximum execution time, average execution time or other useful indication of execution time.
In an alternative form, the execution time data may be customized execution time data. Such customized execution time data can be generated by monitoring the actual performance of the computer system processing the audio data. Alternatively, it may be generated based on testing of the computer system, e.g., using test audio entities and test processing operations.
A hybrid system can be used, whereby the execution time database either contains both standardized execution time data and customized execution time data, or which uses standardized execution time data as a baseline and updates it over time to reflect the computer system performance such that it becomes customized execution time data.
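Such a hybrid database might be sketched as follows, with standardized times acting as the fallback and customized times, refined as measurements arrive, taking precedence; the class layout and the 0.1 smoothing factor are assumptions.

```cpp
#include <string>
#include <unordered_map>
#include <utility>

// Illustrative hybrid execution time database: standardized times are the
// baseline; customized times measured on this computer system override them
// once available, and are refined with an exponential moving average.
class HybridExecutionTimes {
public:
    explicit HybridExecutionTimes(
        std::unordered_map<std::string, double> standardizedUs)
        : standardizedUs_(std::move(standardizedUs)) {}

    double expectedUs(const std::string& opType) const {
        auto it = customizedUs_.find(opType);
        if (it != customizedUs_.end()) return it->second;   // customized wins
        auto st = standardizedUs_.find(opType);
        return st != standardizedUs_.end() ? st->second : 0.0;
    }
    void recordActual(const std::string& opType, double actualUs) {
        auto it = customizedUs_.find(opType);
        if (it == customizedUs_.end()) customizedUs_[opType] = actualUs;
        else it->second += 0.1 * (actualUs - it->second);   // assumed smoothing
    }
private:
    std::unordered_map<std::string, double> standardizedUs_;
    std::unordered_map<std::string, double> customizedUs_;
};
```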
The expected execution time data may also take account of the context in which a processing operation is executed, such as:
what operations preceded it, in order to account for a specific processing unit's cache and memory performance;
other processes being executed in the same slice;
other operational parameters of the computer system, e.g., if the computer system is a laptop or other battery powered device the context may be whether the device is operating on battery power or mains power, etc.
This series of processing operations can be allocated to the 12 processing units as illustrated in
The allocation scheme used is able to be tuned to accommodate many trade-offs in addition to pure speed. For example, allocations may be made to minimize the need to signal between processing units, or to avoid the need to add additional safety margins between operations in a situation where the output of one processor needs to be completed before another processor can perform a function. In the example of
Accordingly, the allocation process used in some embodiments can make a distinction between realtime processing operations that must be processed within a time slice, and non-realtime processing operations, such as neartime processing operations which can be delayed by one slice. During allocation of processing operations to processor units, realtime processing operations are preferably prioritized. For example, they may be performed earlier in the time slice, or performed on processors which offer shorter expected execution times. In some cases the realtime processing operations are performed on one group of processing units while neartime tasks are processed on different processing units. The division may be made based on the speed of the processing units: for example, if a computer system has high speed and low speed cores or processors, the high speed cores or processors can be used for the highest priority (i.e., realtime) processing operations, while the other operations are performed on the slower processors or cores.
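As an illustrative, non-authoritative sketch of this division, realtime operations can be directed round-robin to the high speed units and remaining work to the low speed units; the structure and unit lists are assumptions.

```cpp
#include <cstddef>
#include <vector>

// Illustrative prioritization sketch: realtime operations (due this slice)
// are allocated to the high speed units first; neartime/non-realtime work
// is directed to the remaining, slower units.
struct Op { bool realtime; int unit = -1; };

void partitionAllocate(std::vector<Op>& ops,
                       const std::vector<int>& highSpeedUnits,
                       const std::vector<int>& lowSpeedUnits) {
    std::size_t fast = 0, slow = 0;
    for (Op& op : ops) {
        if (op.realtime && !highSpeedUnits.empty()) {
            op.unit = highSpeedUnits[fast++ % highSpeedUnits.size()];
        } else if (!lowSpeedUnits.empty()) {
            op.unit = lowSpeedUnits[slow++ % lowSpeedUnits.size()];
        }
    }
}
```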
Any definitions expressly provided herein for terms contained in the appended claims shall govern the meaning of those terms as used in the claims. No limitation, element, property, feature, advantage or attribute that is not expressly recited in a claim should limit the scope of the claim in any way.
As used herein, the terms “include” and “comprise” (and variations of those terms, such as “including”, “includes”, “comprising”, “comprises”, “comprised” and the like) are intended to be inclusive and are not intended to exclude further features, components, integers, or steps.
For aspects of the disclosure that have been described using flowcharts, a given flowchart step could potentially be performed in various ways and by various devices, systems or system modules. A given flowchart step could be divided into multiple steps and/or multiple flowchart steps could be combined into a single step, unless the contrary is specifically noted as essential. Furthermore, the order of the steps can be changed without departing from the scope of the present disclosure, unless the contrary is specifically noted as essential.
Computer system 1000 also includes a main memory 1006, such as a random access memory (RAM) or other dynamic storage device, coupled to bus 1002 for storing information and instructions to be executed by processor system 1004. Main memory 1006 also may be used for storing temporary variables or other intermediate information during execution of instructions to be executed by processor system 1004. Such instructions, when stored in non-transitory storage media accessible to processor system 1004, render computer system 1000 into a special-purpose machine that is customized and configured to perform the operations specified in the instructions.
Computer system 1000 may further include a read only memory (ROM) 1008 or other static storage device coupled to bus 1002 for storing static information and instructions for processor system 1004. A storage system 1010, such as a magnetic disk, SSD, optical disk or other mass storage device, may be provided and coupled to bus 1002 for storing information and instructions including the audio editing software application described above. Other storage may also be coupled to the computing system (not shown) to provide expanded storage capability. For example, the computer system can be connected to one or more external data storage systems directly or via the communications interface 1018. The external data storage system may be a NAS data storage system or cloud data storage system.
The computer system 1000 may be coupled via bus 1002 to a display 1012 (such as an LCD, LED, touch screen display, or other display) for displaying information to a user via a graphical user interface. One or more input devices 1014 may be coupled to the bus 1002 for communicating information and command selections to processor system 1004. The input devices may include a keyboard or other input device adapted for entering alphanumeric information into the computer system 1000. The input devices 1014 can also include a device specially adapted for audio editing and production, such as a mixing desk or mixing console (e.g., any of the Fairlight Desktop Console, Fairlight Advanced Consoles or Fairlight Desktop Audio editor from Blackmagic Design), or other similar audio mixer or control consoles from other manufacturers. Another type of user input device is cursor control 1016, such as a mouse, a trackball, or cursor direction keys for communicating direction information and command selections to processor system 1004 and for controlling cursor movement on display 1012.
According to at least one embodiment, the techniques herein are performed by computer system 1000 in response to processor system 1004 executing one or more sequences of one or more instructions contained in main memory 1006. Such instructions may be read into main memory 1006 from another storage medium, such as a remote disk or database. Execution of the sequences of instructions contained in main memory 1006 causes processor system 1004 to perform the process steps described herein. In alternative embodiments, hard-wired circuitry may be used in place of or in combination with software instructions.
The terms “storage media” or “storage medium” as used herein refer to any non-transitory media that store data and/or instructions that cause a machine to operate in a specific fashion. Such storage media may comprise non-volatile media and/or volatile media. Non-volatile media includes, for example, optical or magnetic disks, such as storage system 1010. Volatile media includes dynamic memory, such as main memory 1006. Common forms of storage media include, for example, a floppy disk, a flexible disk, hard disk, solid state drive, magnetic tape, or any other magnetic data storage medium, a CD-ROM, any other optical data storage medium, any physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, NVRAM, any other memory chip or cartridge.
Computer system 1000 may also include a communication interface 1018 coupled to bus 1002. Communication interface 1018 provides a two-way data communication coupling to a network link 1020 that is connected to communication network 1050. For example, communication interface 1018 may be an integrated services digital network (ISDN) card, cable modem, satellite modem, etc. As another example, communication interface 1018 may be a local area network (LAN) card to provide a data communication connection to a compatible LAN. Wireless links may also be implemented. In any such implementation, communication interface 1018 sends and receives electrical, electromagnetic or optical signals that carry digital data streams representing various types of information.
The various embodiments described above can be combined to provide further embodiments. These and other changes can be made to the embodiments in light of the above-detailed description. In general, in the following claims, the terms used should not be construed to limit the claims to the specific embodiments disclosed in the specification and the claims, but should be construed to include all possible embodiments along with the full scope of equivalents to which such claims are entitled.
Number | Date | Country | Kind
---|---|---|---
2021903578 | Nov 2021 | AU | national