The invention relates to an electronic device, and particularly to an electronic device that can generate triangular waves.
Musical Instrument Digital Interface (MIDI) is a format for the creation, communication and playback of audio sounds, such as music, speech, tones, alerts, and the like. A device that supports the MIDI format may store sets of audio information that can be used to create various “voices.” Each voice may correspond to a particular sound, such as a musical note by a particular instrument. For example, a first voice may correspond to a middle C as played by a piano, a second voice may correspond to a middle C as played by a trombone, a third voice may correspond to a D# as played by a trombone, and so on. In order to replicate the sounds played by various instruments, a MIDI compliant device may include a set of information for voices that specify various audio characteristics, such as the behavior of a low frequency oscillator, effects such as vibrato, and a number of other audio characteristics that can affect the perception of different sounds. Almost any sound can be defined, conveyed in a MIDI file, and reproduced by a device that supports the MIDI format.
A device that supports the MIDI format may produce a musical note (or other sound) when an event occurs that indicates that the device should start producing the note. Similarly, the device stops producing the musical note when an event occurs that indicates that the device should stop producing the note. An entire musical composition may be coded in accordance with the MIDI format by specifying events that indicate when certain voices should start and stop. In this way, the musical composition may be stored and transmitted in a compact file format according to the MIDI format.
MIDI is supported in a wide variety of devices. For example, wireless communication devices, such as radiotelephones, may support MIDI files for downloadable ringtones or other audio output. Digital music players, such as the “iPod” devices sold by Apple Computer, Inc., and the “Zune” devices sold by Microsoft Corp., may also support MIDI file formats. Other devices that support the MIDI format may include various music synthesizers such as keyboards, sequencers, voice encoders (vocoders), and rhythm machines. In addition, a wide variety of devices may also support playback of MIDI files or tracks, including wireless mobile devices, direct two-way communication devices (sometimes called walkie-talkies), network telephones, personal computers, desktop and laptop computers, workstations, satellite radio devices, intercom devices, radio broadcasting devices, hand-held gaming devices, circuit boards installed in devices, information kiosks, video game consoles, various computerized toys for children, on-board computers used in automobiles, watercraft and aircraft, and a wide variety of other devices.
A number of other types of audio formats, standards and techniques have also been developed. Other examples include standards defined by the Moving Picture Experts Group (MPEG), Windows Media Audio (WMA) standards, standards by Dolby Laboratories, Inc., and quality assurance techniques developed by THX Ltd., to name a few. Moreover, many audio coding standards and techniques continue to emerge, including the MP3 standard and variants of the MP3 standard, such as the advanced audio coding (AAC) standard used in “iPod” devices. Various video coding standards may also use audio coding techniques, e.g., to code multimedia frames that include audio and video information.
One important feature of the MIDI format is its ability to store data related to the articulation of a particular note. The articulation data includes information about sound effects, such as a vibrato or a tremolo, which can help to emulate the sound of an acoustic instrument. A device that utilizes MIDI may implement these effects using a combination of low frequency oscillators and envelope generators. Typically, a low frequency oscillator (LFO) can be used to generate a periodic low-frequency wave to modulate the pitch, amplitude, and frequency of a particular note. In order to generate a low-frequency wave that operates within acceptable tolerance ranges, numerous and complex calculations are typically required, which can demand the storage of a number of parameters and occupy a significant amount of chip area.
In general, this disclosure describes techniques for generating triangular waves. The techniques may be particularly useful for playback of audio files that comply with the musical instrument digital interface (MIDI) format, although the techniques may be useful with other audio formats, techniques or standards.
In one aspect, this disclosure provides a method for generating a set of data points that form a triangular wave having a desired frequency and a desired gain. The method includes the step of (a) determining an increment value based on the desired frequency and the desired gain of the triangular wave. The method further includes the step of (b) adding the increment value to a current data point to generate a next data point, the current data point and the next data point forming a subset of the set of data points. The method further includes the step of iteratively performing (a) and (b) to generate the set of data points that form the triangular wave.
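The claimed steps lend themselves to a brief software model. The following sketch is illustrative only; it assumes a symmetric triangle, gains of equal magnitude in both directions, and a frequency expressed as a period in samples, whereas the hardware embodiments described below use per-quadrant ratios rather than an explicit direction flag:

```python
def triangle_points(gain, period_samples):
    """Illustrative model of steps (a) and (b): derive an increment
    from the desired gain and frequency (here, a period in samples),
    then add it iteratively to each current data point to produce the
    next data point. Assumes a symmetric triangle starting at zero."""
    # (a) increment so the wave climbs from 0 to +gain in a quarter period
    inc = 4.0 * gain / period_samples
    points, value, direction = [], 0.0, 1
    for _ in range(period_samples):
        points.append(value)
        # reverse direction at the positive and negative peaks
        if value >= gain:
            direction = -1
        elif value <= -gain:
            direction = 1
        # (b) add the increment to the current point to form the next point
        value += direction * inc
    return points
```

With a gain of 1.0 and a period of 8 samples, the sketch produces one full triangle, [0.0, 0.5, 1.0, 0.5, 0.0, -0.5, -1.0, -0.5], before repeating.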
In another aspect, this disclosure provides a device for generating a set of data points that form a triangular wave having a desired frequency and a desired gain. The device includes an electrical circuit that determines an increment value based on the desired frequency and the desired gain of the triangular wave. The device further includes an adder that adds the increment value to a current data point to generate a next data point, the current data point and the next data point forming a subset of the set of data points, wherein the electrical circuit iteratively determines increment values and the adder iteratively adds the increment values to successive data points to generate the set of data points that form the triangular wave.
In another aspect, this disclosure provides a device for generating a set of data points that form a triangular wave having a desired frequency and a desired gain. The device includes a first means for determining an increment value based on the desired frequency and the desired gain of the triangular wave. The device further includes a second means for adding the increment value to a current data point to generate a next data point, the current data point and the next data point forming a subset of the set of data points, wherein the first means iteratively determines increment values and the second means iteratively adds the increment values to successive data points to generate the set of data points that form the triangular wave.
Various aspects of the techniques described in this disclosure may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the software may be executed in one or more processors, such as a microprocessor, application specific integrated circuit (ASIC), field programmable gate array (FPGA), or digital signal processor (DSP). The software that executes the techniques may be initially stored in a computer-readable medium and loaded and executed in the processor.
Accordingly, this disclosure also contemplates a computer-readable medium comprising instructions that upon execution by one or more processors cause the processors to generate a set of data points that form a triangular wave having a desired frequency and a desired gain, wherein the instructions cause the one or more processors to: (a) determine an increment value based on the desired frequency and the desired gain of the triangular wave, (b) add the increment value to a current data point to generate a next data point, the current data point and the next data point forming a subset of the set of data points, and iteratively perform (a) and (b) to generate the set of data points that form the triangular wave.
In some cases, the computer readable medium may form part of a computer program product, which may be sold to manufacturers and/or used in a video coding device. The computer program product may include a computer readable medium, and in some cases, may also include packaging materials.
In other cases, this disclosure may be directed to a circuit, such as an integrated circuit, chipset, application specific integrated circuit (ASIC), field programmable gate array (FPGA), logic, or various combinations thereof configured or adapted to perform one or more of the techniques described herein.
Accordingly, this disclosure also contemplates a circuit for generating a set of data points that form a triangular wave having a desired frequency and a desired gain, wherein the circuit is adapted to: (a) determine an increment value based on the desired frequency and the desired gain of the triangular wave, (b) add the increment value to a current data point to generate a next data point, the current data point and the next data point forming a subset of the set of data points, and iteratively perform (a) and (b) to generate the set of data points that form the triangular wave.
The details of one or more embodiments of the invention are set forth in the accompanying drawings and the description below. Other features, objects, and advantages of the invention will be apparent from the description and drawings, and from the claims.
This disclosure describes techniques for generating triangular waves. The techniques may be particularly useful for playback of audio files that comply with the musical instrument digital interface (MIDI) format, although the techniques may be useful with other audio formats, techniques or standards that make use of triangular waves. As used herein, the term MIDI file refers to any audio data or file that contains at least one audio track that conforms to the MIDI format. Examples of various file formats that may include MIDI tracks include CMX, SMAF, XMF, and SP-MIDI, to name a few. CMX stands for Compact Media Extensions, developed by Qualcomm Inc. SMAF stands for the Synthetic Music Mobile Application Format, developed by Yamaha Corp. XMF stands for eXtensible Music Format, and SP-MIDI stands for Scalable Polyphony MIDI.
MIDI files, or other audio files can be conveyed between devices within audio frames, which may include audio information or audio-video (multimedia) information. An audio frame may comprise a single audio file, multiple audio files, or possibly one or more audio files and other information such as coded video frames. Any audio data within an audio frame may be termed an audio file, as used herein, including streaming audio data or one or more audio file formats listed above. A plurality of hardware elements that operate simultaneously can be used to service various synthesis parameters generated from one or more audio files, such as MIDI files.
A general purpose processor may execute software to parse MIDI files and schedule MIDI events associated with the MIDI files. The scheduled events can then be serviced by a DSP in a synchronized manner, as specified by timing parameters in the MIDI files. The general purpose processor dispatches the MIDI events to the DSP in a time-synchronized manner, and the DSP processes the MIDI events according to the time-synchronized schedule in order to generate MIDI synthesis parameters. The DSP then schedules processing of the synthesis parameters in hardware, and a hardware unit can generate audio samples based on the synthesis parameters.
The general purpose processor may service MIDI files for a first frame (frame N), and when the first frame (frame N) is serviced by the DSP, a second frame (frame N+1) can be simultaneously serviced by the general purpose processor. Furthermore, when the first frame (frame N) is serviced by the hardware, the second frame (frame N+1) is simultaneously serviced by the DSP while a third frame (frame N+2) is serviced by the general purpose processor. In this way, MIDI file processing is separated into pipelined stages that can be processed at the same time, which can improve efficiency and possibly reduce the computational resources needed for given stages, such as those associated with the DSP. Each frame passes through the various pipeline stages, from the general purpose processor, to the DSP, and then to the hardware. In some cases, audio samples generated by the hardware may be delivered back to the DSP, e.g., via interrupt-driven techniques, so that any post-processing can be performed. Audio samples may then be converted into analog signals, which can be used to drive speakers and output audio sounds to a user.
Alternatively, the tasks associated with MIDI file processing can be divided between two different threads of a DSP and the dedicated hardware. That is to say, the tasks associated with the general purpose processor (as described herein) could alternatively be executed by a first thread of a multi-threaded DSP. In this case, the first thread of the DSP executes the scheduling, a second thread of the DSP generates the synthesis parameters, and the hardware unit generates audio samples based on the synthesis parameters. This alternative example may also be pipelined in a manner similar to the example that uses a general purpose processor for the scheduling.
The various components illustrated in
As illustrated in the example of
Device 4 may implement an architecture that separates audio processing tasks between software, hardware and firmware. As shown in
Once DSP 12 has generated the synthesis parameters, these synthesis parameters can be stored in memory unit 10. Memory unit 10 may comprise volatile or non-volatile storage. In order to support quick data transfer, memory unit 10 may comprise random access memory (RAM), dynamic random access memory (DRAM), synchronous dynamic random access memory (SDRAM), FLASH memory, or the like. The synthesis parameters stored in memory unit 10 can be serviced by audio hardware unit 14 to generate audio samples.
The audio hardware unit 14 may include several processing elements for servicing the synthesis parameters. The processing elements may comprise an arithmetic logic unit (ALU) that supports operations such as multiply, add and accumulate. In addition, each processing element may also support hardware specific operations for loading and/or storing to other hardware components. The other hardware components in audio hardware unit 14, for example, may comprise a low frequency oscillator (LFO), a wave fetch unit (WFU), and a summing buffer (SB). Thus, the processing elements in audio hardware unit 14 may support and execute instructions for interacting with and using these other hardware components in the audio processing. In accordance with this disclosure, a processing element may interact with the low frequency oscillator in order to generate a particular articulation sound effect such as a vibrato or a tremolo. The low frequency oscillator may provide a periodic waveform, such as a triangular wave, in response to specific parameters provided to the LFO by a processing element. For example, the processing element may specify the desired gain and the desired frequency of a triangular wave in an instruction to the low frequency oscillator. In response, the LFO may provide a series of data points corresponding to the triangular wave requested by the processing element. Additional details of one example of audio hardware unit 14 are provided in greater detail below with reference to
In some cases, the processing of audio files by device 4 may be pipelined. For example, processor 8, DSP 12 and audio hardware unit 14 may operate simultaneously with respect to successive audio frames. Each audio frame may correspond to a block of time, e.g., a 10 millisecond (ms) interval that includes many coded audio samples. The digital output of hardware unit 14, for example, may include 480 digital audio samples per audio frame, which can be converted into an analog audio signal by digital-to-analog converter 16. Many events may correspond to one instance of time so that many different sounds or notes can be included in one instance of time according to the MIDI format or similar audio format. Of course, the amount of time delegated to any audio frame and the number of audio samples defined in one frame may vary in different implementations.
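The frame length and per-frame sample count together imply an output sampling rate. As a quick check of the example figures above (480 samples per 10 ms frame), the implied rate works out to 48 kHz:

```python
# Illustrative arithmetic only: values taken from the example above.
samples_per_frame = 480   # digital audio samples per audio frame
frame_ms = 10             # audio frame duration in milliseconds

# samples per frame * frames per second = samples per second
sample_rate_hz = samples_per_frame * 1000 // frame_ms
```

Different implementations with other frame lengths or sample counts would yield different rates in the same way.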
In some cases, audio samples generated by audio hardware unit 14 are delivered back to DSP 12, e.g., via interrupt-driven techniques. In this case, DSP 12 may also perform post processing techniques on the audio samples. The post processing may include filtering, scaling, volume adjustment, or a wide variety of audio post processing that may ultimately enhance the sound output. Digital-to-analog converter (DAC) 16 then converts the audio samples into analog signals, which can be used by drive circuit 18 to drive speakers 19A and 19B for output of audio sounds to a user.
Local memory 10 may be structured such that processor 8, DSP 12 and MIDI hardware 14 can access any information needed to perform the various tasks delegated to these different components. In some cases, the storage layout of MIDI information in local memory 10 may be arranged to allow for efficient access from the different components 8, 12 and 14. Again, local memory 10 is used to store (among other things) the synthesis parameters associated with one or more audio files. Once DSP 12 generates these synthesis parameters, they can be processed by hardware unit 14 to generate audio samples. The audio samples generated by audio hardware unit 14 may comprise pulse-code modulation (PCM) samples, which are digital representations of an analog signal, wherein the analog signal is sampled at regular intervals. Additional details of exemplary audio generation by audio hardware unit 14 are discussed in greater detail below with reference to
In addition, audio hardware unit 20 may include a coordination module 32. Coordination module 32 coordinates data flows within audio hardware unit 20. When audio hardware unit 20 receives an instruction from DSP 12 (
At the direction of coordination module 32, synthesis parameters may be loaded from memory 10 (
The instructions loaded into program RAM unit 44A or 44N instruct the associated processing element 34A or 34N to synthesize one of the voices indicated in the list of synthesis parameters in VPS RAM unit 46A or 46N. There may be any number of processing elements 34A-34N (collectively “processing elements 34”), and each may comprise one or more ALUs that are capable of performing mathematical operations, as well as one or more units for reading and writing data. Only two processing elements 34A and 34N are illustrated for simplicity, but many more may be included in hardware unit 20. Processing elements 34 may synthesize voices in parallel with one another. In particular, the plurality of different processing elements 34 work in parallel to process different synthesis parameters. In this manner, a plurality of processing elements 34 within audio hardware unit 20 can accelerate and possibly improve the generation of audio samples.
When coordination module 32 instructs one of processing elements 34 to synthesize a voice, the respective processing element may execute one or more instructions associated with the synthesis parameters. Again, these instructions may be loaded into program RAM unit 44A or 44N. The instructions loaded into program RAM unit 44A or 44N cause the respective one of processing elements 34 to perform voice synthesis. For example, processing elements 34 may send requests to a waveform fetch unit (WFU) 36 for a waveform specified in the synthesis parameters. Each of processing elements 34 may use WFU 36. An arbitration scheme may be used to resolve any conflicts if two or more processing elements 34 request use of WFU 36 at the same time.
In response to a request from one of processing elements 34, WFU 36 returns one or more waveform samples to the requesting processing element. However, because a wave can be phase shifted within a sample, e.g., by up to one cycle of the wave, WFU 36 may return two samples in order to compensate for the phase shifting using interpolation. Furthermore, because a stereo signal may include two separate waves for the two stereophonic channels, WFU 36 may return separate samples for different channels, e.g., resulting in up to four separate samples for stereo output.
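For illustration only, one conventional way to combine the two samples returned by WFU 36 is linear interpolation on the fractional part of the phase; the disclosure does not mandate a particular interpolation scheme, so the following is a sketch of one common choice:

```python
def interpolate_sample(s0, s1, frac):
    """Linearly interpolate between two fetched waveform samples to
    compensate for a fractional phase offset.

    s0, s1: two adjacent waveform samples returned by the fetch unit.
    frac:   fractional part of the phase, with 0.0 <= frac < 1.0.
    """
    return s0 + (s1 - s0) * frac
```

A frac of 0.0 returns the first sample unchanged, and values between 0 and 1 blend proportionally toward the second sample.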
After WFU 36 returns audio samples to one of processing elements 34, the respective processing element may execute additional program instructions based on the synthesis parameters. In particular, instructions cause one of processing elements 34 to request an asymmetric triangular wave from a low frequency oscillator (LFO) 38 in audio hardware unit 20. By multiplying a wave returned by WFU 36 with a triangular wave returned by LFO 38, the respective processing element may manipulate various sonic characteristics of the wave to achieve a desired audio effect. For example, multiplying a wave by a triangular wave may result in a wave that sounds more like a desired musical instrument.
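As a rough software sketch of the multiplication described above, an LFO triangular wave can modulate the amplitude of a fetched carrier wave sample by sample, producing a tremolo-like effect. The depth parameter and the (1 + depth * lfo) mapping below are illustrative assumptions, not part of the disclosure:

```python
def apply_tremolo(carrier, lfo, depth=0.5):
    """Modulate a carrier's amplitude with an LFO triangular wave.

    carrier: list of audio samples (e.g., from a wave fetch unit).
    lfo:     list of triangular-wave samples, nominally in [-1, 1].
    depth:   illustrative modulation depth in [0, 1].
    """
    return [c * (1.0 + depth * l) for c, l in zip(carrier, lfo)]
```

At depth 0.5, an LFO value of +1 scales the carrier by 1.5 and an LFO value of -1 scales it by 0.5, sweeping the loudness up and down at the LFO rate.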
Other instructions executed based on the synthesis parameters may cause a respective one of processing elements 34 to loop the waveform a specific number of times, adjust the amplitude of the waveform, add reverberation, add a vibrato effect, or cause other effects. In this way, processing elements 34 can calculate a waveform for a voice that lasts one MIDI frame. Eventually, a respective processing element may encounter an exit instruction. When one of processing elements 34 encounters an exit instruction, that processing element signals the end of voice synthesis to coordination module 32. The calculated voice waveform can be provided to summing buffer 40 at the direction of another store instruction during the execution of the program instructions. This causes summing buffer 40 to store that calculated voice waveform.
When summing buffer 40 receives a calculated wave from one of processing elements 34, summing buffer 40 adds the calculated wave to the proper instance of time associated with an overall wave for a MIDI frame. Thus, summing buffer 40 combines output of the plurality of processing elements 34. For example, summing buffer 40 may initially store a flat wave (i.e., a wave in which all digital samples are zero). When summing buffer 40 receives audio information such as a calculated wave from one of processing elements 34, summing buffer 40 can add each digital sample of the calculated wave to respective samples of the wave stored in summing buffer 40. In this way, summing buffer 40 accumulates and stores an overall digital representation of a wave for a full audio frame.
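The accumulation behavior of summing buffer 40 can be modeled in a few lines of software; this is a sketch of the sample-by-sample addition described above, not the hardware itself:

```python
def accumulate(summing_buffer, voice_wave):
    """Add each sample of a calculated voice wave into the running
    per-frame accumulation, as summing buffer 40 is described doing."""
    for i, sample in enumerate(voice_wave):
        summing_buffer[i] += sample
    return summing_buffer
```

Starting from a flat (all-zero) buffer, calling this once per synthesized voice leaves the buffer holding the overall wave for the frame.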
Summing buffer 40 essentially sums different audio information from different ones of processing elements 34. The different audio information is indicative of different instances of time associated with different generated voices. In this manner, summing buffer 40 creates audio samples representative of an overall audio compilation within a given audio frame.
Processing elements 34 may operate in parallel with one another, yet independently. That is to say, each of processing elements 34 may process a synthesis parameter and then move on to the next synthesis parameter once the audio information generated for the first synthesis parameter is added to summing buffer 40. Thus, each of processing elements 34 performs its processing tasks for one synthesis parameter independently of the other processing elements 34, and when the processing for a synthesis parameter is complete that respective processing element becomes immediately available for subsequent processing of another synthesis parameter.
Eventually, coordination module 32 may determine that processing elements 34 have completed synthesizing all of the voices required for the current audio frame and have provided those voices to summing buffer 40. At this point, summing buffer 40 contains digital samples indicative of a completed wave for the current audio frame. When coordination module 32 makes this determination, coordination module 32 sends an interrupt to DSP 12 (
Cache memory 48, WFU/LFO memory 39 and linked list memory 42 are also shown in
Any number of processing elements 34 may be included in audio hardware unit 20 provided that a plurality of processing elements 34 operate simultaneously with respect to different synthesis parameters stored in memory 10 (
Processing elements 34 may process all of the synthesis parameters for an audio frame. After processing each respective synthesis parameter, the respective one of processing elements 34 adds its processed audio information into the accumulation in summing buffer 40, and then moves on to the next synthesis parameter. In this way, processing elements 34 work collectively to process all of the synthesis parameters generated for one or more audio files of an audio frame. Then, after the audio frame is processed and the samples in summing buffer 40 are sent to DSP 12 for post processing, processing elements 34 can begin processing the synthesis parameters for the audio files of the next audio frame.
Accumulator register 50 can store a single data point from the set of data points that form a triangular wave. Accumulator register 50 is electrically coupled to adder 58 and the output of multiplexer 56. Accumulator register 50 outputs the current data point via line 66. During a single clock-cycle, accumulator register 50 receives, via line 64, either the next data point or a zero value depending on the output of multiplexer 56. During the next clock-cycle, accumulator register 50 outputs, via line 66, the value received during the previous clock cycle as the current data point. For a given clock cycle, the current data point is defined as the output of the accumulator register via line 66 and the next data point is defined as the output of adder 58 via line 68.
Phase accumulator 52 can store a single phase data point corresponding to the time-axis of a triangular wave. In general, the phase data determines which of the four ratios on input lines 72, 74, 76, 78 is selected as the increment value on line 70. Phase accumulator 52 is electrically coupled to adder 60 via lines 80 and 82 and to zero forcing logic 62 via line 84. During a particular clock-cycle, phase accumulator 52 receives, via line 80, the next phase data point and outputs the current phase data point via line 82. During the next clock-cycle, phase accumulator 52 outputs, via line 82, the value received during the previous clock cycle as the current phase data point. For a given clock cycle, the current phase data point is defined as the output of phase accumulator 52 via line 82 and the next phase data point is defined as the output of adder 60 via line 80. Accumulator register 50 and phase accumulator 52 can be implemented utilizing any sequential storage element such as a flip-flop, latch, RAM cell, or the like.
Multiplexer 54 outputs an increment value on line 70 based on a selection of one of four ratios provided on input lines 72, 74, 76, 78. The selection is based on control line 86, which contains the two most significant bits of the next phase data point. When the two most significant bits are “00”, multiplexer 54 selects the ratio RP as the increment value. Similarly, when the two most significant bits are “01”, multiplexer 54 selects the ratio −RP as the increment value. When the most significant bits are “10”, multiplexer 54 selects the ratio −RN as the increment value. Finally, when the most significant bits are “11”, multiplexer 54 selects the ratio RN as the increment value. The increment value is placed on the output of multiplexer 54, which is transmitted to adder 58 via line 70. Multiplexer 54 can be implemented using any digital selection scheme such as logic gates, an FPGA, RAM, or the like. Although multiplexer 54 shown here has four ratio values and the selection is based on the two most significant bits of the next phase data point, it should be recognized that other examples of the techniques described herein may utilize a multiplexer 54 having more or fewer inputs and more or fewer selection bits. In addition, other examples of the techniques described herein may utilize a multiplexer that selects an increment value based on the current phase data point, which is the output of phase accumulator 52 via line 82.
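The two-bit selection performed by multiplexer 54 can be modeled in software as follows; this is a sketch of the mapping stated above (“00” to RP, “01” to −RP, “10” to −RN, “11” to RN), not the hardware itself, and the bit width is an illustrative parameter:

```python
def select_increment(phase, bits, rp, rn):
    """Model of multiplexer 54: choose the increment value from the
    two most significant bits of a phase value of the given width."""
    msbs = (phase >> (bits - 2)) & 0b11
    # quadrant: 00 -> +RP, 01 -> -RP, 10 -> -RN, 11 -> +RN
    return {0b00: rp, 0b01: -rp, 0b10: -rn, 0b11: rn}[msbs]
```

With an 8-bit phase, for example, phase values 0, 64, 128 and 192 fall in the four successive quadrants and select +RP, −RP, −RN and +RN respectively.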
Adder 58 can generate the sum of the current data point and the increment value. Adder 58 receives the current data point from accumulator register 50 via line 66 and receives the increment value from multiplexer 54 via line 70. The sum is defined as the next data point and is placed as an output on line 68.
Adder 60 can generate the sum of the current phase data point and the phase increment. Adder 60 receives the current phase data point from phase accumulator 52 via line 82 and receives the phase increment via line 90. The sum is defined as the next phase data point and is placed as an output on line 80. Both adders 58, 60 can be implemented using any conventional digital adding circuitry as is known in the art.
In accordance with an example of this disclosure, the ratios RP and RN and the phase increment can be calculated according to the following formulas:
RP = round(4*GP*Ft)   (1)

RN = round(4*GN*Ft)   (2)

Phase Increment = round(2^B*Ft)   (3)
The positive gain GP is the value of the positive peak, or the highest point, of the triangular wave. The negative gain GN is the value of the negative peak, or the lowest point, of the triangular wave. The phase increment is calculated so that the values in phase accumulator 52 will traverse the entire range of phase accumulator 52 in one period of the desired triangular wave. The value B represents the number of bits in phase accumulator 52. The normalized desired frequency Ft is the desired frequency divided by the clocking rate of the hardware.
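As a numeric illustration of equations (1)-(3), the ratios and phase increment can be computed in software; the gain values, clock rate, and accumulator width below are illustrative assumptions, not values fixed by this disclosure:

```python
def lfo_parameters(gp, gn, f_desired_hz, clock_hz, bits):
    """Compute RP, RN, and the phase increment per equations (1)-(3).

    Ft is the normalized desired frequency: the desired frequency
    divided by the hardware clocking rate. bits is the width B of
    the phase accumulator.
    """
    ft = f_desired_hz / clock_hz
    rp = round(4 * gp * ft)                    # equation (1)
    rn = round(4 * gn * ft)                    # equation (2)
    phase_increment = round((2 ** bits) * ft)  # equation (3)
    return rp, rn, phase_increment

# Illustrative example: gains of 2^15, 48 kHz clock,
# 20-bit phase accumulator, 100 Hz triangular wave.
rp, rn, phase_inc = lfo_parameters(32768, 32768, 100, 48000, 20)
```

For these assumed values, Ft is 100/48000 (one part in 480), so the rounded ratios and phase increment are small integers suitable for fixed-point hardware.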
In the exemplary audio hardware unit 20 shown in
One advantage of the techniques described herein is that the positive and negative ratios contain information regarding both the desired gain and frequency of the resulting triangular wave. These ratios allow triangular wave generator 44 to calculate successive data points by adding successive increment values without the need of utilizing a multiplier during each clock cycle to correct for the gain. Because hardware multipliers can take up valuable chip area and commonly require large amounts of processing time, the elimination of the need for a multiplier can reduce the complexity of the hardware and allow for more efficient operation of a triangular wave generator.
Zero forcing logic block 62 detects a phase accumulator roll over condition and resets accumulator register 50 to a zero value to start a new period of the triangular wave. A roll over condition is detected when the most significant bit of the next phase data point is a logic zero and the most significant bit of the current phase data point is a logic one. A roll over condition in phase accumulator 52 can happen simultaneously with a negative to positive transition in accumulator register 50. Thus, it should be noted that in other examples, zero forcing logic block 62 can be implemented by detecting when the current data point is negative and the next data point is positive.
Multiplexer 56 can force the next data point received by the accumulator register 50 to be a zero value based on the output of zero forcing logic block 62. Multiplexer 56 may select between the next data point from line 68 and the zero value on line 92. The selection may be based on the output of zero forcing logic block 62, which is transmitted to the multiplexer via line 94. During normal operation with no roll over conditions, control line 94 remains inactive and multiplexer 56 places the next data point on line 64 as an output of triangular wave generator 44 and as an input to accumulator register 50. When a roll over condition occurs, zero forcing logic block 62 activates control line 94 and multiplexer 56 places a zero value on line 64. It should be noted that in other embodiments of the present invention, the output of triangular wave generator 44 may also be connected to the output of accumulator register 50 via line 66.
Another advantage of the techniques described herein is that zero-forcing logic 62 prevents positive and negative biases from occurring in subsequent periods of the triangular wave due to the finite precision of phase accumulator 52 and the phase increment. For example, consider a case where the number of data points in a single period of the triangular waveform is not an even multiple of four. When a roll over condition occurs, the value in accumulator register 50 may not be zero. If no correction takes place, the non-zero offset can continue to accumulate over successive clock cycles with the potential to create positive or negative biases in successive triangular waves. The introduction of a positive or negative bias may also have the potential to cause a deviation in the frequency of the triangular wave over several periods. By utilizing zero-forcing logic 62, the bias and frequency problems associated with a roll over offset value can be removed because zero-forcing logic 62 forces the next data point to be a zero whenever a roll over condition occurs.
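A back-of-the-envelope check shows how such a bias arises. The numbers below are hypothetical and illustrative (they are not taken from the disclosure): suppose the phase increment causes the four regions of one period to receive 17, 16, 16 and 16 samples rather than an even split.

```python
# Hypothetical, illustrative values: equal positive and negative ratios,
# with one region receiving one extra sample per period.
rp = rn = 0.0625
residue_per_period = 17 * rp - 16 * rp - 16 * rn + 16 * rn
print(residue_per_period)       # 0.0625 of positive bias per period
print(10 * residue_per_period)  # 0.625 after ten periods without zero forcing
```

Without zero forcing the 0.0625 offset would carry into the next period and grow linearly; forcing the accumulator to zero at each roll over discards it.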
In step 102, waveform parameters are received by triangular wave generator 44. The waveform parameters may be transmitted to triangular wave generator 44 by a processing element 34 in the audio hardware unit 20. The waveform parameters may contain information concerning the desired positive gain, the desired negative gain, and the desired frequency of the triangular wave to be generated. In other embodiments, the waveform parameters that are received by triangular wave generator 44 may contain information concerning the positive ratio, the negative ratio, and the phase increment of the desired triangular wave.
In step 104, triangular wave generator 44 calculates the positive ratio, the negative ratio, and the phase increment according to equations (1)-(3). This step is optional and is illustrated by broken lines because the positive ratio, the negative ratio, and the phase increment may already be provided by the processing element in step 102.
In step 106, triangular wave generator 44 resets accumulator register 50 and phase accumulator 52 to zero to begin generation of a triangular waveform. In step 108, triangular wave generator 44 adds the phase increment to the current value of phase accumulator 52 to generate a next phase data point. The current value of phase accumulator 52 corresponds to the current phase data point.
In step 110, triangular wave generator 44 selects an increment value from a set of ratios. The set of ratios may include the positive ratio, the negative ratio, and the additive inverses of both the positive and negative ratios. The selection of the increment value may be based on the two most significant bits of the sum generated in step 108. In other examples, the selection of the increment may be based on the two most significant bits of the current phase data point.
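The selection of step 110 can be modeled as a small lookup indexed by the two most significant bits of the next phase data point. The bit width and the mapping of regions to ratios follow the description above; the function name and parameter values are illustrative assumptions.

```python
def select_increment(next_phase: int, rp: float, rn: float, bits: int = 16) -> float:
    """Step 110: choose the increment from the set {RP, -RP, -RN, RN}
    based on the two MSBs of the next phase data point."""
    region = (next_phase >> (bits - 2)) & 0b11
    # Region "00" -> RP, "01" -> -RP, "10" -> -RN, "11" -> RN
    return (rp, -rp, -rn, rn)[region]
```

For instance, with `rp=0.1` and `rn=0.2`, a next phase of `0x1000` falls in region "00" and yields `0.1`, while `0x9000` falls in region "10" and yields `-0.2`.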
In step 112, triangular wave generator 44 adds the increment value selected in step 110 to the current value of the accumulator register 50. The current value of accumulator register 50 corresponds to the current data point.
In step 114, triangular wave generator 44 detects whether a roll over condition has occurred in phase accumulator 52. The roll over condition can be detected by examining the most significant bit of the current value of phase accumulator 52 and the most significant bit of the sum generated in step 108. If the most significant bit of the current value of phase accumulator 52 is a logic one and the most significant bit of the sum generated in step 108 is a logic zero, a roll over condition has occurred and the triangular wave generator proceeds to step 118. If any other combination occurs, a roll over condition has not occurred and the triangular wave generator proceeds to step 116.
In other examples, the roll over condition can be detected by examining the output of the current value of accumulator register 50 and the sum generated in step 112. If the current value of accumulator register 50 is negative and the sum generated in step 112 is positive, a roll over condition has occurred and triangular wave generator 44 proceeds to step 118. If any other combination occurs, a roll over condition has not occurred and triangular wave generator 44 proceeds to step 116.
In step 116, triangular wave generator 44 stores the sum generated in step 112 in accumulator register 50. This step causes the next data point of the current clock cycle to become the current data point in the next clock cycle. In step 118, triangular wave generator 44 forces accumulator register 50 to zero. This step causes the current data point to be zero in the next clock cycle.
In step 120, triangular wave generator 44 stores the sum generated in step 108 in phase accumulator 52. This step causes the next phase data point in the current clock cycle to become the current phase data point in the next clock cycle.
After step 120, triangular wave generator 44 loops back to step 108. Triangular wave generator 44 can loop as many times as necessary to iteratively generate a set of data points that form a triangular wave.
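The loop of steps 106 through 120 can be modeled in software as follows. This is a behavioral sketch only: the 16-bit accumulator width, the concrete phase increment, and the ratio values in the example are illustrative assumptions, not values taken from the disclosure.

```python
def generate_triangle(n_samples, phase_inc, rp, rn, bits=16):
    """Software model of the multiplier-free loop of steps 106-120."""
    mask = (1 << bits) - 1
    msb = bits - 1
    phase = 0   # phase accumulator 52, reset in step 106
    acc = 0.0   # accumulator register 50, reset in step 106
    samples = []
    for _ in range(n_samples):
        next_phase = (phase + phase_inc) & mask       # step 108
        region = next_phase >> (bits - 2)             # step 110
        inc = (rp, -rp, -rn, rn)[region]
        next_acc = acc + inc                          # step 112
        # Step 114: roll over when the phase MSB falls from one to zero.
        if (phase >> msb) == 1 and (next_phase >> msb) == 0:
            acc = 0.0                                 # step 118
        else:
            acc = next_acc                            # step 116
        phase = next_phase                            # step 120
        samples.append(acc)
    return samples

# Example: 64 samples per period, positive and negative gains near 1.0.
wave = generate_triangle(64, phase_inc=1024, rp=1 / 16, rn=1 / 16)
```

With these illustrative numbers the output rises to a value at or near the positive gain, falls through zero to a value at or near the negative gain, and returns toward zero; the final sample of the period is forced to exactly zero by the roll over correction, discarding the small residual offset discussed above.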
The current data point, which is stored in accumulator register 50, corresponds to the output value associated with axis 148. As time axis 146 is traversed, different ratios are selected as the increment value based on the region of operation 152, 154, 156, 158 of triangular wave generator 44. For example, in region 152 the two most significant bits of the next phase data point are “00” and the ratio added to the accumulator register 50 is RP. This causes the output data points of the triangular wave generator 44 to increase to a value at or near the desired positive gain. Similarly, in region 154, the two most significant bits of the next phase data point are “01” and the additive inverse of the ratio RP is added to the accumulator register 50. This causes the output data points to decrease to a value at or near zero. In region 156, the two most significant bits of the next phase data point are “10” and the additive inverse of the ratio RN is added to the accumulator register 50. This causes the output data points to decrease to a value at or near the additive inverse of RN. Finally, in region 158, the two most significant bits of the next phase data point are “11” and the ratio RN is added to accumulator register 50. This causes the output data points to increase to a value at or near zero. After the wave has traversed region 158, a roll over condition is detected and the accumulator register 50 is forced to zero 160 to start a new period of the triangular wave.
Various examples have been described in this disclosure. For example, a triangular wave generator that does not require the use of a multiplier has been disclosed. The elimination of the need for a multiplier can reduce the complexity of the hardware and allow for more efficient operation of a triangular wave generator. In addition, a triangular wave generator that corrects for any offset during a roll over condition has also been described. Correcting any offset that occurs during a roll over condition can alleviate many of the bias problems associated with a roll over offset and also allow for better control over the frequency of the resulting triangular waves. Nevertheless, various modifications can be made to the techniques described above. For example, other types of devices could also implement the triangular wave generation techniques described herein. Also, other approaches could be utilized for detecting and correcting roll over offset values, such as examining the output of the accumulator register or forcing the accumulator to zero on the negative slope of the triangular wave rather than the positive slope.
The techniques described herein may be implemented in hardware, software, firmware, or any combination thereof. Any features described as modules or components may be implemented together in an integrated logic device or separately as discrete but interoperable logic devices. If implemented in software, the techniques may be realized at least in part by a computer-readable medium comprising instructions that, when executed, perform one or more of the methods described above. The computer-readable data storage medium may form part of a computer program product, which may include packaging materials. The computer-readable medium may comprise random access memory (RAM) such as synchronous dynamic random access memory (SDRAM), read-only memory (ROM), non-volatile random access memory (NVRAM), electrically erasable programmable read-only memory (EEPROM), FLASH memory, magnetic or optical data storage media, and the like. The techniques additionally, or alternatively, may be realized at least in part by a computer-readable communication medium that carries or communicates code in the form of instructions or data structures and that can be accessed, read, and/or executed by a computer.
The code may be executed by one or more processors, such as one or more digital signal processors (DSPs), general purpose microprocessors, application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), or other equivalent integrated or discrete logic circuitry. Accordingly, the term "processor," as used herein, may refer to any of the foregoing structure or any other structure suitable for implementation of the techniques described herein. In addition, in some aspects, the functionality described herein may be provided within dedicated software modules or hardware modules configured or adapted to perform the techniques of this disclosure.
If implemented in hardware, this disclosure may be directed to a circuit, such as an integrated circuit, chipset, ASIC, FPGA, logic, or various combinations thereof configured or adapted to perform one or more of the techniques described herein.
It should also be noted that a person having ordinary skill in the art will recognize that a circuit may implement some or all of the functions described above. There may be one circuit that implements all the functions, or there may also be multiple sections of a circuit that implement the functions. With current mobile platform technologies, an integrated circuit may comprise at least one DSP, and at least one Advanced Reduced Instruction Set Computer (RISC) Machine (ARM) processor to control and/or communicate with the DSP or DSPs. Furthermore, a circuit may be designed or implemented in several sections, and in some cases, sections may be re-used to perform the different functions described in this disclosure.
These and other embodiments are within the scope of the following claims.
The present Application for Patent claims priority to Provisional Application No. 60/896,463 entitled “METHOD AND DEVICE FOR GENERATING TRIANGULAR WAVES” filed Mar. 22, 2007, and assigned to the assignee hereof and hereby expressly incorporated by reference herein.