None.
This disclosure relates generally to speech enhancement and more specifically to converting a speech to a singing voice in, for example, real-time applications.
Many interactions occur online over different communication channels and via many media types. An example of such interactions is real-time communication (RTC) using video conferencing, streaming, or a simple telephone voice call (e.g., Voice over Internet Protocol). The video can include audio (e.g., speech, voice) and visual content. One user (i.e., a sending user) may transmit media (e.g., the video) to one or more receiving users. For example, a concert may be live-streamed to many viewers. For example, a teacher may live-stream a classroom session to students. For example, a few users may hold a live chat session that may include live video.
In real-time communications, some users may wish to add filters, masks, and other visual effects to add an element of fun to the communications. To illustrate, a user can select a sunglasses filter, which the communications application digitally adds to the user's face. Similarly, users may wish to modify their voice. More specifically, a user may wish to modify his/her voice to be a singing voice according to some reference sample.
A first aspect of the disclosed implementations is a method of converting a frame of a voice sample to a singing frame. The method includes obtaining a pitch value of the frame; obtaining formant information of the frame using the pitch value; obtaining aperiodicity information of the frame using the pitch value; obtaining a tonic pitch and chord pitches; using the formant information, the aperiodicity information, the tonic pitch, and the chord pitches to obtain the singing frame; and outputting or saving the singing frame.
A second aspect of the disclosed implementations is an apparatus for converting a frame of a voice sample to a singing frame. The apparatus includes a processor that is configured to obtain a pitch value of the frame; obtain formant information of the frame using the pitch value; obtain aperiodicity information of the frame using the pitch value; obtain a tonic pitch and a chord pitch; use the formant information, the aperiodicity information, the tonic pitch, and the chord pitch to obtain the singing frame; and output or save the singing frame.
A third aspect of the disclosed implementations is a non-transitory computer-readable storage medium that includes executable instructions that, when executed by a processor, facilitate performance of operations for converting a frame of a voice sample to a singing frame. The operations include obtaining a pitch value of the frame; obtaining formant information of the frame using the pitch value; obtaining aperiodicity information of the frame using the pitch value; obtaining a tonic pitch and chord pitches; using the formant information, the aperiodicity information, the tonic pitch, and the chord pitches to obtain the singing frame; and outputting or saving the singing frame.
It will be appreciated that aspects can be implemented in any convenient form. For example, aspects may be implemented by appropriate computer programs which may be carried on appropriate carrier media which may be tangible carrier media (e.g. disks) or intangible carrier media (e.g. communications signals). Aspects may also be implemented using suitable apparatus which may take the form of programmable computers running computer programs arranged to implement the methods and/or techniques disclosed herein. Aspects can be combined such that features described in the context of one aspect may be implemented in another aspect.
The description herein makes reference to the accompanying drawings wherein like reference numerals refer to like parts throughout the several views.
As mentioned above, a user may wish to have his/her voice (i.e., speech) converted to a singing voice according to a reference sample. That is, while the user is speaking in his/her regular voice (i.e., a source voice sample), a remote recipient of the user's voice may hear the user's speech being sung according to the reference sample. That is, the pitch of the speaker is modified (e.g., tuned, etc.) to follow the melody of the reference sample, which may be a song, a tune, a musical composition, or the like.
While traditional pitch tuning techniques, such as the phase vocoder or Pitch Synchronous Overlap and Add (PSOLA), can modify the pitch of speech, such techniques may also change the voice formant because the energy distribution of the whole frequency band may be expanded or squeezed evenly. As a result, the output (e.g., result) of such techniques is speech (e.g., voice) that does not resemble that of the speaker, may sound like that of another person, or may sound unnatural (e.g., robotic, etc.). That is, the traditional techniques tend to lose the identity of the original speaker.
When converting a voice sample to a singing voice according to a reference, preservation of the identity of the speaker is desirable. The identity of the speaker (e.g., the uniqueness of the speaker's voice) can be embedded (e.g., encoded, etc.) in the formant information. A formant is a concentration of acoustic energy around a particular frequency in a speech wave. A formant denotes resonance characteristics of the vocal tract when a vowel is uttered. Each cavity within the vocal tract can resonate at a corresponding frequency. These resonance characteristics can be used to identify the voice quality of an individual.
With respect to the reference sample, the tonic pitch trajectory and the chords of the reference sample are to be applied to the voice sample. Tonic pitch refers to the beginning and ending note of the scale used to compose a piece of music. A tonic note can be defined as the first scale degree of a diatonic scale, a tonal center, and/or a final resolution tone. For example, referring to a reference sample (e.g., a musical composition) as being “in the key of” C major implies that the reference sample is harmonically centered on the note C and making use of a major scale whose first note, or tonic, is C. The main pitch in the reference sample can be defined as the tone which occurs with the greatest amplitude. The tonic pitch trajectory refers to the sequence of tonic pitches in the reference sample. A chord is defined as a sequence of notes separated by intervals. A chord can be a set of notes that are played together.
Traditional techniques for singing voice generation may generate multiple tracks for chords based on the tonic track and mix the chord tracks with the tonic track to generate the singing signal. Such techniques result in increased computational cost, a downside of which is the impracticality of implementation on portable devices, such as mobile phones.
Implementations according to this disclosure can be used to convert a voice sample (e.g. speech sample) to a singing voice based on a reference sample. The speech-to-singing techniques described herein can modify the pitch trajectory of an original voice according to the pitch reference of a given melody without changing the identity of the speaker. The conversion can be performed in real time. The conversion can be performed according to a static reference sample or a dynamic reference sample. In the case of the static reference sample, preset trajectories for tonic and chords pitches can be looped over time. In the case of a dynamic reference sample (i.e., dynamic mode), tonic and chords pitch signals can be received (e.g., calculated, extracted, analyzed, etc.) in real time from an input device (or virtual device) such as a keyboard or touch screen. For example, a musical instrument may be playing in the background as the user is speaking and the voice of the user can be modified according to the tonic and chords of the played music.
The apparatus 100 can receive the audio sample (e.g., speech) of a sending user. For example, the audio sample may be spoken by the sending user, such as during an audio or a video teleconference with one or more receiving users. In an example, the sending device of the sending user can convert the voice of the sending user to a singing voice and then transmit the singing voice to the receiving user. In another example, the voice of the sending user can be transmitted as is to the receiving user, and the receiving device of the receiving user can convert the received voice to a singing voice prior to outputting the singing voice to the receiving user, such as via a speaker of the receiving device. The singing voice output can be output to a storage medium, such as to be played later.
The apparatus 100 receives the source voice in frames, such as a source audio frame 108. In another example, the apparatus 100 itself can partition a received audio signal into the frames, including the source audio frame 108. The apparatus 100 processes the source voice frame by frame. A frame can correspond to an m number of milliseconds of audio. In an example, m can be 20 milliseconds. However, other values of m are possible. The apparatus 100 outputs (e.g., generates, obtains, results in, calculates, etc.) a singing audio frame 112. The source audio frame 108 is the original speech of the sending user and the singing audio frame 112 is the singing audio frame according to a reference signal 110.
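The frame-by-frame partitioning described above can be sketched as follows. The 48 kHz sampling rate and the choice to drop a trailing partial frame are illustrative assumptions, not requirements of the disclosure:

```python
# Sketch: partitioning a mono audio signal into fixed-size frames of
# m milliseconds each (m = 20 here, matching the example in the text).
def partition_into_frames(samples, sample_rate=48000, frame_ms=20):
    frame_size = sample_rate * frame_ms // 1000  # samples per frame
    # Drop any trailing partial frame for simplicity.
    n_frames = len(samples) // frame_size
    return [samples[i * frame_size:(i + 1) * frame_size]
            for i in range(n_frames)]

frames = partition_into_frames([0.0] * 96000)  # 2 s of silence
# 2 s / 20 ms per frame -> 100 frames of 960 samples each
```

Each element of `frames` would then be processed in turn as a source audio frame.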
The apparatus 100 includes a feature extraction module 102, a singing feature generation module 104, and a singing synthesis module 106. The feature extraction module 102 can estimate the pitch and formant information of each received audio frame (i.e., the source audio frame 108). As used in this disclosure, "estimate" can mean calculate, obtain, identify, select, construct, derive, form, produce, or otherwise estimate in any manner whatsoever. The singing feature generation module 104 can provide the tonic pitch and the chord pitches from the reference signal 110 to be applied to each frame (i.e., the source audio frame 108). The singing synthesis module 106 uses the information provided by the feature extraction module 102 and the singing feature generation module 104 to generate the singing signals (i.e., the singing audio frame 112) frame by frame.
To summarize, and by way of illustration, when a speaker is speaking, the features of the real-time speech signal are extracted by the feature extraction module 102; meanwhile, singing information, such as the tonic and chord pitches, is generated by the singing feature generation module 104; and the singing synthesis module 106 generates the singing signals based on both the speech and singing features.
The feature extraction module 102, the singing feature generation module 104, and the singing synthesis module 106 are further described below with respect to
Each of the modules of the apparatus 100 can be implemented, for example, as one or more software programs that may be executed by computing devices, such as a computing device 600 of
For each source audio frame 108 of a speech signal, the feature extraction module 102 can calculate a pitch value (F0), such as via the autocorrelation block 204. The pitch value can be used to determine window lengths of Fast Fourier Transforms (FFTs) 206 used by the formant extraction block 210 and the aperiodicity estimation block 208. The pitch value can also be used to determine the audio signal lengths needed to perform the FFTs. As further described below, the lengths can be 2*T0 and 3*T0 for aperiodicity estimation and formant extraction, respectively, where T0 depends on the pitch F0 (e.g., T0=1/F0). In an example, the feature extraction module 102 can search for the pitch value (F0) within a pitch search range. In an example, the pitch search range can be 75 Hz to 800 Hz, which covers the normal range of human pitch. The pitch value (F0) can be found by the autocorrelation block 204, which performs the autocorrelation on portions of the signal stored in a signal buffer 202. The length of the signal buffer 202 can be at least 40 ms, which can be determined by the lowest pitch (75 Hz) of the pitch detection range. The signal buffer 202 can include sampled data of at least 2 frames of the source audio signal. The signal buffer 202 can be used to store audio frames for a certain total length (e.g., 40 ms).
The feature extraction module 102, via a concatenation block 212, can provide the formant (i.e., the spectrum envelope) and aperiodicity information to the singing synthesis module 106, as shown in
At 222, the technique 220 calculates an autocorrelation of signals in the signal buffer. Autocorrelation can be used to identify patterns in data (such as time series data). An autocorrelation function can be used to identify correlations between pairs of values at a certain lag. For example, a lag-1 autocorrelation can measure the correlation between immediate neighboring data points; and a lag-2 autocorrelation can measure the correlation between pairs of values that are 2 periods (i.e., 2 time distances) apart. The autocorrelation can be calculated using formula (1):
rn = r(nΔτ)   (1)
In formula (1), r( ) is the autocorrelation function used to calculate the autocorrelation at different time delays (e.g., nΔτ); Δτ is the sampling interval. For example, given a sampling frequency fs of the source audio frame 108 of 10 kHz, Δτ would be 0.1 milliseconds (ms); and n can be in the range of [12, 134], which corresponds to the pitch search range.
At 224, the technique 220 finds (e.g., calculates, determines, obtains, etc.) the local maxima in the autocorrelation. In an example, a local maximum in the autocorrelation can be found between each (m−1)Δτ and (m+1)Δτ, where m has the same range as n. That is, from among all of the calculated rn's, the local maxima are determined. Each local maximum rm is such that:
rm > rm+1 and rm > rm−1   (2)
At 226, for each local maximum rm, a corresponding time position within the frame of a local maximum (τmax), and an interpolated value of the autocorrelation local maximum (rmax) are calculated using formulae (3) and (4), respectively. τmax can be the delay with a maximum autocorrelation (rmax). However, other ways of finding τmax and rmax are possible.
At 228, the technique 220 sets (e.g., calculates, selects, identifies, etc.) the pitch value (F0). In an example, if there exists a local maximum with rmax>0.5, then the pitch value can be calculated using the τmax with the largest rmax using formula (5), and a flag Pitch_flag is set to true; otherwise (i.e., if there is no local maximum with rmax>0.5), F0 can be set to a predefined value and the Pitch_flag is set to false. The predefined value can be a value in the pitch detection range, such as the middle of the range. In another example, the predefined value can be 75, which is the lowest pitch of the pitch detection range.
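The pitch-detection steps above (autocorrelation, local-maxima search, and the 0.5 threshold with a fallback value) can be sketched as follows. The normalization by r(0), the synthetic test signal, and the silent-frame fallback are illustrative choices, and the parabolic interpolation of formulae (3)-(4) is omitted in favor of using the raw lag of the best local maximum:

```python
import numpy as np

def detect_pitch(frame, fs, f_min=75.0, f_max=800.0):
    frame = frame - np.mean(frame)
    # Autocorrelation r(n) for all lags n >= 0.
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    if r[0] <= 0:
        return f_min, False                # silent frame: fall back
    r = r / r[0]                           # normalize so r[0] == 1
    lag_min = int(fs / f_max)              # shortest period of interest
    lag_max = int(fs / f_min)              # longest period of interest
    best_lag, best_r = None, 0.5           # 0.5 threshold from the text
    for m in range(lag_min + 1, min(lag_max, len(r) - 1)):
        # Local maximum test of formula (2): r[m] > r[m+1] and r[m] > r[m-1].
        if r[m] > r[m - 1] and r[m] > r[m + 1] and r[m] > best_r:
            best_lag, best_r = m, r[m]
    if best_lag is None:
        return f_min, False                # Pitch_flag = false
    return fs / best_lag, True             # F0 and Pitch_flag = true

fs = 10000
t = np.arange(int(0.04 * fs)) / fs         # a 40 ms buffer
f0, voiced = detect_pitch(np.sin(2 * np.pi * 200.0 * t), fs)
```

For a clean 200 Hz tone at a 10 kHz sampling rate, the best local maximum falls at a lag of 50 samples, so the detected pitch is 200 Hz with the voiced flag set.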
At 242, the technique 240 calculates the group delay. The group delay represents (e.g., describes, etc.) how the spectral envelope changes at (e.g., within) different time points. As such, the group delay of the source audio frame 108 can be calculated as follows.
For each frame, use the signal s(t) of length (2*T0) to calculate the group delay, τD, where T0=1/F0. The group delay is defined through the equation (6):
In equation (6), ℜ and ℑ represent, respectively, the real and imaginary parts of a complex value; S(ω) represents the spectrum of the signal s(t); and S′(ω) is a weighted spectrum calculated using formula (7), where ℱ represents the Fourier transform:
S′(ω) = ℱ[−jts(t)]   (7)
At 244, the technique 240 calculates the aperiodicity for each sub frequency band using the group delay. The whole vocal frequency range (i.e., [0-15] kHz) can be separated into a predefined number of frequency bands. In an example, the predefined number of frequency bands can be 5. However other numbers are possible. Thus, in an example, the frequency bands can be the sub-bands [0-3 kHz], [3 kHz-6 kHz], [6 kHz-9 kHz], [9 kHz-12 kHz], and [12 kHz-15 kHz]. However other partitions of the vocal frequency range are possible. Aperiodicities ap(ωci) of the sub frequency bands can be calculated using equations 8-10.
In the equations 8-10, ωci=2πfci, where fci is the center frequency of the ith sub frequency band; w(ω) is a window function; wl is the window length (which can be equal to 2 times the sub frequency bandwidth); and ℱ−1 is the inverse Fourier transform. Thus, the waveform p(t, ωci) can be calculated using the inverse Fourier transform. With respect to equation (9), ps(t, ωci) represents a parameter calculated by sorting the power waveform |p(t, ωci)|2 in descending order along the time axis. In equation (10), wbw represents the main-lobe bandwidth of the window function w(ω), which has the dimension of time. Since the main-lobe bandwidth can be defined as the shortest frequency range from 0 Hz to the frequency at which the amplitude reaches 0, 2wbw can be used.
In an example, a window function with a low side lobe can be used to prevent data from being aliased (or copied) in the frequency domain. For example, a Nuttall window can be used as this window function has a low side lobe. In another example, a Blackman window can be used.
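The side-lobe behavior that motivates choosing a Nuttall window over, say, a Blackman window can be illustrated numerically. The 4-term cosine coefficients below are the standard Nuttall values, and the FFT-based side-lobe measurement is a rough sketch rather than a rigorous analysis:

```python
import numpy as np

def nuttall(n):
    # 4-term Nuttall window coefficients.
    a = (0.355768, 0.487396, 0.144232, 0.012604)
    k = np.arange(n)
    return (a[0] - a[1] * np.cos(2 * np.pi * k / (n - 1))
            + a[2] * np.cos(4 * np.pi * k / (n - 1))
            - a[3] * np.cos(6 * np.pi * k / (n - 1)))

def highest_sidelobe_db(w, pad=64):
    # Zero-padded spectrum, normalized to the DC (main-lobe) peak.
    spec = np.abs(np.fft.rfft(w, len(w) * pad))
    spec = spec / spec[0]
    # Walk past the main lobe: stop at the first local minimum.
    i = 1
    while i < len(spec) - 1 and spec[i + 1] < spec[i]:
        i += 1
    return 20 * np.log10(spec[i:].max())

n = 1024
# The Nuttall window's worst side lobe sits tens of dB below the
# Blackman window's (roughly -58 dB), reducing spectral leakage.
```

The comparison confirms why a low side-lobe window is preferred here: less energy leaks from one frequency band into its neighbors.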
At 262, the technique 260 calculates the power cepstrum from the windowed signal. The cepstrum of a signal is the inverse Fourier transform of the logarithm of the Fourier transform of the signal. The length of the window can be 3*T0, where T0=1/F0, as described above. As the cepstrum is obtained using an inverse Fourier transform, the cepstrum is in the time domain. The power cepstrum can be calculated using formula (11) using a Hamming window w(t):
ps(t) = ℱ−1[log(|ℱ{s(t)·w(t)}|2)]   (11)
At 264, the technique 260 calculates the smoothed spectrum (i.e., the formant) from the cepstrum using equation (12):
The constants 1.18 and 0.18 are empirically derived to obtain a smooth formant. However, other values are possible.
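The power-cepstrum computation of formula (11), followed by a smoothing step to recover the formant, can be sketched as follows. The low-quefrency "liftering" used here is a generic stand-in with an assumed cutoff of half a pitch period; the disclosed equation (12), with its empirically derived 1.18 and 0.18 constants, is a refinement of this idea:

```python
import numpy as np

def smoothed_spectrum(s, f0, fs, fft_size=2048):
    t0 = int(fs / f0)                       # samples per pitch period
    # Formula (11): Hamming-windowed 3*T0 segment, log power, inverse FFT.
    seg = s[:3 * t0] * np.hamming(3 * t0)
    power = np.abs(np.fft.fft(seg, fft_size)) ** 2
    cepstrum = np.fft.ifft(np.log(power + 1e-12)).real
    # Keep only low quefrencies (below one pitch period): these capture
    # the slowly varying envelope (formant), not the pitch harmonics.
    lifter = np.zeros(fft_size)
    cutoff = t0 // 2
    lifter[:cutoff] = 1.0
    lifter[-cutoff + 1:] = 1.0              # symmetric counterpart
    envelope = np.fft.fft(cepstrum * lifter).real
    return np.exp(envelope)                 # back to linear power

fs, f0 = 10000, 200.0
t = np.arange(1000) / fs
env = smoothed_spectrum(np.sin(2 * np.pi * f0 * t), f0, fs)
```

The returned envelope varies smoothly with frequency because the fast-varying harmonic structure lives at high quefrencies, which the lifter discards.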
Turning now to the singing feature generation module 104 of
In an example, the reference 302 can be a Musical Instrument Digital Interface (MIDI) file. A MIDI file can contain the details of a recording of a performance (such as on a piano). The MIDI file can be thought of as containing a copy of the performance. For example, a MIDI file would include the notes played, the order of the notes, the length of each played note, whether (in the case of piano) a pedal is pressed, and so on.
In an example, the reference 302 can be a pitch trajectory file.
In the static mode, the singing feature generation module 104 (e.g., a tonic pitch loop block 304 therein) repetitively provides the tonic pitch at each frame according to a preset pitch trajectory as described (e.g., configured, recorded, set, etc.) in the reference 302. When all the pitches of the reference 302 are exhausted, the tonic pitch loop block 304 restarts with the first frame of the reference 302. In an example, the reference 302 (e.g., a MIDI file) can also include chord pitches. As such, a chord pitch generation block 306 can also use the reference 302 to obtain the chord pitches (e.g., one or more chord pitches) per frame. In another example, the chord pitch generation block 306 can obtain (e.g., derive, calculate, etc.) the chord pitches using a chord rule, such as triad, perfect fifth, or some other rule. An example of chord pitches using the perfect fifth rule is shown in
For each frame of the source audio frame 108, a concatenation block 308 concatenates the tonic pitch and the chords pitches to provide to the singing synthesis module 106 of
It is noted that the normal human tonic pitch is distributed from 55 Hz to 880 Hz. Thus, in an example, and to achieve a natural singing voice, the tonic and chord pitches can be assigned within the range of the normal human tonic pitch. That is, the tonic and/or chord pitches can be clamped to within the range [55, 880]. For example, if the pitch is less than 55 Hz, then it can be set (e.g., clipped) to 55 Hz; and if it is greater than 880, then it can be set (e.g., clipped) to 880. In another example, as clipping may produce unharmonic sounds, a pitch that is outside of the range is not produced.
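The static-mode tonic loop, a chord pitch derived by the perfect fifth rule, and the [55, 880] Hz clamping described above can be sketched together. The preset trajectory values are made-up illustrations, not data from any actual reference file:

```python
PITCH_MIN, PITCH_MAX = 55.0, 880.0   # normal human tonic pitch range (Hz)

def clamp_pitch(f):
    # Clip out-of-range pitches to the nearest bound.
    return min(max(f, PITCH_MIN), PITCH_MAX)

def tonic_for_frame(trajectory, k):
    # Loop over the preset trajectory when its pitches are exhausted.
    return trajectory[k % len(trajectory)]

def perfect_fifth(tonic):
    # A perfect fifth is a 3:2 frequency ratio above the tonic.
    return clamp_pitch(tonic * 1.5)

trajectory = [220.0, 246.9, 261.6, 293.7]  # illustrative preset pitches (Hz)
tonic = tonic_for_frame(trajectory, 5)     # frame 5 wraps to index 1
chord = perfect_fifth(tonic)
```

An alternative to clamping, as noted above, is simply not to produce a pitch that falls outside the range, since clipping may produce unharmonic sounds.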
The technique 400 generates two kinds of sounds: a periodic sound, which can be generated by a pulse signal block (i.e., a block 416), and a noise sound, which can be generated by a noise signal block (i.e., a block 418). A pulse signal is a rapid, transient change in the amplitude of a signal followed by a return to a baseline value. A clap sound injected into, or present within, a signal is an example of a pulse signal.
At block 416, pulse signals Spulsei are prepared and, at block 418, white noise signals Snoisei are prepared (e.g., calculated, derived, etc.) for at least some (e.g., each) of the frequency sub-bands (e.g., the five sub-bands described above). As such, a respective pulse signal and noise signal can be obtained for at least some (e.g., each) of the frequency sub-bands.
The pulse signals can be used by a block 414 to generate a period response (i.e., a periodic sound).
The pulse signals Spulsei can be obtained using any known technique. In an example, the pulse signals Spulsei can be calculated using equations (13)-(14).
In equation (13), which obtains the frequency domain pulse signals for each sub-band, the index i represents the sub frequency bands and the index j represents the frequency bins. The parameters a, b, and c can be constants that are empirically derived. In an example, the constants a, b, and c can have the values 0.5, 3000, and 1500, respectively, which result in pulse signals that approximate the human voice. f(j) is the frequency of the jth frequency bin of the pulse signal spectrum; the range of f(j) can be the full frequency band (e.g., 0-24 kHz). To illustrate, if the ith frequency band is 150-440 Hz, then SpecPulsei(j) would have some value when f(j) is within 150-440 Hz, and would be equal to 0 if f(j) is not in the range. Equation (14) obtains the time domain pulse signals for each frequency sub-band by performing an inverse Fourier transform. Thus, for each frequency bin of a frequency sub-band, a respective pulse spectrum is obtained; and these pulse spectra are combined into a time domain pulse signal.
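The band-limited pulse construction can be sketched as follows: build a spectrum that is nonzero only inside one sub-band, then inverse Fourier transform it into a time-domain pulse, in the spirit of equations (13)-(14). The flat unit-magnitude in-band spectrum is a simplification; the disclosed equation (13) shapes the spectrum with the a, b, and c constants instead:

```python
import numpy as np

def band_limited_pulse(band_lo, band_hi, fs, n=2048):
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)  # frequency f(j) of each bin j
    spec = np.zeros(len(freqs))
    # SpecPulse(j) is nonzero only where f(j) falls inside the band.
    spec[(freqs >= band_lo) & (freqs < band_hi)] = 1.0
    # Equation (14) spirit: inverse transform to a time-domain pulse.
    return np.fft.irfft(spec, n)

fs = 48000
pulse = band_limited_pulse(150.0, 440.0, fs)  # the 150-440 Hz example
```

Because all in-band bins share zero phase, the resulting waveform peaks sharply at time zero, which is the transient "pulse" character described above.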
The noise signals Snoisei can be obtained, by a block 420, using any known technique. In an example, the noise signals Snoisei can be calculated using equations (15)-(17).
The spectrum noise (i.e., white noise), Specnoise
A block 414 can calculate locations within the source audio frame 108 where pulses should be added (e.g., started, inserted, etc.). Pitch values for each sampled point of the source audio frame 108 are first obtained. For a current source voice frame (i.e., a frame k, such as the source audio frame 108), an interpolated pitch value F0int(j) can be obtained for each sampled point j (i.e., the timing index) of the frame k using the pitch value of the previous frame. That is, F0int(j) can be obtained by interpolating F0(k) and F0(k−1). The interpolation can be a linear interpolation. To illustrate, assume, for example, that F0(k)=100 and F0(k−1)=148 and that there are 480 sampled points in each frame; then the interpolated pitch values F0int(j) for the kth frame can be [147.9, 147.8, . . . , 100] for j = 1, . . . , 480.
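The linear interpolation above can be reproduced directly with the numbers from the example, F0(k−1) = 148 and F0(k) = 100, over 480 sampled points:

```python
import numpy as np

def interpolate_f0(f0_prev, f0_curr, n_samples=480):
    # Walk linearly from the previous frame's pitch toward the current
    # one; sample j = n_samples lands exactly on f0_curr.
    return np.linspace(f0_prev, f0_curr, n_samples + 1)[1:]

f0_int = interpolate_f0(148.0, 100.0)
# f0_int is [147.9, 147.8, ..., 100.0], matching the example above
```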
Given a frame size of Fsize samples and a sampling frequency of fs, each of the sampling locations can be a potential pulse location. The pulse locations in the kth frame can be obtained by first obtaining a phase shift at sampling location j using equation 18, which calculates the phase modulo (MOD) 2π. The phase can be in the range of [−π, π]. As illustrated by the pseudocode of Table I, if the phase difference between a current timing point (j) and its immediate successor timing point (j+1) is greater than π, then the current timing point is identified as a pulse location. Thus, there could be 0 or more places in just one frame, depending on the pitch, where pulses are added. When the phase difference is large (e.g., greater than π), a pulse can be added to avoid phase discontinuities.
PWkj = Mod[Σi=1j (2πF0int(i)/fs) + PWk−1j, 2π]   (18)
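The pulse-location rule above can be sketched as follows: accumulate per-sample phase increments 2πF0int(j)/fs, wrap the phase to [−π, π), and mark a pulse wherever the wrapped phase jumps by more than π between neighboring samples. The constant 100 Hz pitch track, 24 kHz rate, and 480-sample frame are illustrative:

```python
import numpy as np

def pulse_locations(f0_int, fs, phase0=0.0):
    # Equation (18) spirit: cumulative phase, wrapped to [-pi, pi).
    inc = 2 * np.pi * np.asarray(f0_int) / fs
    phase = np.mod(phase0 + np.cumsum(inc) + np.pi, 2 * np.pi) - np.pi
    # A drop of more than pi between j and j+1 marks sample j as a pulse.
    locs = [j for j in range(len(phase) - 1)
            if phase[j] - phase[j + 1] > np.pi]
    return locs, phase[-1]   # final phase carries over to the next frame

fs = 24000
locs, last_phase = pulse_locations([100.0] * 480, fs)  # 20 ms at 100 Hz
# One pulse per 10 ms pitch period -> 2 pulses in a 20 ms frame
```

Returning the final phase lets the next frame continue accumulating from where this one ended, which is the role of the previous-frame term in equation (18).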
At a block 422, an excitation signal is obtained by combining (e.g., mixing, etc.), at each pulse location, a corresponding pulse and noise signal. The amounts of pulse signal and noise signal used are based on the aperiodicity. The aperiodicity in each sub-band, ap(ωci), can be used as a percentage apportionment of pulse to noise ratio in the excitation signal. The excitation signal, Sex[PLks], where s indicates the pulse location and k indicates the current frame, can be obtained using equation (19).
Sex[PLks] = Σi=1n [(1−ap(ωci))·Spulsei + ap(ωci)·Snoisei]   (19)
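The per-band apportionment above can be sketched with toy signals: a fully periodic band (ap = 0) contributes only its pulse, and a fully aperiodic band (ap = 1) contributes only its noise. The two-band signals and aperiodicity values below are illustrative:

```python
import numpy as np

def excitation(pulses, noises, aperiodicity):
    # Mix each sub-band's pulse and noise according to its aperiodicity,
    # then sum the bands into a single excitation signal.
    ex = np.zeros_like(pulses[0])
    for s_pulse, s_noise, ap in zip(pulses, noises, aperiodicity):
        ex += (1.0 - ap) * s_pulse + ap * s_noise
    return ex

rng = np.random.default_rng(0)
pulses = [np.ones(64), np.ones(64)]                 # toy per-band pulses
noises = [rng.standard_normal(64), rng.standard_normal(64)]
ex = excitation(pulses, noises, aperiodicity=[0.0, 1.0])
```

Here `ex` is the first band's pulse plus the second band's noise, matching the apportionment rule.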
The excitation signal can be used by a block 424 (i.e., a wave-generating block) to obtain the singing audio frame 112. The excitation signal and the cepstrum, which is calculated as described above, are combined using equations (20)-(22) to generate the resultant wave signal, Swav, which is the singing audio frame 112.
Equation (20) obtains the Fourier transform of the smoothed spectrum (i.e., the formant), which is calculated by the feature extraction module 102 as described above. In equation (21), fftsize is the size of the fast Fourier transform (FFT), which is the same as the FFT size used to calculate the smoothed spectrum. Equation (21) is an intermediate step used in the calculation of Swav. In an example, fftsize can be equal to 2048 to provide enough frequency resolution. In equation (22), whan is a Hanning window.
The technique 500 can be implemented by an apparatus such as the apparatus 100 of
At 502, the technique 500 obtains a pitch value of the frame. The pitch value can be obtained as described above with respect to F0. As such, obtaining the pitch value of the frame can include, as described above, calculating an autocorrelation of signals in a signal buffer; identifying local maxima in the autocorrelation; and obtaining the pitch value using the local maxima.
At 504, the technique 500 obtains formant information of the frame using the pitch value. Obtaining the formant information can be as described above. As such, obtaining the formant information of the frame using the pitch value can include obtaining a window length using the pitch value; calculating a power cepstrum of the frame using the window length; and obtaining the formant from the cepstrum.
At 506, the technique 500 obtains aperiodicity information of the frame using the pitch value. Obtaining the aperiodicity information can be as described above. As such, obtaining the aperiodicity information can include calculating a group delay using the pitch value; and calculating a respective aperiodicity value for each frequency sub-band of the frame.
At 508, the technique 500 obtains a tonic pitch and chord pitches to be applied to (e.g., combined with, etc.) the frame. In an example, at least one of the tonic pitch or the chord pitches can be provided statically according to a preset pitch trajectory, as described above. In an example, the chord pitches are calculated using chord rules. In an example, the tonic pitch and chord pitches can be calculated in real time from a reference sample. The reference sample can be, for example, a real or virtual instrument played concurrently with the speech.
At 510, the technique 500 uses the formant information, the aperiodicity information, and the tonic and chord pitches to obtain the singing frame. Obtaining the singing frame can be as described above. As such, obtaining the singing frame can include obtaining respective pulse signals for frequency sub-bands of the frame; obtaining respective noise signals for the frequency sub-bands of the frame; obtaining locations within the frame to insert the respective pulse signals and the respective noise signals; obtaining an excitation signal; and obtaining the singing frame using the excitation signal.
At 512, the technique 500 outputs or saves the singing frame. For example, the singing frame may be converted to a savable format and stored for later playing. For example, the singing frame may be output to the sending user or the receiving user. For example, if the singing frame is generated using a sending user's device, then outputting the singing frame can mean transmitting (or causing to be transmitted) the singing frame to a receiving user. For example, if the singing frame is generated using a receiving user's device, then outputting the singing frame can mean outputting the singing frame so that it is audible by the receiving user.
A processor 602 in the computing device 600 can be a conventional central processing unit. Alternatively, the processor 602 can be another type of device, or multiple devices, capable of manipulating or processing information now existing or hereafter developed. For example, although the disclosed implementations can be practiced with one processor as shown (e.g., the processor 602), advantages in speed and efficiency can be achieved by using more than one processor.
A memory 604 in the computing device 600 can be a read only memory (ROM) device or a random access memory (RAM) device in an implementation. However, other suitable types of storage devices can be used as the memory 604. The memory 604 can include code and data 606 that are accessed by the processor 602 using a bus 612. The memory 604 can further include an operating system 608 and application programs 610, the application programs 610 including at least one program that permits the processor 602 to perform at least some of the techniques described herein. For example, the application programs 610 can include applications 1 through N, which further include applications and techniques useful in real-time speech to singing conversion. For example, the application programs 610 can include one or more of the techniques 200, 220, 240, 250, 300, 350, 400, or 500, or aspects thereof, to implement a speech to singing conversion. The computing device 600 can also include a secondary storage 614, which can, for example, be a memory card used with a mobile computing device.
The computing device 600 can also include one or more output devices, such as a display 618. The display 618 may be, in one example, a touch sensitive display that combines a display with a touch sensitive element that is operable to sense touch inputs. The display 618 can be coupled to the processor 602 via the bus 612. Other output devices that permit a user to program or otherwise use the computing device 600 can be provided in addition to or as an alternative to the display 618. When the output device is or includes a display, the display can be implemented in various ways, including by a liquid crystal display (LCD), a cathode-ray tube (CRT) display, or a light emitting diode (LED) display, such as an organic LED (OLED) display.
The computing device 600 can also include or be in communication with an image-sensing device 620, for example, a camera, or any other image-sensing device 620 now existing or hereafter developed that can sense an image such as the image of a user operating the computing device 600. The image-sensing device 620 can be positioned such that it is directed toward the user operating the computing device 600. In an example, the position and optical axis of the image-sensing device 620 can be configured such that the field of vision includes an area that is directly adjacent to the display 618 and from which the display 618 is visible.
The computing device 600 can also include or be in communication with a sound-sensing device 622, for example, a microphone, or any other sound-sensing device now existing or hereafter developed that can sense sounds near the computing device 600. The sound-sensing device 622 can be positioned such that it is directed toward the user operating the computing device 600 and can be configured to receive sounds, for example, speech or other utterances, made by the user while the user operates the computing device 600. The computing device 600 can also include or be in communication with a sound-playing device 624, for example, a speaker, a headset, or any other sound-playing device now existing or hereafter developed that can play sounds as directed by the computing device 600.
Although
For simplicity of explanation, the techniques 200, 220, 240, 250, 300, 350, 400, or 500 of
The word “example” is used herein to mean serving as an example, instance, or illustration. Any aspect or design described herein as “example” is not necessarily to be construed as being preferred or advantageous over other aspects or designs. Rather, use of the word “example” is intended to present concepts in a concrete fashion. As used in this application, the term “or” is intended to mean an inclusive “or” rather than an exclusive “or.” That is, unless specified otherwise or clearly indicated otherwise by the context, the statement “X includes A or B” is intended to mean any of the natural inclusive permutations thereof. That is, if X includes A; X includes B; or X includes both A and B, then “X includes A or B” is satisfied under any of the foregoing instances. In addition, the articles “a” and “an” as used in this application and the appended claims should generally be construed to mean “one or more,” unless specified otherwise or clearly indicated by the context to be directed to a singular form. Moreover, use of the term “an implementation” or the term “one implementation” throughout this disclosure is not intended to mean the same implementation unless described as such.
Implementations of the computing device 600, and/or any of the components therein described with respect to
Further, in one aspect, for example, the techniques described herein can be implemented using a general purpose computer or general purpose processor with a computer program that, when executed, carries out any of the respective methods, algorithms, and/or instructions described herein. In addition, or alternatively, for example, a special purpose computer/processor can be utilized which can contain other hardware for carrying out any of the methods, algorithms, or instructions described herein.
Further, all or a portion of implementations of this disclosure can take the form of a computer program product accessible from, for example, a computer-usable or computer-readable medium. A computer-usable or computer-readable medium can be any device that can, for example, tangibly contain, store, communicate, or transport the program for use by or in connection with any processor. The medium can be, for example, an electronic, magnetic, optical, electromagnetic, or semiconductor device. Other suitable mediums are also available.
While the disclosure has been described in connection with certain embodiments, it is to be understood that the disclosure is not to be limited to the disclosed embodiments but, on the contrary, is intended to cover various modifications and equivalent arrangements included within the scope of the appended claims, which scope is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures as is permitted under the law.