1. Field of the Invention
The present invention relates to a sound signal processing apparatus that separates speech from an input sound signal containing both speech and ambient sound, such as ambient noise and background noise, and that attenuates the ambient sound so as to accentuate the speech, and also relates to a degree of speech computation method for use with the sound signal processing apparatus.
2. Description of the Related Art
In applications such as mobile phones and speech recognition, it is desirable to suppress noise, such as ambient noise and background noise, contained in a picked-up sound signal or an audible signal, in order to accentuate speech components and to separate noise from speech.
As conventional technologies for separating speech and noise, a method is known, as disclosed in, for example, Japanese Unexamined Patent Application Publication Nos. 2000-81900 and 8-79897, in which speech and noise are separated on the basis of differences among the sound signals received by a plurality of microphones. Furthermore, as disclosed in Japanese Unexamined Patent Application Publication Nos. 2001-42886 and 2000-222000, a method of learning ambient sound at a particular timing is known. In addition, Japanese Unexamined Patent Application Publication No. 2003-70097 discloses a method in which the minimum average amplitude value in a fixed period is assumed to be noise, and a determination between ambient sound and speech is made based on the magnitude relationship with that value.
As recognized by the present inventors, the above-described conventional technologies have the following problems.
In the case of the technologies disclosed in Japanese Unexamined Patent Application Publication Nos. 2000-81900 and 8-79897, in which a plurality of microphones are used, the microphones must be separated by at least a fixed spacing. In the case of a directional microphone, the direction needs to be changed in accordance with the movement of a target.
In the case of the technologies disclosed in Japanese Unexamined Patent Application Publication Nos. 2001-42886 and 2000-222000, in which ambient sound is learned, a period of ambient sound that is necessary and sufficient for learning is required, and the technologies therefore lack general versatility.
In the case of the technology disclosed in Japanese Unexamined Patent Application Publication No. 2003-70097, it is not possible to cope with noise of a large amplitude, and a determination is difficult when the entirety of a fixed period is only speech or only ambient sound.
The present invention has been made in view of the limitations of such conventional systems and methods. An object of the present invention is to address the above-identified and other limitations of conventional systems and methods by, for instance, providing a sound signal processing apparatus that is capable of determining speech likeliness or a degree of speech with a simple configuration or with a small amount of processing by using, as an input, a sound signal picked up by one microphone or played back from a recording medium, and that is capable of easily performing speech separation or noise suppression and speech accentuation on an input sound signal, and to provide a degree of speech computation method for use with the sound signal processing apparatus.
To achieve the above-mentioned object, in one aspect, the present invention provides a processor-implemented sound signal processing apparatus comprising: a degree of vocally generated sound computation mechanism configured to compute and output an indicia of a degree of vocally generated sound of a sound signal input thereto, the sound signal including a vocally generated sound and ambient sound; and a voice processor configured to characterize the input sound signal based on the indicia of degree of vocally generated sound output by the degree of vocally generated sound computation mechanism, wherein the degree of vocally generated sound computation mechanism is configured to compute the indicia of degree of vocally generated sound based on features in a wavelength direction (in other words, the time direction) of a waveform of the input sound signal.
The degree of vocally generated sound computation mechanism may include a degree of speech computation mechanism; the vocally generated sound may be speech; and the voice processor may be configured to characterize the input sound signal based on the degree of speech in the sound signal as determined by the degree of vocally generated sound computation mechanism.
The features in the wavelength direction are typically changes in the duration of periods of the waveform of the sound signal, or changes of the waveform of the sound signal in a level direction.
The degree of speech computation mechanism may be configured to compute the degree of speech in units of frames sliced in predetermined time length units of the sound signal.
The degree of speech computation mechanism may comprise: a half-wavelength increase and decrease repetition rate computation mechanism configured to compute an increase and a decrease in repetition rate of half wavelengths of a waveform of the sound signal; a zero-cross rate computation mechanism configured to compute a rate of zero crossings of half wavelengths of the waveform of the sound signal; and a degree of vocally generated sound output mechanism configured to output the indicia of degree of vocally generated sound on the basis of an output from the half-wavelength increase and decrease repetition rate computation mechanism and an output from the zero-cross rate computation mechanism.
The half-wavelength increase and decrease repetition rate computation mechanism may be configured to compute the increase and decrease repetition rate of the half wavelength of the waveform based on a rate of a portion of the waveform where an upward half-wavelength of the waveform of the sound signal changes so as to increase and decrease alternately or changes so as to decrease and increase alternately, and based on a rate of a portion where a downward half-wavelength of the waveform of the sound signal changes so as to increase and decrease alternately or changes so as to decrease and increase alternately.
The half-wavelength increase and decrease repetition rate computation mechanism may include a first output value adjustment mechanism configured to adjust the repetition rate of the half wavelengths produced by the half-wavelength increase and decrease repetition rate computation mechanism, the zero-cross rate computation mechanism may include a second output value adjustment mechanism configured to adjust the rate of zero crossings produced by the zero-cross rate computation mechanism, and the first and second output value adjustment mechanisms may be configured to adjust and provide respective output values to the degree of vocally generated sound output mechanism.
The processor-implemented sound signal processing apparatus may further comprise a band dividing mechanism configured to divide the sound signal into a plurality of frequency bands, wherein the degree of vocally generated sound computation mechanism is configured to compute the indicia of degree of vocally generated sound for each band, and the voice processor is configured to process each band on the basis of the computed degree of vocally generated sound of each band.
In another aspect, the present invention provides a method for computing an indicia of degree of vocally generated sound, comprising steps of: slicing a waveform of an input sound signal in units of frames of a predetermined length; computing and outputting the indicia of a degree of vocally generated sound of a sound signal input thereto, the sound signal including a vocally generated sound and ambient sound; and characterizing the input sound signal based on the indicia of degree of vocally generated sound; wherein the computing step includes computing the indicia of degree of vocally generated sound based on features in a wavelength direction of a waveform of the input sound signal.
In another aspect, the present invention provides a computer program product having computer readable instructions that when executed by a processor implement a processing comprising steps of: slicing a waveform of an input sound signal in units of frames of a predetermined length; computing and outputting the indicia of a degree of vocally generated sound of a sound signal input thereto, the sound signal including a vocally generated sound and ambient sound; and characterizing the input sound signal based on the indicia of degree of vocally generated sound; wherein the computing step includes computing the indicia of degree of vocally generated sound based on features in a wavelength direction of a waveform of the input sound signal.
In another aspect, the present invention provides a processor-implemented sound signal processing apparatus comprising: means for slicing a waveform of an input sound signal in units of frames of a predetermined length;
means for computing and outputting an indicia of a degree of vocally generated sound of a sound signal input thereto from the means for slicing, the sound signal including a vocally generated sound and ambient sound; and
means for characterizing the input sound signal based on the indicia of degree of vocally generated sound; wherein the means for computing includes means for calculating the indicia of degree of vocally generated sound based on features in a wavelength direction of a waveform of the input sound signal.
In the present invention, the input sound signal is subjected to a waveform slicing process in frame units, the increase and decrease repetition rate of the half wavelengths in a frame is computed, the zero-cross rate in the frame is computed, and the degree of vocally generated sound is determined using both of the computed rates. In accordance with the determined degree of vocally generated sound, a process for separating or accentuating/attenuating vocally generated sound and background noise is performed.
Specific embodiments to which the present invention is applied will now be described below in detail with reference to the drawings.
The sound signal processing apparatus shown in
The degree of speech computation section 30 of
Next, a description is given of the operation of each section in the configuration shown in
First, the sound signal input section 10 shown in
Next, the waveform slicing section 20 slices the sound signal into segments of a particular length. Here, each sliced period is called a “frame”. The frame length is, for example, 1000 sample points. However, the frame length is not limited to this number of samples and need not be fixed. Furthermore, portions of the previous and subsequent frames may overlap each other. Regarding the number of waveform cycles per frame, at least 2 cycles are preferably contained so that signal features, such as the pitch of a target speech, can be detected. When the half-wavelength processing according to the present invention is used, at least 3 wavelengths (cycles) are preferred so as to reliably separate vocally generated sound from mixed sound signals.
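By way of a non-limiting illustration, the slicing described above might be sketched as follows in Python; the 1000-sample frame length matches the example above, while the function name and the 50% hop (overlap) are merely assumptions of this sketch:

```python
import numpy as np

def slice_frames(signal, frame_len=1000, hop=500):
    """Slice a 1-D sound signal into frames of frame_len samples.
    hop < frame_len makes the previous and subsequent frames overlap;
    hop == frame_len gives non-overlapping frames."""
    return np.array([signal[i:i + frame_len]
                     for i in range(0, len(signal) - frame_len + 1, hop)])
```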
The degree of speech of the sound signal of the frame sliced by the waveform slicing section 20 is determined by the degree of speech computation section 30. The degree of speech computation section 30 has a configuration shown in, for example,
In the half-wavelength increase and decrease repetition rate computation section 31 of
For example, in
In the zero-cross rate computation section 32 of
In the degree of speech output section 33 of
In the speech processing section 40, a process for separating or accentuating/attenuating speech and background noise by using the degree of speech output from the degree of speech computation section 30 is performed on the speech waveform of each frame from the waveform slicing section 20, forming an output waveform. For example, the speech waveform of the frame may be multiplied by the degree of speech used as a gain, and the product output, as illustrated in the sketch below.
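A minimal sketch of this product-based example, assuming the degree of speech is a per-frame scalar in the range 0.0 to 1.0 (the function name is hypothetical):

```python
def apply_degree_of_speech(frame, degree_of_speech):
    """Scale the frame waveform by the degree of speech used as a gain:
    frames judged to be ambient sound (degree near 0.0) are attenuated,
    while frames judged to be speech (degree near 1.0) pass nearly unchanged."""
    return [degree_of_speech * sample for sample in frame]
```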
The above procedure, approximated as a flowchart, is shown in
The gist of the embodiment of the present invention is to discriminate whether the waveform of the input sound signal is “speech” or “ambient sound” (traveling sound of a vehicle, wind sound, noise). In a conventional technique that simply discriminates between speech and ambient sound in accordance with the magnitude of the level, there is a drawback in that even noise with a high level is regarded as speech. Therefore, in the embodiment of the present invention, whether the waveform at each time is “speech” or “ambient sound” is quantified as “speech likeliness”. The reason for this is that both ambient sound and speech may be contained simultaneously, and a binary determination between the two is difficult. The term “speech likeliness” denotes the likelihood that the waveform in a fixed period is speech, or the proportion of the speech waveform contained in that waveform.
The technique used in the embodiment of the present invention is specialized for vowel parts. Since the vowel part of speech is composed of a fundamental frequency and its harmonic components, the wavelength is steady. In the embodiment of the present invention, one wavelength extends from a relative maximum point to the next relative maximum point or from a relative minimum point to the next relative minimum point. Accordingly, when the jitter of the wavelength is characterized, the length is either “always a fixed value → no jitter” or “varied within a fixed range → jitter exists”. In the embodiment of the present invention, “jitter” means fluctuation, that is, the amount of change in the portions where the half wavelength repeatedly “increases, decreases, increases, and decreases”, and also means changes of the waveform in the level direction with respect to the zero cross (or a deviation of the center point), used as a reference for speech likeliness.
More specifically, in the embodiment of the present invention, two types of jitter, that is, “jitter of the wavelength” (amount of increase/decrease changes) and “jitter in the level direction” (amount of zero crossings), are defined. Jitter occurs in the following cases.
First, the phrase “jitter of the wavelength” refers to alternating changes of the length of the upward half-wavelength or the downward half-wavelength, such as “increase, decrease, increase, and decrease”. Next, the phrase “jitter in the level direction” refers to a case where the half wavelength does not zero cross. Here, as the “jitter in the level direction”, a case in which the center point in the level direction of the half wavelength deviates (above or below) from the zero cross by a predetermined amount may also be used. In this case, as shown in
In the relationship between each type of jitter and speech likeliness, regarding the “jitter of the wavelength”, the more jitter there is, that is, the more wavelengths there are in which the changes of the length of the half wavelength are “increase, decrease, increase, and decrease”, the higher the possibility of being speech. Regarding the “jitter in the level direction”, the smaller the jitter, that is, the smaller the rate of half wavelengths that do not zero cross, or the closer the center point in the level direction of the half wavelength to the zero cross, the higher the possibility of being speech. As specific, although non-limiting, examples, the following repetition rates (e.g., increase, decrease, increase) were shown to correspond to the following probability gradations:
about 40% or less—no vocally generated sound (VGS)
about 40% to 60%—low probability of speech/VGS
about 60% to 80%—high probability of speech/VGS
about 80% or more—very high probability of speech/VGS
Regarding zero-crossing rate, the following non-limiting examples illustrate the relevant probability gradations.
about 50% or less—no vocally generated sound
about 50% to 70%—low probability of speech/VGS
about 70% to 85%—high probability of speech/VGS
about 85% or more—very high probability of speech/VGS.
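These gradations amount to a simple threshold mapping. The sketch below hard-codes the approximate boundaries listed above; since the text gives them only as “about” values, they should be treated as tunable rather than exact:

```python
def grade_repetition_rate(rate):
    """Map a half-wavelength increase/decrease repetition rate (0..1)
    to the probability gradations listed above."""
    if rate < 0.40:
        return "no vocally generated sound (VGS)"
    elif rate < 0.60:
        return "low probability of speech/VGS"
    elif rate < 0.80:
        return "high probability of speech/VGS"
    return "very high probability of speech/VGS"

def grade_zero_cross_rate(rate):
    """Map a zero-cross rate (0..1) to the probability gradations listed above."""
    if rate < 0.50:
        return "no vocally generated sound (VGS)"
    elif rate < 0.70:
        return "low probability of speech/VGS"
    elif rate < 0.85:
        return "high probability of speech/VGS"
    return "very high probability of speech/VGS"
```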
When the spectrum of a speech signal waveform is obtained, the waveform is known to have a harmonic structure based on a particular fundamental frequency. In general, the fundamental frequency corresponds to the pitch indicating the height of sound and is also called a “pitch frequency”. For example, peaks appear at positions that are integral multiples of the pitch frequency. Furthermore, with respect to the pitch period corresponding to adjacent peaks in the sound signal waveform, an actual waveform signal contains components of wavelengths longer than the pitch period. In particular, components at twice the pitch period appear comparatively dominantly. Such components at twice the pitch period correspond to the fact that, when viewed by the upward half-wavelength or the downward half-wavelength, increases and decreases in the length appear repeatedly and alternately. The more wavelengths there are in which the changes of the length of the half wavelength are “increase, decrease, increase, and decrease”, the higher the possibility of being speech. This holds to a certain degree not only for the human voice but also for a so-called musical sound signal containing musical instrument tones. In the embodiment of the present invention, a speech signal containing musical sound and ambient sound (noise) can be separated or accentuated/attenuated.
The above-described relationship between jitter and speech likeliness is summarized in
As is clear from
Next, a description is given, with reference to the drawings, of a more specific example of the configuration for half-wavelength increase and decrease repetition rate computation and zero-cross rate computation for the purpose of determining speech likeliness or a degree of speech.
The half-wavelength increase and decrease repetition rate computation section 31 shown in
Next, a description is given, with reference to
In the upward half-wavelength increase and decrease repetition rate computation section 51, first, the number of sets of three adjacent upward half-wavelengths in the frame whose lengths change alternately, as “increase and decrease” or “decrease and increase”, is denoted as Aup. When the number of all the upward half-wavelengths in the frame is denoted as Nup, the upward half-wavelength increase and decrease repetition rate Rup is defined by Rup=Aup/(Nup−2). Similarly, for the downward half-wavelengths in the downward half-wavelength increase and decrease repetition rate computation section 52, Rdown is defined by Rdown=Adown/(Ndown−2).
In the example of
Rup=Aup/(Nup−2)=2/(5−2)=0.67
Rdown=Adown/(Ndown−2)=1/(5−2)=0.33.
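The Rup/Rdown computation can be sketched as follows. The half-wavelength lengths below are hypothetical values chosen only so that the counts match the worked example (Aup = 2 and Adown = 1 with five half-wavelengths of each polarity):

```python
def repetition_rate(lengths):
    """R = A / (N - 2): count sets of three adjacent half-wavelengths whose
    lengths change as 'increase then decrease' or 'decrease then increase',
    then divide by the number of such sets, N - 2."""
    n = len(lengths)
    if n < 3:
        return 0.0
    a = 0
    for i in range(n - 2):
        p, q, r = lengths[i], lengths[i + 1], lengths[i + 2]
        if (q > p and r < q) or (q < p and r > q):
            a += 1
    return a / (n - 2)

# Hypothetical half-wavelength lengths (in samples) reproducing the example:
print(repetition_rate([3, 5, 3, 5, 5]))  # Rup   = 2/3 = 0.67
print(repetition_rate([5, 4, 4, 5, 3]))  # Rdown = 1/3 = 0.33
```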
The upward and downward half-wavelength increase and decrease repetition rates Rup and Rdown, determined in the above-described manner by the upward half-wavelength increase and decrease repetition rate computation section 51 and the downward half-wavelength increase and decrease repetition rate computation section 52, respectively, are sent to the half-wavelength increase and decrease repetition rate integration section 53, where they are integrated. Examples of this integration method include taking the product, the average, the larger value, or the smaller value of Rup and Rdown. The output from the half-wavelength increase and decrease repetition rate integration section 53 is sent to the output value adjustment section 54, which adjusts the value range. For example, the output value is mapped into the range from 0.0 to 1.0 and is output. In an example of this processing, when an input to the output value adjustment section 54 is denoted as “in” and an output from the output value adjustment section 54 is denoted as “out”, the following equation (1) holds:
where TH is a threshold value greater than or equal to 0 and less than 1 (0≦TH<1.0). Since the expected value of the rate at which “increase and decrease” alternate is 0.5, TH is preferably a value greater than that value. The output value adjustment section 54 may be omitted.
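The integration and the range adjustment might be sketched as follows. Because equation (1) itself is not reproduced in this text, the adjustment shown, which clips inputs at or below TH to 0 and rescales the remainder linearly, is only one plausible form consistent with the stated constraints (0 <= TH < 1, output in the range 0.0 to 1.0), not necessarily the exact equation:

```python
def integrate_rates(r_up, r_down, method="average"):
    """Integrate Rup and Rdown; the text names the product, the average,
    the larger value, and the smaller value as example methods."""
    if method == "product":
        return r_up * r_down
    if method == "larger":
        return max(r_up, r_down)
    if method == "smaller":
        return min(r_up, r_down)
    return (r_up + r_down) / 2.0

def adjust_output(value, th=0.6):
    """Assumed form of equation (1): values at or below the threshold TH map
    to 0.0, and values above it are rescaled linearly so that 1.0 stays 1.0.
    th=0.6 reflects the guidance that TH should exceed the 0.5 expected value."""
    return 0.0 if value <= th else (value - th) / (1.0 - th)
```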
As the calculation method in the upward half-wavelength increase and decrease repetition rate computation section 51 and the downward half-wavelength increase and decrease repetition rate computation section 52, various methods may be used in addition to the above-described method of counting the cases where the changes of the length of three half wavelengths in the sliced frame are “increase and decrease” or “decrease and increase”. Examples thereof include a method of determining the maximum length over which “increase and decrease” or “decrease and increase” continues alternately, and a method of determining variations of the lengths over which “increase and decrease” or “decrease and increase” continues alternately. These methods are described below with reference to
In the method of determining the maximum value, the maximum number of half wavelengths over which “increase and decrease” or “decrease and increase” continues alternately is determined for the upward half-wavelengths and for the downward half-wavelengths in the sliced frame. For example, in the example of the waveform of
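A sketch of this maximum-run variant, using the same list-of-lengths representation as before (the convention that a first alternating set spans three half-wavelengths is an implementation choice of this sketch):

```python
def max_alternating_run(lengths):
    """Maximum number of successive half-wavelengths whose lengths keep
    alternating 'increase, decrease, increase, ...' (or the reverse)."""
    best = run = 0
    for i in range(len(lengths) - 2):
        p, q, r = lengths[i], lengths[i + 1], lengths[i + 2]
        if (q > p and r < q) or (q < p and r > q):
            run = run + 1 if run else 3  # a first alternating set spans 3 half-wavelengths
            best = max(best, run)
        else:
            run = 0
    return best
```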
As an example of the method of determining variations of the lengths over which “increase and decrease” or “decrease and increase” continues alternately, if the variations to be determined for the upward half-wavelength and the downward half-wavelength are denoted as Vup and Vdown, respectively, they are defined by the following equations:
Vup=(Aveup/Varup)/(Nup−2)
Vdown=(Avedown/Vardown)/(Ndown−2)
where Aveup and Avedown are the average values of the lengths of the increase and decrease repetitions for the upward and downward half-wavelengths, respectively, Varup and Vardown are the corresponding variances of those lengths, and Nup and Ndown are the numbers of the upward and downward half-wavelengths, respectively.
In the case of
Vup=(2.33/0.22)/(9−2)=1.5
Vdown=(2/2)/(9−2)=0.14
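A sketch of this variation measure follows. The run lengths passed in are hypothetical, chosen only because their mean and variance reproduce the worked example (mean 2.33 and variance 0.22 for the upward case; mean 2 and variance 2 for the downward case); population variance is assumed:

```python
import statistics

def variation_measure(run_lengths, n_half_wavelengths):
    """V = (average / variance) / (N - 2) over the lengths of the runs in
    which 'increase and decrease' continues alternately, per the
    Vup/Vdown definitions above."""
    ave = statistics.mean(run_lengths)
    var = statistics.pvariance(run_lengths)  # population variance assumed
    return (ave / var) / (n_half_wavelengths - 2)

print(variation_measure([2, 2, 3], 9))  # Vup   = (2.33/0.22)/7 = 1.5
print(variation_measure([1, 1, 4], 9))  # Vdown = (2/2)/7       = 0.14
```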
However, if these values are kept as is, the output does not fall within the range of 0 to 1. Therefore, Vup and Vdown need to be adjusted by the output value adjustment section 54. More specifically, a sigmoid function shown in equation (2) below is used as an example:
where “in” is an input to the output value adjustment section 54, “out” is an output from the output value adjustment section 54, and a is a parameter.
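Since equation (2) is likewise not reproduced in this text, the sketch below uses a standard logistic sigmoid as one plausible form; the parameter a is the steepness parameter named in the text, while the centering at 0.5 is purely an assumption of this sketch:

```python
import math

def sigmoid_adjust(value, a=10.0, center=0.5):
    """Assumed form of equation (2): a logistic sigmoid squashing any input
    into the range 0.0 to 1.0; a controls the steepness of the transition."""
    return 1.0 / (1.0 + math.exp(-a * (value - center)))

print(sigmoid_adjust(1.5))   # Vup   -> approx. 1.0
print(sigmoid_adjust(0.14))  # Vdown -> approx. 0.03
```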
Next, the zero-cross rate computation section 32 shown in
In the zero-cross rate computation section 32, the zero-cross rate is determined as (the number of half wavelengths having a zero cross)/(the number of all the half wavelengths), and this is sent, as a zero-cross rate output value, to the output value adjustment section 57. For example, in the example of the waveform in
In the output value adjustment section 57, the output value of the zero-cross rate determined by the zero-cross rate calculation section 56 through the above calculation is adjusted to the range of, for example, 0.0 to 1.0 and is output. In an example of this processing, similarly to the output value adjustment section 54, the calculation of equation (1) or equation (2) is performed. In equations (1) and (2), “in” is an input to the output value adjustment section 57, “out” is an output from the output value adjustment section 57, and a in equation (2) is a parameter.
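A minimal sketch of the zero-cross rate computation, assuming half-wavelengths are delimited by successive relative extrema and that a half-wavelength “has a zero cross” when the signal changes sign within it (the extremum detection below ignores flat plateaus for simplicity):

```python
import numpy as np

def zero_cross_rate(signal):
    """(number of half-wavelengths containing a zero cross) /
    (number of all half-wavelengths)."""
    x = np.asarray(signal, dtype=float)
    slope = np.sign(np.diff(x))
    # relative extrema: points where the slope changes sign
    extrema = [i + 1 for i in range(len(slope) - 1)
               if slope[i] != 0 and slope[i + 1] != 0 and slope[i] != slope[i + 1]]
    if len(extrema) < 2:
        return 0.0
    crossings = sum(
        1 for s, e in zip(extrema[:-1], extrema[1:])
        if x[s:e + 1].min() < 0.0 <= x[s:e + 1].max()  # sign change inside
    )
    return crossings / (len(extrema) - 1)
```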
Next, a description will be given, with reference to FIGS. 16 to 20, of an output waveform or an output value from each section in the configuration shown in
According to the above-described embodiment of the present invention, speech alone can be separated even if ambient sound noise is contained. Since ambient sound can be removed even from a monaural sound signal, the present invention can be applied to any sound signal. Furthermore, since simple features are used, only a small amount of processing is required, and real-time processing is possible.
Next, another embodiment of the present invention will be described with reference to
The computer system 1201 includes a bus 1202 or other communication mechanism for communicating information, and a processor 1203 coupled with the bus 1202 for processing the information. The computer system 1201 also includes a main memory 1204, such as a random access memory (RAM) or other dynamic storage device (e.g., dynamic RAM (DRAM), static RAM (SRAM), and synchronous DRAM (SDRAM)), coupled to the bus 1202 for storing information and instructions to be executed by processor 1203. In addition, the main memory 1204 may be used for storing temporary variables or other intermediate information during the execution of instructions by the processor 1203. The computer system 1201 further includes a read only memory (ROM) 1205 or other static storage device (e.g., programmable ROM (PROM), erasable PROM (EPROM), and electrically erasable PROM (EEPROM)) coupled to the bus 1202 for storing static information and instructions for the processor 1203. Such memory (or other peripheral device) may be connected via a peripheral interface such as a USB port.
The computer system 1201 also includes a disk controller 1206 coupled to the bus 1202 to control one or more storage devices for storing information and instructions, such as a magnetic hard disk 1207, and a removable media drive 1208 (e.g., USB flash memory, floppy disk drive, read-only compact disc drive, read/write compact disc drive, compact disc jukebox, tape drive, and removable magneto-optical drive). The storage devices may be added to the computer system 1201 using an appropriate device interface (e.g., small computer system interface (SCSI), integrated device electronics (IDE), enhanced-IDE (E-IDE), direct memory access (DMA), or ultra-DMA).
The computer system 1201 may also include special purpose logic devices (e.g., application specific integrated circuits (ASICs)) or configurable logic devices (e.g., simple programmable logic devices (SPLDs), complex programmable logic devices (CPLDs), and field programmable gate arrays (FPGAs)).
The computer system 1201 may also include a display controller 1209 coupled to the bus 1202 to control a display 1210, such as a cathode ray tube (CRT), for displaying information to a computer user. The computer system includes input devices, such as a keyboard 1211 and a pointing device 1212, for interacting with a computer user and providing information to the processor 1203. The pointing device 1212, for example, may be a mouse, a trackball, or a pointing stick for communicating direction information and command selections to the processor 1203 and for controlling cursor movement on the display 1210. In addition, a printer may provide printed listings of data stored and/or generated by the computer system 1201.
The computer system 1201 performs a portion or all of the processing steps of the invention in response to the processor 1203 executing one or more sequences of one or more instructions contained in a memory, such as the main memory 1204. Such instructions may be read into the main memory 1204 from another computer readable medium, such as a hard disk 1207 or a removable media drive 1208. One or more processors in a multi-processing arrangement may also be employed to execute the sequences of instructions contained in main memory 1204. In alternative embodiments, hard-wired circuitry may be used in place of or in combination with software instructions. Thus, embodiments are not limited to any specific combination of hardware circuitry and software.
As stated above, the computer system 1201 includes at least one computer readable medium or memory for holding instructions programmed according to the teachings of the invention and for containing data structures, tables, records, or other data described herein. Examples of computer readable media are hard disks, floppy disks, tape, magneto-optical disks, PROMs (EPROM, EEPROM, flash EPROM), DRAM, SRAM, SDRAM, or any other magnetic medium, compact discs (e.g., CD-ROM) or any other optical medium, punch cards, paper tape, or other physical medium with patterns of holes, a carrier wave (described below), or any other medium from which a computer can read.
Stored on any one or on a combination of computer readable media, the present invention includes software for controlling the computer system 1201, for driving a device or devices for implementing the invention, and for enabling the computer system 1201 to interact with a human user (e.g., print production personnel). Such software may include, but is not limited to, device drivers, operating systems, development tools, and applications software. Such computer readable media further includes the computer program product of the present invention for performing all or a portion (if processing is distributed) of the processing performed in implementing the invention.
The computer code devices of the present invention may be any interpretable or executable code mechanism, including but not limited to scripts, interpretable programs, dynamic link libraries (DLLs), Java classes, and complete executable programs. Moreover, parts of the processing of the present invention may be distributed for better performance, reliability, and/or cost.
The term “computer readable medium” as used herein refers to any medium that participates in providing instructions to the processor 1203 for execution. A computer readable medium may take many forms, including but not limited to, non-volatile media, volatile media, and transmission media. Non-volatile media includes, for example, optical disks, magnetic disks, and magneto-optical disks, such as the hard disk 1207 or the removable media drive 1208. Volatile media includes dynamic memory, such as the main memory 1204. Transmission media includes coaxial cables, copper wire and fiber optics, including the wires that make up the bus 1202. Transmission media also may take the form of acoustic or light waves, such as those generated during radio wave and infrared data communications.
Various forms of computer readable media may be involved in carrying out one or more sequences of one or more instructions to processor 1203 for execution. For example, the instructions may initially be carried on a magnetic disk of a remote computer. The remote computer can load the instructions for implementing all or a portion of the present invention remotely into a dynamic memory and send the instructions over a telephone line using a modem. A modem local to the computer system 1201 may receive the data on the telephone line and use an infrared transmitter to convert the data to an infrared signal. An infrared detector coupled to the bus 1202 can receive the data carried in the infrared signal and place the data on the bus 1202. The bus 1202 carries the data to the main memory 1204, from which the processor 1203 retrieves and executes the instructions. The instructions received by the main memory 1204 may optionally be stored on storage device 1207 or 1208 either before or after execution by processor 1203.
The computer system 1201 also includes a communication interface 1213 coupled to the bus 1202. The communication interface 1213 provides a two-way data communication coupling to a network link 1214 that is connected to, for example, a local area network (LAN) 1215, or to another communications network 1216 such as the Internet. For example, the communication interface 1213 may be a network interface card to attach to any packet switched LAN. As another example, the communication interface 1213 may be an asymmetrical digital subscriber line (ADSL) card, an integrated services digital network (ISDN) card or a modem to provide a data communication connection to a corresponding type of communications line. Wireless links may also be implemented. In any such implementation, the communication interface 1213 sends and receives electrical, electromagnetic or optical signals that carry digital data streams representing various types of information.
The network link 1214 typically provides data communication through one or more networks to other data devices. For example, the network link 1214 may provide a connection to another computer through a local network 1215 (e.g., a LAN) or through equipment operated by a service provider, which provides communication services through a communications network 1216. The local network 1215 and the communications network 1216 use, for example, electrical, electromagnetic, or optical signals that carry digital data streams, and the associated physical layer (e.g., CAT 5 cable, coaxial cable, optical fiber, etc.). The signals through the various networks and the signals on the network link 1214 and through the communication interface 1213, which carry the digital data to and from the computer system 1201, may be implemented in baseband signals or carrier-wave-based signals. The baseband signals convey the digital data as unmodulated electrical pulses that are descriptive of a stream of digital data bits, where the term “bits” is to be construed broadly to mean symbols, where each symbol conveys at least one or more information bits. The digital data may also be used to modulate a carrier wave, such as with amplitude, phase, and/or frequency shift keyed signals that are propagated over conductive media or transmitted as electromagnetic waves through a propagation medium. Thus, the digital data may be sent as unmodulated baseband data through a “wired” communication channel and/or sent within a predetermined frequency band, different from baseband, by modulating a carrier wave. The computer system 1201 can transmit and receive data, including program code, through the network(s) 1215 and 1216, the network link 1214, and the communication interface 1213. Moreover, the network link 1214 may provide a connection through a LAN 1215 to a mobile device 1217 such as a personal digital assistant (PDA), laptop computer, or cellular telephone.
The present application contains subject matter related to Japanese patent documents JP2004-045237 and JP2004-045238, filed in the JPO on Feb. 20, 2004, JP2005-041169, filed in the JPO on Feb. 17, 2005, and JP2004-194646, filed in the JPO on Jun. 30, 2004, the entire contents of each of which are incorporated herein by reference.
Foreign Application Priority Data: Number 2004-194646; Date: Jun 2004; Country: JP; Kind: national.