The present invention relates to electronic music on hand-portable, communication-enabled devices and, more particularly, but not exclusively, to electronic music on PDA and cellular type devices.
PDA and cellular devices with musical ability are available. Such devices have sound card capabilities that enable them to play high quality musical notes, so that they are able to play music files, for example ring tones, or to allow a user to enter his own ring tone via the keyboard.
A major limitation of such electronic devices is their limited resources. Both permanent and temporary memory are severely limited compared with desktop or laptop computers, and the musical ability must not interfere with the other activities of the device, for example its communication abilities. Thus an add-on feature such as music should respect resource budgets of roughly the following order: ROM usage limited to about 20 MB (including built-in content), and RAM usage limited to about 8 MB of dynamic RAM.
Another consideration is power consumption: sound hardware consumes a relatively large amount of battery power, so any sound hardware should be kept in a sleep mode whenever possible in order to conserve the battery. It is expected that about 1-2 hours of sound playback could drain a full battery.
Due to these limitations, utilization of the sound card has generally been limited. Users are able to play and set ring tones but little more, and the ability to set ring tones allows the user nothing more than to play a fixed set of tones. The communication ability of the cellular device is used solely to download such ring tone files.
In general, a musical product is wanted that is simple for a beginner to use but that also satisfies the requirements of the more sophisticated user, meaning a user with a good musical background, often one who can play a musical instrument or has a knowledge of musical theory. In particular, the product should produce better results for the advanced user and, for the beginner, should produce steadily better results the more it is used.
There is thus an overall sense that the capabilities of the cellular device are not being fully utilized.
According to one aspect of the present invention there is provided a portable electronic device having a screen and a numeric keypad, the device including: a sound card for processing sound signals to produce audible musical tones at an audible output of the device; a musical module, associated with the sound card, for electronically synthesizing musical instruments; and a user interface for interfacing the musical module to a user via the screen and the numeric keypad, the user interface being configured to set a user play mode in which input at the numeric keypad is played as audio output via the sound card.
Preferably, the musical module is a musical synthesizer.
Preferably, the musical synthesizer is a software synthesizer.
Additionally or alternatively, the musical synthesizer is a hardware device.
Preferably, the user interface is configured to set a play back mode, in which data, from a stored music file or from a communication channel, is played as audio output via the sound card.
Preferably, the user interface is configured to set a record mode in which input at the numeric keypad is played as audio output via the sound card and recorded in data storage.
Preferably, the user interface is configured to set a play and record mode in which data from a stored music file is played as audio output via the sound card and input at the numeric keypad is also played as audio output together with the data from the stored music file.
The device may further include a parameter extractor for extracting musical parameters of the stored music file; and a constraint unit associated with the parameter extractor for setting ones of the extracted musical parameters of the stored music file as constraints for the user input, thereby to bring about an automatic fit between the user input and the stored music file.
Preferably, the parameter extractor is configured to obtain the musical parameters of the stored music file from metadata associated with the file.
Preferably, the parameter extractor is configured to obtain the musical parameters of the stored music file from an event list within the file.
Preferably, the user interface comprises a layered menu, respective layers comprising selections at one level between at least two of record, user play and playback modes, selections at a second level between a plurality of musical instruments to be synthesized, or a plurality of stored files to be played, selections at a third level between standalone play and grouping with other devices, selections at a fourth level between musical keys or musical timings, and selections at a fifth level between musical notes in a selected musical key.
Preferably, the numeric keypad is configured with key strike detection that is able to detect the velocity of a key strike event, and the musical module is able to use the velocity as a control parameter.
The device may comprise a cellular communication channel.
Preferably, the music file is a ring tone file.
According to a second aspect of the present invention there is provided a method of combined playing and editing of a track-based music file comprising a plurality of tracks, the method including the steps of: playing the music file in a repetitive loop; at one of the loops, adding musical material to one of the plurality of tracks; and at subsequent ones of the loops, playing the plurality of tracks including the one track with the added musical material.
Preferably, the playing and editing comprises reading existing tracks of the file for playing and clearing an additional track for the adding.
Preferably, the reading is carried out at an advancing reading position and the adding is carried out at an advancing adding position and wherein the adding position is behind the reading position.
Preferably, the music file is an event stream file.
Preferably, the adding the musical material comprises setting a musical instrument and entering musical notes via a numeric keypad.
The method may comprise a stage of obtaining at least one of a musical key and musical timing for constraining the adding the musical material, by analysis of the music file.
Preferably, the analysis comprises analyzing metadata stored with the music file.
Preferably, the analysis comprises analyzing an event stream of the music file.
The method may comprise an initial stage of receiving the track-based file over a communication channel.
The method may comprise a subsequent stage of sending the track-based file with the added musical material over a communication channel.
According to a third aspect of the present invention there is provided a portable electronic apparatus for playing and editing of a track-based music file comprising a plurality of musical tracks, the apparatus including: a loop-based player unit for playing tracks of the music file in a repetitive loop; and an editing unit, associated with the loop-based player unit, for adding musical material to a selected one of the plurality of tracks whilst playing in one of the repetitive loops, wherein further repetitive loops of the music file include the selected one of the plurality of tracks with the added musical material.
Preferably, the editing unit is operable to delete existing material from the selected one of the plurality of tracks prior to the adding.
Preferably, recording is carried out at a virtual record head and playing is carried out at a virtual play head and wherein the virtual record head is temporally behind the virtual play head during the playing and editing.
Preferably, the music file is an event stream file.
The apparatus may further comprise a numeric keypad and a screen, and the adding the musical material comprises setting a musical instrument and entering musical notes via the numeric keypad and the screen.
The apparatus may comprise a constraint unit configured to obtain at least one of a musical key and musical timing for constraining the adding the musical material, by analysis of the music file.
Preferably, the analysis comprises analyzing metadata stored with the music file.
Preferably, the analysis comprises analyzing an event stream of the music file.
The apparatus may comprise a communication channel able to receive the track-based file from a remote location and to send the track-based file with the added musical material.
According to a fourth aspect of the present invention there is provided a portable electronic apparatus for playing a music file and allowing a user to input musical material, the apparatus including: a play unit for playing music from data, including user input musical material and music file data; a parameter extractor for extracting musical parameters of the music file; a user input unit for receiving the user input musical material; and a constraint unit, associated with the parameter extractor and the user input unit, for setting ones of the extracted musical parameters of the music file as constraints for the user input, thereby to achieve at least one of playing and recording of the user input musical material in accordance with the constraints.
Preferably, the parameter extractor is configured to read metadata associated with the music file to extract the musical parameters therefrom.
Preferably, the parameter extractor is configured to infer the musical parameters from an event stream of the music file.
The apparatus may comprise cellular communication functionality for receiving input music files and for sending music files after augmentation by the constrained user input.
According to a fifth aspect of the present invention there is provided a portable electronic device including: music playing capability for playing music from electronic storage, from a communication channel or from user input; grouping capability for allowing the device to synchronize itself with at least one other device; and group playing capability, associated with the grouping capability and the music playing capability, for allowing the device to play music from the communication channel together with music from the user input in a synchronized manner.
Preferably, the group playing capability comprises adding a delay of at least one cycle to overcome latency of the communication channel.
The device is preferably configured such that the user input is transmitted over the communication channel to be available at the at least one other device for a subsequent time period.
Preferably, the subsequent time period is an immediately following time period.
Preferably, the grouping capability comprises group mastering capability for forming the group and controlling the synchronizing.
Preferably, the grouping capability is configurable to operate as a slave to a remotely located master device to synchronize therewith.
The device may comprise a display screen and a representation capability for providing indications of other devices of the group as icons on the screen.
Preferably, the icons are animated icons.
The device may comprise a feedback unit for analyzing the user input to provide a respective user with feedback on his musical playing.
Preferably, the feedback unit is configured to analyze the user input as an event stream.
According to a sixth aspect of the present invention there is provided a musical content input system for a portable electronic device, the system including: a motion detector for detecting motion parameters of the portable electronic device; a user input unit for receiving user input musical material; and a constraint unit, associated with the motion detector and the user input unit, for using the motion parameters to define musical generation parameters to modify the user input, thereby to allow the portable electronic device to play the user input musical material according to the parameters.
Preferably, the motion detector is part of an integrally mounted camera.
According to a seventh aspect of the present invention there is provided a portable electronic device including: an audio input; an electronic musical synthesizer having a plurality of instrument modes, each for synthesizing a different musical instrument; and an additional mode for the electronic musical synthesizer in which clips obtained from the input are used as a device-defined instrument.
The device may comprise a categorizer for categorizing and storing the clips from the input using musical qualities of the clips.
Preferably, the categorizer is preceded by a musical analyzer for identifying musical qualities of material from the audio input.
Preferably, the categorizing comprises arranging clips with different notes into a customized instrument.
The device may comprise an autonomous software unit for autonomously carrying out recording at the audio input.
Preferably, the autonomous software unit is associated with the musical analyzer to use analyzing properties thereof to decide whether to store or discard a given audio recording.
The device may comprise a camera input for taking images and storing the images within the device, and wherein the additional mode comprises a function for adding stored images to a music file of the device-defined instrument.
The device may comprise an autonomous software unit for operating the camera input for the taking of images.
The device may comprise a video camera input for taking video images and storing video image clips within the device, and wherein the additional mode comprises a function for adding the stored video image clips to a music file of the device-defined instrument.
The device may comprise an autonomous software unit for operating the video input.
The device may comprise a pattern mode for composing music according to a prestored pattern.
The device may comprise an artificial learning module for monitoring musical activity on the device and using monitoring results for modifying the prestored pattern.
According to an eighth aspect of the present invention there is provided a method of editing a music file comprising at least one track on a portable electronic device, the method including the steps of playing the track on the portable electronic device, and simultaneously with the playing using an interface of the portable electronic device to edit the track.
Preferably, the music file is a multi-track file.
Additionally or alternatively, the music file is a ring tone file.
Preferably, the portable electronic device is a cellular communication enabled device configured to receive and transmit the music file.
Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The materials, methods, and examples provided herein are illustrative only and not intended to be limiting.
Implementation of the method and system of the present invention involves performing or completing certain selected tasks or steps manually, automatically, or a combination thereof. Moreover, according to the actual instrumentation and equipment of preferred embodiments of the method and system of the present invention, several selected steps could be implemented by hardware, by software on any operating system of any firmware, or a combination thereof. For example, as hardware, selected steps of the invention could be implemented as a chip or a circuit. As software, selected steps of the invention could be implemented as a plurality of software instructions being executed by a computer using any suitable operating system. In any case, selected steps of the method and system of the invention could be described as being performed by a data processor, such as a computing platform for executing a plurality of instructions.
The invention is herein described, by way of example only, with reference to the accompanying drawings. With specific reference now to the drawings in detail, it is stressed that the particulars shown are by way of example and for purposes of illustrative discussion of the preferred embodiments of the present invention only, and are presented in order to provide what is believed to be the most useful and readily understood description of the principles and conceptual aspects of the invention. In this regard, no attempt is made to show structural details of the invention in more detail than is necessary for a fundamental understanding of the invention, the description taken with the drawings making apparent to those skilled in the art how the several forms of the invention may be embodied in practice.
In the drawings:
The present embodiments comprise a portable electronic device such as a mobile telephone or PDA or the like having musical capabilities and an interface for allowing the user to make elementary or sophisticated use of the musical capabilities. In a further preferred embodiment the device is capable of working in a group with other devices in a musical version of a conference call.
The principles and operation of a portable electronic device with a musical interface according to the present invention may be better understood with reference to the drawings and accompanying description.
Before explaining at least one embodiment of the invention in detail, it is to be understood that the invention is not limited in its application to the details of construction and the arrangement of the components set forth in the following description or illustrated in the drawings. The invention is capable of other embodiments or of being practiced or carried out in various ways. Also, it is to be understood that the phraseology and terminology employed herein is for the purpose of description and should not be regarded as limiting.
Reference is now made to
Preferably, the user interface is configured to allow the user conveniently to set a number of modes for using the synthesizer and the musical properties of the device. Reference is now made to
Play and record mode operates as follows. There exist N "background tracks" and M "user tracks" at any given moment. The user tracks are initially empty. All tracks are played repetitively and simultaneously at all times. Background tracks contain background music which is part of the song and are never changed. Each of the user tracks is associated with an instrument, which determines how notes in that track will sound. At any given moment, one user track is designated as the "active track". When in play mode (not recording), the only effect of the active track is that notes entered by pressing the keypad keys are played with the instrument of the active track. When the record key is pressed, it is the active track that is armed for recording, and the telephone enters the "record standby" state. As soon as the current cycle ends, several things happen: the telephone enters the "recording" state and the entire sequence contained in the active track is cleared. From this moment and until the end of the cycle, all the user's actions are recorded for storage in the active track. When the cycle ends, the system returns to a non-recording state, unless the record key has been pressed during the cycle, in which case the system acts again as if entering the record state.
Typical operation of the system involves selecting an active track, recording something to it, switching to another track, recording another part to play along with the first and with the background music, and so forth. During the process the user may sometimes wish to change the content of an existing track rather than add to it; in that case he simply re-records into the same track since, as stated above, the track is cleared as soon as recording to it starts.
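By way of illustration only, the following Python sketch models the cycle-based record logic just described. The names (Looper, Track, on_cycle_end) and the event representation are assumptions made for the sketch and are not taken from the embodiments.

```python
IDLE, RECORD_STANDBY, RECORDING = range(3)

class Track:
    def __init__(self, instrument):
        self.instrument = instrument
        self.events = []                    # (tick, note) events for this loop

class Looper:
    def __init__(self, background_tracks, user_tracks):
        self.background = background_tracks  # part of the song; never changed
        self.user = user_tracks
        self.active = 0                      # index of the active user track
        self.state = IDLE

    def press_record(self):
        # Arm the active track; recording proper starts at the cycle boundary.
        self.state = RECORD_STANDBY

    def press_key(self, tick, note):
        track = self.user[self.active]
        sound(note, track.instrument)        # keys always sound immediately
        if self.state == RECORDING:
            track.events.append((tick, note))

    def on_cycle_end(self):
        if self.state == RECORD_STANDBY:
            self.user[self.active].events.clear()  # old content is discarded
            self.state = RECORDING
        elif self.state == RECORDING:
            self.state = IDLE                # unless record was pressed again

def sound(note, instrument):
    print(f"{instrument}: note {note}")
```

Pressing the record key while already recording simply re-arms the active track, so at the next cycle boundary it is cleared and recorded afresh, as described above.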
Reference is now made to
Reference is now made to
In a preferred embodiment of the present invention the numeric keypad is configured with key strike detection that is able to detect the velocity of a key strike event, so that the telephone keypad has all of the properties of a MIDI keyboard, and the synthesizer is able to obtain the velocity information and use it to modify the generated note.
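By way of illustration only, a strike velocity reading might be mapped to a MIDI-style velocity value as in the following Python sketch; the measurement units and range are assumptions made for the sketch, not part of the embodiments.

```python
def strike_to_midi_velocity(strike_speed, max_speed=2.0):
    """Map a measured key-strike speed (assumed 0..max_speed) to MIDI 0-127."""
    clipped = max(0.0, min(strike_speed, max_speed))
    return int(round(127 * clipped / max_speed))
```

The synthesizer can then use the resulting value exactly as it would a MIDI note-on velocity, for example to scale the amplitude of the generated note.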
When recording music over a background track, one way of using the interface is to enter the play and record mode and set a given file as the background music. The track may then be played in a repetitive loop, and during the loop user input is added to the track whilst being echoed to the sound card. At subsequent loops the track is played with the added material. In one embodiment, the music file is an event stream type of file such as a MIDI file, rather than a waveform type of sound representation file such as a WAV file. An advantage of the event stream type of file is that an event stream is much easier to analyze for musical parameters than a waveform, so the analysis can be carried out within the limited resources of the cellular device. In an alternative embodiment the analysis of the file is carried out offline.
Preferably, adding the musical material comprises setting a musical instrument and other parameters as necessary and entering musical notes via a numeric keypad, as explained above.
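By way of illustration only, the following Python sketch shows one way the advancing record position, trailing behind the play position, might be realized; the tick resolution, the lag and the event representation are assumptions made for the sketch.

```python
def loop_once(stored_events, live_input, lag=10, loop_len=1920):
    """One pass of simultaneous play and record over a single track.

    stored_events: list of (tick, note) pairs; read in place and appended to.
    live_input:    dict mapping tick -> note pressed at that tick.
    """
    snapshot = list(stored_events)      # events present when the pass began
    for tick in range(loop_len):
        for t, note in snapshot:
            if t == tick:
                sound(note)             # play head: sound the stored events
        if tick in live_input:
            # Record head: commit the new event behind the play position so
            # it is only encountered on the next pass around the loop.
            stored_events.append(((tick - lag) % loop_len, live_input[tick]))
            sound(live_input[tick])     # echo the user's key immediately

def sound(note):
    print("note", note)
```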
Reference is now made to
1. Content provisioning: transfer of electronic music files between users, from Web sites to users, and from a user's PC to the user's mobile device.
2. Playback synchronization, both for users using the mobile version of electronic music and for users using a desktop version.
The server or PC may optionally be used to support the group, depending on the way in which the group is set up, as will be described in greater detail below.
The link can be used to transfer instrument files. In a preferred embodiment of the present invention, instruments are represented by electronic files, each of which contains the information about the sound of each note of the given instrument. An instrument manager provided on each device is a file manager that manages the instrument files. For example, the instrument manager shows a list of currently installed instruments to the user and allows the user to delete or rename instruments. The user can add new instruments, either by defining a note set himself, as will be described in greater detail below, or by downloading via the network or via a PC link.
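By way of illustration only, such an instrument manager might be realized as a simple file manager along the following lines; the ".ins" extension and the directory layout are assumptions made for the sketch.

```python
import os

class InstrumentManager:
    """Manages instrument files stored in a single directory (sketch)."""

    def __init__(self, directory):
        self.directory = directory

    def list_instruments(self):
        # Each instrument is one file; strip the extension for display.
        return [f[:-4] for f in os.listdir(self.directory)
                if f.endswith(".ins")]

    def rename(self, old, new):
        os.rename(os.path.join(self.directory, old + ".ins"),
                  os.path.join(self.directory, new + ".ins"))

    def delete(self, name):
        os.remove(os.path.join(self.directory, name + ".ins"))
```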
The portable electronic device 70 comprises music playing capability for playing music from electronic storage, from a communication channel or from user input, as explained above. It also includes a grouping capability which allows the device to be grouped with other devices over a communication channel. For the purposes of playing music the devices are preferably able to synchronize themselves over the communication channel. As a result there is formed a group playing capability, which enables the individual devices to play music from the communication channel together with music from the user input in a synchronized manner. The input made at the local device is then transmitted to the other devices in the group. Due to latency in the transmission channel, the input at a given device is not available to the other devices until the beginning of the next cycle, so that the group members cannot hear the results of the group session until later. Nevertheless, since all the players are synchronized, play loop-based music, and play in the same scale and time base, the effect of a band or orchestra can be obtained, albeit not in real time. Using the group playing capability, each group member can listen to a background track and play along with it, and his input can then be added to the input of the other members of the group to form a compilation. The group synchronization signals can be used during compilation to ensure that the compilation observes correct timing, but may not be needed except for new tracks.
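By way of illustration only, the one-cycle delay may be pictured as follows: input entered during cycle k is heard locally at once, transmitted at the cycle boundary, and merged into every device's mix at cycle k+1, so the channel latency is hidden as long as it is under one cycle. The Python sketch below assumes an abstract send callback and a simple event representation.

```python
class GroupPlayer:
    """Sketch of cycle-delayed sharing of user input within a group."""

    def __init__(self, send):
        self.send = send        # callback transmitting events to the group
        self.local = []         # events entered during the current cycle
        self.incoming = []      # peer events received during the current cycle

    def key_pressed(self, tick, note):
        self.local.append((tick, note))    # heard locally right away

    def receive(self, events):
        self.incoming.extend(events)       # peers' cycle-k input, in transit

    def on_cycle_end(self):
        self.send(self.local)              # peers hear this during cycle k+1
        self.local = []
        playable, self.incoming = self.incoming, []
        return playable                    # merge into the next cycle's mix
```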
One of the devices in the group is set up as a master. The master device is the device that controls synchronization and is the default device for setting the background music or for making any other settings that are needed for the group session. The master device invites the other devices to participate in the group or the other devices apply to join by calling the master or by calling a preset number. Technically the group session is similar to setting up a conference call and thus any of the techniques available for conference calls are available to the skilled person to enable the group session. For example the support necessary for the group can be provided on a dedicated server such as server 76, or from one of the devices themselves. The other devices operate in the session as slaves, responding to the synchronization and other signals set by the master. In the group session the master preferably defines what item is to be played, what parts the different group members are to play and in addition acts as conductor, bringing in the various parts as required. It is of course possible that one device can be the master and yet assign tasks such as conducting to other group members as desired. Alternatively a freeform version could be used in which a conductor is dispensed with.
Preferably, the devices can set themselves into a do-not-disturb mode so that they cannot be called during a concert.
In a preferred embodiment, the portable electronic device, which typically has a display screen, supports a representation capability for showing other group members, or players, as icons. The icons may for example show the instrument assigned to the given player. The icons can be animations and may for example indicate activity of the given group member. Thus a group member whose part does not require him to play at a given time may be shown inactive. Active members may be shown playing their assigned instruments etc. Outside of group activity, animations may be used to dance according to a currently set musical style or indicate a rhythm, and the like.
In one preferred embodiment, the portable device comprises a feedback unit, or personal tutor, for analyzing the user input to provide the user with feedback on his musical playing. The feedback unit may be incorporated in the portable device as such or may be available over the network, say via the PC link or network link. The feedback unit may compare the notes played by the user to a target sequence or may comment on the timing, tempo, scale and the like. In one embodiment the feedback unit achieves this by analyzing the user input as an event stream; analyzing an event stream is well within the capabilities of the limited resources of the mobile device, whereas analyzing audio waveforms is more difficult and probably requires at least a PC. Analysis may be as simple as comparing the user's note sequence to a target note sequence provided along with the song file. The target note sequence may have been deliberately provided in order to teach the user to play a specific song. In an alternative, more complex embodiment, analysis may involve checking the "musical correctness" of the user's autonomous creation. In either case the user can be graded according to his performance, along with textual commentary. The feedback unit preferably gives the user visual or audio feedback in real time, and in one preferred embodiment the user's performance can in fact be corrected before the newly added user part is added to the looped sequence, for example using time quantization or pitch quantization to scale degrees.
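By way of illustration only, the two corrections just mentioned, time quantization and pitch quantization to scale degrees, might look as follows in Python; the tick resolution, MIDI note numbers and the major scale are assumptions made for the sketch.

```python
MAJOR = [0, 2, 4, 5, 7, 9, 11]       # scale degrees as semitone offsets

def quantize_time(tick, grid=120):
    """Snap a note's start time to the nearest grid point."""
    return int(round(tick / grid)) * grid

def quantize_pitch(pitch, root=60, scale=MAJOR):
    """Snap a MIDI pitch to the nearest degree of the given scale."""
    octave, semitone = divmod(pitch - root, 12)
    nearest = min(scale, key=lambda s: abs(s - semitone))
    return root + 12 * octave + nearest
```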
It is pointed out that the personal tutor is relevant to any of the embodiments herein, not just to the group-playing feature.
The training session can take the form of a game: the machine plays a sequence, and the user has to repeat the sequence based on hearing and on visual content, that is, notes and other representations. Whenever the user succeeds, he is allowed to move on to the next sequence. Whenever he fails, he receives a message with tips for improvement and gets the same sequence again. The whole process happens while background music plays continuously, and the training itself creates one long piece of music comprising a combination of the machine and user sequences. There is no need to stop during the session; the entire process is carried out "on-the-fly".
For evaluation purposes of the user sequence against the target sequence, two alternative embodiments are provided:
Calculation of Total Error Energy Compared to Total Signal Energy:
If we compare the user sequence to the target sequence at a given instant, we can compare the set of notes that we expect to be playing at that instant with the notes actually playing. The size of the difference between the expected and actual sets is referred to as the "instantaneous error". Integrating the instantaneous error over time yields a metric which represents the total sequence error. We can then divide the total signal energy (the integral of the size of the target set over time) by the sum of the signal energy and the error energy, to get a number between 0 and 1 which represents the "similarity" between the target sequence and the user sequence. In practice, calculating such an integral reduces to a simple sum, so implementation is relatively easy.
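By way of illustration only, the calculation may be sketched as follows, with each sequence represented as a mapping from time step to the set of notes sounding at that step (the discrete form to which the integrals reduce):

```python
def similarity(target, actual, steps):
    """Return a score in [0, 1]; 1.0 means the sequences match exactly."""
    signal = 0
    error = 0
    for t in range(steps):
        expected = target.get(t, set())
        played = actual.get(t, set())
        signal += len(expected)            # size of the target set
        error += len(expected ^ played)    # instantaneous error (set difference)
    return signal / (signal + error) if (signal + error) else 1.0
```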
Calculation of a Minimum Edit Distance:
This method is more complex than the first but provides the ability to give feedback to the user, hence giving the user better insight into the reason for the error. The method involves computing a so-called "minimum edit distance" using the Levenshtein algorithm, disclosed in U.S. Pat. No. 6,073,099, filed Nov. 4, 1997, the contents of which are hereby incorporated by reference. In the present case the string characters for the algorithm are musical notes, each note having three attributes: pitch, start time, and duration. The metrics to be used are the cost of adding an extra note, the cost of skipping a note, and the distance between two notes, computed by a formula which takes into account all three attributes of the note, with a different weight for each of them. Generally, timing errors are given much less weight than pitch errors. If the minimum edit distance is too large, the system concludes that the user played the wrong sequence of notes and provides feedback to guide him to play the correct notes; the system is able to show him exactly where he went wrong. If the edit distance is fairly small, the system checks the total timing error by accumulating the square of the difference in time/duration between each two paired notes: the target note and the actual note. The pairing to be used is a by-product of the Levenshtein algorithm: each two notes that are considered for a replacement operation are a pair. We may then provide the user with feedback on his timing and duration accuracy, e.g. output a message stating "You played the right notes, but please pay more attention to tempo".
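By way of illustration only, the note-level minimum edit distance may be computed with the standard dynamic-programming recurrence, as in the following Python sketch; the particular weights (pitch weighted far more heavily than timing) are assumptions made for the sketch.

```python
def note_distance(a, b, w_pitch=1.0, w_time=0.1, w_dur=0.05):
    """Distance between notes a and b, each a (pitch, start, duration) tuple."""
    return (w_pitch * abs(a[0] - b[0])
            + w_time * abs(a[1] - b[1])
            + w_dur * abs(a[2] - b[2]))

def min_edit_distance(target, played, skip_cost=1.0, extra_cost=1.0):
    """Minimum edit distance between target and played note sequences."""
    n, m = len(target), len(played)
    d = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        d[i][0] = i * skip_cost            # user skipped notes of the target
    for j in range(1, m + 1):
        d[0][j] = j * extra_cost           # user played extra notes
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d[i][j] = min(d[i - 1][j] + skip_cost,
                          d[i][j - 1] + extra_cost,
                          d[i - 1][j - 1]
                          + note_distance(target[i - 1], played[j - 1]))
    return d[n][m]
```

The note pairs considered at the replacement (diagonal) step are exactly the pairing used afterwards for accumulating the timing error.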
Reference is now made to
Reference is now made to
The system illustrated in
Reference is now made to
In another embodiment, samples are taken from the environment, say using an autonomous agent which records randomly, and are analyzed for musical characteristics. The samples may subsequently be built into personalized musical compositions by a process involving quantization of the sample, use of a background, and playing of the samples. Gathered samples can either be played as background music or be used as individual notes, which are fed to a synthesizer that generates audio in response to note events. Such a concept is commonly referred to as wavetable synthesis.
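By way of illustration only, the wavetable idea of sounding any note from a single recorded clip may be sketched by resampling; numpy and the equal-temperament pitch ratio are used here purely for the sketch.

```python
import numpy as np

def render_note(sample, note_pitch, base_pitch=60):
    """Resample a clip recorded at base_pitch so it sounds at note_pitch."""
    ratio = 2 ** ((note_pitch - base_pitch) / 12.0)   # equal temperament
    positions = np.arange(0, len(sample) - 1, ratio)  # read positions in clip
    return np.interp(positions, np.arange(len(sample)), sample)
```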
It will be appreciated that the two embodiments, namely pattern generation on the one hand and wavetable synthesis on the other hand, can be used together by for example playing the patterns of the previous embodiment using the resulting synthesizer.
Subsequently, use of artificial intelligence can allow the generation of new patterns.
Generated patterns can be used as building blocks for the creation of music. Similarly to the mechanism described above, where the user plays the actual notes in each track, the user can have more high-level control of the track contents by filling them with generated patterns rather than playing the notes himself. For example, the user sets track no. 3 as the active track and presses one of the keypad keys, which in turn causes the contents of track 3 to be cleared and replaced with a newly generated pattern, as sketched below. Different keys may influence the pattern generation algorithm; for example, a specific algorithm may produce a pattern that is more "sad" or "happy" in response to pressing different keys.
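By way of illustration only, such key-driven pattern filling might look as follows in Python; the scale, the tick grid and the way a key biases the generator are assumptions made for the sketch.

```python
import random

def fill_with_pattern(track, key, loop_len=1920, step=240):
    """Replace the track's contents with a pattern generated from the key."""
    scale = [0, 2, 4, 5, 7, 9, 11]      # semitone offsets of a major scale
    rng = random.Random(key)            # same key -> same family of patterns
    span = 4 + (key % 3) * 2            # different keys widen the note range
    track.clear()                       # cleared, as described above
    for tick in range(0, loop_len, step):
        track.append((tick, 60 + rng.choice(scale[:span])))
```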
Reference is now made to
Modes 120 and 122 are standalone modes, for which the portable electronic device does not need to be communication enabled. In addition there are four modes which do require communication ability. The first is a mode 126 in which the device is able to download data from a computer and upload data to the computer, say via a USB port or other suitable link; the data may typically be a music file. Mode 128 is a mode for communicating via a network such as the Internet. Mode 130 is for group playing as the master device, which involves setting up the group, selecting the song to be played and assigning parts to the other users 131. Mode 132 involves group playing as a slave. Both modes 130 and 132 involve synchronizing and sending synchronized information 134.
Reference is now made to
Reference is now made to
Reference is now made to
Reference is now made to
At the end of the band session a stop packet is preferably sent to free all the slave devices from the session.
Reference is now made to
Pattern generation algorithms are preferably held in a pattern generation algorithm store 186. These too may be obtained via network sharing.
A composition system 188 makes use of an artificial intelligence pattern generator 190, and/or composition control data from a user 192. The composition system also makes use of available instruments from instrument store 194 and produces compositions. The compositions are placed in composition store 196.
Composition store 196 may contain compositions from the composition system 188 or in addition, compositions obtained externally from the network or the like.
Essentially,
As illustrated in
It is expected that during the life of this patent many relevant portable electronic devices, cellular devices and systems will be developed and the scopes of the terms herein, particularly of the terms “portable electronic device”, “personal digital assistant” or “PDA” and “communication channel”, are intended to include all such new technologies a priori.
It is appreciated that certain features of the invention, which are, for clarity, described in the context of separate embodiments, may also be provided in combination in a single embodiment. Conversely, various features of the invention, which are, for brevity, described in the context of a single embodiment, may also be provided separately or in any suitable subcombination.
Although the invention has been described in conjunction with specific embodiments thereof, it is evident that many alternatives, modifications and variations will be apparent to those skilled in the art. Accordingly, it is intended to embrace all such alternatives, modifications and variations that fall within the spirit and broad scope of the appended claims. All publications, patents and patent applications mentioned in this specification are herein incorporated in their entirety by reference into the specification, to the same extent as if each individual publication, patent or patent application was specifically and individually indicated to be incorporated herein by reference. In addition, citation or identification of any reference in this application shall not be construed as an admission that such reference is available as prior art to the present invention.
Number | Date | Country | Kind |
---|---|---|---|
165817 | Dec 2004 | IL | national |
This application is a divisional application of U.S. patent application Ser. No. 11/031,027 filed on Jan. 7, 2005 and claims priority to an application entitled “ELECTRONIC MUSIC ON HAND PORTABLE AND COMMUNICATION ENABLED DEVICES”, filed in the Israel Patent Office on Dec. 16, 2004 and assigned Ser. No. 165817, the entire contents of which are incorporated herein by reference.
Number | Name | Date | Kind |
---|---|---|---|
4257306 | Laflamme | Mar 1981 | A |
4294154 | Fujisawa | Oct 1981 | A |
4412473 | Laflamme | Nov 1983 | A |
4519044 | Munetsugu | May 1985 | A |
4982643 | Minamitaka | Jan 1991 | A |
5054360 | Lisle et al. | Oct 1991 | A |
5151873 | Hirsh | Sep 1992 | A |
5553220 | Keene | Sep 1996 | A |
5646648 | Bertram | Jul 1997 | A |
5689641 | Ludwig et al. | Nov 1997 | A |
6067566 | Moline | May 2000 | A |
6069310 | James | May 2000 | A |
6075998 | Morishima | Jun 2000 | A |
6094587 | Armanto et al. | Jul 2000 | A |
6140565 | Yamauchi et al. | Oct 2000 | A |
6143973 | Kikuchi | Nov 2000 | A |
6175872 | Neumann et al. | Jan 2001 | B1 |
6353174 | Schmidt et al. | Mar 2002 | B1 |
6501967 | Makela et al. | Dec 2002 | B1 |
6640241 | Ozzie et al. | Oct 2003 | B1 |
6653545 | Redmann et al. | Nov 2003 | B2 |
6751439 | Tice et al. | Jun 2004 | B2 |
6803511 | Mizuno | Oct 2004 | B2 |
6872878 | Ono et al. | Mar 2005 | B2 |
6898729 | Virolainen et al. | May 2005 | B2 |
6953887 | Nagashima et al. | Oct 2005 | B2 |
6969794 | Suzuki | Nov 2005 | B2 |
7009942 | Fujimori et al. | Mar 2006 | B2 |
7012185 | Holm et al. | Mar 2006 | B2 |
7050462 | Tsunoda et al. | May 2006 | B2 |
7069058 | Kawashima | Jun 2006 | B2 |
7071403 | Chang | Jul 2006 | B2 |
7129408 | Uehara | Oct 2006 | B2 |
7167725 | Nakamura et al. | Jan 2007 | B1 |
7189911 | Isozaki | Mar 2007 | B2 |
7196260 | Schultz | Mar 2007 | B2 |
7197149 | Mita et al. | Mar 2007 | B1 |
7233659 | Davis et al. | Jun 2007 | B1 |
7259311 | Ashida | Aug 2007 | B2 |
7297858 | Paepcke | Nov 2007 | B2 |
7405355 | Both et al. | Jul 2008 | B2 |
7518051 | Redmann | Apr 2009 | B2 |
7649136 | Uehara | Jan 2010 | B2 |
7714222 | Taub et al. | May 2010 | B2 |
7758427 | Egozy | Jul 2010 | B2 |
20010047717 | Acki et al. | Dec 2001 | A1 |
20020066358 | Hasegawa et al. | Jun 2002 | A1 |
20020073827 | Gaudet | Jun 2002 | A1 |
20020107803 | Lisanke et al. | Aug 2002 | A1 |
20020197993 | Cho et al. | Dec 2002 | A1 |
20030013497 | Yamaki et al. | Jan 2003 | A1 |
20030027591 | Wall | Feb 2003 | A1 |
20030128834 | Laine | Jul 2003 | A1 |
20030133700 | Uehara | Jul 2003 | A1 |
20030164084 | Redmann et al. | Sep 2003 | A1 |
20036621903 | Oda | Sep 2003 | |
20040055443 | Nishitani et al. | Mar 2004 | A1 |
20040123726 | Kato et al. | Jul 2004 | A1 |
20040142680 | Jackson et al. | Jul 2004 | A1 |
20040154460 | Virolainen et al. | Aug 2004 | A1 |
20040154461 | Havukainen et al. | Aug 2004 | A1 |
20040173082 | Bancroft et al. | Sep 2004 | A1 |
20040176025 | Holm et al. | Sep 2004 | A1 |
20040264391 | Behboodian et al. | Dec 2004 | A1 |
20050107075 | Snyder | May 2005 | A1 |
20050107128 | Deeds | May 2005 | A1 |
20050150362 | Uehara | Jul 2005 | A1 |
20060011044 | Chew | Jan 2006 | A1 |
20060027080 | Schultz | Feb 2006 | A1 |
20060079213 | Herberger et al. | Apr 2006 | A1 |
20060085343 | Lisanke et al. | Apr 2006 | A1 |
20060105818 | Andert et al. | May 2006 | A1 |
20060112814 | Paepcke | Jun 2006 | A1 |
20060123976 | Both et al. | Jun 2006 | A1 |
20060137513 | Billon et al. | Jun 2006 | A1 |
20060180006 | Kim | Aug 2006 | A1 |
20060230908 | Lee et al. | Oct 2006 | A1 |
20060230909 | Song et al. | Oct 2006 | A1 |
20060230910 | Song et al. | Oct 2006 | A1 |
20070012167 | Bang et al. | Jan 2007 | A1 |
20070026844 | Watanabe | Feb 2007 | A1 |
20070028750 | Darcie et al. | Feb 2007 | A1 |
20070039449 | Redmann | Feb 2007 | A1 |
20070140510 | Redmann | Jun 2007 | A1 |
20070186750 | Zhang | Aug 2007 | A1 |
20070199432 | Abe et al. | Aug 2007 | A1 |
20070283799 | Carruthers et al. | Dec 2007 | A1 |
20080047415 | Schultz | Feb 2008 | A1 |
20080060499 | Sitrick | Mar 2008 | A1 |
Number | Date | Country |
---|---|---|
1262951 | Dec 2002 | EP |
05-027750 | Feb 1993 | JP |
2615721 | Mar 1997 | JP |
11-175061 | Jul 1999 | JP |
11-352969 | Dec 1999 | JP |
2000-029463 | Jan 2000 | JP |
2001-142388 | May 2001 | JP |
2001-195067 | Jul 2001 | JP |
2001-203732 | Jul 2001 | JP |
2001236066 | Aug 2001 | JP |
2002-156982 | May 2002 | JP |
2002-200338 | Jul 2002 | JP |
2003-208169 | Jul 2003 | JP |
2004-341385 | Dec 2004 | JP |
1020010016009 | Mar 2001 | KR |
1020040048470 | Jun 2004 | KR |
WO0120594 | Mar 2001 | WO |
WO02077585 | Oct 2002 | WO |
Number | Date | Country | |
---|---|---|---|
20100218664 A1 | Sep 2010 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 11031027 | Jan 2005 | US |
Child | 12719660 | US |