1. Technical Field of the Invention
The invention relates generally to processing and outputting of media; and, more particularly, it relates to processing audio based on one or more characteristics associated therewith, thereby enabling and providing an enhanced perceptual experience for a user.
2. Description of Related Art
Various systems and/or devices operate to output media for user consumption. For example, various systems and/or devices can playback media (e.g., video, audio, etc.) for or as directed by a user. For example, any of a variety of devices (e.g., televisions, media players, audio players, etc.) may be employed for playing back media for enjoyment and consumption by a user.
Such media may come from any of a variety of sources (e.g., from the Internet, from a content service provider [such as a cable, satellite, etc. service provider], from a local source [such as a memory storage device, a CD, a DVD, etc.], from some combination thereof, etc.). While there has been a great deal of effort to provide an improved user experience in regards to media consumption for many years, there seems to be an ever-increasing demand for even greater improvements in the art.
The present invention is directed to apparatus and methods of operation that are further described in the following Brief Description of the Several Views of the Drawings, the Detailed Description of the Invention, and the claims. Other features and advantages of the present invention will become apparent from the following detailed description of the invention made with reference to the accompanying drawings.
A variety of devices and communication systems may operate using signals that include media content. In some embodiments, a media signal is a video signal (e.g., including both video and audio components), and in others, such a media signal may be an audio signal with no corresponding or associated video signal component. In accordance with playback of such a media signal, an associated audio signal can be output in such a manner as to enhance the perceptual experience of a user (e.g., a viewer such as in the case of a video signal, a listener such as in the case of an audio signal, etc.).
Information employed to modify or enhance an audio signal may come from any of a variety of sources. For example, in some embodiments, the information is extracted from the corresponding or associated video signal component itself. The video signal undergoes processing to identify certain characteristic(s) thereof. Information related to the type of scene depicted in one or more frames of a video signal (e.g., being indoors, outdoors [e.g., as ascertained by a sky region], within a concert hall, large room, small room, etc.) can be extracted from the video signal using various recognition means. For example, a characteristic associated with the media signal may be determined based on image information associated with one or more frames of the video signal, and such a characteristic may be used to modify the audio signal portion of a media signal and/or direct the manner by which the audio signal is output. Such image information may correspond to any one or more (any combination) of a color, a contrast, a brightness, a background, a foreground, an object, an object location, a change of the color, a change of the contrast, a change of the brightness, a change of the background, a change of the foreground, a change of the object, and a change of the object location, etc.
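The extraction of such image information from a frame may be illustrated with a minimal sketch. The following is an illustrative example only, assuming a frame represented as rows of (R, G, B) pixel tuples; the function name and the contrast measure are assumptions for illustration, not a specific implementation of the invention.

```python
# Hypothetical sketch: extract simple image characteristics
# (brightness, contrast) from one video frame, where a frame is a
# list of rows of (R, G, B) pixel tuples.

def frame_characteristics(frame):
    """Return mean brightness and a simple contrast measure for a frame."""
    # BT.601-style luma weighting of the color components.
    luma = [0.299 * r + 0.587 * g + 0.114 * b
            for row in frame for (r, g, b) in row]
    mean = sum(luma) / len(luma)
    # Contrast approximated as the spread between extremes.
    contrast = max(luma) - min(luma)
    return {"brightness": mean, "contrast": contrast}

# A tiny 2x2 frame: one dark pixel, three bright pixels.
frame = [[(0, 0, 0), (255, 255, 255)],
         [(200, 200, 200), (255, 255, 255)]]
stats = frame_characteristics(frame)
```

Characteristics such as these, or changes thereof between frames, could then serve as inputs for identifying audio playback parameters.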
Information may also be attained in accordance with certain operations performed on a video signal. For example, information may be extracted from a video signal in accordance with 2D to 3D conversion of the video signal. As one example of a specific type of information attained in 2D to 3D conversion, depth information can be extracted from 2D video and used to render two independent views representing a stereo pair (e.g., thereby generating 3D video). This stereo video pair is then viewed by the audience, who perceives the content as 3D. Video features such as this depth information can also be used to enhance the audio soundtrack in several novel ways. It is also noted that other processing operations of video may also provide various types of information that may be used, at least in part, to assist in the modification or enhancement of an audio signal and/or the playback thereof.
In another embodiment, information employed to modify or enhance an audio signal may come from meta data associated with the media signal. Generally speaking, meta data can include information that is descriptive of and related to the data or content itself. For example, depending on the type of media, associated meta data can differ in type and the type of information included. For example, with respect to an audio type of media signal, associated meta data may include information related to title, artist, album, year, track number, genre, publisher, composer, locale of performance (e.g., studio, concert hall [live performance]) etc. In the instance of the media being of a video type, associated meta data may include information related to title, soundtrack type and/or format, production company, actors or characters, location(s) of production, etc. Of course, additional information may be included within such meta data as well in certain embodiments. There are also a variety of other types of information that may be included within (embedded within) or accompany a signal such as electronic program guide information, closed captioning information, tele-text, etc. Such additional information may also be extracted from a signal to assist in the modification or enhancement of an audio signal and/or the playback thereof.
In even other embodiments, information employed to modify or enhance an audio signal may come from an external source (e.g., such as a database including information related to media [video and/or audio], etc.). Generally speaking, any of a variety of types of information may be employed to assist in the modification or enhancement of an audio signal and/or the playback thereof.
Regardless of the particular source of the information used for the modification or enhancement of an audio signal and/or the playback thereof, such information may be employed to modify the audio signal and/or the playback thereof to augment and enhance performance thereof. In some instances, the audio signal itself is modified to generate a modified audio signal that, when played back, will effectuate the modified acoustics via an output device in accordance with the manner in which it has been modified. In other instances, real time adjustment of an audio output device is performed during the playback of the audio signal. For example, real time adjustment of one or more settings of an audio processor (e.g., such as an audio digital signal processor (DSP)) may be made to effectuate an enhanced or improved user experience during playback of the media signal.
For example, a multi-channel audio signal (e.g., a movie soundtrack) may be employed to provide a user experience that sound is emanating from a variety of directions. However, not all audio signals are multi-channel. Some multimedia content has only a 2-channel stereo audio signal associated therewith, or even a single-channel mono audio signal associated therewith. This can be because of a variety of reasons, including bandwidth limitations, the media signal not originally being created with multi-channel audio signaling therein, etc.
In one possible embodiment, 3D effects may be added to an audio signal based on, and coordinated with, depth cues extracted from the video. Real time depth information extracted from a video signal can be used to control audio processing (e.g., equalizer setting, matrixing of mono/stereo to multiple channels, fading, balancing, parametric control of audio digital signal processor (DSP) effects, etc.) to effectuate a perceptually enhanced user experience. An audio DSP can be employed to add various effects such as reverb or chorus effects. The parameters for these effects can be tied to the various characteristics (e.g., such as depth information described with respect to one embodiment). For example, if there is a wide range of depth in the video, then the reverb level can be increased to simulate a more cavernous chamber.
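The mapping of depth cues to a reverb parameter described above may be sketched as follows. This is a minimal sketch under stated assumptions: depth is given per pixel as normalized values in [0.0, 1.0], and the linear mapping of depth range to reverb level is illustrative only, not a specific DSP parameterization.

```python
# Hypothetical sketch: control a reverb level from the range of
# depth values extracted from a video frame (0.0 = near, 1.0 = far).

def reverb_level_from_depth(depth_map, max_reverb=1.0):
    """Map the range of depths in a frame to a reverb level: a wide
    depth range suggests a cavernous space, so more reverb."""
    values = [d for row in depth_map for d in row]
    depth_range = max(values) - min(values)
    return max_reverb * depth_range

shallow_scene = [[0.40, 0.45], [0.42, 0.44]]   # little depth variation
deep_scene = [[0.05, 0.90], [0.10, 0.95]]      # wide depth range

low = reverb_level_from_depth(shallow_scene)
high = reverb_level_from_depth(deep_scene)
```

In an actual embodiment, such a value might be supplied as a parametric control to an audio DSP effect, updated in real time as the depth content of the video changes.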
Such audio processing can be performed in real time during playback of a media signal (e.g., by real time adjustment of an audio output device). Alternatively, such processing may actually modify an audio signal so that when that modified signal undergoes playback (either presently or at a future time), such audio processing effects made to the original audio signal are realized. Such a modified audio signal may be stored in a storage device (e.g., memory, a hard disk drive (HDD), etc.) and/or be provided via a communication link to at least one other device for storage therein. Of course, the modified signal may undergo playback immediately without any intervening storage thereof.
Many different types of devices and/or components may be employed to store such a modified signal (e.g., memory, a hard disk drive (HDD), etc.), and/or the modified signal may be provided via a communication link to at least one other device for storage therein.
Within many devices that use digital media such as digital video (which can include image, video, and audio information), digital audio, etc., such media may be communicated from one location or device to another within various types of communication systems. Within certain communication systems, digital media can be transmitted from a first location to a second location at which such media can be output, played back, displayed, etc.
Generally speaking, the goal of digital communications systems, including those that operate to communicate digital video, is to transmit digital data from one location, or subsystem, to another either error free or with an acceptably low error rate. As shown in
Referring to
In some embodiments, one or both of the communication devices 110 and 120 may only include a media device. For example, the communication device 110 may include a media device 119, and/or the communication device 120 may include a media device 129. Such a media device as described herein may include a device operative to process a media signal and/or output a media signal (e.g., video and/or audio components thereof).
In certain embodiments, a media device may not be included within communication device 110 or communication device 120. For example, a media device may be connected and/or coupled to either of the communication device 110 or communication device 120. Also, in some embodiments, such a media device can be viewed as including or being connected and/or coupled to a display (e.g., a television, a computer monitor, and/or any other device that includes some component for outputting video and/or image information for consumption by a user, etc.) and/or an audio player (e.g., a speaker, two or more speakers, a set of speakers such as in a surround sound audio system, a home theater audio system, etc.). With respect to this diagram, at each end of a communication channel, a media device may be implemented to perform processing, outputting, etc. of a media signal.
In certain embodiments, either of the communication devices 110 and 120 may only include a transmitter or a receiver. There are several different types of media by which the communication channel 199 may be implemented (e.g., a satellite communication channel 130 using satellite dishes 132 and 134, a wireless communication channel 140 using towers 142 and 144 and/or local antennae 152 and 154, a wired communication channel 150, and/or a fiber-optic communication channel 160 using electrical to optical (E/O) interface 162 and optical to electrical (O/E) interface 164). In addition, more than one type of media may be implemented and interfaced together thereby forming the communication channel 199.
To reduce transmission errors that may undesirably be incurred within a communication system, error correction and channel coding schemes are often employed. Generally, these error correction and channel coding schemes involve the use of an encoder at the transmitter end of the communication channel 199 and a decoder at the receiver end of the communication channel 199.
Any of various types of ECC codes described can be employed within any such desired communication system (e.g., including those variations described with respect to
Generally speaking, when considering a communication system in which a media signal may be communicated from one location, or subsystem, to another, video data encoding may generally be viewed as being performed at a transmitting end of the communication channel 199, and video data decoding may generally be viewed as being performed at a receiving end of the communication channel 199.
Also, while the embodiment of this diagram shows bi-directional communication being capable between the communication devices 110 and 120, it is of course noted that, in some embodiments, the communication device 110 may include only video data encoding capability, and the communication device 120 may include only video data decoding capability, or vice versa (e.g., in a uni-directional communication embodiment such as in accordance with a video broadcast embodiment).
Referring to the communication system 200 of
Within each of the transmitter 297 and the receiver 298, any desired integration of various components, blocks, functional blocks, circuitries, etc., therein may be implemented. For example, this diagram shows a processing module 280a as including the encoder and symbol mapper 220 and all associated, corresponding components therein, and a processing module 280b is shown as including the metric generator 270 and the decoder 280 and all associated, corresponding components therein. Such processing modules 280a and 280b may be respective integrated circuits. Of course, other boundaries and groupings may alternatively be performed without departing from the scope and spirit of the invention. For example, all components within the transmitter 297 may be included within a first processing module or integrated circuit, and all components within the receiver 298 may be included within a second processing module or integrated circuit. Alternatively, any other combination of components within each of the transmitter 297 and the receiver 298 may be made in other embodiments.
As with the previous embodiment, such a communication system 200 may be employed for the communication of video data from one location, or subsystem, to another (e.g., from transmitter 297 to the receiver 298 via the communication channel 299).
Within the receiver 298, a media device 228 may be included therein. In other embodiments, a media device may not be included within receiver 298. For example, a media device may be connected and/or coupled to the receiver 298. Also, in some embodiments, such a media device can be viewed as including or being connected and/or coupled to a display (e.g., a television, a computer monitor, and/or any other device that includes some component for outputting video and/or image information for consumption by a user, etc.) and/or an audio player (e.g., a speaker, two or more speakers, a set of speakers such as in a surround sound audio system, a home theater audio system, etc.). With respect to this diagram, at the receiver end of the communication channel 299, a media device may be implemented to perform processing, outputting, etc. of a media signal.
Processing of media signals (including the respective images within a digital video signal, the audio signal component thereof, etc.) may be performed by any of the various devices depicted below in
Based on the characteristic associated with a media signal, an audio processor 410 (e.g., such as an audio digital signal processor (DSP)) is operative to identify one or more audio playback parameters for use in playback of the audio signal to effectuate an audio effect corresponding to the characteristic. For example, a characteristic associated with a media signal may be particularly associated with a video signal component thereof. In an embodiment in which the media signal is a video signal, the video signal may include a number of frames therein, and the characteristic associated with the media signal may be image information associated with one or more of the frames of the video signal. Such a characteristic may relate to any of a variety of aspects associated with or corresponding to one or more frames of a video signal, including image information corresponding to any one of or any combination of a color, a contrast, a brightness, a background, a foreground, an object, an object location, a change of the color, a change of the contrast, a change of the brightness, a change of the background, a change of the foreground, a change of the object, and a change of the object location, etc., as may be determined from or related to one or more frames of a video signal.
Any one or any combination of audio playback parameters may be determined by the audio processor 410 for use in playback of the media signal and/or to modify the media signal itself (e.g., in accordance with generating a modified media signal). That is to say, examples of an audio playback parameter include, but are not limited to, a balance parameter, a fader parameter, an equalizer parameter, an audio effect parameter, a speaker parameter, a mono parameter, a stereo parameter, an audio high definition (HD) parameter, an audio three-dimensional (3D) parameter, and a surround sound parameter.
In addition, in some embodiments, an audio playback parameter may relate to and be used for playing back an audio portion of the media signal in accordance with an audio mode that is greater than the actual properties of the audio portion of the media signal. For example, a certain audio playback parameter may direct a single channel mono audio signal to be played back in accordance with a 2-channel stereo audio format. In another example, a certain audio playback parameter may direct a single channel mono audio signal or a 2-channel stereo audio format to be played back in accordance with a surround sound audio format (e.g., being a multi-channel audio format in which audio is selectively delivered to a number of speakers distributed around a given environment). In other words, based on at least one characteristic associated with a media signal, the audio processor 410 may identify an audio playback parameter to direct the playback of an audio portion of the media signal in accordance with an enhanced operational mode relative to the actual properties of the audio portion of the media signal itself.
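Directing a mono audio signal to be played back in a 2-channel stereo format may be sketched as follows. The passive duplication-with-gain approach here is a hedged illustration under stated assumptions, not the invention's specific upmixing method; samples are assumed to be normalized floating-point values.

```python
# Hypothetical sketch: play back a single-channel mono sample stream
# in a 2-channel stereo format, optionally weighting each channel
# (e.g., as a balance-like audio playback parameter might direct).

def mono_to_stereo(samples, gain_left=1.0, gain_right=1.0):
    """Duplicate a mono sample stream into (left, right) pairs,
    optionally weighting each channel."""
    return [(s * gain_left, s * gain_right) for s in samples]

mono = [0.0, 0.5, -0.5, 1.0]
stereo = mono_to_stereo(mono)
# Biasing playback toward the left channel:
left_biased = mono_to_stereo(mono, gain_left=1.0, gain_right=0.5)
```

A comparable matrixing step, with per-channel gains chosen from a characteristic of the media signal, could extend the same idea to a multi-channel surround format.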
As mentioned elsewhere herein, some embodiments are operative to direct the playback of an audio portion of a media signal in accordance with one or more audio playback parameters as determined based on at least one characteristic of the media signal. In other embodiments, the audio processor 410 is operative to modify the audio signal of the media signal in accordance with the characteristic thereby generating a modified media signal. For example, this may be viewed, from certain perspectives, as generating an entirely new media signal having different audio format properties than the original media signal. Such a modified media signal (and if desired, the original media signal), may be stored in a memory for use in subsequent playback (e.g., a memory, hard disk drive (HDD), or other storage means within the same device including the audio processor 410, or located remotely with respect to that device). The modified media signal then includes any appropriate audio playback parameter(s) embedded therein, so that when played back, the enhancements will be effectuated. If desired in some embodiments, the modified media signal may even undergo subsequent processing to identify even additional audio playback parameter(s) that could further enhance the playback thereof (e.g., such as in a multiple-iteration embodiment in which additional audio playback parameter(s) could be identified in subsequent processing therein).
For example, referring to the embodiment 500 of the
In some instances, the meta data associated with a media signal may be less than complete (e.g., providing some information associated therewith, but missing some information). As an example with respect to an audio type of media signal, perhaps the meta data associated therewith may include information related to title and artist, yet be deficient in failing to include information related to album, year, track number, and/or other meta data information. The meta data that is available could be used as a reference to identify the missing meta data from the database 520 to provide further details and characteristics associated with the media signal to assist more fully in the identification of one or more audio playback parameters for use in playing back an audio portion of the media signal.
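Completing partial meta data from such a database may be sketched as follows. This is an illustrative example only; the database contents, key choice, and field names are invented assumptions, not a specification of the database 520.

```python
# Hypothetical sketch: fill in missing meta data fields by using the
# available fields (here, title and artist) as a lookup key into a
# database of known records.

DATABASE = {
    ("Moonlight Sonata", "Beethoven"): {
        "album": "Piano Sonatas", "year": 1801, "genre": "classical",
    },
}

def complete_metadata(partial):
    """Merge a database record into partial meta data; fields already
    present in the partial meta data take precedence."""
    key = (partial.get("title"), partial.get("artist"))
    record = DATABASE.get(key, {})
    merged = dict(record)
    merged.update(partial)
    return merged

meta = complete_metadata({"title": "Moonlight Sonata", "artist": "Beethoven"})
```

The completed record could then supply characteristics (e.g., genre, year) that were unavailable in the original meta data.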
As one possible example regarding the use of meta data in accordance with determining at least one audio playback parameter, considering the genre of an audio signal (e.g., classical, country, rock, pop, etc.), a particular equalizer setting may be selected as one audio playback parameter based on the characteristic of genre. As another example, information within the meta data related to the artist of the audio signal may be used to select a particular equalizer setting (e.g., selecting an equalizer setting better suited for playback of pop music when the artist information from the meta data indicates a pop artist, selecting an equalizer setting better suited for playback of classical music when the artist information from the meta data indicates a classical composer, etc.). Also, information within the meta data may include information related to an environment in which the media was recorded or produced (e.g., studio recording [under very controlled conditions], live performance [such as in a stadium, concert hall, etc.], etc.). A respective audio playback parameter may relate to an equalizer setting better suited for the environment in which the audio signal portion was made (e.g., selecting a hall setting or live setting for the equalizer if the meta data indicates a live performance, selecting a studio equalizer setting if the meta data indicates a studio recording, etc.).
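The selection of an equalizer setting from meta data characteristics may be sketched as follows. The preset names and the precedence of environment over genre are illustrative assumptions, not a definitive mapping.

```python
# Hypothetical sketch: select an equalizer setting (one audio
# playback parameter) from meta data characteristics such as genre
# and recording environment.

EQ_PRESETS = {
    "classical": "concert",
    "rock": "loudness",
    "pop": "pop",
}
ENV_PRESETS = {
    "live": "hall",
    "studio": "studio",
}

def select_equalizer(meta):
    """Prefer an environment-based preset; fall back to a genre-based
    preset, then to a flat setting."""
    env = meta.get("environment")
    if env in ENV_PRESETS:
        return ENV_PRESETS[env]
    return EQ_PRESETS.get(meta.get("genre"), "flat")

setting = select_equalizer({"genre": "classical", "environment": "live"})
fallback = select_equalizer({"genre": "jazz"})
```

Here a live classical recording selects the hall preset, while an unrecognized genre falls back to a flat setting.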
For example, referring to the embodiment 600 of the
Somewhat analogously to other embodiments, the audio processor 610 of this embodiment 600 may process the media signal to generate a modified media signal (e.g., which may be stored in some storage means for future use, transmitted to another device for playback or storage therein, etc.).
In some instances, the one or more audio playback parameters may also be provided to a display 780 (e.g., a television, a computer monitor, and/or any other device that includes some component for outputting video and/or image information for consumption by a user, etc.) so that a video portion of the media signal may be output thereby while the audio portion of the media signal is output by the audio player 790 in accordance with the one or more audio playback parameters. Some examples of audio playback parameters include, but are not limited to, a balance parameter, a fader parameter, an equalizer parameter, an audio effect parameter, a speaker parameter, a mono parameter, a stereo parameter, an audio high definition (HD) parameter, an audio three-dimensional (3D) parameter, and a surround sound parameter.
For example, with respect to the image shown in the top portion of the diagram, various characteristics could be identified such as that the image depicts an outdoor image, that the environment is sunny and bright with a clear sky, and that the image depicts trees therein, etc. Various forms of pattern recognition may be employed to make such determinations regarding various aspects of an image. For example, with respect to a sky being determined as predominately blue, a determination may be made that the sky is largely cloudless. With respect to the intensity and color of the pixels of the sky, a determination may be made as to time of day (e.g., darker blue pixels indicating night, with lighter blue pixels indicating day, etc.). Based on such determinations (e.g., an outdoor environment, etc.), one possible audio playback parameter may be an equalizer setting well suited for such an environment (e.g., such as to depict a very voluminous and open environment, etc.) for a better perceptual experience of a user.
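The kind of pattern recognition described above may be sketched as follows. This is a minimal sketch under stated assumptions: a frame is given as rows of (R, G, B) pixel tuples, and the blue-dominance test with its threshold is an illustrative heuristic, not a specified recognition method.

```python
# Hypothetical sketch: classify a frame as a bright outdoor scene by
# the fraction of predominantly blue (sky-like) pixels in its upper
# region.

def looks_like_clear_sky(frame, blue_fraction_threshold=0.6):
    """Inspect the top half of a frame of (R, G, B) pixels and report
    whether it is dominated by blue (sky-like) pixels."""
    top = frame[: max(1, len(frame) // 2)]
    pixels = [p for row in top for p in row]
    blue_count = sum(1 for (r, g, b) in pixels if b > r and b > g)
    return (blue_count / len(pixels)) >= blue_fraction_threshold

sky_frame = [[(80, 120, 220), (90, 130, 230)],   # top row: blue sky
             [(40, 160, 40), (50, 150, 60)]]     # bottom row: green trees
indoor_frame = [[(120, 100, 80), (110, 90, 70)],
                [(100, 80, 60), (90, 70, 50)]]
```

A positive determination here could trigger, for example, the selection of an equalizer setting depicting a voluminous, open environment.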
For another example, with respect to the image shown in the bottom portion of the diagram, various characteristics could be identified such as that the image depicts a speaker located on a stage such as in a concert hall/theater. When considering different frames of the video signal, changes of one or more aspects of the image may also be employed as a characteristic. For example, one image may depict the speaker to be located on the left hand side thereof, while a subsequent image may depict the speaker to be located on the right hand side thereof. Based on the frame rate, and the number of frames that effectuate this transition of the speaker from the left hand side to the right hand side, a rate of movement of the speaker may also be determined. Analogously, depth of the speaker within various images may be used to ascertain movement of the speaker forward or backward on the stage as well.
Such characteristics as related to the location of the speaker, or the movement of the speaker, may be used to determine various audio playback parameters. For example, some possible audio playback parameters may include the adjustment of balance to correspond to the location of the speaker left or right, and dynamic adjustment thereof corresponding to the movement of the speaker across the stage, etc. Analogously, some additional possible audio playback parameters may include the adjustment of fader to correspond to the location of the speaker with regards to depth, and dynamic adjustment thereof corresponding to the movement of the speaker front and back on the stage, etc. Also, if a determination is made that the environment of the one or more images is in fact in a concert hall/theater, one possible audio playback parameter may be an audio equalizer setting well suited for such an environment (e.g., such as a hall setting, a clear setting, or live setting) for a better perceptual experience of a user. An audio equalizer setting may be one better suited for playback of spoken audio content (e.g., speech as opposed to music).
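Mapping the speaker's on-screen location and movement to a balance parameter may be sketched as follows. This is an illustrative example only: the linear position-to-balance mapping and the rate computation are assumptions, and the positions are hypothetical pixel coordinates.

```python
# Hypothetical sketch: map a depicted speaker's horizontal location
# to an audio balance parameter, and the speaker's movement across
# frames to a rate of balance change (a pan rate).

def balance_from_position(x, frame_width):
    """Map a horizontal pixel position to balance in [-1.0, +1.0]
    (-1 = full left, +1 = full right)."""
    return 2.0 * (x / frame_width) - 1.0

def pan_rate(x_start, x_end, frame_width, num_frames, frame_rate):
    """Balance change per second as the speaker crosses the stage."""
    delta = (balance_from_position(x_end, frame_width)
             - balance_from_position(x_start, frame_width))
    return delta / (num_frames / frame_rate)

# Speaker moves from the left quarter to the right quarter of a
# 1920-pixel-wide frame over 60 frames at 30 frames per second.
start_balance = balance_from_position(480, 1920)
end_balance = balance_from_position(1440, 1920)
rate = pan_rate(480, 1440, 1920, 60, 30)
```

An analogous computation on depth (rather than horizontal position) could drive a fader parameter for front-to-back movement on the stage.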
Of course, as described elsewhere herein, image information such as any one or more (any combination) of a color, a contrast, a brightness, a background, a foreground, an object, an object location, a change of the color, a change of the contrast, a change of the brightness, a change of the background, a change of the foreground, a change of the object, and a change of the object location, etc. may be employed as a characteristic for use in identifying one or more audio playback parameters for use in playback of an audio signal to effectuate an audio effect corresponding to the characteristic.
As may generally be understood, one or more characteristics associated with a media signal, regardless of the source or the manner by which they are generated, may be employed to identify one or more audio playback parameters for use in playing back an audio signal associated with the media signal in a modified manner. In some embodiments, such determination of one or more audio playback parameters is made in real time during processing of the media signal (or a portion thereof, such as associated meta data, video or image content thereof, etc.) and the one or more audio playback parameters are used to control the actual playback of the audio signal. In certain other embodiments, the determination of one or more audio playback parameters is made and the media signal itself undergoes modification thereby generating a modified media signal, so that when the modified media signal undergoes playback, the one or more audio playback parameters are already part thereof and will be realized.
Referring to method 900 of
The method 900 continues by operating an audio processor for identifying at least one audio playback parameter based on the characteristic for use in playback of the audio signal to effectuate an audio effect corresponding to the characteristic, as shown in a block 920. For one possible example, an audio effect associated with a characteristic of an image or video scene occurring outdoors could be effectuated by setting an equalizer setting of an audio player that is well suited for a voluminous, wide-open environment. Analogously, for yet another example, an audio effect associated with a characteristic of an image or video scene occurring indoors (such as in a cavernous environment) could be effectuated by increasing a reverb level to simulate a more cavernous chamber.
In some embodiments, the method 900 also operates by outputting the audio signal in accordance with the at least one playback parameter, as shown in a block 930. For example, an audio player including at least one speaker (e.g., a speaker, two or more speakers, a set of speakers such as in a surround sound audio system, a home theater audio system, etc.) may be used to playback the audio signal as directed by the at least one playback parameter.
Referring to method 901 of
The method 901 then operates by operating an audio processor for identifying at least one audio playback parameter based on the characteristic for use in playback of the audio signal to effectuate an audio effect corresponding to the characteristic, as shown in a block 921.
The method 901 continues by outputting the video signal while outputting the audio signal in accordance with the at least one playback parameter, as shown in a block 931. For example, in accordance with outputting a video signal (that includes both video/image information as well as an associated audio signal), both the video signal component and the audio signal component could be output in synchronization with each other, yet the audio signal component being modified and enhanced in accordance with the at least one playback parameter.
Referring to method 1000 of
In some embodiments, the method 1000 also operates by outputting the audio signal in accordance with the at least one playback parameter, as shown in a block 1030. For example, the
Referring to method 1001 of
The method 1001 then operates by outputting the modified media signal (and/or modified audio signal), as shown in a block 1021. That is to say, the modified media signal (and/or modified audio signal) may be output via a media and/or audio player such that the audio signal component thereof is modified and enhanced in accordance with the at least one playback parameter.
In some embodiments, the method 1001 also operates by storing the modified media signal (and/or modified audio signal) in between the operations of the blocks 1011 and 1021, as shown in a block 1031. Such storage could be made in a local storage device and/or a remotely located storage device. Examples of such storage devices include hard disk drives (HDDs), read-only memory (ROM), random access memory (RAM), volatile memory, non-volatile memory, static memory, dynamic memory, flash memory, and/or any device that stores digital information.
It is noted that the various modules and/or circuitries (e.g., encoding modules and/or circuitries, decoding modules and/or circuitries, audio processors, processing modules, etc.) described herein may be a single processing device or a plurality of processing devices. Such a processing device may be a microprocessor, micro-controller, digital signal processor, microcomputer, central processing unit, field programmable gate array, programmable logic device, state machine, logic circuitry, analog circuitry, digital circuitry, and/or any device that manipulates signals (analog and/or digital) based on operational instructions. The operational instructions may be stored in a memory. The memory may be a single memory device or a plurality of memory devices. Such a memory device may be a read-only memory (ROM), random access memory (RAM), volatile memory, non-volatile memory, static memory, dynamic memory, flash memory, and/or any device that stores digital information. It is also noted that when the processing module implements one or more of its functions via a state machine, analog circuitry, digital circuitry, and/or logic circuitry, the memory storing the corresponding operational instructions is embedded with the circuitry comprising the state machine, analog circuitry, digital circuitry, and/or logic circuitry. In such an embodiment, a memory stores, and a processing module coupled thereto executes, operational instructions corresponding to at least some of the steps and/or functions illustrated and/or described herein.
It is also noted that any of the connections or couplings between the various modules, circuits, functional blocks, components, devices, etc. within any of the various diagrams or as described herein may be differently implemented in different embodiments. For example, in one embodiment, such connections or couplings may be direct connections or direct couplings therebetween. In another embodiment, such connections or couplings may be indirect connections or indirect couplings therebetween (e.g., with one or more intervening components therebetween). Of course, certain other embodiments may have some combination of such connections or couplings therein, such that some of the connections or couplings are direct while others are indirect. Different implementations may be employed for effectuating communicative coupling between modules, circuits, functional blocks, components, devices, etc. without departing from the scope and spirit of the invention.
Various aspects of the present invention have also been described above with the aid of method steps illustrating the performance of specified functions and relationships thereof. The boundaries and sequence of these functional building blocks and method steps have been arbitrarily defined herein for convenience of description. Alternate boundaries and sequences can be defined so long as the specified functions and relationships are appropriately performed. Any such alternate boundaries or sequences are thus within the scope and spirit of the claimed invention.
Various aspects of the present invention have been described above with the aid of functional building blocks illustrating the performance of certain significant functions. The boundaries of these functional building blocks have been arbitrarily defined for convenience of description. Alternate boundaries could be defined as long as the certain significant functions are appropriately performed. Similarly, flow diagram blocks may also have been arbitrarily defined herein to illustrate certain significant functionality. To the extent used, the flow diagram block boundaries and sequence could have been defined otherwise and still perform the certain significant functionality. Such alternate definitions of both functional building blocks and flow diagram blocks and sequences are thus within the scope and spirit of the claimed invention.
One of average skill in the art will also recognize that the functional building blocks, and other illustrative blocks, modules, and components herein, can be implemented as illustrated or by discrete components, application-specific integrated circuits, processors executing appropriate software, and the like, or any combination thereof.
Moreover, although described in detail for purposes of clarity and understanding by way of the aforementioned embodiments, various aspects of the present invention are not limited to such embodiments. It will be obvious to one of average skill in the art that various changes and modifications may be practiced within the spirit and scope of the invention, as limited only by the scope of the appended claims.