1. Field of the Invention
The field of the invention is data processing, or, more specifically, methods, systems, and products for associating user selected content management directives with user selected ratings.
2. Description of Related Art
Despite having more access to content from many disparate sources and having more disparate devices to access that content, retrieving content from disparate sources with disparate devices is often cumbersome. Accessing such content is cumbersome because users typically must access content of various disparate data types from various disparate data sources individually without having a single point of access for accessing content. Content of disparate data types accessed from various disparate data sources often must also be rendered on data type-specific devices using data type-specific applications without the flexibility of rendering content on user selected devices regardless of the content's original data type. There is therefore an ongoing need for consolidated content management for delivery to a particular rendering device.
Methods, systems, and products are disclosed for associating user selected content management directives with a user selected rating. Embodiments include presenting to a user a plurality of predefined content management directives; receiving from the user an identification of a particular content management directive; receiving from the user an identification of the rating to invoke the content management directive; and storing the identification of the content management directive in association with the rating to invoke the content management directive.
The foregoing and other objects, features and advantages of the invention will be apparent from the following more particular descriptions of exemplary embodiments of the invention as illustrated in the accompanying drawings wherein like reference numbers generally represent like parts of exemplary embodiments of the invention.
Exemplary methods, systems, and products for consolidated content management for delivery to a rendering device according to embodiments of the present invention are described with reference to the accompanying drawings, beginning with
Content of disparate data types is content of data of different kinds and forms. That is, disparate data types are data of different kinds. The distinctions that define the disparate data types may include a difference in data structure, file format, protocol in which the data is transmitted, application used to render the data, and other distinctions as will occur to those of skill in the art. Examples of disparate data types include MPEG-1 Audio Layer 3 (‘MP3’) files, eXtensible Markup Language (‘XML’) documents, email documents, word processing documents, calendar data, and so on as will occur to those of skill in the art. Disparate data types are often rendered on data type-specific devices. For example, an MPEG-1 Audio Layer 3 (‘MP3’) file is typically played by an MP3 player, a Wireless Markup Language (‘WML’) file is typically accessed by a wireless device, and so on.
The term disparate data sources means sources of data of disparate data types. Such data sources may be any device or network location capable of providing access to data of a disparate data type. Examples of disparate data sources include servers serving up files, web sites, cellular phones, PDAs, MP3 players, and so on as will occur to those of skill in the art.
The data processing system of
The exemplary system of
The system of
The system of
The system of
The rendering devices of
Each of the rendering devices is capable of requesting from the consolidated content management server (114) content that has been aggregated from the disparate data sources and synthesized into content of a uniform data type. The consolidated content management server transmits, in response to the request, the content in a data type specific to the rendering device, thereby allowing the rendering device to render the content regardless of the native data type of the content as provided by the original content provider.
Consider, for example, email content provided by the email server (238). The consolidated content management server (114) is capable of aggregating email content for a user and synthesizing the email by extracting the email text and inserting the email text into a header field of an MP3 file. The consolidated content management server (114) transmits the MP3 file to the DAP (104), which supports the display of information extracted from header fields. In this example of consolidated content management, the DAP (104) is capable of rendering email in its display, despite being able to render only media files and without requiring modification of the DAP.
Consolidated content management of the present invention advantageously provides a user a single point of access to a wide variety of content and wide flexibility in the manner in which, and the device upon which, that content is rendered.
The arrangement of servers and other devices making up the exemplary system illustrated in
For further explanation,
The consolidated content management server (114) of
The consolidated content management server (114) of
The consolidated content management server (114) includes repository (218) of synthesized content. Maintaining a repository (218) of synthesized content provides a single point of access at the consolidated content management server for content aggregated from various disparate data sources (228) for rendering on a plurality of disparate rendering devices (104, 108, and 112). Because the content has been synthesized for delivery to the particular rendering devices (104, 108, and 112) the content may be rendered in a data format that the rendering devices support regardless of the original native data type of the content as served up by the disparate data sources (228).
Alternatively, content may be synthesized for delivery to a particular rendering device upon request for the synthesized data from a particular rendering device. Synthesizing data upon request for the data by a particular rendering device reduces the overhead of maintaining large repositories of synthesized content for a particular user and for delivery to a particular device.
The consolidated content management server (114) also includes an action generator (222) containing a repository of actions (224). Synthesized content often has associated with it actions for execution on the rendering device. For example, content synthesized as an X+V document includes grammars and actions providing voice navigation of the content, thereby empowering a user to use speech to instruct the rendering of the content on the multimodal browser of a rendering device.
Consolidated content management in accordance with the present invention is generally implemented with computers, that is, with automated computing machinery. In the systems of
Stored in RAM (168) is an exemplary consolidated content management module (140), computer program instructions for consolidated content management for delivery to a rendering device capable of aggregating, for a user, content of disparate data types from disparate data sources; synthesizing the aggregated content of disparate data types into synthesized content of a data type for delivery to a particular rendering device; receiving from the rendering device a request for the synthesized content; and transmitting, in response to the request, the requested synthesized content to the rendering device.
The consolidated content management module (140) of
The consolidated content management module (140) of
The consolidated content management module (140) of
The exemplary consolidated content management server (114) of
Also stored in RAM (168) is an operating system (154). Operating systems useful in computers according to embodiments of the present invention include UNIX™, Linux™, Microsoft Windows NT™, AIX™, IBM's i5/OS™, and others as will occur to those of skill in the art.
The exemplary consolidated content management server (114) of
The exemplary consolidated content management server (114) of
The exemplary consolidated content management server (114) of
Consolidated content management of the present invention advantageously provides a user a single point of access to a wide variety of content and wide flexibility in the manner in which, and the device upon which, that content is rendered. For further explanation,
Aggregating (402), for a user, content (404) of disparate data types from disparate data sources (228) according to the method of
The method of
Synthesizing aggregated content of disparate data types into synthesized content including data of a uniform data type for delivery to a particular rendering device is typically carried out in dependence upon device profiles (220) identifying attributes of the particular rendering device such as file formats the device supports, markup languages the device supports, data communications protocols the device supports, and other attributes as will occur to those of skill in the art. Synthesizing aggregated content of disparate data types into synthesized content including data of a uniform data type for delivery to a particular rendering device may be carried out by identifying at least a portion of the aggregated content for delivery to the particular rendering device; and translating the portion of the aggregated content into text content and markup associated with the text content in accordance with device profiles for the rendering device as discussed in more detail below with reference to
Synthesizing the aggregated content of disparate data types into synthesized content including data of a uniform data type for delivery to a particular rendering device may also be carried out by creating text in dependence upon the portion of the aggregated content; creating a media file for the synthesized content; and inserting the text in the header of the media file as discussed in more detail below with reference to
The method of
As discussed above, synthesized content often has associated with it actions for execution on the rendering device. For example, content synthesized as an X+V document includes grammars and actions providing voice navigation of the content, thereby empowering a user to use speech to instruct the rendering of the content on the multimodal browser of a rendering device. For further explanation,
A user instruction is an event received in response to an act by a user. Exemplary user instructions include receiving events as a result of a user entering a combination of keystrokes using a keyboard or keypad, receiving speech from a user, receiving an event as a result of clicking on icons on a visual display by using a mouse, receiving an event as a result of a user pressing an icon on a touchpad, or other user instructions as will occur to those of skill in the art. Receiving a speech instruction from a user may be carried out by receiving speech from a user, converting the speech to text, and determining in dependence upon the text and a grammar associated with the synthesized content the user instruction.
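Consider, as a hedged illustration and not by way of limitation, the following Python sketch of this flow; the recognize() routine is a stand-in for a real speech recognition engine, and the grammar is modeled simply as a mapping from spoken phrases to user instructions:

    def recognize(audio):
        # Stand-in for a real speech recognition engine; returns the text
        # recognized from the captured audio (canned here for illustration).
        return "delete this message"

    def instruction_from_speech(audio, grammar):
        # Convert the speech to text, then determine the user instruction in
        # dependence upon the text and the grammar associated with the
        # synthesized content.
        text = recognize(audio).lower()
        for phrase, instruction in grammar.items():
            if phrase in text:
                return instruction
        return None

    grammar = {"next": "SKIP_FORWARD", "delete": "DELETE_CONTENT"}
    print(instruction_from_speech(b"<captured audio>", grammar))  # DELETE_CONTENT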
The method of
In the examples above, synthesizing the aggregated content of disparate data types into synthesized content including data of a uniform data type for delivery to a particular rendering device is carried out prior to receiving from the rendering device the request for the synthesized content. That is, content is synthesized for particular devices and stored such that the content is available to those particular devices. This is for explanation, and not for limitation. In fact, synthesizing the aggregated content of disparate data types into synthesized content for delivery to a particular rendering device may alternatively be carried out in response to receiving from the rendering device the request for the synthesized content.
As discussed above, consolidated content management typically includes aggregating for a user, content of disparate data types from disparate data sources. For further explanation, therefore,
The method of
Some data sources may require security information for accessing data. Retrieving (508) content (404) of disparate data types from identified disparate data sources (228) associated with the user account (210) may therefore also include determining whether the identified data source requires security information to access the content and retrieving security information for the data source from the user account if the identified data source requires security information to access the content and presenting the security information to the data source to access the content.
The method of
As discussed above, aggregating content is typically carried out in dependence upon a user account. For further explanation, therefore,
Receiving (504), from the user (100), identifications (506) of a plurality of disparate data sources (228) may be carried out through the use of user account configuration screens provided by a consolidated content management server and accessible by a user through, for example, a browser running on a rendering device. Such configuration screens provide a vehicle for efficiently associating with a user account a plurality of disparate data sources.
The method of
As discussed above, aggregating content is typically carried out in dependence upon a user account. For further explanation,
Receiving (514), from a user, identifications (516) of one or more rendering devices (104, 106, and 112) may be carried out through the use of user account configuration screens provided by a consolidated content management server and accessible by a user through, for example, a browser running on a rendering device. Such configuration screens provide a vehicle for efficiently associating with a user account one or more rendering devices.
The method of
For further explanation,
The exemplary user account records (526) include user preferences (532) for synthesizing and rendering the synthesized content for the user. Examples of such user preferences include display preferences, such as font and color preferences, layout preferences, and so on as will occur to those of skill in the art.
The exemplary user account records (526) include a rendering device list (534) including one or more identifications of rendering devices. The exemplary user account records (526) also include a data source list (536) including one or more identifications of disparate data sources and data source security information (538) including any security information required to retrieve content from the identified data sources.
The information in user accounts (210) may be used to identify additional data sources without requiring additional user intervention.
Identifying (540) an additional data source in dependence upon information in the user account may be carried out by creating a search engine query in dependence upon the information in the user account and querying a search engine with the created query. Querying a search engine may be carried out through the use of URL encoded data passed to a search engine through, for example, an HTTP GET or HTTP POST function. URL encoded data is data packaged in a URL for data communications, in this case, passing a query to a search engine. In the case of HTTP communications, the HTTP GET and POST functions are often used to transmit URL encoded data. An example of URL encoded data is:

    http://www.example.com/search?field1=value1&field2=value2

This example of URL encoded data represents a query that is submitted over the web to a search engine. More specifically, the example above is a URL bearing encoded data representing a query to a search engine, and the query is the string “field1=value1&field2=value2.” The exemplary encoding method is to string together field names and field values separated by ‘&’ and ‘=’ and to designate the encoding as a query by including “search” in the URL. The exemplary URL encoded search query is for explanation and not for limitation. In fact, different search engines may use different syntax in representing a query in a data encoded URL, and therefore the particular syntax of the data encoding may vary according to the particular search engine queried.
Identifying (540) an additional data source in dependence upon information in the user account information may also include identifying, from the search results returned in the created query, additional sources of data. Such additional sources of data may be identified from the search results by retrieving URLs to data sources from hyperlinks in a search results page returned by the search engine.
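As a hedged sketch of this process in Python, assuming an illustrative search endpoint at www.example.com, an assumed ‘interests’ field in the user account, and the requests and BeautifulSoup libraries (none of which is prescribed by the method described above):

    import urllib.parse

    import requests
    from bs4 import BeautifulSoup

    def find_additional_sources(user_account):
        # Create a search engine query in dependence upon information in
        # the user account (here, an assumed 'interests' list).
        query = urllib.parse.urlencode({"q": " ".join(user_account["interests"])})
        response = requests.get("http://www.example.com/search?" + query)
        page = BeautifulSoup(response.text, "html.parser")
        # Identify additional data sources from hyperlinks in the results page.
        return [anchor["href"] for anchor in page.find_all("a", href=True)]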
As discussed above, consolidated content management provides single point access for content and typically includes synthesizing content of disparate data types into synthesized content of a uniform data type for delivery to a particular rendering device. For further explanation,
Identifying (602) aggregated content (404) of disparate data types for synthesis may also be carried out in dependence upon a user instruction. That is, identifying (602) aggregated content (404) of disparate data types for synthesis may include receiving a user instruction identifying aggregated content for synthesis and selecting for synthesis the content identified in the user instruction.
The method of
As discussed above, translating into text content may include creating text and markup for the aggregated content in accordance with an identified markup language. For further explanation, therefore,
The method of
Creating (612) text (616) and markup (618) for the aggregated content (404) in accordance with the identified markup language (610), such that a browser capable of rendering the text and markup may render from the translated content some or all of the aggregated content as it existed prior to being synthesized, may include augmenting the content during translation in some way. That is, translating aggregated content into text and markup may result in some modification to the original aggregated content or may result in deletion of some content that cannot be accurately translated. The quantity of such modification and deletion will vary according to the type of data being translated as well as other factors as will occur to those of skill in the art.
Consider for further explanation the following markup language depiction of a snippet of an audio clip describing the president.
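One such depiction, presented as a representative form for explanation and not for limitation (the exact markup may vary), is:

    <header>
        original file type = 'MP3'
        keyword 1 = 'president' frequency = '50'
        keyword 2 = 'congress' frequency = '2'
    </header>
    <content>
        Some content about the president.
    </content>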
In the example above, an MP3 audio file is translated into text and markup. The header in the example above identifies the translated data as having been translated from an MP3 audio file. The exemplary header also includes keywords included in the content of the translated document and the frequency with which those keywords appear. The exemplary translated data also includes content identified as ‘some content about the president.’
As discussed above, one useful markup language for synthesizing content is XHTML plus Voice. XHTML plus Voice (‘X+V’) is a Web markup language for developing multimodal applications, by enabling speech navigation and interaction through voice markup. X+V provides speech-based interaction in devices using both voice and visual elements. Speech enabling the synthesized data for consolidated content management according to embodiments of the present invention is typically carried out by creating grammar sets for the text of the synthesized content. A grammar is a set of words that may be spoken, patterns in which those words may be spoken, or other language elements that define the speech recognized by a speech recognition engine in a multimodal browser. Such speech recognition engines are useful in rendering synthesized data to provide users with voice navigation of and voice interaction with synthesized content.
As discussed above, synthesized content may be speech enabled. For further explanation, therefore,
Dynamically creating grammar sets (628) for the text content (606) may be carried out by identifying (630) keywords (632) for the text content (606). Identifying (630) keywords (632) for the text content (606) may include identifying keywords in the text content (606) determinative of content or logical structure and including the identified keywords in a grammar associated with the text content. Keywords determinative of content are words and phrases defining the topics of the synthesized content and the information presented in the synthesized content. Keywords determinative of logical structure are keywords that suggest the form in which information of the synthesized content is presented. Examples of logical structure include typographic structure, hierarchical structure, relational structure, and other logical structures as will occur to those of skill in the art.
Identifying keywords in the text determinative of content may be carried out by searching the translated text for words that occur in the text more often than some predefined threshold. The frequency of the word exceeding the threshold indicates that the word is related to the content of the translated text because the predetermined threshold is established as a frequency of use not expected to occur by chance alone. Alternatively, a threshold may also be established as a function rather than a static value. In such cases, the threshold value for frequency of a word in the translated text may be established dynamically by use of a statistical test which compares the word frequencies in the translated text with expected frequencies derived statistically from a much larger corpus. Such a larger corpus acts as a reference for general language use.
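A hedged Python sketch of both approaches, with an illustrative static threshold and an assumed reference-corpus frequency table, is presented for explanation and not for limitation:

    from collections import Counter

    def content_keywords(translated_text, threshold=5):
        # Words whose frequency in the translated text exceeds a predefined
        # threshold are taken as keywords determinative of content.
        counts = Counter(translated_text.lower().split())
        return [word for word, n in counts.items() if n > threshold]

    def dynamic_keywords(translated_text, corpus_frequencies, factor=10.0):
        # Dynamic threshold: a word qualifies when its relative frequency in
        # the translated text far exceeds the frequency expected from a much
        # larger reference corpus of general language use.
        counts = Counter(translated_text.lower().split())
        total = sum(counts.values()) or 1
        return [word for word, n in counts.items()
                if n / total > factor * corpus_frequencies.get(word, 1e-6)]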
Identifying keywords in the translated text determinative of logical structure may be carried out by searching the translated text for predefined words determinative of structure. Examples of such words determinative of logical structure include ‘introduction,’ ‘table of contents,’ ‘chapter,’ ‘stanza,’ ‘index,’ and many others as will occur to those of skill in the art.
Dynamically creating (626) grammar sets (628) for the text content (606) may also be carried out by creating (634) grammars (628) in dependence upon the keywords (632) and grammar creation rules (636). Grammar creation rules are a predefined set of instructions and grammar forms for the production of grammars. Creating grammars in dependence upon the identified keywords and grammar creation rules may be carried out by use of scripting frameworks, such as JavaServer Pages, Active Server Pages, PHP, and Perl, that create XML from the translated data. Such dynamically created grammars may be stored externally and referenced; in X+V, for example, the <grammar src=""/> tag is used to reference external grammars.
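As a hedged illustration, the following Python sketch dynamically creates an external grammar from identified keywords; the SRGS-style grammar form and the file name content.grxml are assumptions of this sketch, not a prescribed format:

    def create_grammar(keywords, path="content.grxml"):
        # Produce a one-of rule listing each identified keyword.
        items = "\n".join("      <item>%s</item>" % k for k in keywords)
        grammar = (
            '<grammar xmlns="http://www.w3.org/2001/06/grammar" root="keywords">\n'
            '  <rule id="keywords">\n'
            "    <one-of>\n%s\n    </one-of>\n"
            "  </rule>\n"
            "</grammar>\n" % items
        )
        with open(path, "w") as f:
            f.write(grammar)
        # The stored grammar may then be referenced from X+V with
        # <grammar src="content.grxml"/>.
        return path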
The method of
The method of
In examples above, synthesis of the aggregated content results in the replacement of the original aggregated content with synthesized content. This is for explanation, and not for limitation. In fact, in some cases some or all of the original aggregated content is preserved. Creating text and markup for the aggregated content in accordance with the identified markup language may also be carried out by preserving the data type of the aggregated content and also creating a markup document for presentation of the content in a rendering device and for invoking the rendering of the content in the rendering device. For further explanation, therefore,
Some useful rendering devices do not support browsers for rendering markup documents. For example, some digital audio players play media files, such as MP3 files but have no browser. For further explanation, therefore,
The method of
The method of
Consolidated content management provides a single point of access for content aggregated and synthesized for a user. Such content may also advantageously be published. For further explanation,
The method of
Synthesizing aggregated content of disparate data types into synthesized content including data of a uniform data type for delivery to a particular RSS rendering device is typically carried out in dependence upon device profiles (220) for the RSS rendering device identifying attributes of the particular rendering device such as file formats the RSS rendering device supports, markup languages the RSS rendering device supports, data communications protocols the RSS rendering device supports, and other attributes as will occur to those of skill in the art, as discussed above with reference to
The method of
As discussed above, an RSS feed is typically implemented as one or more XML files containing links to more extensive versions of content. For further explanation,
The hyperlinks and associated metadata may provide an RSS channel to synthesized content. An RSS channel is typically a container for an arbitrary number of items of a similar type, having some relationship which is defined by the context of the container. An RSS channel to synthesized content may be a reverse-chronological sorted list of links to synthesized content, along with metadata describing aspects of the synthesized content, often indicating the title of the content and a description of the content.
Each RSS channel is designated by markup in the RSS feed's XML files and has required sub-elements which are also designated by markup. Required sub-elements of an RSS channel typically include a title to name the RSS channel, a link, and a description. The link is the URL of the synthesized content typically implemented as a web page, such as, for example, a web page written in HTML. Each RSS channel may also contain optional sub-elements. Optional sub-elements of an RSS channel include, for example, an image sub-element, which provides for an image to be displayed in connection with the RSS channel.
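A minimal illustration of such a channel, using example.com URLs purely as placeholders, with the required title, link, and description sub-elements and an optional image sub-element:

    <rss version="2.0">
      <channel>
        <title>Synthesized Content</title>
        <link>http://www.example.com/synthesized/index.html</link>
        <description>Links to content synthesized for this user</description>
        <image>
          <url>http://www.example.com/channel.gif</url>
          <title>Synthesized Content</title>
          <link>http://www.example.com/synthesized/index.html</link>
        </image>
        <item>
          <title>Most recent synthesized content</title>
          <link>http://www.example.com/synthesized/item1.html</link>
          <description>A description of the synthesized content</description>
        </item>
      </channel>
    </rss>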
The method of
The method of
The method of
Consolidated content management servers usefully provide a single point of access for a wide variety of content available in many different data types. Such a consolidated content management server may also perform content management directives on the content managed by the server. Content management directives are software actions performed on synthesized content managed by the content management server. Examples of content management directives include deleting content, retrieving additional content, forwarding content, highlighting content, and many others as will occur to those of skill in the art. Such content management directives provide users increased control over the management of the wide variety of content accessible through the consolidated content management server.
As discussed above, content may be synthesized and stored in a media file for delivery to a digital audio player. Media files and digital audio players of many types support a user specified rating for the content. For example, the iPod® digital audio player and the iTunes® digital audio player application available from Apple® support a five-star rating system that provides for assigning to content one of five ratings: one star, two stars, three stars, four stars, or five stars. Such ratings assigned to content in a media file may be used to communicate content management directives from a user to a consolidated content management server. For further explanation, therefore,
One specific example of synthesizing (804) content of disparate data types into synthesized content in a media file (810) for delivery to a particular digital audio player (104) includes synthesizing email content. Synthesizing (804) email content may be carried out by retrieving an email message; extracting text from the email message; creating a media file; and storing the extracted text of the email message as metadata associated with the media file as discussed below with reference to
Another specific example of synthesizing (804) content of disparate data types into synthesized content in a media file (810) for delivery to a particular digital audio player (104) includes synthesizing RSS content. Synthesizing (804) RSS content into synthesized content in a media file (810) for delivery to a particular digital audio player (104) may be carried out by retrieving, through an RSS feed, RSS content; extracting text from the RSS content; creating a media file; and storing the extracted text of the RSS content as metadata associated with the media file as discussed below with reference to
The method of
A digital media player application is an application that manages media content such as audio files and video files. Such digital media player applications are typically capable of transferring media files to a digital audio player. Examples of digital media player applications include Music Match™, iTunes®, and others as will occur to those of skill in the art.
The method of
The method of
A rating received from a user may also be user defined. Files in the .mp4 format support flexible ID3v2 tags, and therefore a user defined rating scheme may be used to configure many different ratings for an .mp4 file.
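As a hedged illustration, a user rating may be recorded in an ID3v2 ‘Popularimeter’ (‘POPM’) frame; the mutagen library, the .mp3 file name, and the mapping of the 0-255 POPM value to stars are assumptions of this sketch rather than part of the disclosure:

    from mutagen.id3 import ID3, POPM

    # Record a five-star rating as the maximum POPM value; the media file
    # is assumed to already contain an ID3 header.
    tags = ID3("synthesized_content.mp3")
    tags.add(POPM(email="user@example.com", rating=255, count=0))
    tags.save()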
The method of
The MPEG file (874) of
In the example of
Returning to the example of
As discussed above, the rating may be associated with the content through a metadata file, such as an XML file. In such cases, informing (816) the consolidated content management server (114) of the rating (818) associated with the content in the media file (815) may be carried out by sending the metadata file to the consolidated content management server.
The method of
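Consider, as a representative form presented for explanation and not for limitation, a content management selection rule such as:

    IF contentID = 'email'
    AND rating = 'one star'
    THEN execute deleteEmail( )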
In the exemplary content management selection rule above, if the content ID of content synthesized in a media file for delivery to a digital audio player identifies the content as email content and if a user has associated a one-star rating with that email content, then the content management selection rule dictates that a software algorithm named ‘deleteEmail( )’ is to be executed. Executing ‘deleteEmail( )’ in the example above deletes identified email messages.
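A hedged Python sketch of such selection rules, modeled as a lookup table mapping a content type and rating to a directive; delete_email() mirrors the exemplary ‘deleteEmail( )’ directive as a placeholder, not a disclosed API:

    def delete_email(content_id):
        # Placeholder mirroring the exemplary 'deleteEmail( )' directive.
        print("deleting email content", content_id)

    SELECTION_RULES = {
        ("email", "one star"): delete_email,
    }

    def apply_rating(content_type, content_id, rating):
        # Identify and execute the content management directive, if any,
        # selected by the content type and the user assigned rating.
        directive = SELECTION_RULES.get((content_type, rating))
        if directive is not None:
            directive(content_id)

    apply_rating("email", "message-42", "one star")  # deletes the email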
Executing (824) the content management directives (822) results in administration of the synthesized content managed by the consolidated content management server. Executing (824) the content management directives (822) may include retrieving additional content in dependence upon the rating, deleting identified synthesized content in dependence upon the rating, highlighting identified content in dependence upon the rating, and many others as will occur to those of skill in the art.
Ratings advantageously provide a mechanism for invoking content management directives on a consolidated content server without requiring modification of a digital audio player upon which the content under management is rendered. Such content management directives provide increased flexibility in consolidated content management according to embodiments of the present invention.
As discussed above, synthesizing content of disparate data types into synthesized content in a media file for delivery to a particular digital audio player may be carried out by retrieving content, extracting text from the retrieved content, creating a media file, and storing the extracted text as metadata associated with the media file. Such content synthesized for delivery to a digital audio player may be synthesized from a variety of native data formats such as email content, calendar data, RSS content, text content in word processing documents, and so on. For further explanation,
The method of
The method of
The method of
Storing (868) the extracted text (858) of the email message (854) as metadata (862) associated with the media file (810) provides a vehicle for visually rendering the extracted email text on a display screen of a digital audio player without modification of the digital audio player. The method of
As discussed above, extracting text from the email message may be carried out by extracting text from an email message header. Such header information may be extracted and stored in association with a predefined metadata field supported by the digital audio player upon which the extracted text is to be rendered. Consider for further explanation the following example. The identification of the sender of an email and the subject of the email are extracted from an email message and stored as metadata in association with the predefined metadata fields for ‘Artist’ and ‘Song’ supported by an iPod digital audio player. In such an example, the extracted header information is rendered in predefined metadata fields on the iPod, allowing a user to navigate the header information of the email as the user normally navigates the metadata of music files.
The extracted text from the email message may also include text from an email message body. Such extracted text of the body may also be associated with a predefined metadata field supported by the digital audio player upon which the extracted body text is to be rendered. Continuing with the example above, the extracted text from the body may be associated with the ‘Song’ field supported by an iPod digital audio player. In such an example, the extracted text from the body is rendered in predefined metadata fields on the iPod when the user selects the file associated with the extracted body text, in the same manner as a user selects a song in a media file. The user may advantageously view the email in the display screen of the iPod.
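As a hedged illustration of this mapping, the following Python sketch stores extracted email text in ID3 frames an iPod renders as ‘Artist’ and ‘Song’; the mutagen library is an assumption of this sketch, the body text is stored in a lyrics (‘USLT’) frame as one illustrative option, and the media file is assumed to already contain an ID3 header:

    from mutagen.id3 import ID3, TPE1, TIT2, USLT

    def tag_email_as_media(media_path, sender, subject, body):
        tags = ID3(media_path)
        tags.add(TPE1(encoding=3, text=sender))    # rendered as 'Artist'
        tags.add(TIT2(encoding=3, text=subject))   # rendered as 'Song' title
        tags.add(USLT(encoding=3, lang="eng", desc="", text=body))  # body text
        tags.save()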
In the examples above, extracted email text is displayed on the display screen of a digital audio player for visual rendering of the email on the display screen of a digital audio player. Some or all of the extracted text may also be converted to speech for audio rendering by the digital audio player. For further explanation,
The method of
The method of
The method of
The method of
The method of
The method of
Examples of speech engines capable of converting extracted text to speech for recording in the audio portion of a media file include, for example, IBM's ViaVoice Text-to-Speech, Acapela Multimedia TTS, AT&T Natural Voices™ Text-to-Speech Engine, and Python's pyTTS class. Each of these text-to-speech engines is composed of a front end that takes input in the form of text and outputs a symbolic linguistic representation to a back end that outputs the received symbolic linguistic representation as a speech waveform.
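A hedged Python sketch of converting extracted text to speech and recording it in a media file, using the pyttsx3 library as an illustrative stand-in for the engines named above:

    import pyttsx3

    def record_speech(extracted_text, media_path):
        # Convert the extracted text to speech and record the resulting
        # waveform in the audio portion of a media file.
        engine = pyttsx3.init()
        engine.save_to_file(extracted_text, media_path)
        engine.runAndWait()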
Typically, speech synthesis engines operate by using one or more of the following categories of speech synthesis: articulatory synthesis, formant synthesis, and concatenative synthesis. Articulatory synthesis uses computational biomechanical models of speech production, such as models for the glottis and the moving vocal tract. Typically, an articulatory synthesizer is controlled by simulated representations of muscle actions of the human articulators, such as the tongue, the lips, and the glottis. Computational biomechanical models of speech production solve time-dependent, 3-dimensional differential equations to compute the synthetic speech output. Typically, articulatory synthesis has very high computational requirements and produces less natural-sounding fluent speech than the other two methods discussed below.
Formant synthesis uses a set of rules for controlling a highly simplified source-filter model that assumes that the glottal source is completely independent from a filter which represents the vocal tract. The filter that represents the vocal tract is determined by control parameters such as formant frequencies and bandwidths. Each formant is associated with a particular resonance, or peak in the filter characteristic, of the vocal tract. The glottal source generates either stylized glottal pulses for periodic sounds and generates noise for aspiration. Formant synthesis generates highly intelligible, but not completely natural sounding speech. However, formant synthesis has a low memory footprint and only moderate computational requirements.
Concatenative synthesis uses actual snippets of recorded speech that are cut from recordings and stored in an inventory or voice database, either as waveforms or as encoded speech. These snippets make up the elementary speech segments such as, for example, phones and diphones. Phones are composed of a vowel or a consonant, whereas diphones are composed of phone-to-phone transitions that encompass the second half of one phone plus the first half of the next phone. Some concatenative synthesizers use so-called demi-syllables, in effect applying the diphone method to the time scale of syllables. Concatenative synthesis then strings together, or concatenates, elementary speech segments selected from the voice database, and, after optional decoding, outputs the resulting speech signal. Because concatenative systems use snippets of recorded speech, these systems have the highest potential for sounding like natural speech, but concatenative systems require large amounts of database storage for the voice database.
As discussed above, synthesizing content of disparate data types into synthesized content in a media file (810) for delivery to a particular digital audio player may include synthesizing RSS content for delivery to a digital audio player. For further explanation, therefore,
The method of
The method of
The method of
The method of
The method of
Storing the extracted RSS text and images as metadata associated with the media file provides a vehicle for visually rendering the extracted RSS content on a display screen of a digital audio player without modification of the digital audio player. The method of
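A hedged Python sketch of retrieving RSS content through an RSS feed and extracting its text for storage as metadata; the feedparser library is an illustrative assumption, not part of the disclosed method:

    import feedparser

    def extract_rss_text(feed_url):
        # Retrieve RSS content through the feed and extract the text of
        # each item for storage as metadata associated with a media file.
        feed = feedparser.parse(feed_url)
        return [(entry.get("title", ""), entry.get("summary", ""))
                for entry in feed.entries]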
In the example of
In the example of
The method of
The method of
The method of
The method of
The method of
The RSS content synthesized according to the method of
In the example of
The method of
The method of
As discussed above, ratings advantageously provide a mechanism for invoking content management directives on a consolidated content server without requiring modification of a digital audio player upon which the content under management is rendered. The particular content management directives may be user selected and those selected content management directives may be associated with a user selected rating to invoke the content management directive. For further explanation, therefore,
The method of
The method of
Receiving (908) from a user (100) an identification (910) of the rating to invoke the content management directive (906) may also include receiving a user defined rating. As discussed above, .mp4 files support flexible ID3v2 tags and therefore a user defined rating scheme may implement many ratings for an .mp4 file.
The method of
Storing the identification of the content management directive in association with the rating to invoke the content management directives may be used to create a rule associating the content management directive, the rating, and content to be managed by the content management directive. That is, embodiments of the present invention may also include creating a rule associating the content management directive, the rating, and content to be managed by the content management directive. For further explanation, therefore,
The content management directive rule creation page (930) of
The content management directive rule creation page (930) of
The content management directive rule creation page (930) of
The content management directive rule creation page (930) of
Upon receiving an identification of the rating, the content management directive, and the content upon which to invoke the content management directive, a content management directive rule creation engine may create a rule associating the content management directive, the rating, and the content to be managed by the content management directive. Such a rule may be stored by embedding the rule in the media file containing the content. Embedding the rule in the media file containing the content may be carried out by embedding the rule in an ID3 tag in, for example, an .mp4 file. Alternatively, a rule may be stored in a metadata file, such as an XML library file like those implemented by the iTunes® digital audio player application.
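A rule stored in such an XML metadata file might, purely as an illustrative form and not by way of limitation, read:

    <contentManagementRule>
      <contentID>email</contentID>
      <rating>one star</rating>
      <directive>deleteEmail( )</directive>
    </contentManagementRule>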
Exemplary embodiments of the present invention are described largely in the context of a fully functional computer system for associating user selected content management directives with a user selected rating. Readers of skill in the art will recognize, however, that the present invention also may be embodied in a computer program product disposed on signal bearing media for use with any suitable data processing system. Such signal bearing media may be transmission media or recordable media for machine-readable information, including magnetic media, optical media, or other suitable media. Examples of recordable media include magnetic disks in hard drives or diskettes, compact disks for optical drives, magnetic tape, and others as will occur to those of skill in the art. Examples of transmission media include telephone networks for voice communications and digital data communications networks such as, for example, Ethernets™ and networks that communicate with the Internet Protocol and the World Wide Web. Persons skilled in the art will immediately recognize that any computer system having suitable programming means will be capable of executing the steps of the method of the invention as embodied in a program product. Persons skilled in the art will recognize immediately that, although some of the exemplary embodiments described in this specification are oriented to software installed and executing on computer hardware, nevertheless, alternative embodiments implemented as firmware or as hardware are well within the scope of the present invention.
It will be understood from the foregoing description that modifications and changes may be made in various embodiments of the present invention without departing from its true spirit. The descriptions in this specification are for purposes of illustration only and are not to be construed in a limiting sense. The scope of the present invention is limited only by the language of the following claims.