A system for providing a media artificial intelligence (AI) agent including content/parental controls of media content, and more particularly, a device, system and methodology of a media AI agent that allows content/parental control of specific undesired electronic content.
As the demand for and volume of digital content continue to expand in our society, our youth continue to be exposed to content that parents do not want their children to see. Unfortunately, the movie and other media industries do not police themselves, nor do they successfully keep harmful content from minors. The openness of the internet gives minors continuous access to inappropriate content. This continues to be a problem, as parents often cannot police the content streaming into their homes due to the lack of advances in digital control technology.
The problem of uncontrolled digital content is not limited to movies. Digital media covers many aspects of our society and includes not only movies but also music and video games. None of this digital media employs a common filtering or rating system that can be used by parents or other adults who may wish to keep inappropriate content out of their homes. Search engines and media players also have no way of knowing whether content is appropriate for the value systems of their customers, other than G, PG, and R ratings. And even where a rating is provided on some content, such as movies, the movie ratings do not offer enough detail for most families, nor do they provide any filtering options. Nor do the ratings break down with particularity what content caused a movie to be rated R, PG, or otherwise.
It is not uncommon for one scene in a movie or one word in a video game to be the only offensive aspect of the media content. Current parental control technology may either block all PG content or none of it. It does not allow the user to block part of the content, nor to block content for a specific type of offensive material. Current parental control technology blocks entire web sites, even those that offer valuable content for students, because of one article or word. It blocks entire movies or video games because of the rating, when users might not be offended by them.
It would be desirable to provide a tool, system and methodology to block specific offensive content such as, but not limited to, nudity and language, without blocking other specific content such as violence. Such an improved media system should be flexible, selectable and work simultaneously with movies, music, video games, and other electronic mediums and products.
While the claims are not limited to a specific illustration, an appreciation of the various aspects is best gained through a discussion of various examples thereof. Referring now to the drawings, exemplary illustrations are shown in detail. Although the drawings represent the illustrations, the drawings are not necessarily to scale and certain features may be exaggerated to better illustrate and explain an innovative aspect of an example. Further, the exemplary illustrations described herein are not intended to be exhaustive or otherwise limiting or restricted to the precise form and configuration shown in the drawings and disclosed in the following detailed description. Exemplary illustrations are described in detail by referring to the drawings as follows:
An exemplary embodiment of a parental controls system has an interface and provides a numeric rating to every media element in its content database for categories including, but not limited to, sex, language, violence, drugs, nudity and other parameters. The system then allows a user to set parental controls on each of these parameters. The system will automatically block all content that includes such material or remove the offensive elements from the content, so the user can experience media without offensive content.
Another embodiment provides a system that allows the user to have a single media player that can search and access digital movies, music, news and video games, blocking inappropriate content or even skipping inappropriate elements within the content.
Another embodiment of the system allows a user to be able to block specific offensive content such as nudity without blocking specific content such as violence. This media system will simultaneously work with movies, music, video games, and other content.
Another embodiment provides a media manager with a search engine and media player, wherein the search engine is improved to include the If-Then parameters of the parental controls system according to the user settings. The media player is modified with the ability to read and act on a timeline-based edit list with all potentially offensive material marked with “in” and “out” points on the timeline. The player is modified to replace the offensive content with transition content or no content, offering a safe user experience without too much disruption to the consumption of the media.
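The timeline-based edit list described above can be sketched in code as follows. This is an illustrative example only; the function name and data layout are hypothetical and not part of the disclosure.

```python
# Hypothetical sketch: an edit list marks each potentially offensive span
# with "in" and "out" points (in seconds); the player plays only the
# segments that remain between those spans.

def build_playback_segments(duration, edit_list):
    """Return the (start, end) segments of content to play, skipping
    every (in_point, out_point) span in the edit list."""
    segments = []
    cursor = 0.0
    for in_point, out_point in sorted(edit_list):
        if in_point > cursor:
            segments.append((cursor, in_point))  # play up to the cut
        cursor = max(cursor, out_point)          # resume after the cut
    if cursor < duration:
        segments.append((cursor, duration))
    return segments
```

A player could walk the returned segments in order, optionally inserting transition content at each boundary.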
The user interface 100 also includes an add web blocking for all internet use tab 108, an add ClearPlay to skip sex, vulgarity and violence on all of your DVD's tab 110, and a submit tab 112. The add web blocking tab 108 activates the system to use the filter on content that is streaming from the internet. The add ClearPlay tab 110 permits a user to apply the desired filter on DVD products. Thus, the system 10 can be used to filter media content from multiple sources. It will be appreciated that the interface 100 may have other tabs and features.
If the person that logs in is not a parent 420, then the system 10 will display a user interface without the option to edit a search or to view the filter 422. Under this method, the result of any searches will only show filtered media 424.
The media manager module 400 provides a system of searching and sorting media so that the user will find content throughout the digital universe. If content has a rating, the user will have access to the content, based on parental controls settings. If the content has no rating, the user will only have access if the ability to view non-rated content option is selected by the user.
The media manager module 400 acts as an optional search engine tool that allows the user to search for video, audio, text, images and interactive software (“Content”) using the Web, media appliance (TV, radio), mobile devices or other digital technology. The media manager and search engine could adopt and add the rating and filtering system to its function. A video editor feature is optional and presents a feature for editing content and delivering cleaned content, on demand. Content can be acquired from distributors with edits (made for TV or airline versions), and content can be edited by third-party content providers.
The logic diagram 500 for the parental control user interface includes the exemplary steps of a parent editing a filter 502, displaying a list of topical categories with a range of ratings in each 504, allowing the user to edit each entry's value to filter out media below the setting 506, saving to enforce, cancelling or ignoring the setting 508, and, if save is selected, saving the filter to the system to enforce in future media searches 510. It will be appreciated that this algorithm can be modified to enhance performance of the system 10.
The filter system 602 includes the following process steps. First, the user performs a filtered media search 604. The search then examines all possible media entries 606. Next, the system asks whether the individual search item matches the search criteria 608. If not, the routine returns to step 606. If so, the process advances to determining whether the individual media item has a version that is within the ratings filter 610. If not, the process reverts to step 606. If so, the process adds the media item to the displayed results list 612.
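The process steps above can be sketched as a simple search loop. This is an illustrative sketch only; the data layout (items carrying a list of versions with per-category ratings) is assumed for illustration and is not specified by the disclosure.

```python
# Illustrative sketch of filter system 602 (steps 604-612).

def filtered_search(query_matches, media_entries, user_filter):
    """Walk all media entries (606); keep each item that matches the
    search criteria (608) and has at least one version whose ratings
    fall within the user's filter (610), adding it to the results (612)."""
    results = []
    for item in media_entries:
        if not query_matches(item):  # step 608: no match -> next entry
            continue
        within_filter = any(
            all(version["ratings"].get(cat, 0) <= limit
                for cat, limit in user_filter.items())
            for version in item["versions"])
        if within_filter:            # step 610: yes -> step 612
            results.append(item)
    return results
```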
The filtering system 602 compares the user settings from the parental controls user interface 100 with the rating system 702 for the content. If the numeric value of the content is within the parameters the user wants to allow, the content will be accessible to the user, in part or in whole, based on the user settings.
The rating system logic diagram 700 includes the step of a media item being added 704 to the master database 50. Next, the decision “are there sources for rating this media item?” is asked 706. If the answer is no, the media item is added as an unrated item to a list 708. If the answer is yes, the rating system combines the multiple rating sources into a rating list 710. Thereafter, the master list of media is updated 712.
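The combining step 710 can be sketched as follows. The disclosure does not specify a combination rule, so this sketch makes the conservative assumption that the highest (most restrictive) rating reported for each category wins; the function name and data layout are hypothetical.

```python
# Illustrative sketch of combining multiple rating sources (step 710),
# assuming each source is a dict of category -> numeric rating.

def combine_rating_sources(sources):
    """Combine multiple rating sources into one rating list per category,
    keeping the highest (most restrictive) reported value for each."""
    combined = {}
    for source in sources:
        for category, value in source.items():
            combined[category] = max(combined.get(category, 0), value)
    return combined
```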
The rating system 702 provides for all media entities loaded into the master database 50 (see
The rating system 702 is maintained in the database associated with all content presented to the user. The rating system 702 includes all known public rating systems, such as the MPAA, TV and video game ratings, as well as a custom database for each of the parameters set in the parental controls user interface. Ratings are also shared from other databases, such as that of the Dove Foundation, supplemented by manually entered ratings from users and technology developers and by artificial intelligence that detects offensive content, in order to obtain complete content ratings.
The rating system 702 provides the data to the filtering system 602 so that the parental controls settings can be applied to give access or denial to part of the content or all of the Content. The rating system 702 is helpful because, without a common rating system on each parameter of sex, language, violence, drugs, nudity or other, the system would have no basis for filtering. This rating system is applied to the content as a whole and also to individual chunks of content. Without filtering, the rating system 702 will not block content for the user.
With continued reference to
All of the above components of the disclosure work individually and together to perform a unique function for an integrated media system with dual-function parental controls (both on the Content and within the Content itself). If the user searches for media content, including but not limited to video, music, text, images or interactive software, the system finds all possible matches but does not yet make any visible to the consumer. If the content has no rating and the user has parental controls off, the system gives the user access to the content and makes it visible. If the user has parental controls turned on, the system filters the content, blocking all non-rated content first. If the content has a rating, the system filters the content according to the user settings. If the user has a setting to block content rated 3 or higher for a particular offensive element (such as nudity), and the content contains material rated 4 for the same category, then the system blocks the content.
If the Content is segmented into rated content chunks, the system blocks only the chunks of content that are offensive, allowing the other content to become accessible and visible. For example, if one scene in a video game presents a nude woman and the user has settings to block all nudity, only that scene would be removed from the game. If the content is not segmented into rated content chunks, the system blocks the content in its entirety. When filtering is complete, the system makes visible and available to the user any content that passes the parental controls filters, and blocks all other content from any access.
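The dual-function decision above (chunk-level blocking where the content is segmented, whole-content blocking otherwise) can be sketched as follows. The function name and data layout are illustrative assumptions; following the example above, a category setting of N blocks material rated N or higher.

```python
# Illustrative sketch of dual-function parental filtering: settings map
# each category to the rating at or above which content is blocked.

def apply_parental_filter(content, settings):
    """If the content is segmented into rated chunks, drop only the
    offensive chunks; otherwise block the content entirely whenever any
    category rating reaches the user's block threshold."""
    def offensive(ratings):
        return any(ratings.get(cat, 0) >= threshold
                   for cat, threshold in settings.items())

    chunks = content.get("chunks")
    if chunks is not None:
        kept = [c for c in chunks if not offensive(c["ratings"])]
        return {"title": content["title"], "chunks": kept}
    return None if offensive(content["ratings"]) else content
```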
The above steps and system 10 may be modified and yet remain within the spirit of the embodiments shown. The present system is a digital application developed to create a search engine that operates on an internet-based platform. It could use, but is not limited to, a combination of HTML and JavaScript database technology, with web servers and high-bandwidth Internet. The search engine is able to proactively crawl the Web and create a database that is responsive to users when they come to search for media they want to consume. The exemplary search engine will include a highly filtered, user-managed database of media that will be ranked and rated on parameters for parental controls. The system 10 will allow users and system managers to input ratings on the content database.
For example, a movie such as “Facing the Giants” is rated with 5 stars by the Dove Foundation. The Dove Foundation gave this movie a “0” rating for sex, drugs, nudity and other, but a “1” rating for language and violence. The search engine is operable to pick up this rating from the Dove Foundation and store the rating for this movie in a database. Under this scenario, the filter should only show “Facing the Giants” as a search result if the user sets parental controls at a “1” or higher for language and violence. Thus, the system 10 is flexible in that it lets the user specifically set parental control ratings for different categories, search the web for content that meets those criteria, and then allow the acceptable content to pass through the filter to the consumer.
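The “Facing the Giants” example can be expressed numerically as follows. In this illustrative sketch (not part of the disclosure), each parental control setting is taken as the maximum rating the user will allow for that category.

```python
# Dove Foundation category ratings for "Facing the Giants", per the example above.
facing_the_giants = {"sex": 0, "language": 1, "violence": 1,
                     "drugs": 0, "nudity": 0, "other": 0}

def passes_filter(ratings, controls):
    """A title appears in search results only when every category rating
    is at or below the user's allowed level for that category."""
    return all(value <= controls.get(cat, 0) for cat, value in ratings.items())
```

With all controls at “0,” the title is filtered out; raising language and violence to “1” lets it pass.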
The system 10 also provides a video editor that can customize video according to user preferences. In the case of “Facing the Giants,” users will be given the option to edit the movie for their personal consumption. They will be given the option to create mark-in and mark-out points for each of the offensive language instances or violent scenes. The edited versions will be resubmitted to a source for rating. After rating with edits, the source will provide back an “edited” version of the ratings. If the “edited” rating for “Facing the Giants” gives it “0s” on all parameters, all users will have access to it. When a user plays this content with parental controls all set at “0,” the edit list from the user who created it will be downloaded with the full movie, and playback works according to the user edits.
A method of operation will now be described. It will be appreciated that this is but one exemplary embodiment and that others are contemplated. First, a user would access the Media Manager through a media device such as a TV set device, a mobile device, a PC or other digital system. The user would set up the parental controls user interface settings by selecting a numeric value for sex, language, violence, drugs, nudity and other individual settings.
Next, the user would search for media. The media manager will present only media that can be experienced without any of the content identified as inappropriate within the parental controls user interface. The user will play, read, view or otherwise experience the media that has been filtered or edited by the media manager. Seamlessly to the user, the system will allow the user to experience content without seeing any content defined as offensive by the user. In some cases the content will be blocked in its entirety, but in others the system will edit or present an edited version of the content in such a way as to remove offensive material that was present in the content's original state.
Additionally, the user will have the option to rate or edit content for the greater community of users. For example, the user will be given the option to upload and submit a rating or apply for a rating from the community or from an approved third-party rating system for content provided by the user.
The user will also be given the opportunity to mark specific chunks of content as inappropriate based on a numeric value on sex, language, violence, drugs, nudity or other parameters to be set by the user. The user will be given the ability to edit the content and share their edits with the community. The edited content will be submitted to be rated as new content for the filtering system. The user will have the ability to share ratings, parental control settings and edited material with social media and within the media manager user community. Additionally, this rating system and parental control technology could be used as an improvement on a search engine or media player of any kind.
The system 10 could produce a rating database for content. The system 10 could also produce an edit list or a library of content that has been edited for inappropriate material. The system 10 could also produce a search engine technology that is superior to others in getting personalized results. The system 10 could produce media content.
In this embodiment, the AI media agent is configured to aggregate, curate and filter media content based on the selected content category settings for a user. The AI media agent is also configured to create block lists and/or whitelists of the media content based at least in part on the content category settings and/or user actions.
An example embodiment of a parental controls system has an interface and provides a numeric rating to every media element in its content database for the plurality of content categories including, but not limited to, sex, language, violence, drugs, nudity and other parameters. The system can automatically block all media content that includes one or more individually objectionable content categories, or remove the offensive elements from media content that includes one or more objectionable content categories, so the user can experience media without offensive content.
In one embodiment, for example, the media AI agent is configured to review media content and/or metadata (such as summary information, closed captioning, or the like) using artificial intelligence to identify portions of the media content that are applicable to one or more individual content categories that may be filtered on behalf of an end user. The media AI agent, for example, may scan or otherwise review video content to identify potentially offensive content by identifying one or more items that characterize one or more of the content categories. An identification of a cigarette or liquor bottle, for example, may be applicable to a content category, and the media AI agent may take actions such as the following: (1) identify a numeric or other rating within a range of ratings for the identified content; (2) identify start and stop timing indicators for one or more scenes with such potentially offensive content, such that the media content may be altered during playback (e.g., skipping or replacing potentially offensive content); and (3) identify a screen location, during the duration defined by the start and stop indicators, that may be altered to delete or obscure the potentially offensive material (e.g., replace a liquor bottle with a water bottle). The media AI agent may identify the rating based on an accumulation of potentially offensive content, by a level of offensiveness of the content (e.g., liquor versus illicit drugs), or the like. The media AI agent may also scan for other content categories, such as nudity or sex, by automatically identifying an excess of skin tones relative to other scenes within the content.
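The three actions above imply a per-detection record and a playback decision, which can be sketched as follows. The class, field names, and decision rule are illustrative assumptions, not part of the disclosure.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class ContentMarker:
    """One detection produced by the media AI agent: the content category
    and rating, start/stop timing indicators for the scene, and an optional
    screen region (x, y, width, height) that can be altered during that span."""
    category: str    # e.g. "drugs" for a liquor bottle detection
    rating: int      # numeric rating within the range of ratings
    start: float     # scene start, in seconds
    stop: float      # scene stop, in seconds
    region: Optional[Tuple[int, int, int, int]] = None

def playback_action(marker, user_limits):
    """Decide the playback alteration for a marker: play unaltered when the
    rating is within the user's limit for the category, obscure the screen
    region when one was identified, otherwise skip the whole span."""
    if marker.rating <= user_limits.get(marker.category, 0):
        return "play"
    return "obscure" if marker.region else "skip"
```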
Where the rating information is determined to be within a user-identified level of acceptable content filters, the media content may be played without alteration. Where the rating information is greater than the user-identified acceptable content filter ratings, the potentially offensive content may be altered during playback in order to provide a media content viewing experience in accordance with the user-identified acceptable content filter ratings.
Embodiments of the media AI agent are further configured to enable a user to provide a multitude of other settings and content that may also be used to train the AI media agent beyond merely the user-identified acceptable filter content ratings. The media AI agent can provide the user the ability to provide block/purge lists to remove content from a dynamic library suitable to be played for the user and to provide whitelists/Hypelists to maintain content within the dynamic library suitable to be played for the user.
The media AI agent can also provide the opportunity to rate media content, share media content and/or ratings with others, and further use that data to train an AI model for curating content for that particular user's dynamic library.
The media AI agent can also provide the user with an AI prompt to search for or identify new content for inclusion and/or exclusion from the user's dynamic library.
The user's shared lists, lists of others shared with the user, and even social media activity may further be used within an AI model to train the media AI agent in curation of content for the user's dynamic library.
In this embodiment, the user interface also includes option selectors for the following:
In this example, the selector for “Hide unrated titles” 1030 allows a user to block titles that do not have a rating (e.g., an MPAA or other rating) that may be selected via the sliding bar for ratings.
The selector for “Skip or mute scenes” 1032 allows a user to select for media content to be played with objectionable content portions skipped or muted depending on the selected levels for the media content categories. Objectionable language, for example, may be amenable to muting, while objectionable video, such as sex, nudity, or violence may be skipped during playback or other content may be played in its place (alternate audio (e.g., dubbed language) or video content (e.g., a replacement scene)).
The user interface also includes a “Block for all devices” selector 1034 that allows the user to block content for all devices accessing the media content via the AI media agent (e.g., via an access point such as a router or WiFi access point). In this manner, one or more setting selections may be applied across all devices. The media content may be blocked at the access point or may be blocked or filtered at the user device.
An Advanced Block/Purge icon 1036 links to a series of multi-step forms configured to create one or more block list or purge list such as shown in
The user interface 1000 further includes a menu panel 1040 shown on the left side of the user interface in this example. The menu panel 1040 may be a perpetual panel (e.g., banner or side panel) that is configured to expand into a scrolling list of genres and Hypelists/whitelists.
The HypeLab option 1042 is provided to allow a user to access a HypeLab configured to create one or more Hypelists/whitelists. Selecting the triple ellipsis can expand the panel of the user interface to provide additional options.
In one embodiment, the system is configured to link to one or more streaming apps or services, such as Netflix, Amazon, or the like. The AI media agent is configured to seamlessly interact with these apps or services and can access (e.g., log in via stored credentials), transact (e.g., purchase content automatically via stored user payment information), aggregate content (e.g., search, scrape, identify and access content relevant to a user's library), curate content, categorize content, rate content, and evaluate content using the user selectable settings (e.g., average or otherwise analyze third party ratings information as applicable to the user's selected content setting selections described above such as via MPAA, IMDB, Dove, and other ratings sources).
In one embodiment, the AI media agent is configured to be trained on a combination of user generated data (e.g., reviews, block/purge lists, white lists), data shared with or from one or more other users, AI curated media content and user interaction with that AI curated media content (e.g., did the user view the suggested content, rate the suggested content, etc.).
Media content may thus be eliminated from inclusion in a user-selectable media library based on the user's customizable media content category setting selections made via the user interface of
Based on the aggregation, evaluation, and curation, the system is further configured to provide filtering of media content that includes one or more objectionable content categories. By providing “safe” filtered content, the system is configured to expand the user-accessible library of media content that meets the user-customizable media content category setting selections.
The filtering operation may be configured to skip or remove objectionable media content or replace objectionable media content with alternative media content that complies with the user-customizable media content category settings selections. In one embodiment, for example, the system may identify and store start and end locations (via a database) within the media content for each potentially objectionable portion of the media content, according to potential user-selectable media content category settings. In this manner, the system can access these locations to use in an active filtering of the content during playback, such as skipping the portions of the media content and/or replacing the portions.
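The lookup of stored locations against a user's settings can be sketched as follows. This is an illustrative sketch; the record shape (start, end, category, rating) and the interpretation of settings as a maximum allowed rating per category are assumptions, not part of the disclosure.

```python
# Illustrative sketch: select the stored spans that active filtering must
# skip or replace during playback for one title, given user settings.

def spans_to_filter(stored_spans, settings):
    """stored_spans: (start, end, category, rating) records kept in the
    database for one title. Returns the time spans whose rating exceeds
    the user's maximum allowed rating for that category, sorted by start."""
    return sorted((start, end)
                  for start, end, category, rating in stored_spans
                  if rating > settings.get(category, 0))
```

The returned spans can then be skipped, or replaced with alternative content, during playback.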
In one embodiment, a subset of the media content available to a user has been or is available to be filtered. As discussed above, this can supplement the media library that satisfies user-customizable settings restricting one or more media content categories. Over time, the AI media agent is configured to scan additional media content (e.g., via an AI large language model or AI neural network) to identify potentially objectionable content, store corresponding markers identifying that potentially objectionable content, and later use the stored markers to filter the newly identified content for playback depending on the user-selected settings.
In the example shown in
Once the user has selected or not selected one or more of the micro-genres 1104 from the list, the user may select the Next icon 1106 to proceed to further options to generate a block/purge list. The user may also select “Do Later” icon 1108 and the AI media agent may be configured to save the selections for later review and application.
The user interface 1100 shown in
The AI media agent can be configured to store user credential information 1128 to automatically access the services on the user's behalf and to store user payment information to transact with one or more services on the user's behalf (e.g., automatically purchase or rent media content selected from the library of available media content matching the user-selected media content category settings). The AI media agent system may provide an authorized budget (e.g., monthly) within which selections may be automatically approved for the agent to transact directly with the third-party service provider.
In one embodiment, for example, the AI media agent includes one or more browser extensions that enable the AI media agent to automatically interact with one or more third parties on the user's behalf to make the media content available as a portion of the user's media library accessible via the AI media agent system. Different browser extensions, for example, may provide login credentials, payment information, or other information that enables the AI media agent to operate as if it were the user accessing the third-party media services.
Once the linking is completed, the AI media agent links the third-party account to the user's AI media agent account, and the user interface may revert to a prior page, such as the block/purge list tool.
The user may select the title/list and edit the flag or hidden categorization for the title/list.
The user interface 1140 shown in
The user interface further includes an Unhide Title/List icon on each hidden title/list and a Hide icon on each public title. The icons allow the user to unhide or hide individual titles or lists of titles. An Unhide Row icon allows the user to unhide an entire row of titles/lists.
An unhide/flag mode enables a user to perform a high volume of individual hiding and flagging of media content. In a purge or unhide list mode, each title includes an icon (e.g., an eye icon) on the thumbnail or other designator of a title/list. The flag, in one example, reads “Flagged” when both hidden and flagged. The flag icon may be missing if hidden but not flagged. Icons may be shown as a partial opacity (e.g., 50% opacity) and white when not hidden or flagged.
The user interface may also include “Tiny Toggle” icons on each button. The Tiny Toggle “pin light” on each button indicates whether the button/algorithm has already been activated on this Block/Purge list or Hypelist/whitelist or not. The Purge buttons 1154 on the example of
The Global Purge button may be available for one or more genres and/or micro-genres, individually.
The Purge Outdated selector allows the user to set by date (e.g., before or after a last release date).
The Purge Unrated selector removes all titles/lists that are not rated (e.g., NR) and/or are below a certain predetermined or selected rating (e.g., 0, 1, or 2 stars out of 5 stars).
The Purge Unpopular selector allows the user to remove all titles not meeting a predetermined or user selected criteria (e.g., remove all titles not in the top 2000 titles rated on IMDB).
The Purge by Studio selector allows the user to add all titles/lists to a block/purge list by studio and/or add all titles/lists to a Hypelist/whitelist by studio by leaving the selector unchecked. If any studio on a whitelist offers a title, the title may remain available even if it is offered by a blocked studio.
The Purge by AI selector is configured to activate an AI prompt window and provides an automated AI prompt to purge content (e.g., on a repeating basis, such as daily, weekly, monthly, etc.). The AI prompt may be used to remove (or add) media content that does not fit the user's preferences or parameters set for a particular list.
It should be understood that where examples describe options for adding or removing media content to or from a block/purge list, those options may be similarly used to add or remove media content to or from a Hypelist/whitelist, or vice versa.
Hypelists/whitelists may be created and edited, and provide for parental controls and/or personal preferences. The Hypelists may be created in a “Hypelab.” The Hypelists may include user-defined and shareable lists. Users can set up their parental or other restrictive controls based on following lists created by others whom they trust, such as friends and family. User profiling can be integrated, with keywords associated with the Hypelists, and a series of keywords (e.g., selected by the user from a cloud of keywords and titles) can be used to define recommended Hypelists. The Hypelists can alternate with the most recent Hype from the user and/or on a periodic or repeating basis.
The Duplicate this List option allows the user to copy and name the copy to create a new list. Other options link to a HypeLab mode for editing a list and/or provide a panel of genres to select from.
The user may rate the title by clicking on the stars in the upper left corner, or by clicking the rate button. The user may flag the title (e.g., mark as explicit) or review the title, in which case a pop-up panel is provided for a text, photo, or video upload (see,
A list of hidden/private or public/visible Hypelist tags are searchable and may be indicated, such as by the eye/blocked eye icons shown in
As titles on the Hypelist have been viewed, the watched titles may be automatically identified as “Seen,” and the user has the option to automatically remove the title from recommendations and/or from the Hypelist.
The AI prompt 1022 may be used to generate a Hypelist, add content to an existing Hypelist, or add content to a Block/Purge list. In one embodiment, the AI prompt 1022 can be used to generate a recurring new list (e.g., weekly, monthly) for a particular Hypelist based on the AI prompt. After the AI media agent generates an algorithm based on the AI prompt 1022, the AI media agent can apply the algorithm, using an AI model (e.g., a large language model (“LLM”)), to a data set including media content to generate a list of media content. The list of media content may be recommended for viewing, used to generate a new list, or added to an existing list.
Once the AI generated content is added to a new or existing list, the list may be edited, such as described above with respect to the HypeLab, and/or content may be hidden or flagged to be removed from the list and/or added to a block/purge list.
User edits to the AI generated content to or from lists can be further used by the AI media agent for future application. Activity of the user, such as watching the content, rating the content, hiding the content or removing the content from a list by the user may be used by the AI media agent as an input to train an AI model (e.g., LLM) for future interactions with the user. Where the user hides or flags content as inappropriate, for example, the AI media agent may generate categories, genres, micro-genres, keywords, or the like to train or adjust the algorithm applied to the model.
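As a purely illustrative sketch of the feedback loop described above, user actions on titles could be converted into signed adjustments to per-keyword preference weights used when the algorithm is next applied; the action names and weight values are hypothetical assumptions:

```python
from collections import defaultdict

# Hypothetical mapping of user activity to a training signal.
ACTION_WEIGHT = {"watched": +1.0, "rated_high": +2.0, "hidden": -1.5, "flagged": -3.0}

def update_profile(profile, title_keywords, action):
    """Nudge per-keyword preference weights based on one user action."""
    delta = ACTION_WEIGHT[action]
    for keyword in title_keywords:
        profile[keyword] += delta
    return profile

profile = defaultdict(float)
# Flagging a title as inappropriate pushes its keywords strongly negative.
update_profile(profile, {"violence", "thriller"}, "flagged")
# A high rating pushes that title's keywords positive.
update_profile(profile, {"comedy", "family"}, "rated_high")
print(dict(profile))  # negative weights steer future lists away from those keywords
```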
Inputs from the user device are received by the server via the user interface 200, interpreted by AI, and acted on by the AI media agent 2000. The system 10 is configured to be accessed remotely from anywhere, as long as an internet connection is available. The main user interface 200 can be on a network-based platform and is the primary interface through which consumers access the system 10. A server 30 has a CPU and memory and hosts a program 40 which drives the system 10. The interface 200 receives inputs, such as those shown in
The AI media agent 2000 is configured to aggregate, curate and filter media content based on the selected content category settings for a user. The AI media agent 2000 is also configured to create block lists and/or whitelists of the media content based at least in part on the content category settings.
A library 204 of content can be built within the system 10 and saved on the server 30 or accessed via the internet 20. In one embodiment, the library includes media content from third party streaming services 2001 including Content Service 1, Content Service 2, Content Service 3, . . . , Content Service N.
The AI media agent 2000 is configured to scrape and save public content data from third party platforms 2001 and to rate and rank content via algorithms driven by the users of the digital content controller.
The AI agent 2000 is configured to save user credentials, such as passwords and log in information, to enable the agent to access services on behalf of the user and to save user financial data, such as credit card or other payment information, to allow the agent to transact (e.g., purchase or rent content) from the services on behalf of the user. The AI agent may be authorized by the user to act as an agent of the user, such as via a limited power of attorney, to be able to access the services on his or her behalf.
An ADA compliant remote control device 2003, such as used by disabled, child, or elderly users, may be used to direct the AI media agent 2000 to navigate Internet pages, including third party services, with the AI media agent acting on behalf of the user. The ADA compliant remote-control device 2003 may work with one or more browser extensions to allow the AI media agent 2000 to access any needed services.
In the embodiment of
In one embodiment, a method of providing an artificial intelligence (AI) media agent for a digital content controller is provided. In this embodiment, the method includes receiving user inputs via a user interface of a digital content controller comprising respective levels of user-defined content filters for a plurality of content categories; using artificial intelligence to analyze each of a plurality of media content members of a library of media content to assign ratings for each of the plurality of content categories for each media content member of the library, wherein the operation of analyzing the library comprises using artificial intelligence to review each media content member to identify content ratings for each of the plurality of content categories and identify portions of content corresponding to the content ratings for each of the plurality of content categories; identifying objectionable portions of content based on the plurality of content ratings compared to the respective levels of user-defined content filters; and automatically building a dynamic library of content appropriate for respective levels of user-defined content filters, wherein the dynamic library comprises media content meeting the respective levels of user-defined content filters and media content configured to be altered during playback by removing or replacing one or more objectionable portions of content.
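As an illustrative sketch only of the comparison step in this method, per-category AI ratings could be checked against the user-defined filter levels, keeping compliant titles as-is and marking titles with identified objectionable portions for alteration during playback; the category names, 0-5 scale, and timestamps below are hypothetical assumptions:

```python
# Hypothetical user-defined filter levels: maximum acceptable rating per category.
FILTERS = {"violence": 2, "language": 3, "nudity": 0}

# Hypothetical AI-rated library members with any identified objectionable portions.
LIBRARY = [
    {"title": "Safe Show",
     "ratings": {"violence": 1, "language": 1, "nudity": 0},
     "objectionable_portions": []},
    {"title": "Mostly Fine",
     "ratings": {"violence": 3, "language": 2, "nudity": 0},
     "objectionable_portions": [("00:41:10", "00:41:55")]},
]

def build_dynamic_library(library, filters):
    """Keep compliant titles as-is; mark fixable titles for altered playback."""
    dynamic = []
    for item in library:
        violations = [c for c, level in item["ratings"].items()
                      if level > filters.get(c, 0)]
        if not violations:
            dynamic.append({**item, "playback": "as-is"})
        elif item["objectionable_portions"]:
            # The offending portions can be removed or replaced during playback.
            dynamic.append({**item, "playback": "altered"})
        # Otherwise the title is excluded from the dynamic library.
    return dynamic

dyn = build_dynamic_library(LIBRARY, FILTERS)
print([(e["title"], e["playback"]) for e in dyn])
```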
The method can provide a user interface for the user to define a block list of content. The block list may further include one or more sub-genres of content (e.g., a micro-genre determined using artificial intelligence analysis of media content and/or metadata associated with the media content).
The method can further include providing curated media content specifically tailored to the user based at least in part on the respective levels of content category filters and the block list.
The method can also provide a user interface for the user to define a whitelist of content. The whitelist may further include one or more sub-genres of content (e.g., a micro-genre determined using artificial intelligence analysis of media content and/or metadata associated with the media content).
The method can further include providing curated media content specifically tailored to the user based at least in part on the respective levels of content category filters and the whitelist.
In another embodiment, the media content configured to be altered is altered during playback on a network (e.g., a local network or cloud network) that is under the control of the user.
In one embodiment, the AI media agent is instantiated on an edge AI processing network located on a network (e.g., within a local network or cloud network) that is under control of the user.
In another embodiment, a media artificial intelligence (AI) agent for a digital content controller is provided. In this embodiment, the media AI agent provides a user interface with a program in communication with the media AI agent. The media AI agent comprises an AI processor to receive user inputs via a user interface of a digital content controller comprising respective levels of user-defined content filters for a plurality of content categories. The AI processor is also configured to cause the media AI agent to use artificial intelligence to analyze each of a plurality of media content members of a library of media content to assign ratings for each of the plurality of content categories for each media content member of the library. The operation of analyzing the library comprises using artificial intelligence to review each media content member to identify content ratings for each of the plurality of content categories and identify portions of content corresponding to the content ratings for each of the plurality of content categories. The AI processor causes the media AI agent to identify objectionable portions of content based on the plurality of content ratings compared to the respective levels of user-defined content filters. The media AI agent is also configured to automatically build a dynamic library of content appropriate for the respective levels of user-defined content filters, wherein the dynamic library comprises media content meeting the respective levels of user-defined content filters and media content configured to be altered during playback by removing or replacing one or more objectionable portions of content.
The media AI agent can provide a user interface for the user to define a block list of content. The block list may further include one or more sub-genres of content (e.g., a micro-genre determined using artificial intelligence analysis of media content and/or metadata associated with the media content).
The media AI agent can further provide curated media content specifically tailored to the user based at least in part on the respective levels of content category filters and the block list.
The media AI agent can also provide a user interface for the user to define a whitelist of content. The whitelist may further include one or more sub-genres of content (e.g., a micro-genre determined using artificial intelligence analysis of media content and/or metadata associated with the media content).
The media AI agent can further provide curated media content specifically tailored to the user based at least in part on the respective levels of content category filters and the whitelist.
In another embodiment, the media content configured to be altered is altered during playback on a network (e.g., a local network or cloud network) that is under the control of the user.
In one embodiment, the AI media agent is instantiated on an edge AI processing network located on a network (e.g., within a local network or cloud network) that is under control of the user.
It will be appreciated that the aforementioned methods, systems and devices may be modified to have some components and steps removed, or may have additional components and steps added, all of which are deemed to be within the spirit of the present disclosure. Even though the present disclosure has been described in detail with reference to specific embodiments, it will be appreciated that various modifications and changes can be made to these embodiments without departing from the scope of the present disclosure as set forth in the claims. The specification and the drawings are to be regarded in an illustrative rather than a restrictive sense.
This application is a continuation-in-part of, and claims priority to, U.S. patent application Ser. No. 16/262,397, filed on Jan. 30, 2019, entitled DIGITAL CONTENT CONTROLLER, which is a continuation of, and claims priority to, U.S. patent application Ser. No. 14/384,973, filed on Sep. 12, 2014, which is based on and claims priority to PCT/US13/32216, filed on Mar. 15, 2013, entitled “DIGITAL PARENTAL CONTROLS INTERFACE,” which is based on and claims priority to U.S. Provisional Patent Application No. 61/611,357, filed on Mar. 15, 2012, entitled “A DIGITAL PARENTAL CONTROLS INTERFACE THAT LIMITS MEDIA CONTENT RATED BY A NUMERICAL VALUE SYSTEM,” each of which is hereby incorporated by reference in its entirety.
| Number | Date | Country |
|---|---|---|
| 61611357 | Mar 2012 | US |

| | Number | Date | Country |
|---|---|---|---|
| Parent | 14384973 | Sep 2014 | US |
| Child | 16262397 | | US |

| | Number | Date | Country |
|---|---|---|---|
| Parent | 16262397 | Jan 2019 | US |
| Child | 19088828 | | US |