Video game systems execute a wide variety of video game applications to provide interactive user gaming experiences. The playing of audio content is an important part of the interactive user gaming experience provided by many popular video game applications, especially interactive music games. Many video games are designed for use with specific, pre-configured audio content that is used to provide the interactive gaming experience. Users, however, may desire to hear, and have legal access to, a more diverse selection of audio content that would enhance the experience provided by a video game such as an interactive music game.
Systems and techniques for managing audio content for use with a video game playable via a video game system are described herein. One or more audio content sources, other than an audio content catalog pre-configured for use with a particular video game, are dynamically detected by the video game system. Examples of such audio content sources include but are not limited to: portable media players or recorders; personal computers; network-based media download or streaming services or centers; and individual computer-readable storage media such as hard drives, memory sticks, USB storage devices and the like.
Audio content items, which may have disparate formats, are aggregated from detected audio content sources by populating a data structure with data objects. The data objects are configured to store references to individual content sources and to audio content items stored thereby, including metadata information associated with individual audio content items. As data is stored in the data objects, the data objects are used to dynamically render, via a graphical user interface (“GUI”) for the video game, certain visual objects representing audio content stored on detected audio content sources. For example, for each audio content item, visual objects rendered via the GUI may include the name of the audio content item and an icon representing its source. Via the GUI, a user can browse, search/sort, and select audio content items for use with the video game, regardless of the source or original format of the selected audio content items. In one exemplary implementation, when a user selects a particular audio content item for use with the video game, the selected audio content item is translated into a format usable by the video game, if necessary, and placed into the audio content catalog pre-configured for use with the video game.
This Summary is provided to introduce a selection of concepts in a simplified form. The concepts are further described in the Detailed Description section. Elements or steps other than those described in this Summary are possible, and no element or step is necessarily required. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended for use as an aid in determining the scope of the claimed subject matter. The claimed subject matter is not limited to implementations that solve any or all disadvantages noted in any part of this disclosure.
The systems and techniques for managing audio content for use with a video game playable via a video game system that are described herein provide for a dynamic, coherent visual representation of audio content items having disparate sources and formats using a single graphical user interface. Via the graphical user interface, a user browses, sorts/searches, and selects particular audio content for use with the video game.
Turning to the drawings, where like numerals designate like components,
As shown, video game system 100 is configured to execute a video game 101 using audio content items 105 obtained from a number of audio content sources (discussed further below). Audio content items 105 are commercial or non-commercial audio samples in any compressed or un-compressed file format, including but not limited to music samples, speech samples, and the like. Audio content sources in general may be any electronic devices, systems, or services (or any physical or logical element of such devices, systems, or services), operated by commercial or non-commercial entities, which legally store DRM-free audio content items 105. Exemplary audio content sources include audio content catalog 108, network servers/services 104, and consumer electronic devices 102.
Audio content catalog 108 represents any data construct or physical device defined to store information for accessing audio content items 105 pre-configured for use with video game 101. It will be appreciated that audio content catalog 108 and audio content items 105 stored thereby need not be co-located with video game 101, and may be located in any suitable computer-readable storage medium (computer-readable storage media 404 are shown and discussed further below, in connection with
Network servers/services 104 represent any network-based computer-readable storage media from which network-based audio content items 105 may be accessed (via one or more networks 110) by video game system 100. Examples of network servers/services include but are not limited to network-based media download or streaming services or centers. Networks 110 represent any existing or future, public or private, wired or wireless, wide-area or local area, one-way or two-way data transmission infrastructures, technologies or signals. Exemplary networks 110 include: the Internet; managed wide-area networks (for example, cellular networks, satellite networks, fiber-optic networks, co-axial networks, hybrid networks, copper wire networks, and over-the-air broadcasting networks); local area networks; and personal area networks.
Consumer electronic devices 102 represent any known or later developed portable or non-portable consumer devices, including but not limited to: personal computers; telecommunication devices; personal digital assistants; media players or recorders (including such home entertainment devices as set-top boxes, game consoles, televisions, and the like); in-vehicle devices; and individual computer-readable storage media such as hard drives, memory sticks, USB storage devices and the like.
Aspects of an audio content management system (“ACMS”) 120 (discussed in further detail in connection with
With continuing reference to
As shown, ACMS 120 includes: audio source discovery engine 202; audio content aggregation engine 204 (for populating data structure 206 with data objects 207); and audio content presentation engine 208, which utilizes sorting criteria 209.
Audio source discovery engine 202 detects when a particular audio content source is in communication with video game system 100, and defines the way in which ACMS 120 communicates with a particular audio content source to populate data structure 206 (discussed further below). In one possible implementation, multiple protocol adapters (not shown) are defined for a variety of known audio content sources, with each adapter configured to connect to an audio content source using a predetermined protocol, and accommodate the enumeration and/or retrieval of audio content from the audio content source. Such communication may be initiated by ACMS 120 or a particular audio content source. In an alternate embodiment, a specific protocol adapter may be defined that is generally supported by all audio content sources.
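The protocol-adapter arrangement described above can be sketched as follows. This is a hypothetical illustration only: the names `SourceAdapter`, `UsbAdapter`, `matches`, `enumerate_items`, and `find_adapter` are assumptions introduced for this sketch and do not appear in the disclosure.

```python
from abc import ABC, abstractmethod


class SourceAdapter(ABC):
    """One adapter per kind of audio content source (hypothetical)."""

    @abstractmethod
    def matches(self, source_id: str) -> bool:
        """Return True if this adapter handles the detected source."""

    @abstractmethod
    def enumerate_items(self) -> list:
        """Return identifiers for audio content items on the source."""


class UsbAdapter(SourceAdapter):
    """Illustrative adapter for a USB storage device."""

    def matches(self, source_id: str) -> bool:
        return source_id.startswith("usb:")

    def enumerate_items(self) -> list:
        # Stand-in for an actual scan of the device's file system.
        return ["usb:/music/track1.mp3"]


def find_adapter(adapters, source_id):
    """Pick the first registered adapter that supports the detected source."""
    for adapter in adapters:
        if adapter.matches(source_id):
            return adapter
    return None
```

A new kind of audio content source can then be supported by registering one additional adapter, without changing the discovery logic itself.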
Audio content aggregation engine 204 is responsible for enumerating the audio contents of audio content sources detected by audio source discovery engine 202, and for populating data structure 206 (which may be a database, declarative language schema or document, table, array, or another data structure stored in a permanent or temporary computer-readable medium) with data objects 207, which are configured to store data regarding audio content items 105 from particular audio content sources. Audio content enumeration generally involves parsing information received from a particular audio content source, and transcribing the information in accordance with the predefined structure of data objects 207. Enumeration of the audio content of detected audio content sources may occur using any known or later developed public or proprietary technique, such as media transfer protocol (“MTP”), and data push or pull techniques may be employed. Data structure 206 may be fully populated with the audio contents of a particular audio content source prior to presentation of GUI 121 to a user, or GUI 121 may present the contents of a particular audio content source “on the fly”—as such contents are discovered and enumerated. Data stored in data structure 206/data objects 207 may be selectively available only according to licensing or specifications for a particular video game or system, or may be usable by any video game or system.
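One way to realize the "on the fly" population described above is a generator that yields one transcribed data object per listing entry, so a GUI could render each item as it is discovered rather than waiting for the full listing. The `parse_entry` helper and the dictionary layout are assumptions made for this sketch.

```python
def parse_entry(source, raw):
    """Transcribe one raw listing entry into the data-object layout
    (field names are illustrative, not from the disclosure)."""
    return {
        "source": source,
        "path": raw["path"],
        "title": raw.get("title", raw["path"]),
    }


def enumerate_items(source, raw_listing):
    """Yield data objects one at a time, enabling 'on the fly'
    presentation instead of batch population."""
    for raw in raw_listing:
        yield parse_entry(source, raw)


# Stand-in for data structure 206, populated incrementally.
data_structure = []
for obj in enumerate_items("usb0", [{"path": "a.mp3", "title": "A"},
                                    {"path": "b.wav"}]):
    data_structure.append(obj)  # a GUI update could occur here per item
```

Batch population is the degenerate case: drain the generator fully before presenting the GUI.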
Data objects 207 facilitate the cataloging, searching/sorting, and presentation of audio content items 105 from a number of detected audio content sources. As shown, an audio content item reference 222 of a particular data object 207 is used to store data about a particular audio content item 105. Such data may include, but is not limited to: a direct or indirect reference to a storage location of the particular audio content item (such as a URL, a variable, a vector, or a pointer); a reference to a format of the particular audio content item; the particular audio content item itself; and/or a reference to a particular visual object 211 used for representing the particular audio content item via GUI 121.
A source reference 220 of a particular data object 207 is used to store data about a particular audio content source from which a particular audio content item originates. Such data may include but is not limited to direct or indirect references to instructions, protocols, or interfaces usable for establishing communication with the particular audio content source, or a reference to a particular visual object 211 used for representing the particular audio content source via GUI 121. Via audio content item references 222 and/or source references 220, operators in proprietary environments, such as network-based service providers (for example, online music vendors, or cable or satellite providers), may be able to identify available audio content items and still restrict access to the content, or even interact directly with a user, to provide richer user experiences via a particular video game.
Metadata items 224 associated with a particular audio content item may also be stored within one or more data objects 207. Metadata is any descriptive data or identifying information (such as title information, artist information, starting and ending time information, expiration date information, hyperlinks to websites, file size information, format information, photographs, graphics, descriptive text, and the like) in computer-usable form that is associated with an audio content item. Metadata may be provided by different audio content sources, or may be added by ACMS 120 to improve information retrieval. Generally, metadata items 224 would provide enough information to enable GUI 121 to provide a rich discovery and browse scenario of audio content items from a variety of audio content sources without requiring specific knowledge of the user interfaces normally used for managing audio content via the different audio content sources.
A catalog indicator 226 portion is a flag or other construct that indicates when a particular audio content item 105 has been added to the audio content catalog associated with a particular video game 101 (displayable as an icon or other visual object via GUI 121), so that a user knows that the audio content item does not need to be added for use within the video game.
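Taken together, the portions of a data object 207 described above (source reference 220, audio content item reference 222, metadata items 224, and catalog indicator 226) might be modeled as a simple record. The class and field names below are illustrative assumptions, not part of the disclosure.

```python
from dataclasses import dataclass, field


@dataclass
class DataObject:
    """Hypothetical layout for a data object (cf. 207)."""
    source_reference: str                           # cf. 220: how to reach the source
    item_reference: str                             # cf. 222: URL/path/pointer to the item
    metadata: dict = field(default_factory=dict)    # cf. 224: title, artist, etc.
    in_catalog: bool = False                        # cf. 226: catalog indicator flag

    def mark_cataloged(self) -> None:
        """Set the indicator once the item is added to the game's
        catalog, so the GUI can show no further action is needed."""
        self.in_catalog = True
```

The catalog indicator drives the GUI icon mentioned above: a renderer would simply check `in_catalog` when drawing each item.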
Audio content presentation engine 208 utilizes various sorting criteria 209 to leverage associations between audio content items 105 from various audio content sources, and establishes and provides access to such audio content items via a single GUI 121. Audio content items 105 from multiple sources are generally searchable/sortable using standard search algorithms, based on user-input or automatic queries derived from sorting criteria 209. Subsets of available audio content items that meet one or more sorting criteria 209 may be displayed via the use of various visual objects 211. Because searchable information is organized/correlated in accordance with the format provided by data objects 207, efficient, accurate searching and presentation of audio content items from disparate audio content sources is possible. Virtually unlimited predetermined or dynamically created sorting criteria 209 are possible. Sorting criteria 209 may be received from users, pre-programmed into ACMS 120 in any operating environment, or received from third parties (such as audio content sources). Inferences can also be made by inspecting individual metadata items to create “intelligent” sorting criteria.
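The cross-source searching/sorting described above can be illustrated with a minimal filter over aggregated items. The item layout and the key/value form of a sorting criterion are assumptions for this sketch; any query representation would serve.

```python
# Aggregated items from three different sources, in one uniform layout.
items = [
    {"source": "catalog", "title": "Gamma", "artist": "Band X"},
    {"source": "usb0",    "title": "Beta",  "artist": "Band Y"},
    {"source": "service", "title": "Alpha", "artist": "Band X"},
]


def select(items, criterion):
    """Return items matching every key/value pair in the criterion,
    sorted by title, regardless of which source each item came from."""
    hits = [i for i in items
            if all(i.get(k) == v for k, v in criterion.items())]
    return sorted(hits, key=lambda i: i["title"])


by_artist = select(items, {"artist": "Band X"})
```

Because every item is transcribed into the same layout during aggregation, a single query spans the pre-configured catalog, a USB device, and a network service alike.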
With continuing reference to
The method starts at block 300, and continues at block 302, where an audio content catalog for use with the video game is identified, such as audio content catalog 108 for use with video game 101. Next, at block 304, one or more other audio content sources accessible by the video game system are dynamically detected. In the context of ACMS 120, audio source discovery engine 202 may identify specific source adapters/interfaces to communicate with different audio content sources using appropriate communication protocols or techniques.
As indicated at block 306, audio content items on audio content sources identified at block 304 are enumerated, and at block 308, based on the enumeration, a data structure, such as data structure 206, is populated with data objects, such as data objects 207. In the context of ACMS 120, audio content aggregation engine 204 is responsible for enumeration of audio content items and population of data objects 207. Enumeration and data structure population may also involve ACMS 120 adding certain useful computer-usable descriptors or links to data structure 206/data objects 207, which can facilitate the identification of relationships between audio content items from different audio content sources.
At block 310, based on the data objects, certain visual objects are rendered on a graphical user interface, such as GUI 121. In the context of ACMS 120, audio content presentation engine 208 displays visual objects 211 associated with audio content item references 222 and/or source references 220, in a manner that enables a user to browse specific visual objects based on a variety of sorting criteria 209, and to select specific visual objects 211 representing audio content items for use with video game 101. Sorting/searching generally involves identifying and evaluating relationships between user-input information and metadata items 224, audio content item references 222, and/or source references 220. Sorting criteria 209 may be used in the identification and evaluation of such relationships, and such relationships may be pre-established or established on the fly. For example, relationships defined by metadata items 224 that meet certain sorting criteria 209 may be pre-established or may be established in response to user input.
In the case where GUI 121 presents the contents of a particular audio content source as such contents are discovered and enumerated, the visual objects of GUI 121 are automatically updated to present to a user the actual available audio content sources and/or audio content items for further interaction. In addition, a counter that tallies the total number of available audio content items may be displayed and dynamically updated. In one possible implementation, an icon is prominently displayed (inline or in another manner) with a visual object representing a particular audio content item, which denotes the source from which the item originated. For ease of use, the source indicator icon can be toggled on or off by a user, and any combination of sources toggled on or off is handled. Additionally, if the source is a network-based service, the audio content item may also include other material, such as lyrics and/or a music video, and possibly a price. Icons denoting which of these materials is included with the audio content item may also be displayed inline (or in another manner) with the visual object representing the audio content item.
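The per-source toggling and the dynamically updated counter described above can be sketched as a filter over the aggregated items. The function name `visible_items` and the set-of-enabled-sources representation are assumptions for illustration.

```python
def visible_items(items, enabled_sources):
    """Filter aggregated items to those whose source toggle is on;
    any combination of enabled/disabled sources is handled."""
    return [i for i in items if i["source"] in enabled_sources]


items = [
    {"source": "usb0",    "title": "A"},
    {"source": "service", "title": "B"},
    {"source": "usb0",    "title": "C"},
]

shown = visible_items(items, {"usb0"})
counter = len(shown)  # the dynamically updated tally of available items
```

Each time a source is discovered, removed, or toggled, the GUI would recompute `shown` and redraw the counter.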
As indicated at block 312, upon selection of a particular audio content item 105 for use with the video game (from a source other than the audio content catalog), the audio content item is placed into the audio content catalog. It will be appreciated that in the process of enumeration and/or data object population, the audio content item may have already been placed into temporary or permanent memory accessible by video game system 100, or alternatively information within a data object (such as a URL, pointer, vector, or variable) may be used to retrieve the audio content item from the particular audio content source at the time of user selection. Additionally, the process of placing the audio content item into the audio content catalog may involve translating the format of the audio content item to a different format, and/or interacting with network-based services to purchase, license, or otherwise use the audio content item. Any known or later developed technique for such format translation may be employed.
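The overall method of blocks 304 through 312 can be sketched end to end: detect and enumerate sources, populate data objects, then place a selected item into the catalog. The function names, the dictionary layouts, and the `.game` extension standing in for format translation are all illustrative assumptions.

```python
def manage_audio_content(catalog, sources, selection):
    """Aggregate items from detected sources (blocks 304-308), then
    place the selected item into the catalog (block 312)."""
    data_objects = []
    for source, listing in sources.items():
        for path in listing:
            data_objects.append({"source": source, "path": path})

    for obj in data_objects:
        if obj["path"] == selection:
            # Stand-in for format translation: rename to a game-usable
            # format; a real system might transcode or license here.
            translated = obj["path"].rsplit(".", 1)[0] + ".game"
            catalog.append(translated)

    return catalog, data_objects


catalog, objs = manage_audio_content(
    [],                                    # pre-configured catalog 108
    {"usb0": ["a.mp3"], "svc": ["b.wav"]}, # detected sources
    "b.wav",                               # the user's selection
)
```

Rendering the GUI (block 310) would sit between population and selection; it is omitted here since the sketch is non-interactive.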
In this manner, it is possible to provide a single video game GUI for user selection of audio content items from disparate audio content sources and/or formats. A wide variety of fresh audio content may be discovered and accessed, even when the audio content is not pre-configured for use with the video game. The flexible architecture of ACMS 120 enables efficient yet complex searching and data storage models that accommodate frequently changing audio sources and audio content.
With continued reference to
As shown, operating environment 400 includes processor 402, computer-readable media 404, input interfaces 111, output interfaces 103 (input and/or output interfaces implement GUI 121, not shown), network interfaces 418, and specialized hardware/firmware 442. Computer-executable instructions 406 are stored on computer-readable media 404, as are, among other things, data objects 207, visual objects 211, sorting criteria 209, and audio content catalog 108. One or more internal buses 421 may be used to carry data, addresses, control signals and other information within, to, or from operating environment 400 or elements thereof.
Processor 402, which may be a real or a virtual processor, controls functions of operating environment 400 by executing computer-executable instructions 406. Processor 402 may execute instructions 406 at the assembly, compiled, or machine-level to perform a particular process.
Computer-readable media 404 represent any number and combination of local or remote devices, in any form, now known or later developed, capable of recording, storing, or transmitting computer-readable data, such as computer-executable instructions 406, data objects 207, visual objects 211, sorting criteria 209, or audio content catalog 108. In particular, computer-readable media 404 may be, or may include, a semiconductor memory (such as a read only memory (“ROM”), any type of programmable ROM (“PROM”), a random access memory (“RAM”), or a flash memory, for example); a magnetic storage device (such as a floppy disk drive, a hard disk drive, a magnetic drum, a magnetic tape, or a magneto-optical disk); an optical storage device (such as any type of compact disk or digital versatile disk); a bubble memory; a cache memory; a core memory; a holographic memory; a memory stick; a paper tape; a punch card; or any combination thereof. Computer-readable media 404 may also include transmission media and data associated therewith. Examples of transmission media/data include, but are not limited to, data embodied in any form of wireline or wireless transmission, such as packetized or non-packetized data carried by a modulated carrier signal.
Computer-executable instructions 406 represent any signal processing methods or stored instructions. Generally, computer-executable instructions 406 are implemented as software components according to well-known practices for component-based software development, and encoded in computer-readable media (such as computer-readable media 404). Computer programs may be combined or distributed in various ways. Computer-executable instructions 406, however, are not limited to implementation by any specific embodiments of computer programs, and in other instances may be implemented by, or executed in, hardware, software, firmware, or any combination thereof.
As shown, certain computer-executable instructions 406 implement source discovery functions 408, which implement aspects of audio source discovery engine 202; certain computer-executable instructions 406 implement aggregation functions 410, which implement aspects of audio content aggregation engine 204; and certain computer-executable instructions 406 implement presentation functions 412, which implement aspects of audio content presentation engine 208.
Network interface(s) 418 are one or more physical or logical elements that enable communication by operating environment 400 via one or more protocols or techniques usable in connection with networks 110.
Specialized hardware 442 represents any hardware or firmware that implements functions of operating environment 400. Examples of specialized hardware include encoder/decoders (“CODECs”), decrypters, application-specific integrated circuits, secure clocks, optical disc drives, and the like.
It will be appreciated that particular configurations of operating environment 400 or ACMS 120 may include fewer, more, or different components or functions than those described. In addition, functional components of operating environment 400 or ACMS 120 may be implemented by one or more devices, which are co-located or remotely located, in a variety of ways.
Although the subject matter herein has been described in language specific to structural features and/or methodological acts, it is also to be understood that the subject matter defined in the claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.
It will further be understood that when one element is indicated as being responsive to another element, the elements may be directly or indirectly coupled. Connections depicted herein may be logical or physical in practice to achieve a coupling or communicative interface between elements. Connections may be implemented, among other ways, as inter-process communications among software processes, or inter-machine communications among networked computers.
The word “exemplary” is used herein to mean serving as an example, instance, or illustration. Any implementation or aspect thereof described herein as “exemplary” is not necessarily to be construed as preferred or advantageous over other implementations or aspects thereof.
As it is understood that embodiments other than the specific embodiments described above may be devised without departing from the spirit and scope of the appended claims, it is intended that the scope of the subject matter herein will be governed by the following claims.