1. Field of the Invention
The present invention generally relates to the creation of musical content. More specifically, the present invention relates to a system infrastructure that allows for the social composition and production of music and musical thoughts in real-time or near real-time.
2. Description of the Related Art
The music industry generates tens of billions of dollars on an annual basis. Contributing to this multi-billion dollar industry are any number of “players” or “industries within the industry.” Artists and other creative talents generate musical content for performance and distribution. Content providers and distributors make this musical content available to consumers for enjoyment through traditional record and compact disc sales as well as downloadable services such as iTunes and streaming services such as Spotify. Intermediate “middleware” providers such as Gracenote are now a critical part of the music industry ecosystem. These providers offer a wide variety of services including music recognition technologies, metadata, and any number of other identification, discovery, and connection services.
Notwithstanding the existence and financial success of multiple contributors and multiple strata of the music industry, social media remains an unnaturally silent partner. For example, there is no social medium for the online creation of music in real time by amateurs or professionals. Messaging has media such as Twitter and Facebook; still visual images (e.g., digital photography) have Instagram and Flickr; and video content has Vine and YouTube. But there is no equivalent medium for music, nor is there a social venue allowing for collaborative digital musical content creation in real-time or near real-time.
There is a need in the art for a system and method that allows for the social composition and production of music and musical thoughts in real-time or near real-time by both amateurs and professionals. There is a corresponding need for a musical composition and production infrastructure to support the aforementioned social creation of music and musical thoughts.
A first claimed embodiment concerns a system for developing musical content. The system includes a front-end application executing on a computing device and that provides a musical contribution. The system also includes an API server that receives a musical contribution from the front-end application and that generates a job ticket for creation of social co-created musical content. The system further includes a messaging server that receives the job ticket from the API server and is communicatively coupled to the database, composition server, and production server. The system also includes a database that maintains the musical contribution from the front-end application, data related to the generation of the musical contribution, musical blueprints, and rendered musical content, all associated with a job ticket from the messaging server. A composition server creates a musical blueprint using the musical contribution, and a production server renders musical content using the musical blueprint; the rendered musical content is provided through the front-end application for playback and interaction.
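By way of illustration only, the claimed ticket flow may be sketched as follows. The names used here (`JobTicket`, `api_server_receive`, `composition_step`, `production_step`) and the placeholder blueprint contents are illustrative assumptions and not part of the disclosure:

```python
import uuid
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class JobTicket:
    """Illustrative job ticket correlating one contribution to its artifacts."""
    contribution: dict
    ticket_id: str = field(default_factory=lambda: uuid.uuid4().hex)
    blueprint: Optional[dict] = None      # produced by the composition step
    rendered: Optional[bytes] = None      # produced by the production step

def api_server_receive(contribution):
    """API server role: accept a contribution and open a job ticket."""
    return JobTicket(contribution=contribution)

def composition_step(ticket):
    """Composition server role: derive a musical blueprint from the contribution."""
    ticket.blueprint = {"source": ticket.contribution["kind"],
                        "notes": ["C4", "E4", "G4"]}
    return ticket

def production_step(ticket):
    """Production server role: render sound data from the blueprint."""
    ticket.rendered = " ".join(ticket.blueprint["notes"]).encode()
    return ticket

ticket = production_step(composition_step(api_server_receive({"kind": "hum"})))
```

The ticket is the single unit of correlation: every downstream artifact (blueprint, rendered data) attaches to the identifier opened by the API server.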
Embodiments of the present invention provide an infrastructure that allows for social composition and production of musical thoughts in real-time or near real-time by both amateurs and professionals. Embodiments of the present invention may be utilized to allow for the combination of individual musical contributions into single, cohesive musical thoughts. These musical thoughts may be presented to collaborating creators or another audience for approval, refinement, or further derivation. The present invention may be extended to other creative endeavors including but not limited to the written word, video, and digital imagery.
The front end application 110 operating on mobile devices and work stations as illustrated in
Such social contributions of musical thought may occur on a mobile device as might be common amongst amateur or non-professional content creators. Social contributions may also be provided at a professional workstation or server system executing an enterprise version of the application 110 as might be more common amongst music or industry professionals. The front end application 110 connects to the API server 120 over a wired, wireless, or heterogeneous communication network that may be public, proprietary, or a combination of the foregoing.
The API server 120 of
The API server 120 of
Messaging server 130 of
Database servers 140 provide storage for infrastructure 100. Database 140 maintains instances of user musical thoughts from various users such as “hums” and “taps.” Such musical thoughts may be stored on web accessible storage services such as the Amazon Web Services (AWS) Simple Storage Service (AWS S3) whereby the database server 140 stores web accessible addresses to sound and other data files representing those musical thoughts. Database 140 may also maintain user information, including but not limited to user profiles and data associated with those profiles, including user tastes, search preferences, and recommendations. Database 140 may also maintain information concerning genres, compositional grammar rules and styles as might be used by composition server 150 and instrumentation information as might be utilized by production server 160.
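A minimal sketch of the database role described above: the store keeps web-accessible *addresses* to sound files (as on a service such as AWS S3) keyed by job ticket, rather than the sound data itself. The function names and example URLs are hypothetical:

```python
# In-memory stand-in for database 140: ticket id -> {kind: web address}.
database = {}

def store_contribution(ticket_id, kind, web_address):
    """Record a 'hum' or 'tap' under its job ticket as a web-accessible address."""
    database.setdefault(ticket_id, {})[kind] = web_address

def lookup(ticket_id):
    """Resolve every contribution address correlated to a ticket."""
    return database.get(ticket_id, {})

store_contribution("t-001", "hum", "https://bucket.example/hums/abc.wav")
store_contribution("t-001", "tap", "https://bucket.example/taps/def.wav")
```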
Database 140 may further maintain executable files and information related to music information retrieval (MIR). MIR files might include extraction tools that process “hums” and peak detection for identifying “taps.” Database 140 can also correlate tickets to various data elements. For example, database 140 identifies which “hums” relate to which “taps” by way of job tickets.
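The peak-detection idea mentioned for identifying "taps" can be illustrated with a naive local-maximum detector over an amplitude envelope. This is a simplification for illustration; a production MIR extractor would operate on an onset-strength envelope with far more robust logic:

```python
def detect_taps(envelope, threshold=0.5):
    """Mark a sample as a 'tap' onset when it exceeds the threshold and is a
    local maximum relative to both neighbours. Purely illustrative."""
    peaks = []
    for i in range(1, len(envelope) - 1):
        if (envelope[i] > threshold
                and envelope[i] >= envelope[i - 1]
                and envelope[i] > envelope[i + 1]):
            peaks.append(i)
    return peaks

# Amplitude envelope with two clear strikes at indices 2 and 5:
env = [0.0, 0.1, 0.9, 0.2, 0.1, 0.8, 0.1, 0.0]
onsets = detect_taps(env)
```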
Composition server 150 “listens” for tickets that are queued by messaging server 130 and maintained by database 140. Such tickets reflect the need for execution of the composition and production process. Composition server 150 maintains a composition module that is executed to generate a musical blueprint that incorporates a “hum” and “tap” in the context of a given musical genre. Multiple tickets may be issued by the API server 120 to the composition server 150 to produce scores or blueprints for each hum or tap individually and store these in the database 140. The scores of a pair of “hums” and “taps” are then used by the composition server 150 to produce the score or blueprint for rendering to sound data by the production server 160.
The composition server 150 will next create rendering tickets on the messaging server 130. The production server 160 retrieves tickets for rendering and the score or blueprint as generated through the execution of the composition module. Production server 160 then applies instrumentation to the same. The end result of the composition process is maintained in database 140.
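The "listen for tickets" pattern described in the two preceding paragraphs may be sketched as a queue consumer that emits a follow-on rendering ticket, assuming hypothetical queue and field names not drawn from the disclosure:

```python
import queue

composition_queue = queue.Queue()   # tickets queued by the messaging server
rendering_queue = queue.Queue()     # follow-on tickets for the production server
blueprints = {}                     # stands in for blueprint storage in database 140

def composition_worker():
    """Drain pending composition tickets; for each, store a blueprint and
    post a rendering ticket for the production server to pick up."""
    while not composition_queue.empty():
        ticket = composition_queue.get()
        blueprints[ticket["id"]] = {"hum": ticket["hum"],
                                    "tap": ticket["tap"],
                                    "genre": ticket["genre"]}
        rendering_queue.put({"id": ticket["id"]})

composition_queue.put({"id": "t-42", "hum": "melody-a",
                       "tap": "rhythm-b", "genre": "jazz"})
composition_worker()
```

Because the worker pulls work rather than having it pushed, any number of composition instances can consume the same queue without coordination.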
Production server 160 utilizes data in database 140 that corresponds to a given ticket and corresponding musical blueprints. Utilizing this information, production server 160 may render collaborative and socially co-created musical content such as the combination of a “hum” and “tap” in the context of a given genre. Production server 160 listens to a ticket queue on messaging server 130 and when a ticket identifies the need for rendering by the production server 160, the requisite data is acquired from the database 140 to allow for the rendering process to take place.
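The rendering step itself can be sketched as turning a blueprint's note list into raw samples. A sine oscillator stands in for real instrumentation here; the blueprint layout and parameter values are illustrative assumptions:

```python
import math

def render(blueprint, sample_rate=8000, note_seconds=0.25):
    """Render each note frequency in the blueprint to raw floating-point
    samples using a sine 'instrument' (a stand-in for real instrumentation)."""
    samples = []
    for freq in blueprint["frequencies"]:
        for n in range(int(sample_rate * note_seconds)):
            samples.append(math.sin(2 * math.pi * freq * n / sample_rate))
    return samples

blueprint = {"frequencies": [261.63, 329.63, 392.00]}  # C4, E4, G4
audio = render(blueprint)
```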
The rendered sound data is then provided from the production server 160 to the front end application 110 over a network. Once received at the front end application 110 of a mobile device or workstation, the sound data may be played back or subjected to further manipulation or interaction. Such delivery may occur using web addressable storage such as AWS S3.
The rendered data is retrieved from API server 120 using the sound data file address stored in database server 140. Production server 160 may use dedicated hardware components to improve performance of the rendering processing. This dedicated hardware may include multiple very fast central processing units (CPU), dedicated digital signal processing (DSP) units, or graphics processing units (GPU).
The system infrastructure 100 illustrated in
Load balancer 180 and autoscaler 190 may operate in conjunction with one another to help maintain optimal system infrastructure 100 operability. For example, load balancer 180 may help distribute traffic to specific instances of auto scaling groups. Those specific group instances may be managed by autoscaler 190. Alternatively, the use of the messaging server 130 enables autoscaling of the composition and production servers to be driven by the number of pending tickets in the messaging queues, without requiring a load balancer to distribute tickets to the composition 150 and production 160 servers.
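The queue-depth-driven alternative may be sketched as a pure sizing function; the ratio of tickets per worker and the pool bounds below are illustrative, not taken from the disclosure:

```python
import math

def desired_workers(pending_tickets, tickets_per_worker=10,
                    min_workers=1, max_workers=8):
    """Size the composition/production pool from messaging-queue depth alone.
    No load balancer is needed: idle workers simply pull the next ticket."""
    needed = math.ceil(pending_tickets / tickets_per_worker)
    return max(min_workers, min(max_workers, needed))
```

For example, an empty queue keeps one warm worker, a backlog of 25 tickets scales to three workers, and a very deep backlog saturates at the configured maximum.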
The aforementioned system architecture may be implemented in a cloud computing environment. That environment may be hosted by a third-party like the aforementioned AWS cloud computing environment. A cloud computing environment allows for a simplified means of accessing servers, storage, databases, and a broad set of application services over the Internet. Third-party hosts like AWS own and maintain the network-connected hardware required for these application services, while end-users provision and use specifically what is required to implement a service offering by means of a web application.
Various network protocols, architectures, and hosting arrangements have been described in the context of
The web tier 210 of
The application tier 220 of
Also shown in
Multiple versions of method 300 may be executed on a single computing device. Various elements of the processing chain may likewise be executed in parallel across multiple computing devices. For example, multiple compositional instances may be taking place at the same time of various instances of production alongside various instances of rendering. Further, multiple computing devices may allow for the parallel processing of multiple processing chains (i.e. method 300) or portions thereof on each of those computing devices at once. This processing chain 300 allows for asynchronous musical synthesis and creation of socially co-created musical content.
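The parallelism described above may be sketched with a thread pool running several independent instances of the processing chain at once; the function and ticket names here are illustrative only:

```python
from concurrent.futures import ThreadPoolExecutor

def processing_chain(ticket_id):
    """One instance of the compose-then-render chain (method 300), reduced to
    a pure function so multiple chains can run concurrently without sharing
    state."""
    blueprint = f"blueprint-{ticket_id}"
    return f"rendered-{blueprint}"

# Four independent chains execute in parallel; each ticket's result depends
# only on its own inputs, which is what makes the synthesis asynchronous.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(processing_chain, ["t1", "t2", "t3", "t4"]))
```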
Various tools, which may be from a third-party, may be plugged into or integrated with various portions of the system infrastructure 100. Such integration may allow for a more cohesive operating environment and the creation of a common production framework. Data generated as a result of execution and utilization of such tools can be harvested and utilized to improve operation of composition, production, and rendering.
For example, hardware device 400 may be utilized to implement musical information retrieval, composition, and production in a system architecture like that illustrated in
Hardware device 400 as illustrated in
The aforementioned components of
Mass storage 430 may be implemented as tape libraries, RAID systems, hard disk drives, solid-state drives, magnetic tape drives, optical disk drives, and magneto-optical disc drives. Mass storage 430 is non-volatile in nature such that it does not lose its contents should power be discontinued. As noted above, mass storage 430 is non-transitory in nature although the data and information maintained in mass storage 430 may be received or transmitted utilizing various transitory methodologies.
Information and data maintained in mass storage 430 may be utilized by processor 410 or generated as a result of a processing operation by processor 410. Mass storage 430 may store various software components necessary for implementing one or more embodiments of the present invention by loading various modules, instructions, or other data components into memory 420.
Portable storage 440 is inclusive of any non-volatile storage device that may be introduced to and removed from hardware device 400. Such introduction may occur through one or more communications ports, including but not limited to serial, USB, FireWire, Thunderbolt, or Lightning. While portable storage 440 serves a similar purpose as mass storage 430, mass storage device 430 is envisioned as being a permanent or near-permanent component of the device 400 and not intended for regular removal. Like mass storage device 430, portable storage device 440 may allow for the introduction of various modules, instructions, or other data components into memory 420.
Input devices 450 provide one or more portions of a user interface and are inclusive of keyboards, pointing devices such as a mouse, a trackball, stylus, or other directional control mechanism, including but not limited to touch screens. Various virtual reality or augmented reality devices may likewise serve as input device 450. Input devices may be communicatively coupled to the hardware device 400 utilizing one or more of the exemplary communications ports described above in the context of portable storage 440.
Display system 470 is any output device for presentation of information in visual or occasionally tactile form (e.g., for those with visual impairments). Display devices include but are not limited to plasma display panels (PDPs), liquid crystal displays (LCDs), and organic light-emitting diode displays (OLEDs). Other display systems 470 may include surface-conduction electron-emitter displays (SEDs), laser TV, carbon nanotube displays, quantum dot displays, and interferometric modulator displays (IMODs). Display system 470 may likewise encompass virtual or augmented reality devices as well as touch screens that might similarly allow for input and/or output as described above.
Peripherals 480 are inclusive of the universe of computer support devices that might otherwise add additional functionality to hardware device 400 but not otherwise be specifically addressed above. For example, peripheral device 480 may include a modem, wireless router, or other network interface controller. Other types of peripherals 480 might include webcams, image scanners, or microphones, although the foregoing might in some instances be considered an input device.
The foregoing detailed description has been presented for purposes of illustration and description. The foregoing description is not intended to be exhaustive or to limit the present invention to the precise form disclosed. Many modifications and variations of the present invention are possible in light of the above description. The embodiments described were chosen in order to best explain the principles of the invention and its practical application to allow others of ordinary skill in the art to best make and use the same. The specific scope of the invention shall be limited by the claims appended hereto.
The present application is a continuation-in-part and claims the priority benefit of U.S. patent application Ser. No. 14/920,846 filed Oct. 22, 2015, which claims the priority benefit of U.S. provisional application No. 62/067,012 filed Oct. 22, 2014; the present application is also a continuation-in-part and claims the priority benefit of U.S. patent application Ser. No. 14/931,740 filed Nov. 3, 2015, which claims the priority benefit of U.S. provisional application No. 62/074,542 filed Nov. 3, 2014; the present application also claims the priority benefit of U.S. provisional application No. 62/075,160 filed Nov. 4, 2014. The disclosure of each of the aforementioned applications is incorporated herein by reference.
| Number | Date | Country |
| --- | --- | --- |
| 62067012 | Oct 2014 | US |
| 62075160 | Nov 2014 | US |
| | Number | Date | Country |
| --- | --- | --- | --- |
| Parent | 14931740 | Nov 2015 | US |
| Child | 14920846 | | US |
| | Number | Date | Country |
| --- | --- | --- | --- |
| Parent | 14920846 | Oct 2015 | US |
| Child | 14932893 | | US |
| Parent | 62074542 | Nov 2014 | US |
| Child | 14931740 | | US |